Run mongo index command from terminal, solutions for a very large index
You should use the background option, which goes in the second (options) argument of ensureIndex:

    db.collection.ensureIndex({ text: 'text' }, { background: true })

From MongoDB's documentation: "Builds the index in the background so that building an index does not block other database activities." More information here: http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/
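Since the question asks about running this from the terminal, here is a minimal sketch of doing it non-interactively with the mongo shell's --eval flag; the database name (mydb) and collection name (articles) are placeholders, not from the original question:

    # build the text index in the background without opening an interactive shell
    mongo mydb --eval "db.articles.ensureIndex({ text: 'text' }, { background: true })"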

Categories : Mongodb

Mongo db large number of documents in a collection
You can do those operations on both models, but I really don't see why you would embed the rows in an array. If each row is unique and represents a unique entry in your data model, there is no motivation to embed it in an array. Embedded arrays/objects are typically used to store data that would otherwise live in separate tables, forcing a JOIN on every read. The classic example is tags on a blog post: in a relational schema there is a posts table, a tags table, and a relationship table called post_tags; in a document model you simply embed the tags in the post document. Create a separate document for each row. It will save you from dealing with array index operations.
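A rough sketch of the two shapes described above; the collection and field names are illustrative, not from the original question:

    // data that would need a JOIN in a relational schema is embedded in the post document
    db.posts.insert({
        title: "My first post",
        body: "...",
        tags: ["mongodb", "schema-design"]
    })

    // whereas a row that is a unique entry on its own becomes its own document
    db.readings.insert({ sensor: "s1", ts: new Date(), value: 42 })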

Categories : Mongodb

Algorithm for mongo objectid field in mongodb and how to get last 24 hour data from mongo collection
Is there any way to get ObjectIds that were generated in the last 24 hours, based on the timestamp embedded in the ObjectId? You can run JavaScript code such as:

    date = new Date()
    date.setDate(date.getDate() - 1)
    yesterday = Math.floor(date.getTime() / 1000).toString(16)
    db.coll.find({ _id: { $gt: new ObjectId(yesterday + "0000000000000000") } }, { _id: 1 })

The first and second statements are straightforward: yesterday's date. The third line creates a 4-byte hex string for yesterday, which corresponds to the 4 leftmost bytes of an ObjectId. Then you pad the 8 rightmost bytes of the ObjectId with zeros, as you don't care about those: they are the machine id (3 bytes), the pid (2 bytes) and a running counter (3 bytes). All you have to do now is query your collection (coll in the example) and return the _ids.

Categories : Mongodb

Update Mongo Collection Using hadoop-mongo & PIG
The solution is to use MongoUpdateStorage: https://github.com/alabid/mongo-hadoop/blob/issues/pig/mongo-update-storage/pig/README.md Works like a charm

Categories : Hadoop

Mongo Mapper & Rails - Queries suddenly taking > 100 ms
Son of a @#(%&@, it was my mongo.yml file, which had a default database configured that WASN'T my localhost. I have no idea when that changed; I was so focused on the Git history that I didn't bother checking this file.

Categories : Ruby On Rails

how to export last 15 mins data from mongo collection using mongo export
You can check out this question: mongoDB return results based on time interval. You have to escape the special characters. I could not make it work with the ISODate helper, but a query like this, given as the -q parameter to mongoexport, works fine:

    {date: {$gt: { "$date" : 1370935140000 }}}

In this case 1370935140000 is the Unix timestamp in milliseconds. So you have to calculate the Unix timestamp from 15 minutes ago, append 000 to convert seconds to milliseconds, and use that. I will also try to resolve it with ISODate.
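A minimal sketch of computing that cutoff value; it can be run in the mongo shell (or any JavaScript environment), and the field name date is taken from the example above while everything else is a placeholder:

    // milliseconds for "15 minutes ago"
    var cutoff = Date.now() - 15 * 60 * 1000;
    print('{"date": {"$gt": {"$date": ' + cutoff + '}}}');

The printed string can then be passed to mongoexport, e.g. mongoexport -d mydb -c mycoll -q '<that query>' -o out.json, where the database and collection names are placeholders.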

Categories : Mongodb

Renaming a Mongo Collection in PHP
Updates: removed my old map/reduce method since I found out (and Sammaye pointed out) that it changes the structure; made my exec version secondary since I found out how to do it with renameCollection. I believe I have found a solution. It appears some versions of the PHP driver will authenticate against the admin database even though they don't need to. There is a workaround: the authSource connection parameter changes this behavior so the driver authenticates not against the admin database but against the database of your choice. So now my renameCollection function is just a wrapper around the renameCollection command again. The key is to add authSource when connecting. In the code below, $_ENV['MONGO_URI'] holds my connection string and default_database_name() returns the name of the database…
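For reference, a hedged sketch of the underlying command that such a PHP wrapper ends up invoking: renameCollection is an admin command, and the database and collection names here are placeholders:

    // run from the mongo shell; renames source.oldName to source.newName
    db.adminCommand({ renameCollection: "source.oldName", to: "source.newName" })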

Categories : PHP

Merging into large collections with Mongo
MapReduce is not meant for incremental updates. It's a one-shot tool to churn through lots of data and return a result. I would actually advise you to check whether the aggregation framework is fast enough to do this in real time; it is a lot faster than M/R. For example, your case above would be expressed with the following call to aggregate:

    db.collName.aggregate(
        { $group: { _id: '$_id', watts: { $sum: '$watts' } } }
    );

The aggregation framework is faster than M/R because it is implemented in C++ and can run jobs concurrently. A downside at the moment is that it doesn't output its results to a new collection, but that's currently being worked on: https://jira.mongodb.org/browse/SERVER-3253

Categories : Mongodb

Expire a Collection in Mongo using Casbah EnsureIndex
There are a couple of things to check. 1) Were you just following the docs to a T and tried to create an index on a status field that doesn't actually exist in your documents? (Had to at least ask...) 2) Does the status field contain JUST dates? It can theoretically be mixed, but only documents with a date type would be considered for expiration. 3) Have you checked your collection's indexes to make sure the index was properly created? To check from the console, run db.collection.getIndexes(). If the index was created successfully, then double-check that you have the corresponding status fields in your documents and that they are proper dates. Adding the index alone doesn't create the date field for you; you would need to add it to the documents or use an existing date field that is…
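For reference, a minimal sketch of what a TTL index looks like from the mongo shell; the field name (status) matches the discussion above, while the 3600-second lifetime is an assumption for illustration:

    // documents expire 3600 seconds after the date stored in "status"
    db.collection.ensureIndex({ status: 1 }, { expireAfterSeconds: 3600 })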

Categories : Mongodb

How do you tell Mongo to sort a collection before limiting the results?
The problem is that you need to sort on date instead of $date:

    myCollection.find().sort({date: 1}).limit(50, callback);

Mongo applies the sort before limiting the results, regardless of the order in which you call sort and limit on the cursor. There is proof of this in the docs.

Categories : Mongodb

NodeJS Mongo - Mongoose - Dynamic collection name
Collection-name logic is hard-coded all over the Mongoose codebase, so client-side partitioning is just not possible as things stand now. My solution was to work directly with the mongo driver: https://github.com/mongodb/node-mongodb-native This proved great; the flexibility of working with the driver directly allows for everything required, and the Mongoose overhead does not seem to add much in any case.
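A minimal sketch of picking a collection name at runtime with the native driver; the connection URI, the tenantId variable, and the document contents are placeholders, not from the original question:

    var MongoClient = require('mongodb').MongoClient;
    var tenantId = 'acme'; // placeholder for whatever drives the dynamic name

    MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
        if (err) throw err;
        // collection name built dynamically, e.g. one collection per tenant
        var coll = db.collection('events_' + tenantId);
        coll.insert({ ts: new Date(), msg: 'hello' }, function (err, result) {
            db.close();
        });
    });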

Categories : Node Js

connect-mongo sessions collection is empty
You need to use express.session instead of express.cookieSession:

    app.use(express.session({
        secret: 's3cr3t',
        store: new MongoStore({ mongoose_connection: mongoose.connection })
    }));

Categories : Node Js

Is it possible to undo a drop operation on a Mongo Collection?
With a standalone server (no replica set), I am afraid you may not be able to recover your data. Read this post: How to recover a dropped MongoDB database?

Categories : Mongodb

Error at BFS search .Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index
When the user enters something greater than 220 as the starting position, the line trace[start] = -1; will throw an exception, since start is out of the bounds of trace. Therefore you need to force the user to enter something you can handle, like this:

    Console.Write("Please Input the Starting Node : ");
    starting_position = Convert.ToInt32(Console.ReadLine());
    while (starting_position < 0 || starting_position >= trace.Count)
    {
        Console.Write("Starting Node is invalid. Should be between 0 and 220. Please enter another one : ");
        starting_position = Convert.ToInt32(Console.ReadLine());
    }

This is just an idea; the point is that you should think about user input validation so it does not break your program. Update: actually I did not realize that trace in the code above…

Categories : C#

Very large $sum and $average value in Mongo Aggregate doesn't work
You can do this by coercing your NumberLong values into floats before summing them: add a floating-point 0 to them in a $project stage, like this:

    pipeline = [
        {$match: { /* conditions here */ }},
        {$project: {type: 1, duration: {$add: ['$duration', 0.0]}}},
        {$group: {_id: '$type', total: {$sum: '$duration'}, avg: {$avg: '$duration'}}}
    ]

Categories : Mongodb

How to get the distinct list of ids from the Mongo Collection by datetime ordering?
This is tricky, because each distinct value for _stdid can have a different dateTime field value. Which one do you want to pick? The first, the last, the average? You will also need to use the aggregation framework, and not a straight distinct. On the MongoDB shell, you would use (if you wanted the first of all the dateTime values that map to a single _stdid field):

    db.messages.aggregate( [
        { $group: { _id: '$_stdid', dateTime: { $first: '$dateTime' } } },
        { $sort: { dateTime: -1 } }
    ] );

In Java, this looks like:

    // create our pipeline operations, first with the $group operation
    DBObject groupFields = new BasicDBObject( "_id", "$_stdid" );
    groupFields.put( "dateTime", new BasicDBObject( "$first", "$dateTime" ) );
    DBObject group = new BasicDBObject( "$group", groupFields );

Categories : Mongodb

Removing Non-Collection Embedded Document via Mongo Shell
If you want to remove one of the log entries, then you want the $pull operator. The format would be something like:

    db.collection.update(
        { _id: <id-of-document-to-update> },
        { $pull: { log: { _id: <id-of-log-entry-to-remove> } } }
    )

This says: find the document with a certain _id and remove from its log array the entry whose _id matches.

Categories : Mongodb

Persisting only unique values in a Mongo collection -- sub arrays
If you require only one sub-business and one group, then you would be better off restructuring your document to something like this:

    {_id: "business_val", sub_business: "sub_business_val", group: "group_val"}

Your code should look like:

    new BasicDBObject().append("_id", "business_val")
                       .append("sub_business", "sub_business_val")
                       .append("group", "group_val");

Additionally, you need to ensure a unique index on the collection using:

    db.[your_collection].ensureIndex({_id: 1, sub_business: 1, group: 1}, {unique: true})

Categories : Java

Meteor: populate a form select with objects from the mongo collection
I solved this by using the rendered callback:

    Template.addPage.rendered = function() {
        Activities.distinct("genre", function(error, result) {
            result.sort();
            var genreList = document.getElementById('genreList');
            for (var i in result) {
                var option = document.createElement("option");
                option.text = result[i];
                genreList.add(option, null);
            }
        });
    }

Categories : Meteor

How to query mongo using virtual attributes from Collection.transform in meteor
Here's how you would get all "contacts" out of this, perhaps in an inefficient way:

    var allContacts = [];
    customers.find().forEach(function(customer) {
        var contacts = customer.neverContacted();
        contacts.forEach(function(contact) {
            // You will want an if here to check whether allContacts already contains that contact.
            allContacts.push(contact);
        });
    });

Another option: setupSearches(customers.find()).contacts()

    setupSearches = function(input) {
        input.contacts = function () {
            contacts = input.find({customer_id: this._id}).fetch();
            return contacts;
        }
        return input;
    }

Categories : Mongodb

How to get paginated/sliced data of subdocument array in mongo collection?
I may not understand your question in full depth, but it seems like $slice is the droid you are looking for:

    > db.page.find()
    { "_id" : ObjectId("51f4ad560364f5490ccebe26"), "fiTpcs" : [ "uuid1", "uuid2", "uuid3", "uuid4", "uuid5" ], "fiTpcsCnt" : 2 }
    > db.page.find({}, {"fiTpcs" : {$slice : 3}})
    { "_id" : ObjectId("51f4ad560364f5490ccebe26"), "fiTpcs" : [ "uuid1", "uuid2", "uuid3" ], "fiTpcsCnt" : 2 }
    > db.page.find({}, {"fiTpcs" : {$slice : [1, 3]}})
    { "_id" : ObjectId("51f4ad560364f5490ccebe26"), "fiTpcs" : [ "uuid2", "uuid3", "uuid4" ], "fiTpcsCnt" : 2 }

Categories : Mongodb

Is this a valid mongo command format? db.[database].[collection].find()
The correct syntax is:

    use <dbname>
    db.<collname>.find()

I've not used MongoHub, but if it's a log you're looking at, maybe it puts the database name in the string/log for reference? http://docs.mongodb.org/manual/reference/method/
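As a small aside not from the original answer: if you want to reference another database without switching with use, the shell also allows this (the database and collection names are placeholders):

    // query a collection in another database from the current shell session
    db.getSiblingDB("otherDb").myCollection.find()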

Categories : Mongodb

Why is my mongo query not using index only?
You are using arrays and subdocuments. Covered indexes don't work with either of these. From the mongo docs, an index cannot cover a query if: any of the indexed fields in any of the documents in the collection includes an array (if an indexed field is an array, the index becomes a multi-key index and cannot support a covered query); or any of the indexed fields are fields in subdocuments (to index fields in subdocuments, use dot notation). http://docs.mongodb.org/manual/tutorial/create-indexes-to-support-queries/
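For contrast, a minimal sketch of a query that can be covered, assuming top-level scalar fields; the collection and field names are illustrative:

    db.users.ensureIndex({ status: 1, user: 1 })
    // covered: the filter and the projection use only indexed, non-array, top-level fields, and _id is excluded
    db.users.find({ status: "A" }, { user: 1, _id: 0 })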

Categories : Windows

fetch documents from mongodb collection by querying nested dictionary in mongo
You can use the $exists operator and dot notation to do this, but you need to build up your query dynamically, like this (in the shell):

    var user = 'abc';
    var query = {};
    query['user_details.' + user] = { $exists: true };
    db.coll.find(query);

Categories : Python

Programmatically enable sharding + choosing shard key on a collection using casbah with Mongo 2.4
For completeness and to help others: enableSharding is a command (see the enableSharding docs), and you can run any command from Casbah by using db.command.

    import com.mongodb.casbah.Imports._

    // Connect to MongoDB
    val conn = MongoClient()
    val adminDB = conn("admin")

    // Shard the collection
    adminDB.command(MongoDBObject("shardCollection" -> "<database>.<collection>", "key" -> <shardkey>))

The <shardkey> part should be a MongoDBObject defining the shard key. As Asya mentions, this might not be the right solution for your use case, but it's certainly possible to do programmatically using Casbah.
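As a related sketch, these are the equivalent admin commands from the mongo shell; note that sharding must be enabled on the database before shardCollection will succeed (the database, collection, and key names are placeholders):

    // enable sharding for the database, then shard the collection on a key
    db.adminCommand({ enableSharding: "mydb" })
    db.adminCommand({ shardCollection: "mydb.mycoll", key: { userId: 1 } })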

Categories : Mongodb

How can i exclude a mongo index from a query?
You can't exclude indexes; you can only specify which one to use. However, MongoDB empirically tests indexes against your query by checking how fast the query runs with each of them, and then determines which index to use based on those results. Could you please run the query with .explain(true) to show all the query plans? Regards, Charlie
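A minimal sketch of the two relevant cursor methods mentioned above; the collection, field, and index names are placeholders:

    // force a specific index rather than excluding one
    db.mycoll.find({ userId: 123 }).hint({ userId: 1 })

    // show all candidate query plans, not just the winning one
    db.mycoll.find({ userId: 123 }).explain(true)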

Categories : Mongodb

Optimizing Compound Mongo GeoSpatial Index
I played with this for a number of days and got the result I was looking for. Firstly, given that action types other than "PLAY" CANNOT have a location, the additional query parameter "actionType==PLAY" was unnecessary and was removed. Straight away I flipped from a "time-reverse-b-tree" cursor to "Geobrowse-polygon", and for my test search latency improved by a factor of 10. Next, I revisited the 2dsphere index as suggested by Derick; again, another latency improvement of roughly 5x. Overall, a much better user experience for map searches was achieved. I have one refinement remaining: queries in areas where there have been no plays for a number of days have generally increased in latency. This is due to the query looking back in time until it can find "some play". If necessary, I will add in a time range…

Categories : Mongodb

Mongo DB sorting exception - too much data for sort() with no index
Try creating a compound index instead of two indexes:

    db.collection.ensureIndex( { 'loc': '2d', 'lastActiveTime': -1 } )

You can also suggest to the query which index to use:

    db.collection.find(...).hint('myIndexName')

Categories : Mongodb

How to serialize a large collection
Write the data to disk and do not use a memory stream. Read it back using a StreamReader so you do not have to keep a large amount of that data in memory. If you need to process all of the data at the same time, do it in SQL Server by storing it in a temporary table. Memory is not the place to store large data.

Categories : C#

PreparedStatement ignores parameters in the query: java.sql.SQLException: Parameter index out of range (1 > number of parameters, which is 0)
The right way is:

    String sql = "SELECT * FROM `as` WHERE `as`.ip_range LIKE ?";
    statement.setString(1, "%" + clinetIP + "%");

Parameter binding doesn't work inside a string literal!

Categories : Java

Updating large number of records in a collection
Let me give you a couple of hints based on my general knowledge and experience. Use shorter field names: MongoDB stores the same key in every document, and this repetition increases disk usage, which can become a performance issue on a very large database like yours. Pros: smaller documents, so less disk space; more documents fit in RAM (better caching); index size can be smaller in some scenarios. Cons: less readable names. Optimize index size: the smaller an index is, the more of it fits in RAM and the fewer index misses happen. Consider the SHA1 hash of a git commit, for example: a commit is often referred to by its first 5-6 characters, so simply store those 5-6 characters instead of the whole hash. Understand the padding factor: for updates happening in…

Categories : Mongodb

how to extract data from mongo collection for data warehouse use
Give Pentaho Kettle a try: https://anonymousbi.wordpress.com/2012/07/25/creating-pentaho-reports-from-mongodb/

Categories : Mongodb

Selenium WebDriver analyze large collection of links quickly
You seem to be using WebDriver purely to execute JavaScript rather than to access the objects. A couple of ideas to try IF you drop using JavaScript (excuse the Java, but you get the idea):

    // We have restricted via XPath, so we get fewer links back AND will not have to check the text within the loop
    List<WebElement> linksWithText = driver.findElements(By.xpath("//a[text() and not(text()='')]"));
    for (WebElement link : linksWithText) {
        // Store the location details rather than re-getting them each time
        Point location = link.getLocation();
        Integer x = location.getX();
        Integer y = location.getY();
        if (x < windowX && y < windowY) {
            // Insert all info using webdriver commands
        }
    }

Categories : Javascript

Clustering: finding groups of close items within a large collection
What I understand the problem to be: "A-G" stand for the words, "#" stands for the distance between two words, and you need to find the pairs whose distance is <= 2.

        A B C D E F G
      A * # # # # # #
      B   * # # # # #
      C     * # # # #
      D       * # # #
      E         * # #
      F           * #
      G             *

Basically it needs a two-level loop; what we can do is reduce the number of comparisons by only comparing the "#" part of the matrix above. Here is the code in PHP:

    $result = array();
    while ( ($word = array_shift($arrWordList)) !== NULL ) {
        foreach ($arrWordList as $otherWord) {
            if ( calc_dist($word, $otherWord) <= 2 ) {
                $result[] = array($word, $otherWord);
            }
        }
    }

And you can use $result to continue doing something.

Categories : PHP

JVM hotspot options for large graph measure calculation:garbage collection
Try changing the heap size (-Xmx) parameter. If you no longer use some items in your HashMap, call the HashMap.remove method; once there are no more references to those objects, they will be collected by the GC. You can also use Trove collections: http://trove.starlight-systems.com/overview

Categories : Java

TTL index for users collection
Regarding correctness: according to the documentation, when the field indexed by the TTL index is not a valid BSON date, the document will never expire, so setting it to true would prevent it from expiring. So this would work. But keep in mind that when the document expires it disappears without a trace, so you have no way to tell whether the user who tries to confirm has expired or never existed in the first place. There is, however, an error in your update command. Besides the misplaced closing }, this update will replace the whole document with a new one containing only the field confirm: true. When you want to keep all the other fields of the document, use the $set operator:

    db.users.update({email: req.param('email')}, {$set: {confirm: true}});

Also keep in mind that this update…

Categories : Node Js

WPF Binding Collection with Index
Binding with an index, where index is an integer in my viewmodel, does not work. Does anyone know how I can associate the index in my XAML with the property in my viewmodel? One simple option would be to just expose a CurrentLocation property on your ViewModel that is effectively Location[index]; you could then bind to it directly.

Categories : Wpf

Adding an index to a large table
Well, the answer to this one is easy (but you probably won't like it): you can't. SQL Server requires the index key to be less than 900 bytes. It also requires the key to be stored "in-row" at all times. As an NVARCHAR(MAX) column can grow significantly larger than that (up to 2 GB) and is also most often stored outside of the standard row data pages, SQL Server does not allow an index key to include an NVARCHAR(MAX) column. One option you have is to make this GUID column an actual UNIQUEIDENTIFIER datatype (or at least a CHAR(32)). Indexing GUIDs is still not recommended because they cause high fragmentation, but at least then it is possible. However, that is neither a quick nor a simple thing to do, and if you need the table to stay online during this change, I strongly recommend you get…

Categories : SQL

spatial index for large objects
Your question is really related to a topic called Geographic Information Systems (GIS), which is a huge field with its own rules for how to create and manage databases. One example of a GIS is Google Maps. GIS projects have many tools to work with. Java is not the language of choice for managing GIS data; GIS projects normally work in C…

Categories : Java

Do we need to set the index on the collection for the order by fields also?
I think so. Refer to http://docs.mongodb.org/manual/tutorial/sort-results-with-indexes/ which says: "In MongoDB, sort operations that sort documents based on an indexed field provide the greatest performance. Indexes in MongoDB, as in other databases, have an order: as a result, using an index to access documents returns them in the same order as the index."
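A minimal sketch, with illustrative collection and field names, of an index whose order also covers the sort:

    // compound index whose order matches the sort
    db.mycoll.ensureIndex({ category: 1, createdAt: -1 })

    // the filter on category and the sort on createdAt can both use the index
    db.mycoll.find({ category: "books" }).sort({ createdAt: -1 })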

Categories : Mongodb


