Find documents that DON'T have a string in a collection
You need to create an index for this type of query. In the index you should flatten your OptOut collection so you can create queries on it. More on this here: How to query items from nested collections in RavenDB?

EDIT: It seems this can be answered with a simple LuceneQuery without having to explicitly create an index:

    var users = session.Advanced
        .LuceneQuery<Person>()
        .Where("OptOut:* AND -OptOut:" + newsLetterType)
        .ToList();

EDIT 2: To include person documents that have no OptOut values, you will need to create this index:

    from doc in docs.People
    select new { OptOut = doc.OptOut.Count == 0 ? "" : doc.OptOut }

It is frustrating that this is not available in the typed query client, but we can continue with that discussion elsewhere.

Categories : Linq

How to find documents by some conditions for its linked documents
As JohnnyHK commented, the type of query you want is a relational query, and document databases such as MongoDB simply do not support them. Fix your schema to put that tag data directly in the link schema (nesting, or "denormalizing" from a relational standpoint, which is OK in this case), and then you can query it:

    var LinkSchema = new Schema({
        name: String,
        tags: [String]
    });

With that schema, your query will work as you expect. To address the comments below: this is a document database, not a relational one, and there are trade-offs. Your data is denormalized, which buys you scalability and performance at the cost of query flexibility and data consistency. If you wanted to rename a tag, a relatively rare occurrence, you'd have to do a whopping 2 database updates.

Categories : Node Js

Load all documents from RavenDB
I figured it out: I have to wait for non-stale results. If I change my query to this:

    session.Query<Model.Category>()
        .Customize(cr => cr.WaitForNonStaleResults())
        .ToList();

it works just fine.

Categories : C#

Can RavenDB persist documents to a filesystem?
You don't need RavenDB for that; just use System.IO.File and related classes. Raven won't work with individual files on disk. It keeps its own set of files for its index and data stores, and access from other programs is not allowed: what happens in RavenDB stays in RavenDB. Most people store big files on disk and then just store a file path or URL reference to them in their RavenDB database. Raven also supports the concept of attachments, where the file is uploaded into Raven's database, but then it wouldn't be available as a single file on disk the way you are thinking.

Categories : File

PHP MongoDb, find all referenced documents when collection is indexed array
I don't quite see why you have such a complicated structure. The "0" and "1" keys in particular are problematic, especially in PHP, which doesn't handle arrays with numerical string keys well. The $ref/$id fields come from MongoDBRef, which you should avoid as it doesn't provide you with any functionality. You should just have:

    {
        "_id": ObjectId("5188deba4c2c989909000000"),
        "_type": "Model_Discs",
        "title": "really cool cd",
        "referencedBy": [
            ObjectId("4e171cade3a9f23359e98552"),
            ObjectId("5045c3222b0a82ec46000000")
        ]
    }

Then you can simply query with:

    db.collection.find( { referencedBy: new ObjectId("5045c3222b0a82ec46000000") } );

Categories : PHP

Mongodb : find documents including references to sub-documents [node-mongodb-native]
MongoDB doesn't support joins so you have to query for the User details separately. If you're using node.js, Mongoose provides a populate feature to help simplify this so you can do something like the following to pull in the user details: Files.find().populate('Users_id')

Categories : Mongodb

MongoDB merge 2 very similar collections. Existing documents - update, new documents - insert
The most efficient approach in terms of queries would be to bulk update, in one go per date, all the dates that need updating, and to bulk insert all the documents that need inserting. Given that 95% of your documents are the same and you want to update A.dateLastSeen to B.dateLastSeen, single updates would mean ~66,500 updates, leaving ~3,500 inserts. Loading all of B and A into memory and then processing is one possibility: build a bulk-insert list, appending whenever a doc from B is missing from A, plus a bulk-update dictionary keyed by dateLastSeen containing a list of documents to update. Whether this is really worth it depends on the probability of matching dateLastSeen values. Alternatively, simplify it, accept the higher query cost, and process B in batches.
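The diffing step described above can be sketched in plain Python, with dicts standing in for the documents (the `_id` and `dateLastSeen` field names follow the example; everything else is illustrative):

```python
def merge_collections(a_docs, b_docs):
    """Return (updates, inserts) needed to bring collection A in sync with B."""
    a_by_id = {doc["_id"]: doc for doc in a_docs}
    updates, inserts = [], []
    for doc in b_docs:
        existing = a_by_id.get(doc["_id"])
        if existing is None:
            # Document only exists in B: schedule a bulk insert.
            inserts.append(doc)
        elif existing["dateLastSeen"] != doc["dateLastSeen"]:
            # Same document, newer date: schedule a bulk update.
            updates.append((doc["_id"], doc["dateLastSeen"]))
    return updates, inserts
```

The two returned lists then map directly onto one batched update per date and one batched insert.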

Categories : Mongodb

How to update only the non-existing documents and delete the old documents which no longer exist using mongoimport
Basically you want to keep only the imported documents in your collection, right? If so, take a look at the following options:

    --drop
        Modifies the restoration procedure to drop every collection from the target
        database before restoring the collection from the dumped backup.

    --upsert
        Modifies the import process to update existing objects in the database if they
        match an imported object, while inserting all other objects. If you do not
        specify a field or fields using --upsertFields, mongoimport will upsert on the
        basis of the _id field.

More information on mongorestore.

Categories : Mongodb

Get empty collection of embedded documents
I had exactly the same problem: saving worked fine, but fetching the embedded document didn't. It turned out to be a Symfony caching problem (also on app_dev.php). Have you tried clearing your cache? That worked for me!

Categories : Mongodb

How to get a collection with some fields excluded from some documents?
I would set up two subscriptions, one for a person's characters and another for all characters (which removes the fields you don't want published). The results will get merged. A little more information per the DDP spec: the client maintains one set of data per collection. Each subscription does not get its own datastore; rather, overlapping subscriptions cause the server to send the union of facts about the one collection's data. For example, if subscription A says document x has fields {foo: 1, bar: 2} and subscription B says document x has fields {foo: 1, baz: 3}, then the client will be informed that document x has fields {foo: 1, bar: 2, baz: 3}. If field values from different subscriptions conflict with each other, the server should send one of the possible field values.

Categories : Mongodb

Finding first 20 documents from collection in mongodb
On the MongoDB shell you can do: db.collectionName.find( { city: "London" } ).skip( 20 ).limit( 20 ); To show the results from document 21 to 40. Please look at limit and skip: http://docs.mongodb.org/manual/core/read/#limit-the-number-of-documents-to-return I also strongly suggest you go over a tutorial: http://docs.mongodb.org/manual/tutorial/getting-started/

Categories : Mongodb

Mongodb delete documents without associated documents
So each chat has a field visitor_id, and you want to delete only visitors whose _id does not appear as a visitor_id in any chat? You would have to loop over all visitors, check last_activity for each, and, if the visitor is a candidate for deletion, do a find() on chats with that visitor's _id; if it turns up no documents, you can delete that visitor. When you iterate over all visitors, you do so with a MongoDB cursor (the result of a find()). The cursor is implemented in such a way that you can safely delete documents from the underlying collection while iterating over it. The trick is that you don't attempt to express everything in a single remove() call: you iterate, check, and delete as part of the iteration. You want to make sure that the find() inside the loop is very fast, by adding an index on the visitor_id field.

Categories : Mongodb

ECM : Migration of documents referencing other documents
(Disclaimer: I work for a company which has a lot of experience with this kind of migration, and we have special tools for it.) You need a mapping between the old (SharePoint) and new (LiveLink) paths of the documents. A simple two-step migration process is the following:

1. Copy the documents from SharePoint to LiveLink and fill the mapping table during the migration. The links in LiveLink can be changed to point to a dummy LiveLink node, left empty.
2. Fix the broken links in LiveLink using the mapping table.

Categories : Sharepoint

Is it possible to add multiple documents in meteor through collection.insert()?
Batch insertion isn't yet possible with Meteor, though you could iterate over an array and insert each document:

    var docs = [{docNumber: 1}, {docNumber: 2}];
    _.each(docs, function(doc) {
        myCollection.insert(doc);
    });

It might be possible to do this on the server side, albeit with a bit of modification to expose a bulk-insertion method. The problem is that such code wouldn't work on the client.

Categories : Database

Finding duplicates of multi-page documents on distinct IDs
Assuming SQL Server 2005+:

    ;WITH CTE AS
    (
        SELECT *, N = COUNT(*) OVER (PARTITION BY ID, Global_ID, document, subtitle)
        FROM YourTable
    )
    SELECT *
    FROM CTE
    WHERE N > 1

Categories : SQL

not able to insert documents into collection using mongodb java driver
The 'n' value from the getlasterror of an insert is always zero (the 'n' value is what WriteResult.getN() returns). See this MongoDB Jira ticket: https://jira.mongodb.org/browse/SERVER-4381, which has been closed in preference to a new insert/update/remove mechanism: https://jira.mongodb.org/browse/SERVER-9038. Long story short: you are not mad or missing anything. It is a "feature" of MongoDB that will hopefully be fixed with the 2.6 release. Rob.

Edit: I modified your example slightly to print the saved document. Can you try running this version in your environment?

    import java.net.UnknownHostException;
    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.MongoClient;

Categories : Java

Is it possible to include other documents from the same collection in MongoDB's aggregate() function?
As of MongoDB 2.4, the Aggregation Framework does not support fetching additional documents into a pipeline or referencing documents relative to the current document. You will have to implement this sort of calculation in your application logic. You may want to upvote and watch SERVER-4437 in the MongoDB Jira issue tracker; that feature suggestion is to add support for windowing operations on pipelines.

Categories : Mongodb

Multi-tenant full-text search of XML documents, SOLR
I would go for the one-core-per-tenant approach. Some reasons, off the top of my head:

- Indexing and re-indexing can be isolated.
- You could shard cores depending on tenant load, enabling you to scale better for high-volume clients (and possibly work your payment model around this).
- Unsubscribing means you would just need to delete/rename cores.
- You could enable client-specific configuration that requires core reloading without having to disrupt other services.

Categories : PHP

How do I fill a collection with millions of dummy documents without a million inserts?
While you could just write code to insert documents using your favorite programming language (many of the drivers offer techniques for inserting documents in batches via an array structure), I'd suggest you consider creating either a JSON file or a CSV file containing the structure of your documents (in multiple files if necessary for the import to work), and then importing all of the data with mongoimport: http://docs.mongodb.org/manual/reference/program/mongoimport/ This way, you can create the file(s) once and run the import directly from the database server without installing extra software/platforms/node/etc. If you wanted to use Node.js, you can use insert (documentation) and simply pass in an array of documents. I'd suggest doing it in batches of some size far less than one million.

Categories : Node Js

fetch documents from mongodb collection by querying nested dictionary in mongo
You can use the $exists operator and dot notation to do this, but you need to build up your query dynamically, like this (in the shell):

    var user = 'abc';
    var query = {};
    query['user_details.' + user] = { $exists: true };
    db.coll.find(query);
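The same dynamic-key trick works in Python (for example with pymongo, whose filters are plain dicts); the collection and field names here are just illustrative:

```python
def exists_query(user):
    """Build a filter matching documents whose user_details dict contains `user` as a key."""
    # Dot notation: 'user_details.<user>' targets the key inside the nested document;
    # $exists matches documents where that path is present.
    return {"user_details." + user: {"$exists": True}}
```

You would then pass the result to `coll.find(...)` as usual.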

Categories : Python

Meteor: Embed documents inside a document or separate them into each collection object and link them?
Separate collections for students and classrooms seems more straightforward. I think just keeping a 'classroom' or 'classroomId' field in each student document will allow you to join the two collections when necessary.

Categories : Mongodb

Find first and last documents matching a query?
You could do two queries: one sorted by date ascending and the other sorted by date descending. On each query, limit the result to 1. If the date field is indexed, that should be a pretty quick query.
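In application code the same idea is just min/max over the date field; against a database you would instead run the two indexed, limit-1 queries described above. A minimal sketch with illustrative dict documents:

```python
def first_and_last(docs, key="date"):
    """Return the earliest and latest document by `key`.

    Equivalent to the pair of sort-ascending-limit-1 and
    sort-descending-limit-1 queries.
    """
    ordered = sorted(docs, key=lambda d: d[key])
    return ordered[0], ordered[-1]
```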

Categories : Mongodb

MongoDB find by max value in array of documents
If host is unique, the following code should do the job; otherwise, you can simply replace host with _id in the grouping operation.

    coll.aggregate([
        {$unwind: "$ips"},
        {$project: {host: "$host", ip: "$ips.ip", ts: "$ips.timestamp"}},
        {$sort: {ts: 1}},
        {$group: {_id: "$host", IPOfMaxTS: {$last: "$ip"}, ts: {$last: "$ts"}}}
    ])

I hope it helps.

Categories : Mongodb

MongoDB/Java: Finding unique documents based on value
Try the aggregation framework. Given documents like these:

    > db.foodle.find()
    { "_id" : ObjectId("52323c61fd99d220e24eef53"), "domainName" : "www.example-domain-0.com", "updateDate" : ISODate("2013-09-12T22:12:49.933Z"), "uniqueIdentifier" : "375d7219-828c-4f81-a1fc-3692aa68d110" }
    { "_id" : ObjectId("52323c64fd99d220e24eef54"), "domainName" : "www.example-domain-1.com", "updateDate" : ISODate("2013-09-12T22:12:52.877Z"), "uniqueIdentifier" : "f96bb647-5dcb-4cc1-8a66-105177a45474" }
    { "_id" : ObjectId("52323c67fd99d220e24eef55"), "domainName" : "www.example-domain-0.com", "updateDate" : ISODate("2013-09-12T22:12:55.550Z"), "uniqueIdentifier" : "14f6yu43-20eb-42c6-bb06-26b77c0bf0cb" }

Categories : Java

MongoDB count number of new documents per minute based on _id
You can do this with M/R indeed. getTimestamp() works in M/R because it runs as JavaScript on the server; it doesn't matter whether your client language is PHP or Python.

    map = function() {
        var datetime = this._id.getTimestamp();
        var created_at_minute = new Date(datetime.getFullYear(), datetime.getMonth(),
            datetime.getDate(), datetime.getHours(), datetime.getMinutes());
        emit(created_at_minute, {count: 1});
    }

    reduce = function(key, values) {
        var total = 0;
        for (var i = 0; i < values.length; i++) {
            total += values[i].count;
        }
        return {count: total};
    }

    db.so.mapReduce( map, reduce, { out: 'inline' } );
    db.inline.find();
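The grouping the map/reduce above performs can be sketched in plain Python: truncate each timestamp to its minute and count occurrences (this only illustrates the logic; the M/R version runs server-side):

```python
from collections import Counter

def count_per_minute(timestamps):
    """Count datetime objects per minute, like the emit(created_at_minute, ...) step."""
    # Zeroing seconds and microseconds buckets each timestamp into its minute.
    return Counter(ts.replace(second=0, microsecond=0) for ts in timestamps)
```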

Categories : Mongodb

MongoDB Update Multiple Documents based on ObjectID (_id)
I believe you need to make a couple of changes:

    BasicDBList list = new BasicDBList();
    list.add( new ObjectId("123") ); // Add the rest...
    DBObject inStatement = new BasicDBObject( "$in", list );
    column.updateMulti(
        new BasicDBObject( "_id", inStatement ),
        new BasicDBObject( "$set", new BasicDBObject( "field", 59 ) ) );

Otherwise, with your current query, you're doing an equality comparison of the _id property against a list of _ids, not actually using the $in operator.

Categories : Java

Traverse multiple XML documents to find particular attributes using R
If you look at the HTML value that is returned, rather than just the greppish value, you can find:

    $body$div$div$fieldset$div$ul$li$a$.attrs
              title                    href            class
    "Patient Package Insert"           "#nlm42230-3"   "nlmlinkfalse"

... but the item above it has a class value of "nlmlinktrue". So you will probably need to go through all of the unfortunately unnamed $body$div$div$fieldset$div$ul$li$a$.text nodes to find the "Patient Package Insert" item and then see what its $body$div$div$fieldset$div$ul$li$a$.attrs class value is. When I do that by hand on the third item I get:

    Data$insert[[3]]$body[14]$div[12]$div[2]$fieldset[3]$div[2]$ul[27]$li[2]
    $a
    $a$text
    [1] "Patient Package Insert"

Categories : R

Inverted Index: Find a phrase in a set of documents
I don't know if this is the most efficient approach, but you could start with the documents/positions of words[0]. Then go to words[1] and, within the same documents, keep only occurrences whose position equals words[0].position + words[0].length + 1. Then iterate likewise over the rest of the words. It should narrow down pretty quickly for longer phrases.
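The intersection idea above can be sketched as follows, assuming the inverted index stores *token* positions (0 for the first word of a document, 1 for the second, ...) rather than the character offsets used in the answer; `index` maps word -> {doc_id: [positions]}:

```python
def phrase_search(index, words):
    """Return the set of doc ids containing the words as a consecutive phrase."""
    if not words or words[0] not in index:
        return set()
    # Candidate (doc, start_position) pairs from the first word's postings.
    candidates = {(doc, pos)
                  for doc, positions in index[words[0]].items()
                  for pos in positions}
    for offset, word in enumerate(words[1:], start=1):
        postings = index.get(word, {})
        # Keep candidates whose doc also has this word at start_position + offset.
        candidates = {(doc, pos) for doc, pos in candidates
                      if pos + offset in postings.get(doc, [])}
    return {doc for doc, _ in candidates}
```

Each round shrinks the candidate set, which is why longer phrases narrow down quickly.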

Categories : C++

many small documents or less big documents
Using keys as values, like you do in 'username1': ['user1','user2','user3'], is a bad idea, as you cannot do an indexed query that looks for documents with a specific sender. This works:

    db.messages.find( { 'username1' : { $exists: true } } );

but it is not going to be fast. It is probably wise to keep your first option, with one document per message and sender. Then you can just do:

    db.messages.find( { sender: 'username1' } );

Adding a new recipient to this document can be done with:

    db.messages.update(
        { msgid: '867896', sender: 'username1' },
        { $push: { recipient: 'user4' } }
    );

(Note that $push is the operator, with the field name inside it.) You can make MongoDB use the same index for both queries as well, by having:

    db.messages.ensureIndex( { sender: 1, msgid: 1 } );

Categories : Mongodb

Mongo DB: Query for documents currently "live" based on start and end date
Like this: var currentTime = new Date(); items.find({ 'time.start': {$lt: currentTime}, 'time.end': {$gt: currentTime} }); Which will find the docs where the current time is between the start and end times in the doc.

Categories : Mongodb

Find maximum date from multiple embedded documents
Your question is very very similar to this question. Check it out. As with that solution, your issue can be handled with MongoDB's aggregation framework, using the $project and $cond operators to repeatedly flatten your documents while preserving a max value at each step.

Categories : Mongodb

Find difference between 2 documents on mongoDB from the mongo shell
Just declare a native JavaScript function that compares two objects the way you need, then write code like this:

    obj1 = db.test.findOne({"_id" : ObjectId("5176f80981f1e2878e840888")})
    obj2 = db.test.findOne({"_id" : ObjectId("5176f82081f1e2878e840889")})
    difference(obj1, obj2)

Some native JavaScript difference functions can be found here or here. P.S. You can also load third-party js libs from the shell like this: load("D:difference.js"). Hope this helps.

Categories : Mongodb

How can I modify the documents on an object-level in find() before I publish them?
Use a transform, either on the collection:

    var Docs = new Meteor.Collection('docs', {
        transform: function(doc) {
            ...
            return anythingYouWant;
        },
    });

or on the individual find():

    var docs = Docs.find({...}, {
        transform: function(doc) {
            ...
            return anythingYouWant;
        },
    });

See http://docs.meteor.com/#meteor_collection and http://docs.meteor.com/#find .

Categories : Javascript

How do I find documents with an element at a particular position in an array using MongoDB?
I don't see any way to achieve this using a simple array. However, here is what you could do using an array of hashes:

    > db.collections.find()
    { "_id" : ObjectId("51c400d2b9f10d2c26817c5f"), "ingredients" : [ { "value1" : "apple" }, { "value2" : "orange" } ] }
    { "_id" : ObjectId("51c400dbb9f10d2c26817c60"), "ingredients" : [ { "value1" : "mint" }, { "value2" : "apple" } ] }
    { "_id" : ObjectId("51c400e1b9f10d2c26817c61"), "ingredients" : [ { "value1" : "apple" }, { "value2" : "lemon" } ] }

    > db.collections.find({ ingredients: { $elemMatch: { value1: 'apple' }}})
    { "_id" : ObjectId("51c400d2b9f10d2c26817c5f"), "ingredients" : [ { "value1" : "apple" }, { "value2" : "orange" } ] }
    { "_id" : ObjectId("51c400e1b9f10d2c26817c61"), "ingredients" : [ { "value1" : "apple" }, { "value2" : "lemon" } ] }

Categories : Mongodb

counting documents in category children of Tree-based categories structure
If the documents are in ActiveRecord, then you may want to do an SQL query to efficiently select all associated docs and count those; otherwise, commands involving iteration over arrays of Ruby objects could take quite a while. Failing that, you could try something like the following:

    category.children.map{|child| child.associated_docs}.flatten.count

The map call returns an array containing many sub-arrays, each holding the associated documents for one category. flatten merges these into a single-level array, from which counting them with count is trivial.
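The map/flatten/count pipeline translates directly to other languages. A Python sketch, with plain lists standing in for each child category's associated docs (the structure is illustrative):

```python
def count_descendant_docs(per_category_docs):
    """Count docs across categories: per_category_docs is a list of doc lists."""
    # Equivalent to map -> flatten -> count: sum the length of each sub-list.
    return sum(len(docs) for docs in per_category_docs)
```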

Categories : Ruby

Is there a tool to find errors in Excel documents created with the OpenXML SDK?
Though the post is old, I was stuck in the same situation, so I created a Windows application which accepts a file via a file dialog and parses it to display the errors within. The first method just picks up the generated file name using the dialog box, and the second method displays all the errors observed in the generated file. The generated output is as shown in this image: http://s18.postimg.org/60rqf78gp/Parse_Open_XML_Generated_File.jpg

    private void button1_Click(object sender, EventArgs e)
    {
        lblError.Text = "";
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            textBox1.Text = openFileDialog1.SafeFileName;
        }
    }

    private void button2_Click(object sender, EventArgs e)

Categories : C#

Displaying the result of find / replace over multiple documents on bash
You can do something like:

    find -wholename "*.txt" | xargs sed -n '/foo/p;s/foo/bar/gp'

This prints each line you wish to substitute, followed by the substituted line. You can use awk and get the filename as well:

    find -wholename "*.txt" | xargs awk '/foo/{print FILENAME; gsub(/foo/,"bar"); print}'

To print the entire file, remove print and add 1:

    find -wholename "*.txt" | xargs awk '/foo/{print FILENAME; gsub(/foo/,"bar")}1'

The regex will have to be modified to your requirement, and in-place changes are only available in gawk version 4.1. Test:

    $ head file*
    ==> file1 <==
    ,,"user1","email"
    ,,"user2","email"
    ,,"user3","email"
    ,,"user4","email"

    ==> file2 <==
    ,,user2,location2
    ,,user4,location4
    ,,user1,location1
    ,,user3,location3

Categories : Bash

Find and replace a part of several lines in multiple documents using python?
Consider using the fileinput module; it's a good fit for this problem. Example:

    import fileinput
    import glob
    import sys

    path = "/home/stig/test/*.tex"
    search = "/home/stig/hfag/oppgave/figs_plots/"
    replace = "/home/stig/forskning_linux/oppgave_hf2/figs_plots/"

    for line in fileinput.input(glob.glob(path), inplace=1):
        sys.stdout.write(line.replace(search, replace))

See also the inplace and backup parameters, which let you do the replacement in place (with the safety of a backup in case of a mistake).

Categories : Python

mongodb - add column to one collection find based on value in another collection
In MongoDB, the simplest way is probably to handle this with application-side logic rather than trying it in a single query. There are many ways to structure your data, but here's one possibility:

    user_document = {
        name: "User1",
        postsIhaveLiked: [ "post1", "post2", ... ]
    }

    post_document = {
        postID: "post1",
        content: "my awesome blog post"
    }

With this structure, you would first query for the user's user_document. Then, for each post returned, you would check whether the post's postID is in that user's postsIhaveLiked list. The main idea is that you get your data in two steps, not one. This is different from a join, but it is based on the same underlying idea of using one key (in this case, the postID) to relate two different pieces of data.
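The two-step lookup described above can be sketched with plain dicts standing in for the user and post documents (field names follow the example; the `likedByMe` flag is illustrative):

```python
def annotate_likes(user_doc, post_docs):
    """Mark each post with whether the given user has liked it."""
    # Step 1's result: the user's liked-post ids, as a set for fast membership tests.
    liked = set(user_doc["postsIhaveLiked"])
    # Step 2: relate the two result sets via the shared postID key.
    return [dict(post, likedByMe=post["postID"] in liked) for post in post_docs]
```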

Categories : Mongodb

How to find documents that has field some field empty in solr 4.3.1
For type string, I had two documents: one with <str name="gender"/> and the other with <str name="gender">male</str>. When I query with

    http://localhost:8080/solr/mycore/select?q=gender%3A%22%22&wt=xml&indent=true

(i.e. q=gender:""), I get the document where gender is empty. Isn't that what you want? On the other hand, to get the document where it is not empty, I used this:

    http://localhost:8080/solr/mycore/select?q=gender%3A%5B%22%22+TO+*%5D&wt=xml&indent=true

(i.e. q=gender:["" TO *]).

Categories : Solr


