Why shutting down the primary doesn't make the replicas vote for a new primary in a MongoDB replica set
To elect a new primary in a MongoDB replica set, a majority of the members must vote for the new primary. That's a majority of the original (or configured) membership, not just of the members currently up. You configured two members, and one out of two is not a majority. You need to add a third member, either a regular node or an arbiter, if you want failover to happen automatically.
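A quick way to see the arithmetic: the majority is computed over the configured membership, so a lone survivor of a two-member set can never elect itself. This is a plain-Python sketch with invented helper names, not driver code:

```python
def majority_needed(configured_members):
    """Strict majority of the configured membership."""
    return configured_members // 2 + 1

def can_elect(surviving_members, configured_members):
    """An election succeeds only if the survivors form a majority."""
    return surviving_members >= majority_needed(configured_members)

# Two-member set, primary down: 1 survivor out of 2 configured members.
print(can_elect(1, 2))   # the lone member is not a majority
# Three-member set (third node or arbiter added), primary down.
print(can_elect(2, 3))   # the two survivors can elect a new primary
```

This is why adding even a data-less arbiter fixes automatic failover: it raises the configured membership to three, so two survivors form a majority.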

Categories : Mongodb

ElasticSearch: Replicas of shards only?
AFAIK, master-slave cannot be done with Elasticsearch alone. Elasticsearch by design uses a different strategy for resilience (node-to-node replication). Here is a document explaining the difference: http://translate.google.com/translate?hl=en&sl=zh-CN&u=http://www.elasticsearch.cn/guide/concepts/scaling-lucene/replication/&prev=/search%3Fq%3Dmaster%2Bslave%2Belasticsearch Note: the original document is on elasticsearch.cn; I couldn't locate a corresponding English document. The master-slave concept is something that Solr supports. That being said, if you need a master-slave setup, I would think of using something like a load balancer to isolate the 'master' and 'replica' instances of ES. Also note that you can configure ES nodes that do not hold data but just act as coordinators for requests.

Categories : Elasticsearch

How do cell updates get propagated to their corresponding replicas?
You are correct: only the one new column will be sent to the replicas. Cassandra is designed to do writes with no disk seeks, so it cannot do reads to propagate writes. (The exception is counters and some operations on collections, where reads are made on the coordinator for the update. But even then, the only column propagated is the column being updated.)

Categories : Cassandra

Shards / Replicas settings for high availability
After a good bit of research and talking to ES gurus: as long as the shard size is small enough, the most efficient way of setting up this cluster would indeed be 1 shard only, with 13 replicas. I have not been able to pinpoint the threshold shard size at which this setup starts to perform worse.

Categories : Elasticsearch

Concurrent writes to cassandra replicas - Is duplication possible?
There are two possible causes of the 15% extra space that I can think of. One is that a replica will sometimes store two copies of a column temporarily. If you write a column twice in Cassandra at slightly different times, the two copies may go into separate memtables and so end up in separate SSTables on disk. At some point later, when the SSTables get merged through the compaction process, the older value is discarded, freeing up the space. In your test you could run nodetool compact to force compaction and see if the space usage goes down. Another possible cause depends on how you ran the test when you didn't write to both nodes. If you did this at consistency level ONE, it is possible some of the writes were dropped by the other replica, so it doesn't have all the keys yet.
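The compaction behaviour described above boils down to a last-write-wins merge. This is a toy model in Python with an invented data layout (column name mapped to a value/timestamp pair), not Cassandra's real SSTable format:

```python
def compact(*sstables):
    """Merge SSTables: for each column, keep only the copy with the
    newest timestamp and discard the older duplicate (last-write-wins)."""
    merged = {}
    for sstable in sstables:
        for column, (value, timestamp) in sstable.items():
            if column not in merged or timestamp > merged[column][1]:
                merged[column] = (value, timestamp)
    return merged

# The same column written twice at slightly different times lands in two
# separate SSTables; compaction frees the space held by the older copy.
older = {"name": ("alice", 100)}
newer = {"name": ("alicia", 200), "city": ("oslo", 150)}
print(compact(older, newer))
```

Until such a merge runs, both copies of "name" occupy disk, which is the temporary extra space the answer describes.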

Categories : Cassandra

How to set up AWS/MongoDB with replicas so that if an instance crashes there is nothing to do?
"I guess that Amazon instantiates a new machine for us and starts up the processes that were running. With EBS things should be fine." Downing of members can happen for many reasons on a network like AWS. I strongly recommend that you do not create an autoscaling group/cloud template for replicas; instead I would simply handle bringing up new replicas manually. As for "with EBS things should be fine": not always. You could have an edge case whereby the storage layer for that DC also goes down; in fact it is normally more likely to be both than just one or the other. "When restarting this, how can we add back the new machine to the replica set?" As stated in the documentation on bringing members back up, this is mostly a manual process: you tell the mongod through the --replicaSet param which replica set it belongs to.

Categories : Mongodb

Temporarily switch MongoMapper to read from slave replicas
MM doesn't natively support this, but it wouldn't be too hard to do it on a per-model basis via a plugin:

module MongoMapper
  module Plugins
    module ReadPreference
      extend ActiveSupport::Concern

      included do
        class << self
          attr_accessor :read_preference
        end
      end

      module ClassMethods
        def query(options={})
          options.merge!(:read => read_preference) if read_preference
          super options
        end

        def with_read_preference(preference)
          self.read_preference = preference
          begin
            yield
          ensure
            self.read_preference = nil
          end
        end
      end
    end
  end
end

MongoMapper::Document.plugin(MongoMapper::Plugins::ReadPreference)

And testing it

Categories : Ruby

Add new member to replicas set in mongodb with python and subprocess.call
The mongo process is being invoked with a parameter named "--port 27072" and a value of "--eval ..." because of the way you are passing your parameters to subprocess.call. If you change the subprocess.call invocation to the following, it should work:

subprocess.call(["/usr/bin/mongo", "--port", str(port), "--eval", task])
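The difference is how the argument list maps onto the child's argv. The mongo paths below are only illustrative; the runnable part demonstrates the argv split using the Python interpreter itself, so no mongo installation is assumed:

```python
import subprocess
import sys

port = 27072
task = 'rs.status()'

# Wrong: "--port 27072" arrives as ONE argv element, so the program sees
# an unknown option literally named "--port 27072" (illustrative only).
wrong = ["/usr/bin/mongo", "--port %d" % port, "--eval %s" % task]

# Right: each flag and each value is its own list element.
right = ["/usr/bin/mongo", "--port", str(port), "--eval", task]

# Demonstrate with the Python interpreter, which runs anywhere:
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1:])",
     "--port", str(port)],
    capture_output=True, text=True,
).stdout
print(out)   # the child sees "--port" and "27072" as separate arguments
```

Since subprocess.call does not pass the command through a shell by default, no word-splitting happens for you; you must split the arguments yourself.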

Categories : Mongodb

Remove all replicas of a string more than x characters long (regex?)
Regexps are not the right tool for that task. They are based on the theory of regular languages, and they can't detect that a string contains duplicates and remove them. You may find a course on automata and regexps interesting to read on the topic. I think Josay's suggestion can be efficient and smart, but here is a simpler and more Pythonic solution, though it has its limits. You can split your string into a list of lines and pass it through a set():

>>> s = """I would like this
... text to be
...
... reduced
... I would like this
... text to be
...
... reduced"""
>>> print " ".join(set(s.splitlines()))
I would like this text to be reduced
>>>

The only thing with that solution is that you will lose the original order of the lines (the set type does not preserve ordering).
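If losing the line order matters, a dict gives the same de-duplication while keeping the first occurrence of each line, since dicts preserve insertion order in Python 3.7+. A small variation on the set() approach:

```python
def dedup_lines(text):
    """Remove duplicate lines, keeping the first occurrence in order."""
    # dict.fromkeys keeps only the first copy of each key, in order.
    return "\n".join(dict.fromkeys(text.splitlines()))

s = ("I would like this\ntext to be\n\nreduced\n"
     "I would like this\ntext to be\n\nreduced")
print(dedup_lines(s))
```

The trade-off is the same as with set(): de-duplication is by whole line, so two lines differing only in whitespace count as distinct.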

Categories : Python

Not reachable/healthy Replica Set
I had a similar problem; the solution was to have a keyfile. http://docs.mongodb.org/manual/tutorial/deploy-replica-set-with-auth/#create-the-key-file-to-be-used-by-each-member-of-the-replica-set

Categories : Mongodb

mongodb replica set unreachable
You did not mention which version of MongoDB you are using, but I assume it is post-2.0. I think the problem with your forced reconfiguration is that even afterwards you still need the minimum number of nodes for a functioning replica set, i.e. 3. But since you originally had 3 members and lost 2, there is no way you could turn that single surviving node into a functioning replica set. Your only option for recovery is to bring up the surviving node as a stand-alone server, back up the database, and then create a new 3-node replica set with that data.

Categories : Mongodb

Mongo Replica Sets
You don't need to run rs.initiate() on the other node: once that node is added to the config, running rs.initiate() on the first node initiates the entire replica set. The error occurs because the replica set has already been initiated with the config version you set up, and you are running rs.initiate() a second time.

Categories : Mongodb

Add shard replica in SolrCloud
You can go to the core admin in the SolrCloud web GUI, unload the core that has been automatically assigned to that node, and then create a new core, specifying the collection and the shard you want it assigned to. After you create that core you should see in the cloud view that your node has been added to that specific shard, and after some time that all documents of that shard have been synchronized with your node.

Categories : Solr

Mongo DB Cluster Replica set
No, it's not possible to make the primary of one replica set be a secondary in another replica set with MongoDB itself. Each node can only be a member of one replica set, and that's specified when you start it up. What you could do instead is implement your own version of replication: reading the oplog.rs collection in the local database of that other cluster to get data into this replica set's primary. Luckily there is an implementation of that in this project. This blog post describes the basic functionality, and of course, since it's open source, you can adjust it for your needs or translate the implementation to your language of choice.

Categories : Mongodb

cassandra replica exception HUnavailableException
If you have only two nodes and your data would be placed on the node that is actually down, you may not be able to achieve full write availability when consistency is required. Cassandra can work around that with Hinted Handoff, but for the QUORUM consistency level the UnavailableException will be thrown anyway. The same is true when requesting data belonging to the down node. However, it seems your cluster is not well balanced: your node 111.111.111.111 owns 100% and 111.111.111.222 seems to own 0%, and looking at your tokens, they seem to be the reason for that. Check out how to set the initial token here: http://www.datastax.com/docs/0.8/install/cluster_init#token-gen-cassandra Additionally you may want to check Another Question, which contains an answer with more reasons why this exception can be thrown.

Categories : Cassandra

MongoDB Replica Sets Not Syncing?
Well, that's embarrassing. It turns out I was looking at the incorrect database on my side. The way I'd configured things, I needed to be using a database named test, but I had been doing inserts on local. Looking back at the setup instructions for replica sets, I'm not quite sure where I configured the mongo shell to connect to test, but connecting to that database and making a change on the primary seemed to fix everything. Here's what my mongo shell connected to the primary looked like:

MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:27017/test

Maybe the connection to test was configured when I first set up MongoDB? Not too sure right now. Doing a little more looking around, it seems that when you connect with the mongo shell, it automatically connects to the test database.

Categories : Mongodb

mongodb 3rd node replica set crashes
One possible solution is to perform a "cold backup". You have 3 nodes: primary, secondary, crashed. Steps:

1. Connect to the crashed one.
2. Go to the /data/ folder and clean out all files except mongod.lock and the journal folder.
3. Stop mongod on the secondary.
4. Copy all files from the /data/ folder of the secondary to the crashed one. Do not copy the journal folder or mongod.lock.
5. Start mongod on the secondary node.
6. Configure mongod ownership for the updated files and 600 permissions.
7. Cross your fingers.
8. Start mongod on the crashed one.

Categories : Mongodb

Using mongos with non-sharded replica sets
The downside is that you've likely introduced a single point of failure into your application. If the mongos goes down then the mongods are no longer accessible - you've defeated the high availability of replica sets. You could set up multiple mongos instances but then you're back in the same boat. You could theoretically set up a load balancer in front of your mongos instances so that you only have to list one address and can still have high availability.

Categories : Mongodb

How does mongodb replica compare with amazon ebs?
Let me clarify a couple of things. EBS is essentially a SAN volume, if you are used to working with existing technologies. It can be attached to one instance, but it still has limited IO throughput. Using RAID can help maximize the IO, and provisioned IOPS can help you maximize the throughput. Ideally, however, with MongoDB you want enough memory that indexes can be accessed entirely in memory; performance drops if the disk needs to be hit. Mongo has replica sets, which are primarily used for failover and replication (you can send reads to a secondary, but all writes need to hit the primary), and sharding, which is used to split a dataset to increase performance. You will still need these things even if you are using EBS for storage.

Categories : Mongodb

Configure an Openshift MongoDB Replica Set
OpenShift currently doesn't support MongoDB Replica Sets. Please vote on the feature here: https://www.openshift.com/content/support-replica-sets-for-mongodb

Categories : Mongodb

Standalone replica sets mongodb
Most of the command line parameters you specify are settable in the configuration file; you can see how here: http://docs.mongodb.org/manual/reference/configuration-options/ Specifically, notice that you can set port, replSet, and dbPath from the configuration file. There is also a good article on replica set configuration here: http://docs.mongodb.org/manual/reference/replica-configuration/
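For illustration, those three options might look like this in a 2.4-era (pre-YAML, key = value) config file; the paths and the set name rs0 are placeholders, not values from the question:

```ini
; /etc/mongod.conf - old-style MongoDB configuration format
port = 27017
replSet = rs0
dbpath = /srv/mongodb/db0
logpath = /var/log/mongodb/mongod.log
fork = true
```

You would then start the server with mongod --config /etc/mongod.conf instead of repeating the flags on the command line.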

Categories : Mongodb

mongodb replica set on azure vms - configure to run as a service
On machine reboots the mongod process will stop, and you have to restart it. I am not sure whether system scripts take care of restarting mongod automatically after a box restart. But you get service scripts for the mongod process automatically when you install MongoDB using the apt-get/yum packages.

Categories : Mongodb

MongoDB replica set to stand alone backup and restore
No matter how many nodes you have in a replica set, each of them holds the same data. So getting the data is easy: just use mongodump (preferably against a secondary, for performance reasons) and then mongorestore into a new mongod for your stand-alone development system. mongodump does not pick up any replication-related collections (they live in a database called local). If you end up taking a file system snapshot of a replica node rather than using mongodump, be sure to drop the local database when you restore the snapshot into your production stand-alone server, and then restart mongod so that it will properly detect that it is not part of a replica set.

Categories : Mongodb

mongodb - How is a replica set contacted if the configured instance is down?
The mongos in this case holds an internal mapping of the replica set in memory, which is refreshed every so often (just like drivers do). As the members start to come online, it does some checks to detect which member it should contact. It cannot write to a member until there is, of course, a primary. A seed list is good if you restart the mongos while the configured member is offline, at which point the mongos can take another member as the seed.

Categories : Mongodb

Play! does not create JPA config on read only replica
I figured out another workaround; still not perfect, but less messy than the previous one. The idea is to use

DB.getDBConfig("replica").executeQuery(query)

instead of

JPA.getJPAConfig("replica").getJPAContext().em().createNativeQuery(query)

If you already have some code expecting the List<Object[]> output of createNativeQuery(query).getResultList(), I made a quick function to transform the ResultSet you now receive:

public static List<Object[]> formatResult(ResultSet rs) {
    List<Object[]> resultList = new ArrayList<Object[]>();
    try {
        while (rs.next()) {
            Object[] array = new Object[rs.getMetaData().getColumnCount()];
            for (int i = 0; i < array.length; i++) {
                array[i] = rs.getObject(i + 1);
            }
            resultList.add(array);
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return resultList;
}

Categories : Java

Should MongooseJS be emitting events on replica set disconnection?
I've been having similar problems with Mongoose (asked on SO also). More recently, I found this issue on the Mongoose GitHub repository, which led to this issue on the driver repository. The Mongo driver wasn't emitting any of these events more than once; as of today this has been fixed for single connections in v1.3.19. For replica sets, it seems to be a "won't fix" for now.

Categories : Node Js

MongoDB replica set preventing queries to secondary
Running rs.slaveOk() in the mongo shell will allow you to read from secondaries. Here is a demonstration using the mongo shell under MongoDB 2.4.3:

$ mongo --port 27017
MongoDB shell version: 2.4.3
connecting to: 127.0.0.1:27017/test
replset:PRIMARY> db.foo.save({})
replset:PRIMARY> db.foo.find()
{ "_id" : ObjectId("51bf5dbd473d5e80fc095b17") }
replset:PRIMARY> exit

$ mongo --port 27018
MongoDB shell version: 2.4.3
connecting to: 127.0.0.1:27018/test
replset:SECONDARY> db.foo.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
replset:SECONDARY> rs.slaveOk()
replset:SECONDARY> db.foo.find()
{ "_id" : ObjectId("51bf5dbd473d5e80fc095b17") }
replset:SECONDARY> db.foo.count()
1

Categories : Mongodb

Replica set and MongoDB, does the option {w: 1} make the system AP in terms of CAP?
Looking at the MongoDB replication guide, it looks like all queries go to the primary server by default. If you want the 'A', you also need to read from the secondary servers; that is required to be AP. And then you lose the C, because the results may differ from one server to another. The question also looks like this one; the answer there could be helpful.

Categories : Database

HDFS file locality / replica placement hints
It should be possible with this. It allows you to write Java code that specifies how HDFS should allocate replicas of blocks of a file. HTH

Categories : Hadoop

Does a bitmap index create a replica of the original table?
Every row in the table is represented by a single bit (i.e. either 0 or 1) for each distinct value. I'm not sure that could be considered a replica of the whole table, as that implies all the data is replicated, and data in other columns is obviously not present. But it does contain data for the whole table, as every row is represented (probably multiple times, all but once with the bit set to zero). The concepts guide explains what's happening: each bit in the bitmap corresponds to a possible rowid. If the bit is set, then the row with the corresponding rowid contains the key value. A mapping function converts the bit position to an actual rowid, so the bitmap index provides the same functionality as a B-tree index although it uses a different internal representation.
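The structure described above can be modelled in a few lines. This is a toy sketch (plain Python lists as bitmaps, invented function name), not Oracle's actual on-disk format:

```python
def build_bitmap_index(column_values):
    """One bitmap (list of 0/1 bits) per distinct value; bit i is set
    when row i contains that value. Only the indexed column is stored,
    not a replica of the whole table."""
    distinct = sorted(set(column_values))
    return {
        value: [1 if v == value else 0 for v in column_values]
        for value in distinct
    }

regions = ["east", "west", "east", "north"]
index = build_bitmap_index(regions)
print(index)
# Every row appears in every bitmap, but for a single-valued column
# its bit is 1 in exactly one of them.
```

The bit position doubles as the row identifier here, which mirrors the mapping-function idea from the concepts guide: position i stands in for the rowid of row i.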

Categories : Oracle

node-mongodb-native: multiple Db connections for the same replica set?
Turns out I was calling conn1.open() and then conn2.open(), and the error was getting thrown on that second, conn2.open() call. Apparently when re-using a replica set, the connection will already be open. Checking for conn2.state === 'connected' did the trick.

Categories : Node Js

MongoDB C++ driver handling replica set connection failures
That's indeed roughly how you need to handle it. Perhaps instead of having two try/catch blocks, I would use the following strategy: keep a count of how many times you have tried; create a while loop that continues while (count < 5 && !lastOpSucceeded); and sleep with pow(2, count) so you sleep longer on every iteration. When all else fails, bail out.
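That loop might look like the following sketch. It's Python rather than the C++ driver, with a stand-in flaky operation, and the sleep is injectable so the example runs instantly; the structure (attempt counter, exponential backoff, bail-out) is what matters:

```python
import time

def with_retries(operation, max_tries=5, sleep=time.sleep):
    """Retry `operation` with exponential backoff: sleep 2**count
    seconds after each failure, bail out after max_tries attempts."""
    count = 0
    last_op_succeeded = False
    result = None
    while count < max_tries and not last_op_succeeded:
        try:
            result = operation()
            last_op_succeeded = True
        except ConnectionError:
            count += 1
            if count < max_tries:
                sleep(2 ** count)   # back off more on every iteration
    if not last_op_succeeded:
        raise RuntimeError("all %d attempts failed" % max_tries)
    return result

# A stand-in operation that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("replica set not reachable")
    return "ok"

print(with_retries(flaky, sleep=lambda s: None))
```

During a failover the early attempts fail while a new primary is being elected; backing off gives the election time to finish before the final bail-out.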

Categories : C++

Encountered a MongoDB warning after converting a replica set to stand alone server
The local database contains replica set information, among other things. You can drop the local database without ill effect. Use the following commands to drop it:

use local
db.dropDatabase()

The warning messages should go away after restarting mongod.

Categories : Mongodb

Moped::Errors::ConnectionFailure Could not connect to any secondary or primary nodes for replica set
I was able to get it working by removing the mongod.lock file and restarting the mongodb service:

sudo rm /var/lib/mongodb/mongod.lock
sudo service mongodb start

Categories : Misc

How can MongoDB java driver determine if replica set is in the process of automatic failover?
I don't know the Java driver implementation itself, but I'd catch all MongoExceptions, then filter them on a getCode() basis. If the error code does not apply to replica set failures, I'd rethrow the MongoException. The problem is, to my knowledge there is no error code reference in the documentation. Well, there is a stub here, but it is fairly incomplete. The only way is to read the code of the Java driver to find out what codes it uses.
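The catch-and-filter idea looks roughly like this. It's a Python sketch with a stand-in exception class and an assumed list of failover-related codes (real drivers expose the code as getCode() in Java or .code in pymongo; the exact code set would have to be checked against the driver source, as the answer says):

```python
class MongoError(Exception):
    """Stand-in for the driver's exception type, carrying an error code."""
    def __init__(self, message, code):
        super().__init__(message)
        self.code = code

# Assumed set of codes indicating a replica-set failover in progress;
# 13435 is "not master and slaveOk=false", 10107 is "not master".
FAILOVER_CODES = {10107, 13435, 13436}

def run_guarded(operation):
    """Swallow failover-related errors, rethrow everything else."""
    try:
        return operation()
    except MongoError as exc:
        if exc.code in FAILOVER_CODES:
            return None    # failover in progress: caller may retry later
        raise              # unrelated error: propagate it

def not_master():
    raise MongoError("not master and slaveOk=false", 13435)

print(run_guarded(not_master))
```

Any exception whose code is outside the assumed set escapes run_guarded unchanged, which matches the "rethrow if it does not apply to replica set failures" strategy.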

Categories : Mongodb

MongoDB replica set auto-reconnect doesn't work after going down and up, for nginx + uwsgi with several processes
After a change in your replica set (no primary, new primary, etc.), the next operation will throw an AutoReconnect exception. After that one failed operation, the underlying PyMongo MongoReplicaSetClient will reconnect to the replica set, and future operations may succeed. If there is a new primary, MongoReplicaSetClient will find it and future operations will succeed. If there is no primary, no operations can succeed unless you set your ReadPreference to PRIMARY_PREFERRED. See the docs here: http://mongoengine-odm.readthedocs.org/en/latest/guide/connecting.html#replicasets The reconnection process must happen once per uwsgi process. So if there is a change to your replica set, you can expect one AutoReconnect exception per uwsgi process.

Categories : Python

Web API keep-alive header
You can use an HttpMessageHandler to globally make changes to every request/response. The header you are looking for is the Connection header. This header is exposed a little differently for some reason: you cannot set the Connection header directly, you need to set the ConnectionClose property to true instead. Create a class like this:

public class CloseConnectionHeader : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        return base.SendAsync(request, cancellationToken).ContinueWith(t =>
        {
            var response = t.Result;
            response.Headers.ConnectionClose = true;
            return response;
        });
    }
}

Categories : C#

Tcp connection Keep alive
The comment ("when there is no data packet means no TCP connection") in your code is placed where you receive a disconnect (0-byte) packet from the other side. There is no way to keep that connection alive, because the other side chose to close it. If the connection were being closed due to network issues, you would either get an exception, or it would seem as if the connection is valid but quiet. Keep-alive mechanisms always work alongside timeouts: the timeout enforces "if no data was received for x seconds, close the connection", while the keep-alive simply sends a dummy data packet to keep the timeout from occurring. Since you are implementing a protocol yourself (you're operating on the TCP/IP level), you only need to implement a keep-alive if you already have a timeout implemented on the other side.

Categories : C#

Keep a MemoryMappedFile Alive after Dispose
No, disposing a MemoryMappedFile opened by calling OpenExisting() will not destroy the underlying MMF. The process that called the Windows API CreateFileMapping() controls the lifetime of the MMF and OpenExisting() calls OpenFileMapping() instead.

Categories : C#

CLLocationManager is alive even when not needed
I'd go with the "flawed app design" one. You didn't show any code, so it's hard to say, but UITabBarController instantiates all its view controllers at once, so your CCLocation class is probably being initialized along with your tab bar. What you could do is initialize your CCLocation stuff only in the -viewWillAppear method of the view controller that actually uses it.

Categories : IOS



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.