Amazon EC2: load balancing / way to sync files / EC2 + CF
Technically, the load balancer will work; it's just that it will only balance the traffic to one instance. Correct. You register the instances with the Elastic Load Balancer, and while those instances are healthy it will route traffic to them. There are many different ways to sync files; it all depends on what you want to sync. Cloud architecture is a little different to traditional architecture. For example, rather than loading the images onto the EBS volume, you'd try to offload them (and serve them) from S3. Therefore the only things you'd need to "sync" would be the webserver files themselves. You could use CloudFormation to roll out updates; post-commit hooks and rsync are also good options. The challenge is to remember that it can scale / fail almost at will, so you need to ensure the rest of your architecture copes with instances appearing and disappearing.
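As a rough sketch of the "offload to S3" idea, an upload with the AWS SDK for Java (v1) could look like the following; the bucket name, object key and file path are made-up placeholders, not anything from the original answer:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class UploadImage {
    public static void main(String[] args) {
        // Credentials and region come from the default provider chain (env vars, instance profile, ...).
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Store the image in S3 so web servers never have to sync it between EBS volumes;
        // it can then be served straight from S3 (or through CloudFront).
        s3.putObject("my-image-bucket", "images/logo.png", new File("/tmp/logo.png"));
    }
}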

Categories : Amazon

AWS EC2 Instances with Load Balancing during very high traffic
Reserved instances are there to save money on those instances you run regularly. I would suggest using a reserved instance for your 'master'. The only advantage of keeping that one on-demand is that you could scale up or down as soon as your constant flow of traffic changes. Make sure you choose the right utilization class for your reserved instance; an always-on instance should be purchased as a heavy-use reserved instance. The 'peak instances' do best as on-demand instances.

Categories : Amazon

Amazon OpsWorks Custom Cookbooks not updating when using Load-based instances
According to the OpsWorks documentation: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable-update-.html

To manually update custom cookbooks:

1. Update your repository with the modified cookbooks. AWS OpsWorks uses the cache URL that you provided when you originally installed the cookbooks, so the cookbook root file name, repository location, and access rights should not change. For Amazon S3 or HTTP repositories, replace the original .zip file with a new .zip file that has the same name. For Git or Subversion repositories, edit your stack settings to change the Branch/Revision field to the new version.
2. On the stack's page, click Run command and select the Update Custom Cookbooks command.

Categories : Amazon

How to install/configure new software in newly created amazon instances using Amazon SDK in java?
The Amazon API primarily controls infrastructure. It does not have any control as to what happens inside the instance. There are a couple of ways you can bootstrap your instance and install software. You can use user-data to pass a script that will run on first launch. You could use a provisioning system like chef or puppet. You could roll your own if it works better for you.
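As a hedged sketch of the user-data approach with the AWS SDK for Java (v1): the AMI id and the install script below are placeholders, and the script is simply whatever you would otherwise run by hand on first boot.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class LaunchWithUserData {
    public static void main(String[] args) {
        // cloud-init runs this script once, on the instance's first boot.
        String userData = "#!/bin/bash\n"
                        + "yum install -y httpd php\n"
                        + "service httpd start\n";

        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder AMI id
                .withInstanceType("t2.micro")
                .withMinCount(1)
                .withMaxCount(1)
                // the API expects user-data to be base64-encoded
                .withUserData(Base64.getEncoder().encodeToString(
                        userData.getBytes(StandardCharsets.UTF_8)));

        RunInstancesResult result = ec2.runInstances(request);
        System.out.println("Launched "
                + result.getReservation().getInstances().get(0).getInstanceId());
    }
}

The same script could instead install a Chef or Puppet agent and hand the rest of the setup over to your provisioning system.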

Categories : Amazon

Node.js with AWS Load Balancing
To answer 1 and 3 (can't tell about 2): we have been using AWS ELBs for more than a year now and never had any issues, and our application needs uptime close to 100%. Node.js eventing is for events in the SAME process (unless you use a messaging solution like axon), so this should never be an issue, even if you run multiple webservers. Start your server listening on some port, and the load balancer will distribute requests to any server bound to the ELB. A request/response cycle itself is atomic, which means that the same server that gets the request has to send the response when it is done with its work for that request (write the chat into the db). Your code will basically look something like this:

http.createServer(function (request, response) {
  // do something with the request (e.g. write the chat message to the db),
  // then reply from this same process
  response.end('ok');
}).listen(8080);

Categories : Mysql

load balancing using weblogic
Zack, you could have this batch application (which I imagine would be a plain Java application, from your description) polling for those files in that shared location. Once it finds something, you can have it invoke an EJB or RMI object that is load balanced across your X instances of WebLogic, or even populate a JMS queue to process the file for you (in a clustered environment). It's not at all unusual to do this with WebLogic's clustering features, and you can use different load balancing algorithms (such as round-robin, weight-based and random). There are different ways to set it up depending on your approach and preferred algorithm; check out the WebLogic Load Balancing documentation and this section of the WebLogic Definitive Guide book - Using JNDI in a Clustered Environment.
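A minimal sketch of the client side of that lookup, assuming a clustered remote object; the JNDI name, host names and ports are hypothetical, and the replica-aware stub returned by the lookup then spreads calls across the WebLogic instances according to the configured algorithm:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusteredFileProcessorClient {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Comma-separated cluster address; the stub handles load balancing and failover.
        env.put(Context.PROVIDER_URL, "t3://wls-node1:7001,wls-node2:7001");

        Context ctx = new InitialContext(env);
        Object remote = ctx.lookup("ejb.FileProcessorRemote");   // hypothetical JNDI name
        // cast/narrow to the bean's remote interface and invoke it for each file found
    }
}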

Categories : Java

Apache Camel and load balancing
As long as you don't write to the HTTP session you don't have to care about anything - just put some number of Tomcat nodes behind a load balancer. If you do write to the HTTP session, that's still simple, but you will probably have to (depending on the configuration chosen) configure session replication. I've been working on two similar system integration projects under heavy request load. As the deployment environment we chose clustered Tomcat instances standing behind Apache servers (communicating through AJP connectors) and a BigIP load balancer (after some time we switched to Nginx). Both applications accepted HTTP requests. One of them was completely stateless (proxy-like) and the other had to keep some session-specific information. For the latter, we had to make sure that all the objects stored in the session were serializable.

Categories : Spring

Tomcat load-balancing with mod_jk
Try this:

JkMount /dept1/* tomcatbase
JkMount /dept2/* tomcat1

The first parameter of the JkMount directive is a URL prefix, not a local path:

JkMount [URL prefix] [Worker name]

Please see http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html for more details.

Categories : Apache

mp4 player is not working in FF while using load-balancing
I tried the HTML5 <video> tag with Flash fallback support. The code is:

<source src="https://#CGI.SERVER_NAME#/video.mp4" type="video/mp4">
<source src="https://#CGI.SERVER_NAME#/video.webm" type="video/webm">
<!--- Flash player code to play video in browsers without HTML5 support --->
<object type="application/x-shockwave-flash" data="http://releases.flowplayer.org/swf/flowplayer-3.2.1.swf" width="1000" height="600">
  <param name="movie" value="http://releases.flowplayer.org/swf/flowplayer-3.2.1.swf">
  <param name="allowFullScreen" value="true">
  <param name="wmode" value="transparent">
  <param name="flashVars" value="config={'playlist':[{'url'

Categories : Firefox

HTTP load balancing behavior
The behaviour depends on how the load balancer is configured, the error that you're getting from the Tomcat server, and the behaviour of your application. A load balancer will health-check the servers it is monitoring periodically (every few seconds), so it is entirely possible for a single server to crash between user requests and be noticed by the load balancer. That server is then taken out of the group, and when the user next refreshes they are directed to one of the remaining servers with no idea that anything went wrong in the middle. This depends on your application being stateless, however. If any state is being stored on the single server (which is implied by the use of sticky sessions), then when the user is redirected to another server they may get a session timeout or other error.

Categories : Tomcat

Hashing strings for load balancing
You might consider something simple, like the adler32() algorithm, and just take the checksum modulo the number of buckets:

import zlib

buf = 'arbitrary and unknown string'  # on Python 3 this must be bytes, e.g. b'...'
bucket = zlib.adler32(buf) % 30
# at this point bucket is in the range 0 - 29

Categories : Python

How to lock an object when using load balancing
This is a tricky problem - you need a distributed lock, or some sort of shared state. Since you already have the database, you could move away from a static C# lock and instead let the database manage the lock for you over the whole "transaction". You don't say what database you are using, but if it's SQL Server then you can use an application lock to achieve this. It lets you explicitly "lock" an object, and all other clients will wait until that object is unlocked. Check out: http://technet.microsoft.com/en-us/library/ms189823.aspx

I've coded up an example implementation below. Start two instances to test it out.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

namespace ConsoleApplication1
{
    class Program
    {

Categories : C#

Tomcat load balancing in Azure
An approach would be to host memcached on the servers, or use Windows Azure Caching, and leverage memcached-session-manager to share session data between the Tomcat servers. "memcached-session-manager is a tomcat session manager that keeps sessions in memcached, for highly available, scalable and fault tolerant web applications. It supports both sticky and non-sticky configurations, and is currently working with tomcat 6.x and 7.x. For sticky sessions, session failover (tomcat crash) is supported; for non-sticky sessions this is the default (a session is served by default by different tomcats for different requests). Memcached failover (memcached crash) is also supported via migration of sessions. There shall also be no single point of failure, so when a memcached fails the session will not be lost."

Categories : Tomcat

server load balancing script
You can look at Monit for system management. You can add this to your configuration file:

# Monitoring the apache2 web service.
# It will check the apache2 process using the given pid file.
# If the process name or pidfile path is wrong, monit will
# report a failure even though apache2 is running.
check process apache2 with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    # The admin will be notified by mail if any of the conditions below are met.
    if cpu is greater than 60% for 2 cycles then alert
    if cpu > 80% for 5 cycles then restart
    if totalmem > 200.0 MB for 5 cycles then restart
    if children > 250 then restart
    if loadavg(5min) greater than 10 for 8 cycles then stop

Categories : Ubuntu

Load balancing web sockets and long-polling
As described in the basic description of the WebSocket protocol on Wikipedia, after a handshake is done over HTTP, the rest of the communication is performed over a persistent TCP connection to that exact server. So there will be no "requests" in the usual HTTP sense, just two endpoints (server and client) exchanging data. That's why load balancing in the usual sense is not possible (and usually not needed, as a server can handle quite a lot of simultaneous TCP connections at once - there is a good discussion on this point here). And that's why data sent via the websocket will always be received by the correct server machine (the one that is expecting it). If you really need this, you could use an L3 load balancer as described in the post mentioned above.

Categories : Http

how to write a powershell script for load balancing using NLB
You don't mention what load balancing technology you are using, so I am going to assume Windows Network Load Balancing (NLB). There is a collection of cmdlets that can help you control the load balancing state of your server; they are described here: Network Load Balancing cmdlets in Windows PowerShell. To use the cmdlets you'll need to first import the module:

Import-Module NetworkLoadBalancingClusters

Then you can see all the commands available to you:

Get-Command -Module NetworkLoadBalancingClusters

Categories : Powershell

How to Handle return session in load balancing
What you have described (and illustrated in your deployment diagram) seems overly complicated, but I will not pretend to know your ultimate use-case for this implementation. Also, you did not mention what you are currently using for the Load Balancer (HAproxy, Amazon ELB, F5, etc) but I would look into configuring "sticky sessions" on the load balancer. Sticky sessions will ensure that particular clients with a specific session are balanced/routed to the proper application server. Hope this helps!

Categories : Linux

Load balancing Apache with 2 machines total
As far as I am aware, there is no master/slave configuration for web servers, unlike DNS servers. You can reduce the load on each machine by splitting the websites equally between them; that means migrating half of the websites to the new server to take load off the old one. To have one act as a backup if the other fails, you can clone the old server onto the new one, create two custom name servers pointing at the new server's IP, and add those name servers to all the domains at the corresponding registrar. That way, if one server fails, the websites will load from the other. :)

Categories : Apache

Configuration of Apache and Tomcat for load balancing
Maybe try Tomcat session replication. I found this interesting post: http://www.bradchen.com/blog/2012/12/tomcat-auto-failover-using-apache-memcached . You could also try Redis: http://shivganesh.com/2013/08/15/setup-redis-session-store-apache-tomcat-7/ Please let me know how it works out for you.

Categories : Apache

Apache HTTP load balancing based on URL pattern
I'm not aware of such a load-balancing algorithm being available out of the box. However, keep in mind that the most common load-balancing setup (especially when you have server-side state, as you obviously have) is a sticky session: you're only balancing the initial request, and after that all requests are typically directed to the same server. I typically recommend against distributing the session data, as it adds an often unnecessary performance hit to each request, negating the improved performance that you can get with clustering. This is subject to change in actual installations, though, and is just a first rule of thumb. You might be able to create your own load-balancing algorithm with mod_proxy_balancer (you'll need to configure the algorithm in the config file), but I believe your time is better spent elsewhere.

Categories : Apache

Why doesn't Nginx load balancing balance bandwidth?
It sounds like the configuration is doing exactly what you asked it to do. You configured a proxy on the first server's IP, right? So data has to go from the user to the proxy, then to the backend server, and the reply from the server goes back through the proxy and then to the user. The traffic on the first server is triple because it sees three flows (both servers' output through the proxy plus the second server's input to the proxy), while the second server sees only one (its output to the proxy). The load balancer is perfectly balancing the traffic into equal flows; the first server just sees three of them and the second just one. As for how you fix it, that depends on what's wrong with it and what you're trying to accomplish, which you haven't told us.

Categories : Nginx

Apache Load Balancing Algorithm Wrong Behaviour
Just because a single server can process all the requests doesn't mean it should. If one server processed most of the requests, it would wear out and fail much faster than the others, which is generally not a desired outcome. Such a strategy also carries reliability risks: for example, "server 3" could be misconfigured without anyone noticing, right up until the system load rises high enough to bring it into play, and then it will fail at exactly the critical moment.

Categories : Algorithm

MongoDB load balancing and failover of query routers
Some say this isn't relevant; I don't think that's true, and you are right to be concerned. Failover of mongos instances is very important: without an architecture that deals with this you could have a serious failure in your app, and it also breaks the high availability of MongoDB. What happens if the query router dies? This is where you should really be putting a seed list into the connection string in your driver; the driver will then do something along the lines of what it does with replicas and try to connect to other members on the list to resume as normal. Is there an intended way for automatic failover to a second query router? Provided you supply more mongos instance IPs in your application's configuration, this should be pretty automated. Or for load balancing between two query routers?
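As a minimal sketch of that seed list with the MongoDB Java driver (host names, port and database name are hypothetical), the driver fails over between the listed mongos routers on its own:

import java.util.Arrays;

import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;

public class MongosSeedList {
    public static void main(String[] args) {
        // List every mongos the application may use; the driver picks a live one
        // and moves to another member of the list if it goes away.
        MongoClient client = new MongoClient(Arrays.asList(
                new ServerAddress("mongos1.example.com", 27017),
                new ServerAddress("mongos2.example.com", 27017)));

        System.out.println(client.getDB("mydb").getName());  // "mydb" is a placeholder
        client.close();
    }
}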

Categories : Mongodb

How is load balancing achieved while sending data to the reducers in Hadoop
How is the data split to be transferred to the reducers, i.e. how is the partition size decided, and by what process, given that the data is transferred using a pull mechanism instead of a push mechanism? An interesting challenge here would be determining the overall size of the data, since it resides on multiple nodes (I am guessing that the job tracker/master process may be aware of the size and location of the data on all the nodes, but I am not sure about that either). Splitting of data into partitions is governed by the logic written inside getPartition(KEY k, VALUE v, int numOfReducers) in the Partitioner abstract class. The default Hadoop partitioner is the HashPartitioner; its behavior is to take the Object.hashCode() of the key and reduce it modulo the number of reducers.
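A small sketch of that default behaviour, written out as a custom Partitioner (the Text/IntWritable key and value types are just an example):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Same logic as the default HashPartitioner: clear the sign bit of hashCode()
// and take the remainder modulo the number of reducers, so every occurrence of
// the same key always lands on the same reducer.
public class KeyHashPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

It would be registered on the job with job.setPartitionerClass(KeyHashPartitioner.class).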

Categories : Sorting

load balancing IBM Liberty profile with apache http server
Currently, you'll have to generate a plugin-cfg.xml from each Liberty server (the license has info about how many servers you can aggregate in this way for load balancing and failover) and merge the results to make them appear like a single cluster to the WAS Plugin. Other editions provide a merge tool, if you have access to them. The WAS Plugin installation has an XSD file for plugin-cfg.xml.

1) Note the http and https transports in both plugin configurations.
2) Make a copy of one of the XMLs to edit.
3) Find the <ServerCluster> element inside <Config>, e.g.:

<ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="cluster1" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60" ServerIOTimeoutRetr

Categories : Apache

installing php and apache on Amazon EC2 instances
Have you read the error message? Read it again:

Error: httpd24-tools conflicts with httpd-tools-2.2.25-1.0.amzn1.x86_64
Error: php54-common conflicts with php-common-5.3.27-1.0.amzn1.x86_64

You are trying to install httpd 2.4 when you appear to have httpd 2.2 installed, and the same with PHP: you have PHP 5.3 installed and you are trying to install 5.4. A simple way to confirm this is to type the following into bash:

php -v
httpd -V

If you want to install the newer versions, remove the older ones first:

yum remove httpd-tools-2.2.25-1.0.amzn1.x86_64 php-common-5.3.27-1.0.amzn1.x86_64

Categories : PHP

node.js performance for amazon EC2 instances with ECU's
Amazon ECUs are virtual cores: the instances are basically virtual servers carved out of real Xeon cores, so a box with two real cores will perform better than the equivalent number of ECUs, and I don't think an ECU is the same thing as a real CPU core. From my experience running Node.js on EC2 in production, I set up two things to use the full CPU power of an m1.large instance: multiple Node processes behind nginx as a reverse-proxy load balancer, i.e. (a) I start separate Node.js processes on different ports, and (b) in front of them an nginx load balancer distributes the load across the processes. I have also tried forking child processes from the main thread using the Cluster module, but I ended up with nginx because it needs less management than Cluster.

Categories : Node Js

How to estimate the number of instances in Amazon EMR?
I know if you use the CLI tool to create your Job Flow and add the steps, then you can run both of the steps one after another on the same job flow and they will be counted within the same hour. I believe if you use the GUI then you can not re-use the job flow and so you may get charged one hour for each job. I haven't tried this though so may be wrong there. Check this article which is where I got the information: https://cwiki.apache.org/confluence/display/MAHOUT/Mahout+on+Elastic+MapReduce

Categories : Hadoop

What are the sizes of cache memory in amazon ec2 instances?
You are talking about the memory available to the instance; memcached can be configured to use as much or as little memory as you want it to. If your application's caching needs are still small, you might be able to do all the caching on the application server. On a micro instance you have 613 MB of memory in total. If you want memcached to behave well, you need to keep everything it stores in RAM. Given that some of the memory is needed to run the system, you probably have only about 213 MB that you can effectively give to memcached. Use too much, and it will push some of the memory into swap and slow down the system.

Categories : Caching

do we need 3*N instances on amazon ec2 to host N mongodb shards?
There is some really good documentation on the recommended cluster setup in terms of physical instance separation. There are (at least) two things to consider separately. One is replication; for that, see this documentation: http://docs.mongodb.org/manual/core/replica-set-members/ It means you have to have at least two data nodes in a replica set (for HA) and can have one arbiter, which holds no data and just participates in elections, as described in the docs linked above. You need an odd number of set members because the primary has to be elected by a majority inside the replica set. The other aspect is sharding. Sharding needs an additional metadata layer, which is provided by additional processes: the config servers and the mongos routers.

Categories : Mongodb

Amazon S3 - hostname does not match the server certificate (OpenSSL::SSL::SSLError) + rails
The problem is the naming of the bucket, in this case myproject.de: with a dot in the name, the HTTPS hostname no longer matches Amazon's wildcard certificate for S3. I changed the name of the bucket from myproject.de to myprojectde (no dot in the name) and it works now.

Rules for bucket naming: in all regions except the US Standard region, a bucket name must comply with the following rules, which result in a DNS-compliant bucket name.

- Bucket names must be at least 3 and no more than 63 characters long.
- A bucket name must be a series of one or more labels separated by a period (.), where each label:
  - must start with a lowercase letter or a number,
  - must end with a lowercase letter or a number,
  - can contain lowercase letters, numbers and dashes.
- Bucket names must not be formatted as an IP address (e.g. 192.168.5.4).

Categories : Ruby On Rails

Amazon S3 Bucket Policy: How to lock down access to only your EC2 Instances
I have investigated this further and come up with a simpler solution for security through obscurity: I am using really long User-Agent strings as auth. This makes the bucket policy look like:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[REPLACE WITH YOUR BUCKET NAME]/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "deployPackages-4d8ae3a6-6efc-40dc-9a7c-bb55284b10cc-4d8ae3a6-6efc-40dc-9a7c-bb55284b10cc"
        }
      }
    }
  ]
}

More details: http://www.diaryofaninja.com/blog/2013/07/24/scorched-earth-deployments-on-amazon-ec2-teamcity-amp-web-deploy-ndash-part-1-amazon

Categories : Amazon

Amazon web service load balancer unable to balance the load equally
There could be a number of reasons for this, and without doing more digging it's hard to know which one you are experiencing. Sticky sessions can result in instance traffic becoming unbalanced, although this depends heavily on usage patterns and your application. Cached DNS resolution is another: part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users are all using the same DNS resolver provided by an ISP, they might all get sent to the same zone; couple this with sticky sessions and you will end up with a bunch of traffic that will never switch. Using Route 53 with ALIAS records may reduce this somewhat. If you can't get the ELB to balance your traffic better, you can set up something similar yourself with Varnish Cache or another software load balancer - not as convenient, though.

Categories : Amazon

Getting "The certificate for this server is invalid." on iPad when loading an image from Amazon S3 (HTTPS), but no errors on simulator
I was getting the same certificate error from S3, and found that adding this to the NSURLConnectionDelegate fixed the problem:

-(void)connection:(NSURLConnection *)connection willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    if ([challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]
        && [challenge.protectionSpace.host hasSuffix:@"example.com"]) {
        // accept the certificate anyway
        [challenge.sender useCredential:[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust]
             forAuthenticationChallenge:challenge];
    } else {
        [challenge.sender continueWithoutCredentialForAuthenticationChallenge:challenge];
    }
}

NOTE: You'll need to change 'example.com' to the host your images are actually served from.

Categories : Amazon

Scaling only with AWS EC2 with Elastic Load Balancing and Auto-Scaling
ELB will pass through TCP and HTTP; you just need to set the ports you want. Auto Scaling will scale the servers based on a set of parameters. So yes, you can - provided that your application can run as multiple instances of itself.

Categories : Amazon

Why is my certificate not valid unless I put the Sub CA certificate in the trusted root certificate authorities?
To elaborate on Erik's comment: trusting the Root CA certificate means that you will trust what the Root CA directly signs. If you have an intermediate Sub CA in the middle, its certificate is signed by the Root CA, and the Sub CA signs your certificate directly.

Root CA ---signs/verifies---> Sub CA ---signs/verifies---> End user certificate

As Erik said, if the Sub CA certificate is not present, there is no way to link the Root CA to the end user certificate. The Root can verify the Sub CA certificate, and the Sub CA can verify the end user certificate, but there is no way for the Root to skip over the Sub CA and verify the end user certificate, because the Root did not sign it. Two ways to resolve this are to include the Sub CA cert in your trusted intermediate certification authorities store, or to have the server present the Sub CA certificate as part of its certificate chain.

Categories : C#

how do i load a certificate into visual studio?
I tried looking up the error message on Google: http://www.1-script.com/forums/security-microsoft/issuing-code-signing-certificate-with-private-key-22023-.htm These people seem to think that you need to make a v2 certificate. Sorry, but I have no idea what any of that means; I was just bored. Have a good day.

Categories : Dotnet

Windows Phone 7 - can not load certificate in C# code
A certificate contains binary content, so you shouldn't use a StreamReader to read it (that class is meant only for text). Instead, read the content directly from the stream:

var resourceStream = Application.GetResourceStream(new Uri("myCert.der", UriKind.Relative));
var content = new byte[resourceStream.Stream.Length];
resourceStream.Stream.Read(content, 0, content.Length);
X509Certificate cert = new X509Certificate(content);

Categories : C#

Amazon EC2 Load Testing
A couple of quick points:

Set the environment up exactly as it's supposed to run. If there's a database involved, you'll want to involve that in the testing too. Synthetic CPU-based benchmarks (a page that just does <?php echo "ok"; ?>) won't help you much, since normally very little of the time spent replying to HTTP requests is actual CPU time.

A recommendation is to use a service for the benchmarking. Setting up load testing is not without its complexities, and unless you consider benchmarking your core business, you're probably better off using something like Neustar to load and measure your site (there are many such services; that one is not necessarily the best fit for you, I just pulled one from memory). Of course you can set a load test up yourself, but getting it right is not something that can be described in a short answer.

Categories : Testing

Secure Transport: Load server certificate from file
If both files can be combined and converted into PKCS#12 format, then the SecPKCS12Import method can be used. But SecPKCS12Import does not work properly in a root context; I do not know the reason for this misbehaviour.

OSStatus extractIdentityAndTrust(CFDataRef inPKCS12Data, SecIdentityRef *outIdentity,
                                 SecTrustRef *outTrust, CFStringRef keyPassword)
{
    OSStatus securityError = errSecSuccess;

    const void *keys[]   = { kSecImportExportPassphrase };
    const void *values[] = { keyPassword };
    CFDictionaryRef optionsDictionary =
        CFDictionaryCreate(NULL, keys, values, (keyPassword ? 1 : 0), NULL, NULL);

    // Import the PKCS#12 blob and pull the identity and trust objects out of the result.
    CFArrayRef items = NULL;
    securityError = SecPKCS12Import(inPKCS12Data, optionsDictionary, &items);
    if (securityError == errSecSuccess) {
        CFDictionaryRef identityAndTrust = CFArrayGetValueAtIndex(items, 0);
        *outIdentity = (SecIdentityRef)CFDictionaryGetValue(identityAndTrust, kSecImportItemIdentity);
        *outTrust    = (SecTrustRef)CFDictionaryGetValue(identityAndTrust, kSecImportItemTrust);
    }

    if (optionsDictionary) {
        CFRelease(optionsDictionary);
    }
    return securityError;
}

Categories : C++


