Amazon web service load balancer unable to balance the load equally
There could be a number of reasons for this, and without doing more digging it's hard to know which one you are experiencing. Sticky sessions can cause instance traffic to become unbalanced, although this depends heavily on usage patterns and your application. Cached DNS resolution is another: part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users are all using the same DNS resolver provided by an ISP, they might all get sent to the same zone. Couple this with sticky sessions and you will end up with a bunch of traffic that will never switch. Using Route 53 with ALIAS records may reduce this somewhat. If you can't get the ELB to balance your traffic better, you can set up something similar with Varnish Cache or another software load balancer.

Categories : Amazon

Amazon Elastic Load Balancer (ELB) url not resolved by instance attached to it
This is normal, if I correctly understand your testing framework. The way ELB scales, it starts out running on a very small machine and, as traffic increases, is moved to larger and larger machines. However, ELB is not built to handle flash traffic, especially from a small number of hosts, as is the case in a load-testing scenario. This is because the DNS record is changed whenever ELB scales, and it sometimes takes a while to propagate; load-testing frameworks sometimes cache the DNS lookup, making things even slower. The official ELB documentation (http://aws.amazon.com/articles/1636185810492479) states that traffic should not be increased by more than 50% every 5 minutes. I found that scaling takes even longer if you're looking to get over 150-200k RPM.

Categories : Wcf

how to configure Elastic Beanstalk to deploy code to an instance but not add it to the load balancer
For the most part, though not straightforward, you can provide a .config file in .ebextensions to run your script files. This example of speeding up a deploy shows running some scripts and moving data back and forth; better yet, the author describes the sequence and deployment process. I'm just embarking on this type of container customization. I have read of others dropping files in the /opt/elasticbeanstalk/hooks/appdeploy/pre and /opt/elasticbeanstalk/hooks/appdeploy/post directories, much of which can be derived by reading the post linked above. Also note that you can include the content of a script in the YAML .config file, such as this, which I found yesterday:

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_delayed_job.sh":
        mode: "000755"
        owner: root
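For reference, a complete version of such a hook might look like the sketch below, assuming a Rails app whose delayed_job worker should be restarted after each deploy; the group, app path, and restart command are illustrative:

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_delayed_job.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
          # runs automatically after the application is deployed
          cd /var/app/current
          bin/delayed_job restart

Because the file lands in the post directory, Elastic Beanstalk runs it automatically at the end of every application deployment.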

Categories : Ruby On Rails

how to internally load balance multiple instances of web/worker roles in Azure?
There is no internal load balancer in Windows Azure. The only load balancer is the one that has the public IP addresses. If you want to load balance only internal addresses (workers), you have to maintain it yourself, meaning you have to install some kind of load balancer on an Azure VM that is part of the same VNet. That load balancer may be of your choice (Windows or Linux). You also have to implement a watchdog service for when the topology changes - i.e. workers being recycled, hardware failures, scaling events. I would not recommend this approach unless it is absolutely necessary. The last option is to keep a (cached) pool of IP endpoints of all the workers and randomly choose one when you need it.

Categories : Asp Net

Why does less css behave differently when served over port 80 than other ports?
For anyone else interested... less.js runs in development mode if the request is on any port other than 80. In development mode the generated CSS is put in the standard browser cache, as you'd expect. In non-development mode, the CSS is put in a secret mystical cache that will not be affected by Ctrl-R, Shift-F5, etc.

Categories : CSS

Free PaaS without / with less port forwarding to bind to custom ports?
I have found one way! That is to use the DIY (Do It Yourself) cartridge in OpenShift, install Python, and run WebSockets. Of course, this still means that the transmissions have to be over HTTP. The other option is to move to IaaS (Infrastructure as a Service) rather than PaaS.

Categories : Python

Change Apache2 ports.conf so that the localhost interface listens on port 80
Have you symlinked /etc/apache2/sites-available/foo.com to /etc/apache2/sites-enabled/000-foo.com? The default Apache configuration only reads virtual hosts from the sites-enabled directory, not sites-available. I am guessing that if you go to /etc/apache2/sites-enabled and type ls -la you will only see a configuration for the default site and not your new site. If this is the case, then symlink your foo.com into the sites-enabled directory as follows:

    ln -s ../sites-available/foo.com 001-foo.com

Categories : Apache

What to care about when using a load balancer?
The biggest issue that you are going to run into is related to PHP sessions. By default, PHP sessions maintain state with a single server. When you add the second server into the mix and start load balancing connections to both of them, a PHP session created on one server will not be valid on the second server that gets hit. Load balancers like HAProxy expect a "stateless" application. To make PHP stateless you will more than likely need to use a different mechanism for your sessions. If you do not/cannot make your application stateless, then you can configure HAProxy to do sticky sessions, either off of cookies or via stick tables (source IP, etc.). The next thing that you will run into is that you will lose the original requestor's IP address, because HAProxy (the load balancer) terminates the client connection and opens its own connection to your backend, so your application sees the load balancer's address instead.
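A sketch of an HAProxy backend that covers both points - cookie-based sticky sessions and passing the original client address along in an X-Forwarded-For header; the server names and addresses are illustrative:

    backend php_servers
        balance roundrobin
        # pass the original client address to PHP in X-Forwarded-For
        option forwardfor
        # HAProxy inserts its own cookie so a client keeps hitting the same server
        cookie SRVID insert indirect nocache
        server web1 10.0.0.11:80 check cookie web1
        server web2 10.0.0.12:80 check cookie web2

On the PHP side you would then read HTTP_X_FORWARDED_FOR instead of REMOTE_ADDR when you need the real client IP.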

Categories : PHP

load balancer won't remove itself from dns
When I nslookup your domain I get app-lb-west-650828891.us-west-2.elb.amazonaws.com:

    DNS server handling your query: localhost
    DNS server's address: 127.0.0.1#53

    Non-authoritative answer:
    Name: codepen.io
    Address: 54.245.121.59

It is possible that the DNS change just needed a little time to propagate.

Categories : Amazon

Camel and load balancer
If you need that setup so that each server might receive the same record, then you need an idempotent route, and you need to make sure your idempotent repository is shared between your machines. Using a database as the repository is an easy option. If you do not have a database, a Hazelcast repository might be an option. What can be an issue is determining what is unique in your records - such as an order number, customer + date/time, or some increasing transaction ID number.

Categories : Apache

WCF security with load balancer
The client and server bindings will be different. The client binding will use username authentication at either the message or transport level, with transport security (SSL):

    <bindings>
      <basicHttpBinding>
        <binding name="NewBinding0">
          <security mode="Message" />
        </binding>
      </basicHttpBinding>
    </bindings>

The server config will then use the same configuration but without the transport security. If you choose to use message security, check out WCF ClearUsernameBinding. If you use transport security (basic HTTP), set mode="TransportCredentialOnly".
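A sketch of what the server-side binding behind the SSL-terminating load balancer could look like with the transport option; the binding name and credential type are illustrative:

    <bindings>
      <basicHttpBinding>
        <binding name="NewBinding0">
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Basic" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>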

Categories : Wcf

Using Laravel behind a load balancer
We use a load balancer where I work, and I ran into similar problems accessing cPanel dashboards: the page would just reload every time I tried accessing a section and log me off, because my IP address appeared to change from their side. The solution was to find which port cPanel was using and configure the load balancer to bind that port to one WAN. Sorry, I am not familiar with Laravel, and if it is just using port 80 then this might not be a solution.

Categories : PHP

Load Balance wso2 ESB
You can add the following parameter to the http and https transport receivers of WSO2 ESB:

    <parameter name="WSDLEPRPrefix" locked="false">[load-balancer-url]</parameter>

For example:

    <parameter name="WSDLEPRPrefix" locked="false">http://esb.cloud-test.wso2.com:8280</parameter>

You need to edit the following file:

    <WSO2-ESB-HOME>/repository/conf/axis2/axis2.xml

This step is also necessary when configuring WSO2 ELB. See the following ELB doc for more information: http://docs.wso2.org/wiki/pages/viewpage.action?pageId=26839403

Categories : Wso2

Configuring Apache Load Balancer
Make sure you follow the advice in the stickiness section:

    ProxyPass / balancer://mycluster stickysession=JSESSIONID|jsessionid scolonpathdelim=On

(not only for the /test directory). Furthermore, for the JBoss application server, you need to supply route=web1 / route=web2 etc. in the Apache config, and jvmRoute="web1" in the JBoss configuration of the <Engine name="jboss.web" ...> element (the location depends on the JBoss version you are using; for v4.2 it is server/default/deploy/jboss-web.deployer/server.xml). See also this tutorial.
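Putting those pieces together, a sketch of the Apache side might look like this; the worker hostnames and the AJP port are illustrative:

    <Proxy balancer://mycluster>
        # each route value must match that node's jvmRoute in server.xml
        BalancerMember ajp://node1.example.com:8009 route=web1
        BalancerMember ajp://node2.example.com:8009 route=web2
    </Proxy>
    ProxyPass / balancer://mycluster stickysession=JSESSIONID|jsessionid scolonpathdelim=On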

Categories : Apache

aws elastic load balancer not distributing
You need to have the Route 53 domain direct traffic to the ELB. If you have example.com and are trying to route it to the load balancer, you need to associate the zone apex with the load balancer. To do this, go to the Route 53 tab, click your hosted zone, and go to record sets. Then create a new record set and choose Yes for Alias, and associate the alias with your ELB. Now, to get the traffic to fail over correctly, you need to be running both instances behind the load balancer (preferably in multiple availability zones) and the ELB will take care of the failover. To do this, go to the ELB section of EC2, click your load balancer, and add the instances to it.
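The same alias record can also be created from the AWS CLI; a sketch, where the hosted zone IDs, ELB DNS name, and domain are illustrative:

    aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com.",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "ZELBEXAMPLE",
            "DNSName": "my-elb-1234567890.us-west-2.elb.amazonaws.com.",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'

Note that the AliasTarget hosted zone ID is the ELB's own canonical hosted zone ID (shown on the load balancer's description tab), not the ID of your hosted zone.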

Categories : Amazon

Load Balancer between 5 network cards
There are several ways to do this, and all are relatively easy. A VERY simple solution is to bind MSSQL to all 5 interfaces and give each interface a different network address. You can then configure some clients to point to one interface, others to the next, and so on. Depending on your network infrastructure you can also "bond" your network interfaces together so that they act like one interface to the OS. If all of the interfaces are plugged into a single switch, then bonding is an option. If they are plugged into two different switches, then your switches will have to support LACP or something similar. You can also look at putting a load balancer in front of your SQL Server, though this could be problematic depending on your database, replication, sticky sessions, etc.

Categories : Sql Server

Can you open access to UDP ports on an EC2 instance with Python?
Yes, you can. The specific AWS EC2 API actions are AuthorizeSecurityGroupIngress and RevokeSecurityGroupIngress. In boto, they map to the authorize and revoke methods of a boto.ec2.securitygroup.SecurityGroup instance. If you're in a VPC there are corresponding egress methods, too.
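A minimal sketch using the classic boto library; the region, security group name, and port are illustrative:

    import boto.ec2

    # connect to the region your instance lives in
    conn = boto.ec2.connect_to_region('us-east-1')
    group = conn.get_all_security_groups(groupnames=['my-security-group'])[0]

    # open UDP port 5000 to the world (AuthorizeSecurityGroupIngress)
    group.authorize(ip_protocol='udp', from_port=5000, to_port=5000, cidr_ip='0.0.0.0/0')

    # and later close it again (RevokeSecurityGroupIngress)
    group.revoke(ip_protocol='udp', from_port=5000, to_port=5000, cidr_ip='0.0.0.0/0')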

Categories : Python

AWS : S3FS AMI and load balancer high I/O Issue
I would like to recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an "s3fs" alternative; the main advantages compared to "s3fs" are simplicity, speed of operations, and bug-free code. Currently the project is in the "testing" state, but it's been running on several high-loaded fileservers for quite some time. We are looking for more people to join the project and help with the testing. From our side we offer quick bug fixes and will listen to your requests for new features. Regarding your issue: I'm not quite sure how S3FS works with cached files, but in our project we try to avoid performing additional I/O operations. Please give it a try and let me know how RioFS works for you!

Categories : Amazon

Setting a trace id in nginx load balancer
In our production environment we do have a custom module like this. It can generate a unique trace id and then it will be pushed into http headers which send to the upstream server. The upstream server will check if the certain field is set, it will get the value and write it to access_log, thus, we can trace the request. And I find an 3rd party module just looks the same: nginx-operationid, hope it is helpful.
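If a custom module is not an option, a sketch without one, assuming nginx 1.11.0 or newer, which provides the built-in $request_id variable; the header name, log format, and backend address are illustrative (all inside the http block):

    log_format trace '$remote_addr [$time_local] "$request" $status $request_id';

    server {
        listen 80;
        location / {
            # forward the generated id so the app server can log it as well
            proxy_set_header X-Request-Id $request_id;
            proxy_pass http://127.0.0.1:8080;
            access_log /var/log/nginx/access_trace.log trace;
        }
    }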

Categories : Nginx

Can I use Amazon ELB instead of nginx as load balancer for my Node.Js app?
Yes, but there are a few gotchas to keep in mind:

- If you have a single server, ensure you don't return anything except 200 to the page that ELB uses to check health. We had a 301 from our non-www to www site, and because of that ELB stopped sending anything to our server.
- You'll get the ELB's IP instead of the client's in your logs. There is an ngx_real_ip module, but it takes some config hacking to get it to work.
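For the second point, a sketch of restoring the client address from the ELB's X-Forwarded-For header with the standard ngx_http_realip_module; the trusted range and backend port are illustrative and should match where your ELB connects from:

    server {
        listen 80;
        # trust X-Forwarded-For only when the connection comes from the ELB / VPC range
        set_real_ip_from 10.0.0.0/8;
        real_ip_header X-Forwarded-For;

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }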

Categories : Node Js

Load Balance: Node.js - Socket.io - Redis
Redis only syncs from the master to the slaves; it never syncs from the slaves to the master. So if you're writing to all 3 of your machines, the only messages that wind up synced across all three servers will be the ones hitting the master. This is why it looks like you're missing messages. More info here.

Read only slave

Since Redis 2.6 slaves support a read-only mode that is enabled by default. This behavior is controlled by the slave-read-only option in the redis.conf file, and can be enabled and disabled at runtime using CONFIG SET. Read only slaves will reject all the write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or…
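If the extra machines should act as replicas rather than independent masters, the relevant redis.conf lines on each slave are a sketch like this (the master address is illustrative); all Socket.io publishers should then point at the master:

    # on each slave's redis.conf
    slaveof 10.0.0.1 6379
    slave-read-only yes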

Categories : Node Js

How to get the actual URL in case of load balancer proxy server
I'm assuming you are using mod_proxy (mod_jk/mod_ajp preserve the original host). Retrieve the "X-Forwarded-Host" header from the request; it contains the original host requested by the client. See http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
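A minimal sketch of reading it in a servlet, falling back to the Host header when the request did not come through the proxy; the class and method names are illustrative:

    import javax.servlet.http.HttpServletRequest;

    public final class ForwardedHostUtil {
        // Returns the host the client originally requested, assuming the proxy
        // in front sets X-Forwarded-Host; otherwise falls back to the Host header.
        public static String originalHost(HttpServletRequest request) {
            String forwarded = request.getHeader("X-Forwarded-Host");
            return (forwarded != null && !forwarded.isEmpty())
                    ? forwarded
                    : request.getHeader("Host");
        }
    }

Note that X-Forwarded-Host can contain a comma-separated list when several proxies are chained; in that case take the first entry.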

Categories : Java

Azure Traffic Manager Load Balance Options
Traffic Manager comes into the picture only when your application is deployed across multiple cloud services, whether within the same data center or in different data centers. If your application is hosted in a single cloud service (with multiple instances, of course), then the instances are load balanced in a round-robin pattern. This is the default load balancing behaviour and comes at no extra charge. You can read more about Traffic Manager here: https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-overview/

Categories : Azure

Why doesn't Nginx load balancing balance bandwidth?
It sounds like the configuration is doing exactly what you asked it to do. You configured a proxy on the first server's IP, right? So data has to go from the user to the proxy, then to a server, then the reply from the server back to the proxy and then to the user. The bandwidth is triple because the first server sees three flows (both servers' output from the proxy and the second server's input to the proxy), while the second server sees one (its output to the proxy). The traffic is being perfectly balanced into equal flows; the first server just carries three of them and the second only one. As for how you fix it, that depends on what's wrong with it and what you're trying to accomplish, which you haven't told us.

Categories : Nginx

Azure windows virtual machine load balance
You need both virtual machines to be under the same cloud service; only then do you get the option to load balance them. There is no way to add an existing VM to it, but there are operations in the Service Management API (usable through PowerShell) to create a new VM. You can use those to create a fresh VM from your existing image and connect it to the same service as your first VM. Then you'll have the necessary options enabled for load balancing.

Categories : Azure

Secure a specific page by client IP behind AWS Elastic Load Balancer
The client IP is actually being passed via a header (X-Forwarded-For). This header may include other load balancer IPs in addition to the client IP. If you can configure filtering based on headers, you should be able to do what you are attempting.
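A sketch of how the check could look in ASP.NET, for example inside the page's Page_Load or an HttpModule; the allowed address is illustrative, and the left-most X-Forwarded-For entry is taken as the original client:

    protected void Page_Load(object sender, EventArgs e)
    {
        string forwardedFor = Request.Headers["X-Forwarded-For"];
        // behind the ELB the real client is the first (left-most) entry in the list
        string clientIp = string.IsNullOrEmpty(forwardedFor)
            ? Request.UserHostAddress
            : forwardedFor.Split(',')[0].Trim();

        if (clientIp != "203.0.113.10")   // the one address allowed to reach this page
        {
            Response.StatusCode = 403;
            Response.End();
        }
    }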

Categories : Asp Net

What method(s) do I use to configure/update an Elastic Load Balancer via Java?
My bad. Forgot to execute the requests against the ELB variable. The code below creates the health check and assigns the instances associated with the ELB. Hope this helps the next person asking this question.

    ConfigureHealthCheckResult healthResult =
        myELB.configureHealthCheck(healthCheckReq);
    RegisterInstancesWithLoadBalancerResult registerResult =
        myELB.registerInstancesWithLoadBalancer(regInst);

Categories : Java

Why am I getting a 502 bad gateway when sending a redirect from behind amazon elastic load balancer?
It turns out that ELB is very picky about what it considers a 'valid' response and will return 502 Bad Gateway if it isn't happy. I fixed it by making sure the response from my server had the following headers; e.g., if I was listening at http://example.com, I sent back the following response:

    HTTP/1.1 301 Moved Permanently
    Content-Type: */*; charset="UTF-8"
    Location: https://example.com/
    Content-Length: 0

This makes ELB happy and everything works. For interest, here's the code (Java, using the Simple framework):

    private static void startHttpsRedirector() throws IOException {
        org.simpleframework.http.core.Container container = new org.simpleframework.http.core.Container() {
            @Override
            public void handle(Request request, Response response) {
                Path path = request.getPath();

Categories : Http

Does SQL Server Service Broker load balance when External Activator is used?
To utilise the built-in load balancing you would need to deploy the service to more than one SQL Server instance. I suspect that isn't quite what you are planning, so you will have to come up with a custom method, such as having an internally activated procedure that forwards the arriving messages into one of your four queues that the external activation processes watch.

Categories : Sql Server

How to make restfull service truely Highly Available with Hardware load balancer
70-80 servers is a very horizontally scaled implementation; good job! "Better" is a very relative term, so hopefully one of these suggestions counts as better. Implement an intelligent health check for the application, with the ability to adjust it while the application is running. What we do is have the health check start failing while the application is still running fine; this allows the load balancer to automatically take the system out of rotation. Our stop scripts query the load balancer to make sure the server is out of rotation and then shut down normally, which allows the existing connections to drain. Batch multiple groups of systems together: I am assuming that you have 70 servers to handle peak load, which means you should be able to restart several at a time.

Categories : Rest

Running nginx infront of a unicorn or gunicorn under Elastic Load Balancer
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore it allows you to serve static files in the fastest possible way, rather than using a slot on your more heavyweight appserver.

Categories : Nginx

How to simulate Amazon's 60 sec timeout on Elastic Load Balancer (ELB) on a local xAMP environment?
This timeout has to do with a request taking more than 60 seconds without sending any data as a response. Long HTTP requests like this should really be avoided; even browsers may give up if they don't receive any response within a timeout period. If you can't get rid of long HTTP requests and you need to simulate the behaviour, you can use a FastCGI/php-fpm configuration of PHP and give the FastCGI proxy a 60-second timeout to wait for PHP to respond.
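One way to do that locally, assuming Apache 2.4 with mod_proxy_fcgi in front of php-fpm; the backend address is illustrative:

    # hand .php files to php-fpm over FastCGI
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://127.0.0.1:9000"
    </FilesMatch>

    # give up after 60 seconds with no response, mimicking the ELB behaviour
    ProxyTimeout 60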

Categories : Apache

How does one setup two non-load-balanced VM web servers in Azure, capable of communicating on private ports?
A single cloud service is a security boundary, and the only way into it is through a public (input) endpoint on the unique public VIP of the service. A Virtual Network (VNET) can be used to host multiple cloud services and allow private visibility among them without going through a public endpoint. A typical model would be to put an IIS website in a PaaS cloud service with a public VIP, and the backend SQL Server in an IaaS cloud service with a public VIP but NO public endpoints declared on it. Both these cloud services would be hosted in the same VNET, which allows the front-end web role instances access to the backend SQL Server instance over the private VNET. There is a hands-on lab in the Windows Azure Training Kit that describes precisely how to implement this.

Categories : Azure

Multiple ports and threading
My question is: should I use one thread to receive data from all the different ports, or should I create a separate thread for every port, each timed to run at a 100 ms interval? What is good practice in these cases? It does not really matter that much. If you create one thread, you will have to keep track of the different ports; if you create multiple threads, you have to keep track of all those threads. Since CPUs usually have multiple cores nowadays, I would go for multiple threads. As for the 100 ms timer interval, you can create one timer that loops through all threads and collects data from them. Make sure you lock it, so that if the timer elapses while the previous event is still busy collecting data, the two don't interfere with each other.

Categories : Java

nginx as load balancer server out 404 page based on HTTP response from app server
Simply set the proxy_intercept_errors option to on and create an nginx error page for it:

    error_page 404 /404.html;
    proxy_intercept_errors on;

To ensure that nginx serves the static file from its document root, you also have to specify the following:

    location /404.html {
        internal;
    }

I'm assuming here that nginx is configured to talk to your app servers as a proxy, because that is the usual way and your question does not mention any of this.

Categories : Nginx

Linux TCP Server - listen on multiple ports in C++
Roughly, here are the steps: you can have a separate TCP server socket listening on each port. Then use select() and pass it the file descriptors of all these server sockets. If a connection arrives on any of them, select() returns a read event and marks the fd of the server socket that has the pending connection; you then call accept() on that server fd. You cannot make a single TCP socket listen on multiple ports.
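A minimal sketch of that loop; the ports are illustrative and error handling is abbreviated:

    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>
    #include <vector>
    #include <algorithm>

    // create a listening TCP socket bound to the given port on all interfaces
    int make_listener(int port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
        sockaddr_in addr;
        std::memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(port);
        bind(fd, (sockaddr*)&addr, sizeof(addr));
        listen(fd, 16);
        return fd;
    }

    int main() {
        std::vector<int> listeners = { make_listener(8081), make_listener(8082) };

        while (true) {
            fd_set readfds;
            FD_ZERO(&readfds);
            int maxfd = 0;
            for (int fd : listeners) {
                FD_SET(fd, &readfds);
                maxfd = std::max(maxfd, fd);
            }

            // block until one of the server sockets has a pending connection
            if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) < 0)
                break;

            for (int fd : listeners) {
                if (FD_ISSET(fd, &readfds)) {
                    int client = accept(fd, nullptr, nullptr);
                    std::printf("accepted a connection on listener fd %d\n", fd);
                    close(client);  // a real server would hand this off for I/O
                }
            }
        }
        return 0;
    }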

Categories : C++

Multiple Serial Ports in C# / Trouble using List <>
The simplest approach would be to use a lambda expression, which captures the port you're using. A lambda expression is a way of building a delegate "inline", and one which is able to use the local variables from the method you declare it in. For example:

    foreach (var port in portNames)
    {
        // Object initializer to simplify setting properties
        SerialPort sp = new SerialPort(port, 19200, Parity.None, 8, StopBits.One)
        {
            Handshake = Handshake.None,
            ReadTimeout = 500,
            WriteTimeout = 500
        };
        sp.DataReceived += (sender, args) =>
        {
            Thread.Sleep(500); // Not sure you need this...
            string data = sp.ReadLine();
            Action action = () =>
            {
                MessageBox.Show(data.Trim());
                sp.Close();
            };
            BeginInvoke(action);
        };
    }

Categories : C#

How to open a web server port on EC2 instance
Follow the steps that are described in this answer; just instead of using the drop-down, type the port (8787) into "Port range" and then click "Add Rule".
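The same rule can be added from the command line; a sketch, assuming the AWS CLI is configured and with an illustrative security group ID and CIDR:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 8787 \
        --cidr 0.0.0.0/0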

Categories : Amazon

Joining Multiple Multicast Groups With 1 Socket but Different Ports?
I found my answer: Receiving multicast data from different groups on the same socket in linux I can't delete my post...so I guess I have to answer myself :(

Categories : C

Elastic Beanstalk's Elastic Load Balancer name
Unfortunately the answer is no to the first two. The last one you are able to do, but it more or less goes against the flow of Elastic Beanstalk. You would need to create your own ELB with whatever name you like and then put it in front of the instance that is created by Beanstalk. You would then need to delete the ELB that Beanstalk created so that it's not sitting there costing you money. I can't remember if Beanstalk boots its environments via an Auto Scaling group, but if it does, you'll need to associate that Auto Scaling group with your new ELB. After creating and syncing all of that up, you point your CNAME to your new custom-made ELB. That should work.

Categories : Amazon


