Can I use Amazon ELB instead of nginx as load balancer for my Node.Js app?
Yes, but there are a few gotchas to keep in mind: If you have a single server, ensure you don't return anything except 200 to the page that ELB uses to check health. We had a 301 redirect from our non-www to our www site, and that alone stopped ELB from sending any traffic to our server. Also, you'll get the ELB's IP instead of the client's in your logs. There is a real_ip module for nginx, but it takes some config hacking to get it to work.
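The real-IP part is handled by nginx's realip module; a minimal sketch, assuming you trust the X-Forwarded-For header the ELB sets (the CIDR below is a placeholder for your own VPC range):

```nginx
# Sketch: rewrite $remote_addr from the header the ELB sets, so access
# logs show the client rather than the ELB. 10.0.0.0/8 is a placeholder.
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
```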

Categories : Node Js

Setting a trace id in nginx load balancer
In our production environment we have a custom module like this. It generates a unique trace id and pushes it into an HTTP header that is sent to the upstream server. The upstream server checks whether that field is set; if so, it reads the value and writes it to its access_log, so we can trace the request across servers. I also found a third-party module that looks much the same: nginx-operationid. Hope it is helpful.
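On newer nginx versions a similar setup needs no custom module: the built-in $request_id variable (nginx 1.11.0+) can be forwarded and logged. A sketch, with the upstream name as a placeholder:

```nginx
log_format trace '$remote_addr - $request_id "$request"';

server {
    access_log /var/log/nginx/access.log trace;
    location / {
        # Pass the generated id upstream so both sides can log it.
        proxy_set_header X-Request-Id $request_id;
        proxy_pass http://app_backend;   # placeholder upstream name
    }
}
```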

Categories : Nginx

What method(s) do I use to configure/update an Elastic Load Balancer via Java?
My bad. I forgot to execute the requests against the ELB variable. The code below creates the health check and registers the instances associated with the ELB. Hope this helps the next person asking this question.

    ConfigureHealthCheckResult healthResult = myELB.configureHealthCheck(healthCheckReq);
    RegisterInstancesWithLoadBalancerResult registerResult = myELB.registerInstancesWithLoadBalancer(regInst);

Categories : Java

Running nginx infront of a unicorn or gunicorn under Elastic Load Balancer
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore it allows you to serve static files in the fastest possible way, rather than using a slot on your more heavyweight appserver.

Categories : Nginx

how to configure Elastic Beanstalk to deploy code to an instance but not add it to the load balancer
For the most part, though not straightforward, you can provide a .config file in .ebextensions to run your script files. This example of speeding up a deploy shows running some scripts and moving data back and forth; better yet, the author describes the sequence and deployment process. I'm just embarking on this type of container customization. I have read of others dropping files into the /opt/elasticbeanstalk/hooks/appdeploy/pre and /opt/elasticbeanstalk/hooks/appdeploy/post directories, much of which can be derived by reading the post linked above. Also note that you can include the content of a script in the YAML .config file, such as this, which I found yesterday:

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_delayed_job.sh":
        mode: "000755"
        owner: root

Categories : Ruby On Rails

How to configure nginx + Unicorn to avoid timeout errors?
Is there a way to handle this kind of problem? Do the job in the background. Have a separate process that takes jobs from a queue one by one and processes them. Since it doesn't serve user requests, it can take as long as it needs. You don't need unicorn for this, just a separate daemon.
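The queue-plus-worker idea above can be sketched in a few lines; this is an illustrative Python sketch (the job names are made up), not unicorn- or Rails-specific:

```python
import queue
import threading

# Hypothetical background worker: the web process only enqueues jobs and
# returns at once; this separate loop does the slow work one job at a time.
jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:                  # sentinel tells the worker to stop
            break
        results.append("done:" + job)    # stand-in for the long-running work

t = threading.Thread(target=worker)
t.start()

for name in ("resize-image", "send-email"):  # made-up job names
    jobs.put(name)                           # a request handler would stop here
jobs.put(None)
t.join()
print(results)
```

In a real deployment the in-process queue would be replaced by something durable (a database table, Redis, etc.) so the daemon survives restarts.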

Categories : Ruby On Rails

nginx as load balancer server out 404 page based on HTTP response from app server
Simply set the proxy_intercept_errors option to on and create an nginx error page for it:

    error_page 404 /404.html;
    proxy_intercept_errors on;

To ensure that nginx serves the static file from its document root, you also have to specify the following:

    location /404.html {
        internal;
    }

I'm assuming here that nginx is configured to talk to your app servers as a proxy, because that is the usual way and your question does not mention any of this.

Categories : Nginx

NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream
Might be worth a look: http://howtounix.info/howto/110-connection-timed-out-error-in-nginx (he puts the proxy_read_timeout in the location block).
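For reference, a minimal sketch of the fix the linked post describes, with the upstream name and path as placeholders:

```nginx
location /slow-endpoint {
    proxy_pass http://backend;
    proxy_read_timeout 300s;   # raise the 60s default for this location only
}
```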

Categories : Nginx

How can I configure Spring 3 MVC to handle Errors similar to Exceptions?
The pattern we have used is a BaseController that all our Controllers extend, using the following format to map specific exceptions to specific HTTP statuses, with a catch-all for the most generic Exception:

    @Controller
    public class BaseController {

        @ExceptionHandler(Exception.class)
        @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
        public ModelAndView handleAllExceptions(Exception ex) {
            return new JsonError(ex.getMessage()).asModelAndView();
        }

        @ExceptionHandler(InvalidArticleQueryRangeException.class)
        @ResponseStatus(HttpStatus.NOT_FOUND)
        public ModelAndView handleAllExceptions(InvalidArticleQueryRangeException ex) {
            return new JsonError(ex.getMessage()).asModelAndView();
        }
    }

Categories : Java

Nginx upstream configuration
Then these can help: check proxy_next_upstream. This directive determines in which cases the request will be passed to the next server. Your server block should look, for example, like this:

    server {
        location / {
            proxy_pass http://appcluster;
            proxy_next_upstream error timeout http_404;
        }
    }

Categories : Linux

Web.py routing when added nginx as upstream
I figured out what was going on just before posting, but I hope I can help someone else by posting this anyway. web.py sends each matched group to GET, and therefore using (.*)/data/(.*) sent both the start and the end of the path to GET, which is why it failed. Setting the route to .*/data/(.*) gave me what I was after and sent only the part after /data/ to the GET function.
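The group-passing behavior can be checked directly with Python's re module; a small sketch of the two patterns against a made-up path:

```python
import re

# web.py passes every capture group in the URL pattern as a positional
# argument to GET(), so the number of groups matters.
two_groups = re.match(r"(.*)/data/(.*)", "/api/data/42")
print(two_groups.groups())    # GET would receive two arguments

one_group = re.match(r".*/data/(.*)", "/api/data/42")
print(one_group.groups())     # GET receives only the part after /data/
```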

Categories : Python

nginx proxy pass to sslv3 upstream
Have you tried "ssl_ciphers ALL;"? Although that's not recommended (it allows weak ciphers), it will narrow down the scope of the problem. If that doesn't work, the most likely cause is that the openssl you use doesn't have the suitable ciphers to complete the SSL handshake with your Go server. Note that Go's tls package is only partially implemented and the set of supported ciphers is very limited. There are two solutions: upgrade to an openssl version that supports what Go's tls package already implements, and then, of course, recompile your nginx; or patch the tls package to support whatever ciphers your current openssl provides, by adding the appropriate suite ids to cipherSuites in tls/cipher_suites.go (I think).

Categories : Ssl

Redirect request to two Upstream server in Nginx
Here's how I think your config could look; you can create multiple upstreams (IP1 etc. are placeholders):

    upstream main_upstream {
        server IP1;
        server IP2;
        server IP3;
    }

    upstream process_upstream {
        server IP2;
        server IP3;
    }

    server {
        location /process {
            proxy_pass http://process_upstream;
        }
        location / {
            proxy_pass http://main_upstream;
        }
    }

Categories : Nginx

How to configure git push to automatically set upstream without -u?
Since I don't think this is possible using git config, here is what you can do in bash:

    [[ $(git config "branch.$(git rev-parse --abbrev-ref HEAD).merge") = '' ]] && git push -u || git push

If the current branch has a remote tracking branch, it calls git push; otherwise it calls git push -u.

Categories : GIT

NGINX Reverse Proxy for upstream Django/Gunicorn Backend
Proxy buffering: generally, proxy buffering is only going to help you if you're generating very big web pages or sending large files. Regardless, it's pretty easy to set up, but you will need to tune the buffer sizes to about the size of your largest pages plus 20% (any page that doesn't fit in the buffer gets written to disk), or selectively enable proxy buffering on your largest pages. Docs: http://wiki.nginx.org/HttpProxyModule#proxy_buffering

Caching: I don't know much about your app and how dynamic its content is, but setting up correct Cache-Control/ETag header generation in your app is the first thing you'll want to look at; this is what lets nginx know what is safe to proxy. Also, you may wish to set up multiple cache zones to manage the amount of space your caches take on disk.

Categories : Django

Trying to setup argumentless git push - how to configure origin as default upstream?
Follow this tutorial; it will guide you through the process one step at a time: http://try.github.io/levels/1/challenges/1

Categories : GIT

Amazon web service load balancer unable to balance the load equally
There could be a number of reasons for this, and without doing more digging it's hard to know which one you are experiencing. Sticky sessions can result in instance traffic becoming unbalanced, although this depends heavily on usage patterns and your application. Cached DNS resolution is another: part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users all use the same DNS resolver provided by an ISP, they might all get sent to the same zone; couple this with sticky sessions and you will end up with a bunch of traffic that never switches. Using Route 53 with ALIAS records may reduce this somewhat. If you can't get the ELB to balance your traffic better, you can set up something similar with Varnish Cache or another software load balancer.

Categories : Amazon

How to configure Maven so that a plugin in an upstream artifact is executed in the downstream project
With your constraints: No, it's not possible. The only way to make this work is by making A the parent project of B or by moving this check into a new parent POM which both A and B inherit from. But as long as you refuse to change B's setup, it can't be done.

Categories : Maven

Using Laravel behind a load balancer
We use a load balancer where I work, and I ran into similar problems accessing cPanel dashboards: the page would just reload every time I tried to open a section and log me off, because my IP address kept changing from cPanel's point of view. The solution was to find which port cPanel was using and configure the load balancer to bind that port to one WAN. Sorry, I am not familiar with Laravel; if it just uses port 80 then this might not be a solution.

Categories : PHP

load balancer won't remove itself from dns
When I nslookup your domain I get app-lb-west-650828891.us-west-2.elb.amazonaws.com:

    DNS server handling your query: localhost
    DNS server's address: 127.0.0.1#53
    Non-authoritative answer:
    Name: codepen.io
    Address: 54.245.121.59

It is possible that the DNS change just needed a little time to propagate.

Categories : Amazon

WCF security with load balancer
The client and server bindings will be different. The client binding will use username auth at either the message or transport level, with transport security (SSL):

    <bindings>
      <basicHttpBinding>
        <binding name="NewBinding0">
          <security mode="Message" />
        </binding>
      </basicHttpBinding>
    </bindings>

The server config will use the same configuration but without the transport security. If you choose to use message security, check out WCF ClearUsernameBinding. If you use transport security (basic http), set mode="TransportCredentialOnly".

Categories : Wcf

What to care about when using a load balancer?
The biggest issue that you are going to run into is related to PHP sessions. By default PHP sessions maintain state on a single server. When you add the second server into the mix and start load balancing connections across both of them, a PHP session created on one server will not be valid on the other. Load balancers like HAProxy expect a "stateless" application. To make PHP stateless you will more than likely need a different mechanism for your sessions. If you cannot make your application stateless, you can configure HAProxy to do sticky sessions, either off cookies or via stick tables (source IP etc.). The next thing you will run into is that you lose the original requestor's IP address, because HAProxy (the load balancer) terminates the client connection and opens its own connection to the backend.
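A sketch of what the HAProxy side of cookie-based stickiness plus client-IP forwarding might look like (the backend name, server names, and addresses are placeholders):

```haproxy
backend web
    balance roundrobin
    cookie SERVERID insert indirect nocache   # sticky via an inserted cookie
    option forwardfor                         # add X-Forwarded-For with the client IP
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2
```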

Categories : PHP

Camel and load balancer
If you need that setup where each server might receive the same record, then you need an idempotent route, and you need to make sure your idempotent repository is shared between your machines. Using a database as the repository is an easy option; if you do not have a database, a Hazelcast repo might be an option. What can be an issue is determining what is unique in your records, such as an order number, customer + date/time, or some increasing transaction ID number.
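The idempotent-consumer idea can be sketched in a few lines of Python; the "order_no" key and the in-memory set are stand-ins for your real unique field and shared repository:

```python
# Sketch of an idempotent consumer: a repository of already-seen keys makes
# a record that reaches more than one server get processed exactly once.
processed = set()   # stand-in for a shared database or Hazelcast map

def handle(record, repository):
    key = record["order_no"]        # "order_no" is an assumed unique field
    if key in repository:
        return False                # duplicate delivery: skip it
    repository.add(key)
    return True                     # first delivery: process it

deliveries = [{"order_no": 1}, {"order_no": 2}, {"order_no": 1}]
handled = [r for r in deliveries if handle(r, processed)]
print(handled)
```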

Categories : Apache

Load Balancer between 5 network cards
There are several ways to do this, and all are relatively easy. A very simple solution is to bind MSSQL to all 5 interfaces and give each interface a different network address; you can then configure some clients to point to one interface, others to the next, and so on. Depending on your network infrastructure you can also "bond" your network interfaces together so that they act like one interface to the OS. If all of the interfaces are plugged into a single switch then bonding is an option; if they are plugged into two different switches then your switches will have to support LACP or something similar. You can also look at putting a load balancer in front of your SQL server, though this could be problematic depending on your database, replication, sticky sessions, etc.

Categories : Sql Server

aws elastic load balancer not distributing
You need to have the Route 53 domain direct traffic to the ELB. If you have example.com and are trying to route it to the load balancer, you need to associate the zone apex with the load balancer. To do this, go to the Route 53 tab, click your hosted zone and go to record sets, then create a new record set, click Yes for Alias, and associate it with your ELB. To get traffic to fail over correctly you need to be running both instances behind the load balancer (preferably in multiple availability zones), and the ELB will take care of the failover. To do this, go to the ELB section of EC2, click your load balancer, and add the instances to it.

Categories : Amazon

Configuring Apache Load Balancer
Make sure you follow the advice in the section on stickyness:

    ProxyPass / balancer://mycluster stickysession=JSESSIONID|jsessionid scolonpathdelim=On

(not only for the /test directory). Furthermore, for the JBoss application server you need to supply route=web1 / route=web2 etc. in the Apache config, and jvmRoute="web1" in the JBoss configuration of the <Engine name="jboss.web" ...> element (the location depends on the JBoss version you are using; for v4.2 it is server/default/deploy/jboss-web.deployer/server.xml). See also this tutorial.

Categories : Apache

AWS : S3FS AMI and load balancer high I/O Issue
I would like to recommend taking a look at the new project RioFS (userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an s3fs alternative; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. Currently the project is in a testing state, but it has been running on several high-load fileservers for quite some time. We are seeking more people to join the project and help with testing; from our side we offer quick bug fixes and will listen to your requests for new features. Regarding your issue: I'm not quite sure how s3fs works with cached files, but in our project we try to avoid performing additional I/O operations. Please give it a try and let me know how RioFS works for you!

Categories : Amazon

How to get the actual URL in case of load balancer proxy server
I'm assuming you are using mod_proxy (mod_jk/mod_ajp preserve the proxy host). Retrieve the X-Forwarded-Host header from the request, which is the original host requested by the client. See http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
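The fallback logic can be sketched language-neutrally in Python (the header dictionaries are made-up examples; in a servlet you would read the header from the request object instead):

```python
def original_host(headers):
    # Behind mod_proxy, X-Forwarded-Host carries the host the client actually
    # requested; without a proxy, fall back to the plain Host header.
    return headers.get("X-Forwarded-Host", headers.get("Host"))

# Made-up header sets for illustration:
print(original_host({"Host": "backend:8080", "X-Forwarded-Host": "www.example.com"}))
print(original_host({"Host": "backend:8080"}))
```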

Categories : Java

Why am I getting a 502 bad gateway when sending a redirect from behind amazon elastic load balancer?
It turns out that ELB is very picky about what it considers a 'valid' response and will return 502 Bad Gateway if it isn't happy. I fixed it by making sure the response from my server had the right headers. For example, if I was listening at http://example.com, I sent back the following response:

    HTTP/1.1 301 Moved Permanently
    Content-Type: */*; charset="UTF-8"
    Location: https://example.com/
    Content-Length: 0

This makes ELB happy and everything works. For interest, here's the code (Java, using the Simple framework):

    private static void startHttpsRedirector() throws IOException {
        org.simpleframework.http.core.Container container =
            new org.simpleframework.http.core.Container() {
                @Override
                public void handle(Request request, Response response) {
                    // ... (build the 301 response shown above)
                }
            };
        // ...
    }

Categories : Http

Amazon Elastic Load Balancer (ELB) url not resolved by instance attached to it
This is normal, if I correctly understand your testing framework. The way that ELB is scaling, it starts out running on a very small machine, and as traffic increases, it's directed to even larger and larger machines. However, ELB is not configured to handle flash traffic, especially from a small number of hosts, as is the case with a load testing scenario. This is because the DNS record is changed whenever ELB scales, and it sometimes takes a while to propagate. Load testing frameworks sometimes cache the DNS lookup, making things even slower. The official ELB documentation (http://aws.amazon.com/articles/1636185810492479) states that traffic should not be increased by more than 50% every 5 minutes. I found that scaling takes even longer if you're looking to get over 150-200k RPM.

Categories : Wcf

Secure a specific page by client IP behind AWS Elastic Load Balancer
The client IP is actually being passed via a header (X-Forwarded-For). This header may include other load balancer IPs in addition to the client IP. If you can configure filtering based on headers, you should be able to do what you are attempting.
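A sketch of extracting the client IP from that header, taking the left-most entry (the sample addresses are placeholders):

```python
def client_ip(x_forwarded_for):
    # X-Forwarded-For is a comma-separated chain; the left-most entry is the
    # original client, later entries are intermediate proxies/load balancers.
    return x_forwarded_for.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.0.12"))   # sample addresses
```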

Categories : Asp Net

How to make restfull service truely Highly Available with Hardware load balancer
70-80 servers is a very horizontally scaled implementation... good job! "Better" is a very relative term; hopefully one of these suggestions counts as better. Implement an intelligent health check for the application, with the ability to adjust the health check while the application is running. What we do is have the health check start failing while the application is still running fine, which lets the load balancer automatically take the system out of rotation. Our stop scripts query the load balancer to make sure the server is out of rotation and then shut down normally, which allows the existing connections to drain. Also, batch multiple groups of systems together: I am assuming that you have 70 servers to handle peak load, which means you should be able to restart several at a time.

Categories : Rest

How to configure nginx with multiple server
This should be a good lead towards the solution:

    upstream tornado {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
    }

    upstream geoserver {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        server_name _;
        listen 80;

        location = /tornado {
            proxy_pass http://tornado;
        }

        location = /geoserver {
            proxy_pass http://geoserver;
        }
    }

Hope it helps!

Categories : Nginx

How to configure PhpMyAdmin on NGINX (Windows)
    server {
        listen 80;
        server_name localhost;

        location / {
            root C:\MHServer\www;
            index index.php index.html index.htm;
        }

        location = / {
            root C:\MHServer\www;
            index index.php index.html index.htm;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root C:\MHServer\www;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            root C:\MHServer\www;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            # ...
        }
    }

Categories : C#

Configure Nginx to use cherrypy framework
You can use uwsgi to serve the CherryPy application. Check this link: http://nileshgr.com/2012/08/27/getting-cherrypy-working-with-uwsgi

Categories : Nginx

How to simulate Amazon's 60 sec timeout on Elastic Load Balancer (ELB) on a local xAMP environment?
This timeout occurs when a request takes more than 60 seconds without sending any data as a response. Long HTTP requests like this should really be avoided; even browsers may give up if they don't receive any response within a timeout period. If you can't get rid of long HTTP requests and you need to simulate the behavior, you can use an fcgi/php-fpm configuration of PHP and set a timeout of 60 seconds for which the fcgi process will wait for PHP to respond.
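A sketch of the fcgi side of such a simulation, assuming PHP runs behind PHP-FPM on 127.0.0.1:9000 and nginx (or a comparable fcgi front end) is used locally:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_read_timeout 60s;   # give up after 60 seconds, like the ELB does
}
```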

Categories : Apache

How to configure Phalcon in the Nginx config file
This is the link to the official phalcon nginx configurations. http://docs.phalconphp.com/en/latest/reference/nginx.html

Categories : PHP

How to configure proxy servers with puppet nginx?
Use nginx::resource::vhost The source of the repo you are using gives a breakdown of the commands you will need to use: https://github.com/puppetlabs/puppetlabs-nginx/blob/master/manifests/resource/vhost.pp

Categories : Nginx

How to correctly configure Nginx for PHP (Yii framework and Zurmo)
I don't think you need the if() statement in your *.php block. In my nginx setups this is all I ever needed:

    # Process PHP files
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        # Include the standard fastcgi_params file included with nginx
        include fastcgi_params;

        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }

Categories : PHP

Configure Nginx to serve "/" from a directory already inside root
    server {
        root /var/www;

        location = / {
            root /var/www/home;
        }

        location / {
        }
    }

Reference:
http://nginx.org/r/root
http://nginx.org/r/location
How nginx processes a request
Understanding the Nginx Configuration Inheritance Model

Categories : Nginx


