Varnish Error: vcl.load /etc/varnish/default.vcl failed
The output is clear: Connection failed (localhost:1234). So make sure you can access the Varnish CLI on that host:port combination, and that you haven't started the daemon with a secret file ("-S" option):

varnishadm -T localhost:1234

You can find out whether the Varnish daemon is actually attached to that port by issuing:

netstat -lpn

And the daemon options in use with:

ps aux | grep varnish

Categories : Linux

Why doesn't Nginx load balancing balance bandwidth?
It sounds like the configuration is doing exactly what you asked it to do. You configured a proxy on the first server's IP, right? So data has to go from the user to the proxy, then to the server, then the reply from the server back to the proxy, and then to the user. The bandwidth is triple because the first server sees three flows (both servers' output through the proxy and the second server's input to the proxy) while the second server sees one (its output to the proxy). The setup is perfectly balancing the traffic into equal flows; the first server just sees three flows and the second just one. As for how you fix it, that depends on what's wrong with it and what you're trying to accomplish, which you haven't told us.

Categories : Nginx

nginx behind haproxy behind varnish
Haproxy does not appear to really consume the x-forwarded-for header; it simply replaces it. If you are running a later version of 1.5 (I think 17 or greater) then you can actually do variable concatenation, which means you can set the x-forwarded-for header yourself without using option forwardfor. I am doing this in a very large haproxy implementation and it is working very well. Another option is to change the haproxy option forwardfor to use a different header name. This means that on the nginx server you would have to look at two headers: the one from varnish would have the end user's IP address, and the one from haproxy would have the varnish server's IP address. To do this, the haproxy config looks like this:

option forwardfor header varnish-x-forwarded-for
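Both approaches can be sketched roughly as follows. This is a sketch, not the original poster's config: the frontend/backend names, ports, and the X-Haproxy-Forwarded-For header name are invented for illustration, and the concatenation form assumes a HAProxy version where http-request set-header accepts log-format expressions.

```
# Option A: build X-Forwarded-For yourself via concatenation, appending the
# client address to whatever Varnish already set (no "option forwardfor").
frontend fe_web
    bind :8080
    http-request set-header X-Forwarded-For "%[req.hdr(X-Forwarded-For)], %[src]" if { req.hdr(X-Forwarded-For) -m found }
    http-request set-header X-Forwarded-For "%[src]" if !{ req.hdr(X-Forwarded-For) -m found }
    default_backend be_nginx

# Option B: keep option forwardfor, but write to a differently named header
# so the Varnish-set X-Forwarded-For survives untouched.
backend be_nginx
    option forwardfor header X-Haproxy-Forwarded-For
    server nginx1 127.0.0.1:8081
```

With Option B, nginx then reads X-Forwarded-For for the end user's address and X-Haproxy-Forwarded-For for the Varnish server's address.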

Categories : Nginx

Varnish with Nginx for a Rails application (issue with Devise authentication)
You are preventing your backend from deleting your session cookie, so you can't log out unless you explicitly delete your browser's cookies. Looking at your fetch VCL (comment inline):

sub vcl_fetch {
    # This prevents the server from deleting the cookie in the browser when logging out
    if (req.url ~ "logout" || req.url ~ "sign_out") {
        unset beresp.http.Set-Cookie;
    }
    if (req.request == "GET") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 360m;
    }
    if (req.url ~ "images/" || req.url ~ "javascripts" || req.url ~ "stylesheets" || req.url ~ "assets") {
        set beresp.ttl = 360m;
    }
}

So your backend can't delete the client's cookie except as the result of a POST request. IMHO you shouldn't mess with the backend's Set-Cookie headers unless you know (and test well) the possible side effects.

Categories : Ruby On Rails

Amazon web service load balancer unable to balance the load equally
There could be a number of reasons for this, and without doing more digging it's hard to know which one you are experiencing. Sticky sessions can result in instance traffic becoming unbalanced, although this depends heavily on usage patterns and your application. Cached DNS resolution is another: part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users are all using the same DNS system provided by an ISP, they might all get sent to the same zone. Couple this with sticky sessions and you will end up with a bunch of traffic that will never switch. Using Route 53 with ALIAS records may reduce this somewhat. If you can't get the ELB to balance your traffic better, you can set up something similar with Varnish Cache or another software load balancer.

Categories : Amazon

Load Balance wso2 ESB
You can add the following parameter to the http and https transport receivers of WSO2 ESB:

<parameter name="WSDLEPRPrefix" locked="false">[load-balancer-url]</parameter>

For example:

<parameter name="WSDLEPRPrefix" locked="false">http://esb.cloud-test.wso2.com:8280</parameter>

You need to edit the following file:

<WSO2-ESB-HOME>/repository/conf/axis2/axis2.xml

This step is also necessary when configuring WSO2 ELB. See the following ELB doc for more information: http://docs.wso2.org/wiki/pages/viewpage.action?pageId=26839403

Categories : Wso2

Page very slow @first load (using magento with varnish)
That is because your non-cached page generation is too slow and the cache is still warming (likely you are on $20-50/mth servers). Varnish will not help you in that case: if your non-cached response takes 2-3s, Google will penalise you and most visitors will abandon their carts. Hosting for Magento should be about 1% of revenue; anything less and it will not work. Technology can only take you so far, the rest is actually doing it correctly, and that needs the hosting budget.

Categories : Performance

Load Balance: Node.js - Socket.io - Redis
Redis only syncs from the master to the slaves; it never syncs from the slaves to the master. So if you're writing to all 3 of your machines, the only messages that wind up synced across all three servers will be the ones hitting the master. This is why it looks like you're missing messages. More info is in the Redis documentation on read-only slaves:

"Since Redis 2.6 slaves support a read-only mode that is enabled by default. This behavior is controlled by the slave-read-only option in the redis.conf file, and can be enabled and disabled at runtime using CONFIG SET. Read only slaves will reject all the write commands, so that it is not possible to write to a slave because of a mistake."
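The replication setup this describes boils down to a couple of lines of redis.conf on each slave; the master address below is invented for illustration. The practical consequence for the Socket.io adapter is that every publish must go to the master, since writes to a slave either fail (read-only mode) or never propagate.

```
# redis.conf on each slave (Redis 2.6+), pointing at the single master.
slaveof 10.0.0.1 6379
slave-read-only yes
```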

Categories : Node Js

Azure windows virtual machine load balance
You need both virtual machines to be under the same cloud service; only then do you get the option to load balance them. There is no way to add existing VMs to the same network, but there are operations in the Service Management API (usable through PowerShell) to create a new VM. You can use that to create a fresh VM from your existing image and connect it to the same service as your first VM. Then you'll have the necessary options enabled for load balancing.

Categories : Azure

Azure Traffic Manager Load Balance Options
Traffic Manager comes into the picture only when your application is deployed across multiple cloud services, whether within the same data center or in different data centers. If your application is hosted in a single cloud service (with multiple instances, of course), then the instances are load balanced using a round-robin pattern. This is the default load balancing pattern and comes at no extra charge. You can read more about Traffic Manager here: https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-overview/

Categories : Azure

Does SQL Server Service Broker load balance when External Activator is used?
To utilise the built-in load balancing you would need to deploy the service to more than one SQL Server instance. I suspect that isn't quite what you are planning, so you will have to come up with a custom method, such as having an internally activated procedure that forwards your arriving messages into one of the four queues that the external activation processes look at.

Categories : Sql Server

how to internally load balance multiple instances of web/worker roles in Azure?
There is no internal load balancer in Windows Azure; the only load balancer is the one that has the public IP addresses. If you want to load balance only internal addresses (workers), you have to maintain it yourself, meaning you have to install some kind of load balancer on an Azure VM that is part of the same VNet. That load balancer may be of your choice (Windows or Linux). And you have to implement a watchdog service for when the topology changes - i.e. workers being recycled, hardware failures, scaling events. I would not recommend this approach unless it is absolutely necessary. A last option is to have a (cached) pool of IP endpoints of all the workers and randomly choose one when you need it.
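That last option - a cached pool of worker endpoints with random selection - can be sketched as below. This is an illustrative JavaScript sketch, not Azure-specific code: the endpoint addresses are invented, and in practice the refresh() call would be driven by the watchdog mentioned above.

```javascript
// Sketch: random selection from a cached pool of internal worker endpoints.
function makeEndpointPool(endpoints) {
  let pool = endpoints.slice(); // cached copy of the current topology

  return {
    // Replace the cached pool when topology changes (workers recycled, scaling).
    refresh(newEndpoints) {
      pool = newEndpoints.slice();
    },
    // Pick one endpoint at random for the next internal call.
    pick() {
      if (pool.length === 0) throw new Error("no workers available");
      return pool[Math.floor(Math.random() * pool.length)];
    },
  };
}

const pool = makeEndpointPool(["10.0.0.4:8080", "10.0.0.5:8080", "10.0.0.6:8080"]);
const target = pool.pick(); // one of the three cached endpoints
```

Random selection keeps the client stateless; the trade-off versus round-robin is that short-term distribution is uneven, which usually washes out at volume.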

Categories : Asp Net

How to get HTML5 and MaxMind geolocation features working with varnish?
In order to get the Nginx Http GeoIP module working behind a proxy, you will need to pass the IP address of your proxy server to the geoip_proxy directive. I don't know if we have enough information to speculate about why the W3C Geolocation function is not working. As you suggest, there is no obvious reason why it should not work when your site is behind a proxy server. As an aside, you might want to check out MaxMind's GeoIP2 JavaScript service, which tries to use W3C Geolocation and then falls back on the web service if that is not available. MaxMind provides a free option that might meet your needs.

Categories : HTML

Android database working with currency/balance
Since this is just a single-user Android application, I'm going to skip the bit about caching calculated values (because an SQLite lookup is not very time-consuming). If you think you need that extra bit of performance, though, you should definitely consider having a second table that caches calculated values and is updated when you re-run your calculations. Now then, on to your real question. Let's say that you want to write a basic function to get the total balance based on your transactions and some base value. It might look a bit like this:

public int getBalance(int baseValue) {
    String[] cols = new String[] { "SUM(" + TRANSACTION_AMOUNT + ")" };
    Cursor data = SqlDatabase.query(DATABASE_TABLE, cols, null, null, null, null, null);
    try {
        // The single result row holds the sum of all transaction amounts
        return data.moveToFirst() ? baseValue + data.getInt(0) : baseValue;
    } finally {
        data.close();
    }
}

Categories : Java

Run two webservers with twisted
An instance of twisted.application.internet.TCPServer represents one TCP server. You can't initialize it twice and get two servers out of it. I expect a more complete code snippet than you gave would look like:

from twisted.application import internet

class TwoServers(internet.TCPServer):
    def __init__(self):
        internet.TCPServer.__init__(self, 9000, WebSocketFactory(factory))
        internet.TCPServer.__init__(self, 80, server.Site(HandlerHTTP))

This doesn't work. It's like trying to have an int that is two integers or a list that is two sequences. Instead, make two TCPServer instances:

from twisted.application import service, internet
from websocket import WebSocketFactory

factory = ...
HandlerHTTP = ...

holdMyServers = service.MultiService()
internet.TCPServer(9000, WebSocketFactory(factory)).setServiceParent(holdMyServers)
internet.TCPServer(80, server.Site(HandlerHTTP)).setServiceParent(holdMyServers)

Categories : Python

Error in if statement for setting balance
console.log("Your balance is (balance - 5.00).");

should be

console.log("Your balance is %s.", (balance - 5.00));

The former will just say "Your balance is (balance - 5.00)." because JavaScript does not treat words like "balance" as variable references when they appear inside a string literal. In the second, the message format string is distinct from the expression you want to display, and console.log replaces %s sequences with the other arguments.
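To make the difference concrete, here is a small sketch (the starting balance value is invented for illustration); a template literal is a third option that embeds the expression directly in the string:

```javascript
const balance = 20.00; // hypothetical starting balance

// Inside a plain string literal, "balance" is just text, not a variable:
const wrong = "Your balance is (balance - 5.00).";

// A format string keeps the message separate from the value:
console.log("Your balance is %s.", balance - 5.00); // prints: Your balance is 15.

// A template literal evaluates the embedded expression:
const msg = `Your balance is ${balance - 5.00}.`;
console.log(msg); // prints: Your balance is 15.
```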

Categories : Javascript

Unable to restart varnish using "service varnish restart"
Fixed the problem by removing line breaks from the "/etc/default/varnish" file, changing

DAEMON_OPTS="-a :80
             -T localhost:1234
             -f /etc/varnish/default.vcl
             -s malloc,256m"

to

DAEMON_OPTS="-a :80 -T localhost:1234 -f /etc/varnish/default.vcl -s malloc,256m"

Categories : Caching

Where is my nginx being configured? Changing nginx.conf still brings me to the 'Welcome to nginx' page
When you modified nginx.conf in the nginx source folder and installed nginx the first time, that file was copied to /usr/local/nginx/conf/nginx.conf (absent special modification of the configure options in the source folder). But when you do this a second time, the nginx.conf in the source folder is copied to /usr/local/nginx/conf/nginx.conf.default, which is usually not used unless you run nginx -c /usr/local/nginx/conf/nginx.conf.default to specify that configuration file every time. There is a line NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf" in your script above, which indicates the configuration file clearly. To handle this, I suggest editing /usr/local/nginx/conf/nginx.conf (not the one in the source folder) instead.

Categories : Nginx

Can I use Amazon ELB instead of nginx as load balancer for my Node.Js app?
Yes, but there are a few gotchas to keep in mind. If you have a single server, ensure you don't return anything except 200 on the page that ELB uses to check health. We had a 301 from our non-www to www site, and that made ELB not send anything to our server because of it. Also, you'll get the ELB's IP instead of the client's in your logs. There is a real_ip module (ngx_http_realip_module) for that, but it takes some config hacking to get it to work.
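The health-check gotcha can be sketched as routing logic: the health path must answer 200 directly, before any redirect rule runs. The path and hostnames below are invented for illustration, not from the original answer.

```javascript
// Sketch: decide the response status for an app behind ELB.
function statusFor(host, path) {
  if (path === "/health") {
    return 200; // ELB health check: must get 200, never a redirect
  }
  if (host === "example.com") {
    return 301; // ordinary visitors on the bare domain are redirected to www
  }
  return 200;
}
```

If the 301 branch ran first, the ELB would see the redirect as a failed health check and take the instance out of rotation.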

Categories : Node Js

Setting a trace id in nginx load balancer
In our production environment we do have a custom module like this. It generates a unique trace id, which is then pushed into an http header sent to the upstream server. The upstream server checks whether that field is set; if so, it gets the value and writes it to its access_log, and thus we can trace the request. And I found a 3rd party module that looks just the same: nginx-operationid; hope it is helpful.
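As an aside, a custom module may no longer be necessary: nginx 1.11.0 and later provide a built-in $request_id variable. A sketch of wiring it up (the header name and upstream name are conventions chosen for illustration, not fixed):

```
# Assumes nginx >= 1.11.0: tag each request with the built-in $request_id
# and forward it to the upstream, which can log the same value.
log_format trace '$remote_addr - $request [trace=$request_id]';

server {
    listen 80;
    access_log /var/log/nginx/access.log trace;

    location / {
        proxy_set_header X-Request-Id $request_id;
        proxy_pass http://backend;
    }
}
```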

Categories : Nginx

javascript statements not working... and getting 'failed to load resource' error
Aside from all the problems with your HTML structure, I suspect the root problem you're running into is that your browser can't find the delete.php file, which I assume is in the same directory as the file this code comes from. While that form of path-relative addressing will work on a server, generally speaking browsers will prevent loading of files from the host file system for security purposes. Change your action to access delete.php via your localhost web server (i.e. http://localhost/troubleshoot/delete.php) and you should be able to load the file.

Categories : PHP

Make nginx load different locations for PC and mobile devices
The problem is as indicated: the location directive is not allowed inside an if. There are actually only a few things that are safe to do within if directives; examples are rewrite and return. See the documentation for more detailed information. Something the documentation doesn't mention is that under certain conditions set is also safe to use. In your specific case, try something like this:

if ($http_user_agent ~* android) {
    rewrite ^ /mobile/$request_uri last; # internal rewrite
}

location /mobile {
}

Note that I haven't verified your regular expression. I am sure you can tweak that according to your needs.

Categories : Nginx

Running nginx infront of a unicorn or gunicorn under Elastic Load Balancer
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore it allows you to serve static files in the fastest possible way, rather than using a slot on your more heavyweight appserver.

Categories : Nginx

Rails 3.2 Nginx Unicorn always try to load index.html (403) from public folder
Finally I found the solution myself. Here is what I did: the other location blocks interfered, so nginx always loaded from the public folder. After I deleted these lines:

location / {
    gzip_static on;
}

location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}

the nginx server connects to unicorn.

Categories : Ruby On Rails

Adding rails apps to nginx avoiding high load time on 1st access
Unicorn sounds like it might be a better fit for your deployment scenario. You can keep nginx up front, but instead of loading Rails itself, nginx will just connect to a Unicorn Unix socket. Further, you can gracefully reload your application with new code while nginx stays up and Unicorn swaps out the backend quietly.
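A minimal sketch of that arrangement (the socket path, server_name, and root are placeholders, not from the original answer):

```
# nginx stays in front and talks to Unicorn over a Unix socket.
upstream app_server {
    server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;

    location / {
        # Serve static files directly; everything else goes to Unicorn.
        try_files $uri @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app_server;
    }
}
```

Because the socket is always listening, nginx keeps accepting requests while a new set of Unicorn workers is started and the old set is retired.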

Categories : Ruby On Rails

nginx add_header not working
What does your nginx error log say? Do you know which add_header lines are breaking the configuration? If not, comment them all out, then enable them one by one, reloading nginx each time to see which one(s) cause the problem. I would begin by commenting out the block:

add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';

P.S. The problem could be that you're setting headers not supported by the core headers module. Installing the NginxHttpHeadersMoreModule may be helpful.

Categories : PHP

NodeJS working behind Nginx
You might want to try setting up the proxy_pass directive without a trailing URI part ('/'), like this:

proxy_pass http://backend;

If you specify the URI, it will be used in the request sent to your backend node.js app: instead of a request for /room/create, node.js will get a request for / as defined by your proxy_pass setting. For more information, please see the nginx proxy_pass documentation: http://wiki.nginx.org/HttpProxyModule#proxy_pass
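A side-by-side sketch of the two behaviours (the upstream name, port, and location prefixes are invented for illustration):

```
upstream backend {
    server 127.0.0.1:3000;
}

server {
    listen 80;

    # Without a URI part: the original path is passed through unchanged,
    # so /room/create reaches node.js as /room/create.
    location /room/ {
        proxy_pass http://backend;
    }

    # With a URI part ("/"): the matched prefix is replaced by that URI,
    # so /api/room/create reaches node.js as /room/create.
    location /api/ {
        proxy_pass http://backend/;
    }
}
```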

Categories : Javascript

nginx and .htaccess are working together, is this possible?
Yes, it's possible, but there's a trick: the server has both Apache and nginx installed. nginx listens on port 80 and Apache listens on another port; nginx serves the assets directly (CSS, JS, HTML, etc.) and passes the PHP, or whatever app it is, to Apache. This reduces the load on Apache a bit, but consumes a little more memory because you have two servers running.
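A sketch of that split (the port 8080, domain, and asset extensions are illustrative choices, not fixed):

```
# nginx on port 80 serves static assets itself and proxies everything
# else to Apache listening on another port (8080 here).
server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    # Static assets served directly by nginx.
    location ~* \.(css|js|png|jpg|gif|ico|html?)$ {
        expires 7d;
    }

    # Everything else (e.g. PHP handled by .htaccess rules) goes to Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Since Apache still processes the dynamic requests, existing .htaccess files keep working for those, while nginx never consults them for static files.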

Categories : Apache

nginx rewrite module not working?
You seem to have mixed different bits from different how-tos without understanding them. Observe:

rewrite ^(.*)$ index.php?/$1 last; # question mark, typo?
location ~ .php$ # matches end of request_uri
fastcgi_split_path_info ^(.+.php)(/.+)$; # matches .php followed by a slash

For the third statement to match, .php can never be at the end of the request_uri, so that statement will never match in this location. Remove the question mark from the first statement and remove the dollar sign from the location. Then add:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;

to the location block. Try to understand it from the documentation and try to further restrict the location block.

Categories : Nginx

nginx + php-fpm+wordpress not working multisite
For starters, make sure that /var/run/php5-fpm.sock exists. For now, just comment out chdir support until you have a working setup; then, if you still want chroot support, go back and play with those settings. If I missed anything below that might help you, let me know and I'll update this. Here is a working copy of my nginx server block that is running WordPress without any trouble:

server {
    listen x.x.x.x:80;
    server_name example.com;

    # Character Set
    charset utf-8;

    # Logs
    access_log /vhosts/example.com/logs/access_log main;
    error_log /vhosts/example.com/logs/error_log;

    # Directory Indexes
    index index.html index.phtml index.shtml index.php;

    # Document Root
    root /vhosts/example.com/public;

    # Location
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}

Categories : Wordpress

Why is SSL redirect not working with force_ssl and Nginx?
First guess: the port 80 server block does not pass the host through; maybe that's it?

proxy_set_header Host $http_host;

The SSL block does, but if you start on the non-SSL side and Rails picks the request up there, it might not have the full header.

Categories : Ruby On Rails

cache-control not working in chrome and varnish also not respecting cache-control
I think there are two different issues. Varnish: Varnish is receiving a Cookie in the request, so by default it won't cache the answer [1]. Chrome: the server is answering with "Pragma: no-cache", which is likely to prevent Chrome from caching the item. Anyway, debugging Varnish cache issues without the actual VCL in use is quite difficult.

[1] https://www.varnish-software.com/static/book/VCL_Basics.html#default-vcl-recv
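A common workaround for the cookie issue is to strip cookies only for requests that are plainly static; this is a sketch (the extension list is illustrative), and whether it is safe depends entirely on the application:

```
# Let Varnish cache static assets even when the browser sends cookies.
sub vcl_recv {
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}
```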

Categories : Caching

Nginx rewrite some matching rules are not working
Actually I don't like either of the methods; they might be working, but it's not really the best way to write it, so let's try something different:

location ~* ^/(contact|privacy|terms|faq)/?$ {
    try_files $uri $uri/ /index.php?v=$1;
}

location ~* ^/(twitter|facebook|login)/?$ {
    try_files $uri $uri/ /index.php?v2=$1;
}

location / {
    try_files $uri $uri/ /index.php;
}

Oh, and I have never heard of "last; break;". It's probably working only because nginx ignores the last part of it; it's either a last or a break.

Categories : PHP

CodeIgniter, NGINX inside folder not working
Try the following; it works a dream for me. Don't forget to change the fastcgi_pass, and back up your current config before you try it.

server {
    listen Server IP:80;
    server_name domain.name;

    access_log /var/log/nginx/access.log;

    root /path/to/www;
    index index.php index.html index.htm;

    # enforce www (exclude certain subdomains)
    # if ($host !~* ^(www|subdomain))
    # {
    #     rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
    # }

    # enforce NO www
    if ($host ~* ^www.(.*)) {
        set $host_without_www $1;
        rewrite ^/(.*)$ $scheme://$host_without_www/$1 permanent;
    }

    # canonicalize codeigniter url end points
    # if your default controller is something other than "welcome" you should change the following
    if ($request_uri ~* ^(/wel

Categories : PHP

Text2wave festival not working via nginx php exec
My guess would be that you've got shell execution disabled in the php.ini configuration file used by your web server. Try opening /etc/php5/fpm/php.ini file, finding the disable_functions directive, and making sure that none of the following functions are present in the value of the directive: shell_exec,exec,passthru,system

Categories : PHP

nginx location index directive not working
If you explicitly request /index.html, is it served? If not, you might want to add an explicit root /path/to/root; to your server {} block. Also verify that index.html has the correct permissions. This will help with troubleshooting: it will force a 404 if the root index.html is not found, and if that happens, at least you can check the logs to see where it was looking:

location = / {
    index index.html;
}

Also, be sure to do nginx -s reload when changing the config.

Categories : Nginx

redmine installation not working through nginx and thin
The error is because your firewall (iptables) blocked the port. Roll back your iptables config, then issue the following command:

iptables -I INPUT -i lo -p tcp --dport 3123 -j ACCEPT

Remember to save the setting with:

service iptables save

More information about iptables: https://help.ubuntu.com/community/IptablesHowTo

P.S. sudo may be needed for the above commands.

Categories : Nginx

nginx gzip not working on browser but curl works
Your curl command works because it sends a HEAD request instead of a GET request. Try curl in verbose mode:

curl -Iv -H "Accept-Encoding: gzip,deflate" http://www.ihezhu.com/

You will get the same result as in the browser with:

curl -i -H "Accept-Encoding: gzip,deflate" http://www.ihezhu.com/

"text/html" is always compressed, so it has nothing to do with the gzip_types directive. This happened to me before when my upstream server was using HTTP 1.0 instead of HTTP 1.1. Have you tried the following?

gzip_http_version 1.0;

[update] Your nginx compile options seem normal. It's hard to understand how URL length could directly affect gzip in nginx; I checked the nginx source code, and nothing about the URL is used to determine gzip.
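If the upstream really is speaking HTTP/1.0, the relevant directives look like this. A sketch: the type list and minimum length are illustrative values, not requirements.

```
# Allow gzip even when the upstream responds with HTTP/1.0.
gzip on;
gzip_http_version 1.0;   # default is 1.1, which skips HTTP/1.0 responses
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 256;     # tiny responses aren't worth compressing
```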

Categories : Nginx

nginx reverse_proxy with axis camera - default rewrite not working
Try replacing your proxy_redirect with this line:

proxy_redirect http://192.168.0.205:80/ http://192.168.0.205:80/camera/;

I don't know what your Location header says exactly, but you should get the idea: replace the IP with a hostname or wherever the redirect is trying to take you. You're simply telling nginx to append /camera to whatever redirect the website asks you to do.

Categories : Nginx

Wordpress nginx preview post is 404 not found, but old posts are working
I've tried to reproduce your issue on Ubuntu 12.04 + nginx + php-fpm, but could not: previewing works as expected. The only difference is that I uncommented the line fastcgi_pass 127.0.0.1:9000; and commented out the other one. I see you've put 'varnish' in the tags, so maybe it's a problem with Varnish, as suggested here -> http://wordpress.org/support/topic/nginx-cant-preview-posts-404-error

Categories : Wordpress


