Elastic Beanstalk's Elastic Load Balancer name
Unfortunately the answer is no to the first two. The last one you can do, but it goes somewhat against the grain of Elastic Beanstalk. You would need to create your own ELB with whatever name you like and put it in front of the instance that Beanstalk creates, then delete the ELB that Beanstalk created so that it isn't sitting there costing you money. I can't remember whether Beanstalk boots its environments via an Auto Scaling group, but if it does, you'll need to associate that Auto Scaling group with your new ELB. After creating and wiring all of that up, point your CNAME to your new custom-made ELB. That should work.
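For illustration only, here is a minimal sketch of that wiring using boto3 with a classic ELB and an Auto Scaling group; every name below is a placeholder, not something Beanstalk guarantees:

# Hypothetical sketch (boto3, classic ELB + Auto Scaling); all names are placeholders.
import boto3

elb = boto3.client("elb")
autoscaling = boto3.client("autoscaling")

# Create a classic load balancer with the name you actually want.
elb.create_load_balancer(
    LoadBalancerName="my-custom-name",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=["us-east-1a"],
)

# Swap the Beanstalk-created ELB for the custom one on the environment's Auto Scaling group.
autoscaling.detach_load_balancers(
    AutoScalingGroupName="awseb-e-example-AWSEBAutoScalingGroup",
    LoadBalancerNames=["awseb-e-example-elb"],
)
autoscaling.attach_load_balancers(
    AutoScalingGroupName="awseb-e-example-AWSEBAutoScalingGroup",
    LoadBalancerNames=["my-custom-name"],
)

After that, update your CNAME to point at the new ELB's DNS name.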

Categories : Amazon

nginx as load balancer server out 404 page based on HTTP response from app server
Simply set the proxy_intercept_errors option to on and create an nginx error page for it:

error_page 404 /404.html;
proxy_intercept_errors on;

To ensure that nginx serves the static file from its document root, you also have to specify the following:

location /404.html {
    internal;
}

I'm assuming here that nginx is configured to proxy to your app servers, because that is the usual setup and your question does not mention any of this.

Categories : Nginx

aws elastic load balancer not distributing
You need to have the Route 53 domain direct traffic to the ELB. If you have example.com and are trying to route it to the load balancer, you need to associate the zone apex with the load balancer. To do this, go to the Route 53 tab, click your hosted zone, and go to its record sets. Then create a new record set, choose Yes for Alias, and point the alias at your ELB; this associates the hosted zone with the load balancer. Now, to get the traffic to fail over correctly, you need to be running both instances behind the load balancer (preferably in multiple Availability Zones) and the ELB will take care of the failover. To do this, go to the ELB section of EC2, click your load balancer, and add the instances to it.
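The same alias record can be created programmatically; a sketch with boto3, where the hosted zone ID, the ELB's DNS name and the ELB's own hosted zone ID are placeholders you would look up in your account:

# Hypothetical sketch: UPSERT an alias A record for the zone apex pointing at the ELB.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                # your hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",      # the zone apex
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "ZELBEXAMPLE",   # the ELB's hosted zone ID
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)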

Categories : Amazon

Why am I getting a 502 bad gateway when sending a redirect from behind amazon elastic load balancer?
It turns out that ELB is very picky about what it considers a 'valid' response and will return 502 Bad Gateway if it isn't happy. I fixed it by making sure the response from my server had the following headers. For example, if I was listening at http://example.com I sent back the following response:

HTTP/1.1 301 Moved Permanently
Content-Type: */*; charset="UTF-8"
Location: https://example.com/
Content-Length: 0

This makes ELB happy and everything works. For interest, here's the code (Java, using Simple Framework):

private static void startHttpsRedirector() throws IOException {
    org.simpleframework.http.core.Container container = new org.simpleframework.http.core.Container() {
        @Override
        public void handle(Request request, Response response) {
            Path path = reques...

Categories : Http

What method(s) do I use to configure/update an Elastic Load Balancer via Java?
My bad. I forgot to execute the requests against the ELB client variable. The code below creates the health check and registers the instances with the ELB. Hope this helps the next person asking this question.

ConfigureHealthCheckResult healthResult = myELB.configureHealthCheck(healthCheckReq);
RegisterInstancesWithLoadBalancerResult registerResult = myELB.registerInstancesWithLoadBalancer(regInst);

Categories : Java

Amazon Elastic Load Balancer (ELB) url not resolved by instance attached to it
This is normal, if I correctly understand your testing framework. The way ELB scales, it starts out running on a very small machine, and as traffic increases, traffic is directed to larger and larger machines. However, ELB is not designed to handle flash traffic, especially from a small number of hosts, as is the case in a load-testing scenario. This is because the DNS record changes whenever ELB scales, and it sometimes takes a while to propagate. Load-testing frameworks sometimes cache the DNS lookup, making things even slower. The official ELB documentation (http://aws.amazon.com/articles/1636185810492479) states that traffic should not be increased by more than 50% every 5 minutes. I found that scaling takes even longer if you're trying to get over 150-200k RPM.

Categories : Wcf

Secure a specific page by client IP behind AWS Elastic Load Balancer
The client IP is actually being passed via header (X-Forwarded-For). This header may include other load balancer IPs in addition to the client IP. If you can configure filtering based on headers, you should be able to do what you are attempting to do.
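To show the header handling only (a sketch in Python/WSGI rather than ASP.NET, since the principle is identical; the allowed IP and protected path are placeholders): the left-most entry of X-Forwarded-For is normally treated as the original client, later entries are intermediate proxies or load balancers.

# Sketch: allow /admin only for a whitelisted client IP taken from X-Forwarded-For.
ALLOWED_IPS = {"203.0.113.10"}          # placeholder

def restrict_admin(app):
    def middleware(environ, start_response):
        if environ.get("PATH_INFO", "").startswith("/admin"):
            forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
            client_ip = forwarded.split(",")[0].strip() if forwarded else environ.get("REMOTE_ADDR")
            if client_ip not in ALLOWED_IPS:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
        return app(environ, start_response)
    return middleware

# usage: application = restrict_admin(application)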

Categories : Asp Net

Running nginx infront of a unicorn or gunicorn under Elastic Load Balancer
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore it allows you to serve static files in the fastest possible way, rather than using a slot on your more heavyweight appserver.

Categories : Nginx

how to configure Elastic Beanstalk to deploy code to an instance but not add it to the load balancer
For the most part, though not straightforward, you can provide a .config file in .ebextensions to run your script files. This example of speeding up a deploy shows running some scripts and moving data back and forth; better yet, the author describes the sequence and deployment process. I'm just embarking on this type of container customization. I have read of others dropping files into the /opt/elasticbeanstalk/hooks/appdeploy/pre and /opt/elasticbeanstalk/hooks/appdeploy/post directories, much of which can be derived by reading the post linked above. Also note that you can include the content of a script in the YAML .config file, such as this one, which I found yesterday:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_delayed_job.sh":
    mode: "000755"
    owner: root

Categories : Ruby On Rails

How to simulate Amazon's 60 sec timeout on Elastic Load Balancer (ELB) on a local xAMP environment?
This timeout has to do with a request taking more than 60 seconds without sending any data in the response. Long HTTP requests like this should really be avoided; even browsers may give up if they don't receive any response within a timeout period. If you can't get rid of the long HTTP requests and you need to simulate the behaviour, you can use an FCGI/PHP-FPM configuration of PHP and give the FCGI process a 60-second timeout for which it will wait for PHP to respond.
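If all you need is something that reproduces the symptom, a deliberately slow endpoint behind whatever timeout you configure locally will trigger it; a sketch in Python, independent of the xAMP stack, where the port and delay are arbitrary:

# Sketch: an endpoint that sends nothing for longer than the 60-second window,
# so a proxy/FCGI timeout configured in front of it will fire.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(65)                      # stay silent longer than the 60 s window
        body = b"finally done\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8081), SlowHandler).serve_forever()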

Categories : Apache

How to get the actual URL in case of load balancer proxy server
I'm assuming you are using mod_proxy (mod_jk/mod_ajp preserves the proxied host). Retrieve the "X-Forwarded-Host" header from the request; it holds the original host requested by the client. See http://httpd.apache.org/docs/2.2/mod/mod_proxy.html

Categories : Java

Ping Tool to check if server is online
Consider using Pingdom; it provides this service for you. One thing you have not considered is that once your site goes down, you will continue to get email messages every minute until the site is up again, or until you stop the script. A good approach is to switch states: report once when you detect the site has gone down, and again once it comes back up. Essentially you receive one email reporting 'site down', then another later on, hopefully, reporting 'site is up'. Pingdom does this for you, very nicely.
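A rough sketch of that state-switching idea in Python (the original question used a bash script; the URL, interval and send_alert function below are placeholders):

# Sketch: only alert on state transitions (up -> down, down -> up),
# instead of emailing every minute while the site stays down.
import time
import urllib.request

URL = "http://example.com/"          # placeholder
INTERVAL = 60                        # seconds between checks

def site_is_up(url):
    try:
        return urllib.request.urlopen(url, timeout=10).status == 200
    except Exception:
        return False

def send_alert(message):             # placeholder: hook up email/SMS here
    print(message)

was_up = True
while True:
    is_up = site_is_up(URL)
    if is_up != was_up:
        send_alert("site is up" if is_up else "site is DOWN")
        was_up = is_up
    time.sleep(INTERVAL)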

Categories : Bash

rails application elastic beanstalk timeout
Your db migration failed to run:

[root directoryHooksExecutor info] Executing script: /opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh
2013-07-10 14:25:20,500 [INFO] (1759 MainThread) [directoryHooksExecutor.py-29] [root directoryHooksExecutor info] Output from script: Rake task failed to run, skipping database migrations.

The easiest way to figure this out is to deploy again, SSH to the server and manually run the command to see why it failed. This could happen for a number of reasons, including for example git repositories in your Gemfile being rejected by default (bundle install will run, I recall, but db:migrate will fail). There are ways around all of this; we just need more information to help.

Categories : Ruby On Rails

Amazon web service load balancer unable to balance the load equally
There could be a number of reasons for this, and without doing more digging it's hard to know which one you are experiencing. Sticky sessions can result in instance traffic becoming unbalanced, although this depends heavily on usage patterns and your application. Cached DNS resolution is another: part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users are all using the same DNS system provided by an ISP, they might all get sent to the same zone. Couple this with sticky sessions and you will end up with a bunch of traffic that will never switch. Using Route 53 with ALIAS records may reduce this somewhat. If you can't get the ELB to balance your traffic better, you can set up something similar with Varnish Cache or another software load balancer. Not as c...

Categories : Amazon

Performing health check of BizTalk without impacting the services
There are some unclear statements and unproven assumptions in the question, so here is an answer in general terms. Load balancing in BizTalk has two aspects: network and hosts. The load-balancing method depends on the host type: in-process vs. isolated. The BizTalk Server Monitoring Management Pack has capabilities to monitor the health of BizTalk artifacts. Generally, a network load balancer should not pose a performance problem, as it only polls once per server (not once per BizTalk host).

Categories : C#

Route 53 Amazon Aws - Check service (Solr) health in my ec2 instance
Yes, this is possible. The port and path of Route 53 health checks are fully configurable, so you can create one pointing to e.g. your-ec2-instance:8983/admin/ping. There are a few things you'd have to set up: your EC2 instance needs to be reachable from the IP addresses used by the Route 53 health checkers, and Apache Solr needs a URL that can be used to determine whether it's healthy; a good way of doing this is with Solr's ping request handler. Full disclosure: I work for Route 53.
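For reference, creating such a health check with boto3 looks roughly like the sketch below; the IP address and caller reference are placeholders, the port and path are the ones from the answer above:

# Hypothetical sketch: an HTTP health check against Solr's ping handler.
import boto3

route53 = boto3.client("route53")
route53.create_health_check(
    CallerReference="solr-ping-check-001",   # any unique string
    HealthCheckConfig={
        "IPAddress": "203.0.113.25",         # your EC2 instance's public IP
        "Port": 8983,
        "Type": "HTTP",
        "ResourcePath": "/admin/ping",       # Solr's ping request handler
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)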

Categories : Solr

Camel and load balancer
If your setup is such that each server might receive the same record, then you need an idempotent route, and you need to make sure your idempotent repository is shared between your machines. Using a database as the repository is an easy option; if you do not have a database, a Hazelcast repository might be an option. What can be an issue is determining what is unique in your records, such as an order number, customer plus date/time, or an increasing transaction ID number.

Categories : Apache

What to care about when using a load balancer?
The biggest issue that you are going to run into is related to PHP sessions. By default PHP sessions maintain state with a single server. When you add the second server into the mix and start load balancing connections to both of them, a PHP session created on one server will not be valid on the other server that gets hit. Load balancers like HAProxy expect a "stateless" application. To make PHP stateless you will more than likely need to use a different mechanism for your sessions. If you do not or cannot make your application stateless, then you can configure HAProxy to do sticky sessions, either off of cookies or via stick tables (source IP etc.). The next thing that you will run into is that you will lose the original requestor's IP address. This is because HAProxy (the load balancer) termina...

Categories : PHP

WCF security with load balancer
The client and server bindings will be different. The client binding will use username auth at either the message or transport level, with transport security (SSL):

<bindings>
  <basicHttpBinding>
    <binding name="NewBinding0">
      <security mode="Message" />
    </binding>
  </basicHttpBinding>
</bindings>

The server config will then use the same configuration, but without the transport security. If you choose to use message security, check out WCF ClearUsernameBinding. If you use transport security (basic http), then set mode="TransportCredentialOnly".

Categories : Wcf

Using Laravel behind a load balancer
We use a load balancer where I work, and I ran into similar problems accessing cPanel dashboards: the page would just reload every time I tried accessing a section and log me off, because my IP address kept changing from cPanel's point of view. The solution was to find which port cPanel was using and configure the load balancer to bind that port to one WAN. Sorry, I am not familiar with Laravel, and if it is just using port 80 then this might not be a solution.

Categories : PHP

load balancer won't remove itself from dns
When I nslookup your domain I get app-lb-west-650828891.us-west-2.elb.amazonaws.com:

DNS server handling your query: localhost
DNS server's address: 127.0.0.1#53
Non-authoritative answer:
Name: codepen.io
Address: 54.245.121.59

It is possible that the DNS change just needed a little time to propagate.

Categories : Amazon

IIS Server Farm Health API?
There are plenty of performance counters at your disposal for monitoring IIS and related services remotely:

Web Service Counters for the WWW Service
Internet Information Services Global Counters
Performance Counters for ASP.NET

You can check these counter values through Windows's Performance Monitor, which provides a fair user interface, or use the following C# code to develop your own monitoring tool:

var category = "Web Service";
var counter = "Current Connections";
var instance = "_Total";
var server = "192.168.0.1";
var perf = new PerformanceCounter(category, counter, instance, server);
int connections = (int)perf.NextValue();

Categories : C#

Load Balancer between 5 network cards
There are several ways to do this, and they are all relatively easy. A VERY simple solution is to bind MSSQL to all 5 interfaces and give each interface a different network address; you can then configure some clients to point to one interface, others to the next, and so on. Depending on your network infrastructure you can also "bond" your network interfaces together so that they act like one interface to the OS. If all of the interfaces are plugged into a single switch then bonding is an option; if they are plugged into two different switches then your switches will have to support LACP or something similar. You can also look at using a load balancer in front of your SQL Server, though this could be problematic depending on your database, replication, sticky sessions, etc.

Categories : Sql Server

Configuring Apache Load Balancer
Make sure you follow the advice in the stickyness section:

ProxyPass / balancer://mycluster stickysession=JSESSIONID|jsessionid scolonpathdelim=On

(not only for the /test directory). Furthermore, for the JBoss application server, you need to supply route=web1 / route=web2 etc. in the Apache config, and jvmRoute="web1" in the JBoss configuration of the <Engine name="jboss.web" ...> element (the location depends on the JBoss version you are using; for v4.2 it is server/default/deploy/jboss-web.deployer/server.xml). See also this tutorial.

Categories : Apache

Can I use Amazon ELB instead of nginx as load balancer for my Node.Js app?
Yes, but there are a few gotchas to keep in mind: If you have a single server, ensure you don't return anything except 200 to the page that ELB uses to check health. We had a 301 from our non-www to www site, and that made ELB stop sending anything to our server. Also, you'll get the ELB's IP instead of the client's in your logs. There is an ngx_real_ip module, but it takes some config hacking to get it to work.

Categories : Node Js

Setting a trace id in nginx load balancer
In our production environment we do have a custom module like this. It generates a unique trace id and pushes it into an HTTP header sent to the upstream server. The upstream server checks whether that field is set, takes the value and writes it to its access_log; thus we can trace the request end to end. I also found a 3rd-party module that looks like it does the same: nginx-operationid. Hope it is helpful.

Categories : Nginx

AWS : S3FS AMI and load balancer high I/O Issue
I would like to recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an "s3fs" alternative; the main advantages compared to "s3fs" are simplicity, the speed of operations and bug-free code. Currently the project is in the "testing" state, but it has been running on several high-loaded fileservers for quite some time. We are seeking more people to join the project and help with the testing. From our side we offer quick bug fixes and will listen to your requests for new features. Regarding your issue: I'm not quite sure how S3FS works with cached files, but in our project we try to avoid performing additional I/O operations. Please give it a try and let me know how RioFS works for you!

Categories : Amazon

Qt 4.8.4 how to check if file exists on http server
The function QFile::exists is not able to create HTTP requests, which would be necessary to achieve what you are trying to do. The forum discussion you linked to works, because the guy is trying to access a network drive; this is naturally supported by the operating system. To check whether the file exists, you will have to go the long way around - here is an explanation of how to communicate with a web server: http://developer.nokia.com/Community/Wiki/Creating_an_HTTP_network_request_in_Qt

Categories : C++

Dot notation for reaching the proprer child of JSON in $http request of Angular is not working?
Remove:

var JSONData = JSON.stringify(data);

And just use "data". You are converting your javascript object back into a string. You want to use it as an object:

$scope.persons = data;

Categories : Http

how to check in python if file can be downloaded from http server
I added an extra clause to check the size of the file; this fails for the AF1 server, as it just says the file is present but doesn't give details of the file's attributes. I got this working with the changes below:

import base64
import urllib2

def check_file(url, uid, pwd):
    print 'checking ' + url
    request = urllib2.Request(url)
    # strip the trailing newline that encodestring appends
    base64string = base64.encodestring('%s:%s' % (uid, pwd)).replace('\n', '')
    request.add_header("Authorization", "Basic %s" % base64string)
    request.get_method = lambda: 'HEAD'
    try:
        connection = urllib2.urlopen(request)
        data = connection.info()
        connection.close()
        try:
            # raises IndexError if the server does not report Content-Length
            file_size = int(data.getheaders("Content-Length")[0])
            return 0
        except IndexError, e:
            return 1
    except urllib2.HTTPError, e:
        print e.getcode()
        return 1

Categories : Python

load balancing IBM Liberty profile with apache http server
Currently, you'll have to generate a plugin-cfg.xml from each Liberty server (the license has info about how many servers you can aggregate in this way for load balancing and failover) and merge the results so they appear as one cluster to the WAS Plugin. Other editions provide a merge tool, if you have access to them. The WAS Plugin installation has an XSD file for plugin-cfg.xml.

1) Note the http and https transports in both plugin configurations.
2) Make a copy of one of the XMLs to edit.
3) Find the <ServerCluster> element under <Config>:

<ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="cluster1" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60" ServerIOTimeoutRetr...

Categories : Apache

How to make restfull service truely Highly Available with Hardware load balancer
70-80 servers is a very horizontally scaled implementation... good job! "Better" is a very relative term; hopefully one of these suggestions counts as better. Implement an intelligent health check for the application, with the ability to adjust the health check while the application is running (see the sketch below). What we do is have the health check start failing while the application is still running just fine. This allows the load balancer to automatically take the system out of rotation. Our stop scripts query the load balancer to make sure the node is out of rotation and then shut down normally, which allows the existing connections to drain. Batch multiple groups of systems together. I am assuming that you have 70 servers to handle peak load, which means you should be able to restart several at a time. A st...
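Going back to the first suggestion, here is a rough sketch of the adjustable health check idea (framework-agnostic Python; the flag-file path and port are placeholders, not part of the original answer):

# Sketch: a health endpoint the load balancer polls. Touching /tmp/out_of_rotation
# makes it start failing while the app keeps serving, so the balancer drains the
# node before the stop script shuts it down.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

FLAG = "/tmp/out_of_rotation"   # placeholder path toggled by the stop script

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # 503 tells the balancer the node is unhealthy; 200 keeps it in rotation
            self.send_response(503 if os.path.exists(FLAG) else 200)
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("0.0.0.0", 8090), HealthHandler).serve_forever()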

Categories : Rest

nodejs HTTP server can't handle large response on high load
Right now all strings are first converted to Buffer instances. This can put a heavy load on the garbage collector to clean up after each request. Running your application with --prof and examining the v8.log file with tools/*-tick-processor may show that. There is work being done to correct this so strings are written out to memory directly and then cleaned up when the request is complete. It has been implemented for file system writes in f5e13ae, but not yet for other cases (much more difficult to implement than it sounds). Also, converting strings to Buffers is very costly, especially for utf8 strings (which are the default). Where you can, definitely pre-cache the string as a Buffer and use it. Here is an example script:

var http = require('http');
var str = 'a';
for (var i = 0; i <...

Categories : Node Js

Is puma the ONLY multi-threaded rails 4 http server?
No. In alphabetical order:

Net::HTTP::Server, despite the lack of advertising, supports multithreading.
Phusion Passenger has supported multithreading since v4 beta.
Rainbows! supports multiple concurrency models, including multithreading.
Reel is a Celluloid-backed "evented" server, which "also works great for multithreaded applications and provides traditional multithreaded blocking I/O support too".
Thin has a threaded mode, which can be enabled by passing --threaded or by setting threaded: true in the appropriate configuration file (e.g. bundle exec thin start --threaded).
WEBrick is on its own multithreaded, so it's not fair to eliminate it as an option; if you're using the Rails-embedded version, you'll need to monkey-patch Rails::Server to enable multi-threading.
Zbatery is based on Ra...

Categories : Ruby On Rails

Rails 3.1 load Modernizr before my application javascript
Oops, yes there was something missing. I had put the javascript_include_tag :modernizr code in my template, rather than the application layout (app/views/layouts/application), which defines the html head for all templates in the app.

Categories : Misc

How check if succesfull cmd ping in php
This should do it:

exec('ping -c 1 www.google.com', $output, $status); // -c 1: send one packet (use -n 1 on Windows)
if ($status === 0) {
    echo 'True';
} else {
    echo 'False';
}

Note that ping takes a hostname such as www.google.com, not a URL, so do not prepend http://. Depending on what you are actually trying to check, cURL may be the better tool; see the manual. Provide more detail if needed.

Categories : PHP

How to embed an Http server (like i-Jetty, Paw, etc) in android application
Here's one I have successfully used: NanoHTTPD (https://github.com/NanoHttpd/nanohttpd, http://en.wikipedia.org/wiki/NanoHTTPD). NanoHTTPD is an open-source, small-footprint web server that is suitable for embedding in applications, written in the Java programming language. The source code consists of a single .java file. And here is an Android sample project that uses it: https://gist.github.com/komamitsu/1893396. It's very small, simple and pure Java, yet fairly modifiable. There are others, but they are a bit more heavyweight. It depends what you want to do; I would recommend you start small and see if that suits your purposes.

Categories : Android

How to handle exception generated from Rack before reaching your rails app
If the Ruby runtime is booted, which in this case it seems it is, you should be able to configure a (minimal) Rack error application. Just set something Rack-y, e.g.

require 'my_error_app'; run MyErrorApp

in the jruby.rack.error.app context parameter (e.g. in your web.xml with Warbler).

Categories : Ruby On Rails

Rails server failing to load
Unless you're running JRuby, you'll have to remove PDF Ravager from your gemfile (at least according to the source code). This is the file trying to require java.

Categories : Ruby On Rails

spray.io http server inside Play2 application context
I can say that Spray is actually the fastest JVM-based toolkit for web development; you can check out the latest benchmarks on the official blog. As for the question: if you want to write your own implementation of a little HTTP server, you should check out the spray-can HTTP API; spray-io is just a layer between Akka IO and Java NIO. I'm not very good at Play, but I would suggest creating a multi-build sbt config, or a separate project, with the Spray HTTP server and connecting them through a REST API. The architecture would be pretty simple because it's based on Akka actors; the simplest case looks like a bunch of cases in the receive method:

def receive = {
  case HttpRequest(GET, Uri.Path("/ping"), _, _, _) =>
    sender ! HttpResponse(entity = "PONG")
}

On the Play side...

Categories : Scala


