Calculating used memory by a set of processes on Linux
You will want to iterate through each process's /proc/[pid]/smaps. It will contain an entry for each VM mapping along the lines of:

7ffffffe7000-7ffffffff000 rw-p 00000000 00:00 0 [stack]
Size:               100 kB
Rss:                 20 kB
Pss:                 20 kB
Shared_Clean:         0 kB
Shared_Dirty:         0 kB
Private_Clean:        0 kB
Private_Dirty:       20 kB
Referenced:          20 kB
Anonymous:           20 kB
AnonHugePages:        0 kB
Swap:                 0 kB
KernelPageSize:       4 kB
MMUPageSize:          4 kB

Private_Dirty memory is what you are interested in. If you have the Pss field in your smaps file, then this is the amount of resident memory divided by the number of processes that share the physical memory. Private_Clean cou
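As a rough illustration (my own sketch, not part of the original answer), the following sums the Private_Dirty fields of a single process's smaps; the pid is taken as a hypothetical command-line argument, and you would repeat this over each process in your set:

#include <stdio.h>

/* Sum the Private_Dirty fields (in kB) of /proc/<pid>/smaps.
   Hypothetical usage: ./sum_private_dirty <pid> */
int main(int argc, char *argv[])
{
    char path[64], line[256];
    long total_kb = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);

    FILE *fp = fopen(path, "r");
    if (!fp) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), fp)) {
        long kb;
        if (sscanf(line, "Private_Dirty: %ld kB", &kb) == 1)
            total_kb += kb;
    }
    fclose(fp);

    printf("Private_Dirty total: %ld kB\n", total_kb);
    return 0;
}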

Categories : C++

Set threshold number of processes in Linux
I'll answer the question that I think you are asking. One program can have multiple instances running; each is a separate process. I'm not aware of any instance count that Linux makes available, so I don't think there is any setting that will make Linux enforce a maximum number of instances for you. If user access to the program can be forced to go through a shell script or wrapper program, you have some options. 1) To just warn users who invoke the program once the number of instances has been reached or exceeded, you could use a shell script that does something like

InstanceCount=$(ps aux | grep -c '[T]heProgramFileName')

(the bracketed first letter keeps grep from matching its own process). The script would then compare that count to a maximum and warn the user. But that won't stop anyone. 2) If the goal is to prevent multiple instances, that

Categories : Linux

TCP/UDP high-performance server under linux
Your basic approach is right. Start by studying the infamous C10K problem and how it was overcome. Once you understand the major bottlenecks in various implementations, you need to consider the following as part of your design process:

Minimise thread creation/deletion cycles.
Always choose an "event-based" model over a blocking model.

Basically, the one-thread-per-request model is preferred for its simplicity of implementation, but it does NOT scale well with the number of concurrent requests. When designing systems required to support more than a few thousand concurrent requests, one prefers the use of sockets over threads. The optimum number of "worker threads" to instantiate depends upon:

Load (number of concurrent requests)
System (CPU, RAM)

Since this is a very popular probl

Categories : Linux

Linux C processes pids global visibility
Create a file with the parent process and write the pids of all children to it. Read this file from the children after some delay. You can write out N, the number of children, first and then N lines, each containing one pid.
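As a minimal sketch of that idea (assuming a hypothetical file name such as /tmp/pids.txt and a fixed one-second delay):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NCHILD  3
#define PIDFILE "/tmp/pids.txt"   /* hypothetical path */

int main(void)
{
    pid_t pids[NCHILD];

    for (int i = 0; i < NCHILD; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: give the parent time to write the file, then read it */
            sleep(1);
            FILE *fp = fopen(PIDFILE, "r");
            if (!fp)
                _exit(1);
            int n;
            fscanf(fp, "%d", &n);          /* first line: number of children */
            for (int j = 0; j < n; j++) {
                int p;
                fscanf(fp, "%d", &p);      /* then one pid per line */
                printf("child %d sees pid %d\n", (int)getpid(), p);
            }
            fclose(fp);
            _exit(0);
        }
        pids[i] = pid;
    }

    /* parent: write N first, then N lines with one pid each */
    FILE *fp = fopen(PIDFILE, "w");
    fprintf(fp, "%d\n", NCHILD);
    for (int i = 0; i < NCHILD; i++)
        fprintf(fp, "%d\n", (int)pids[i]);
    fclose(fp);

    for (int i = 0; i < NCHILD; i++)
        wait(NULL);
    return 0;
}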

Categories : C

Linux C communication between processes using signals and pipes
Based on the posted functions, I think there are two main problems in your code. 1. You close the file descriptors on each call of the signal handler. It's not a good idea to close the descriptors after every read or write; unidirectional pipes don't work that way. After creating a descriptor pair you should close the first descriptor in one process and the second descriptor in the other process. This creates a pipe from the first process to the second (man pipe). In the example from the man page they close the descriptor right after the write because they don't need it any more. 2. It seems you have only 3 pairs of file descriptors. Connecting 3 processes through unidirectional pipes requires 6 pairs of descriptors.
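As a rough sketch of point 1 (close the unused end once, right after fork, rather than after every read or write), assuming a simple parent-to-child pipe:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* child reads: close the write end once, right after fork */
        close(fd[1]);
        char buf[64];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, n);
        close(fd[0]);          /* close the read end only when finished */
        _exit(0);
    }

    /* parent writes: close the read end once, right after fork */
    close(fd[0]);
    const char *msg = "hello through the pipe\n";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);              /* closing the write end signals EOF to the child */
    wait(NULL);
    return 0;
}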

Categories : C

Linux free shows high memory usage but top does not
Don't look at the "Mem" line, look at the one below it. The Linux kernel consumes as much memory as it can to provide the I/O cache (and other non-critical buffers, but the cache is going to be most of this usage). This memory is relinquished to processes when they request it. The "-/+ buffers/cache" line shows you the adjusted values after the I/O cache is accounted for, that is, the amount of memory used by processes and the amount available to processes (in this case, 578MB used and 7411MB free). The difference in used memory between the "Mem" and "-/+ buffers/cache" lines shows you how much is in use by the kernel for the purposes of caching: 7734MB - 578MB = 7156MB in the I/O cache. If processes need this memory, the kernel will simply shrink the size of the I/O cache.

Categories : Linux

In Linux, the pdftoppm command is running two processes for a single file
These are not two pdftoppm processes. The following is the pdftoppm process:

root 25523 49.6 0.7 18192 12620 ? RN 14:13 0:59 pdftoppm -f 1 -l 1 /pdf/input.pdf /test/processing/output

The following is the process for the shell command:

root 25522 0.0 0.0 1844 500 ? SN 14:13 0:00 sh -c /bin/bash -c "pdftoppm -f 1 -l 1 /pdf/input.pdf test/processing/output"

The first line in your grep output is for the shell command that was executed, the second line is for the actual pdftoppm invocation, and the third line is for the grep itself. (Both your shell command and grep contained the string pdftoppm, so both appeared in the process list when it was queried.)

Categories : Linux

Linux, where are the return codes of system daemons and other processes stored?
When a process terminates, its parent process must acknowledge this using the wait or waitpid function. These functions also return the exit status. After the call to wait or waitpid, the process table entry is removed and the exit status is no longer stored anywhere in the operating system. You should check whether the software you use to start the process saves the exit status somewhere. If the parent process has not yet acknowledged that the child has terminated, you can read its exit status from the /proc file system: it is the last field in /proc/[pid]/stat. It is stored in the same format that wait returns it in, so you have to divide by 256 to get the exit code. You also probably have to be root.
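As an illustration (my own sketch, assuming, as stated above, that the wait()-style status is the last field of /proc/[pid]/stat; this may require root):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the exit code of a terminated-but-not-reaped (zombie) process.
   Hypothetical usage: ./zombie_status <pid> */
int main(int argc, char *argv[])
{
    char path[64], line[4096];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

    FILE *fp = fopen(path, "r");
    if (!fp || !fgets(line, sizeof(line), fp)) {
        perror(path);
        return 1;
    }
    fclose(fp);

    /* take the last whitespace-separated field of the stat line */
    char *last = strrchr(line, ' ');
    long status = strtol(last ? last + 1 : line, NULL, 10);

    /* same encoding as wait(): the exit code sits in the high byte */
    printf("raw status %ld, exit code %ld\n", status, status / 256);
    return 0;
}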

Categories : Linux

How to improve memory sharing between unicorn processes with Ruby 2.0 on Linux
According to this answer, which you may have already seen, there is a line that reads: "Note that a 'share-able' page is counted as a private mapping until it is actually shared. i.e. if there is only one process currently using libfoo, that library's text section will appear in the process's private mappings. It will be accounted in the shared mappings (and removed from the private ones) only if/when another process starts using that library." What I would do to test whether you're getting the benefits outlined in this article is put a 10MB xml file as a literal string directly into your source code. Then, if you fire up 20 workers, you'll be able to see whether you're using 200MB of memory or only 10MB, as is expected with the new garbage collection feature. UPDATE: I was lo

Categories : Ruby On Rails

User processes in D-state leads to a watchdog reset using Linux 2.6.24 and arm processor
D-state means the processes are stuck in the kernel in a TASK_UNINTERRUPTIBLE sleep. This is unlikely to be a bug in the Squashfs error handling code, because if a process exited Squashfs holding a mutex, the system would quickly grind to a halt as other processes entered Squashfs and slept forever waiting for the mutex. You would also see a low load average/system time, as most processes would be sleeping. Furthermore, there is no evidence Squashfs has hit any I/O errors. The load average (2.77) and system time (75.5%) are extremely high; coupled with the fact that a lot of processes are in Squashfs_readpage (which is completing, but slowly), this indicates the system is thrashing. There is too little memory and the system is spending all its time constantly (re-)demand paging pages from disk. This will

Categories : Caching

High-load java server
I do not think that Idea 3 would be over-engineering, and I'd take that road. Building a @MessageDriven bean "adapter" that handles incoming socket connections in the onMessage method is easy to develop, and from there you're in the well-scaling EE world. In your case, you might even want to rely on UDP. See the following example: http://www.apprigger.com/2011/06/javaee-udp-resource-adapter-example/ But from my point of view there are other important reasons to go this way. Some pointers: 1.) As you've already mentioned, building your own socket server that handles threads and requests is a lot of work, and at the end you might have built your own little "application server". 2.) Don't be afraid of using an application server. Of course people tend to call such a platform "overhead". Though when I did

Categories : Java

High CPU load on ORMLite query
The resulting data is about 9kB in total. Is there anything wrong with the code? I don't immediately see any problems with your code. I'd make sure that userId is in an index if you have a large table, but I suspect that isn't the issue.

@DatabaseField(index = true)
private String userId;

You might also want to add an index to the foreign UserDTO field in the event table if that table is large, but again I don't think the performance problem is on the Sqlite end unless your databases are large. One query optimization that you could try is to use another WHERE constraint when getting the events instead of the join:

QueryBuilder<EventViewDTO, Long> eventQb = eventDao.queryBuilder();
// the "userDto" needs to be changed to whatever the foreign field name is
data.eventViews =

Categories : Android

Java High-load NIO TCP server
Your logic around writing is faulty. You should attempt the write immediately, as soon as you have data to write. If the write() returns zero, it is then time to register for OP_WRITE, retry the write when the channel becomes writable, and deregister for OP_WRITE when the write has succeeded. You're adding a massive amount of latency here, and you're adding even more latency by deregistering for OP_READ while you're doing all that.

Categories : Java

AWS : S3FS AMI and load balancer high I/O Issue
I would like to recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an "s3fs" alternative; the main advantages compared to "s3fs" are simplicity, the speed of operations and bug-free code. Currently the project is in the "testing" state, but it's been running on several high-loaded fileservers for quite some time. We are seeking more people to join our project and help with the testing. From our side we offer quick bug fixes and will listen to your requests to add new features. Regarding your issue: I'm not quite sure how S3FS works with cached files, but in our project we try to avoid performing additional I/O operations. Please give it a try and let me know how RioFS works for you!

Categories : Amazon

AWS EC2 Instances with Load Balancing during very high traffic
Reserved instances are there to save money on instances you regularly run, so I would suggest using a reserved instance for your 'master'. The only advantage of keeping that one on-demand is that you could scale up or down as soon as your constant flow of traffic changes. Make sure you choose the right utilization level for your reserved instance; an 'always-on' reserved instance should have a heavy-use reserved purchase. Those 'peak instances' do best as on-demand instances.

Categories : Amazon

Separation of concerns in Node.js app and dealing with load across different processes
It seems that you've decomposed the system correctly and have created that separation at the persistence "service" layer, but I'd take this separation a bit further by moving toward a distributed system architecture (i.e. SOA / micro-services). The initial step of building a distributed system is identifying each of the functions necessary to meet the overall business goal of the application and mapping these to service endpoints. Each loosely coupled service endpoint will then serve a small, isolated job/function and act as an abstraction for that business goal. By continuing the separation of responsibilities all the way to the service endpoint you create small independent boundaries for scalability, throughput, fault tolerance, security, deployment, etc. For example, RESTf

Categories : Node Js

Unique index on PostgreSQL text column - can it cause high CPU load?
In my personal view: yes, the unique index on a text column hurts performance, especially when many INSERTs/UPDATEs happen on the table. If your query does not need the unique index, I suggest you drop it.

Categories : Database

show only one series' data in Highcharts at the start of drawing/load
I think I found the solution: set visible: false in the series. Updated my jsfiddle:

series: [{
    name: 'MyHiddenLine',
    data: [1, 2, 3],
    visible: false
},

Categories : Javascript

syntax error near unexpected token `fi' - High load Notification mail
I don't really get the same error as you, but you can already test this solution:

if [ `uptime | awk '{ print $11 }' | cut -d. -f1` -gt 1 ];

This condition tries to compare a float with an integer. I would make this test with an extended test command. You can also drop the semicolon if you don't put the then keyword on the same line as your condition:

if [[ `uptime | awk '{ print $11 }' | cut -d. -f1` > 1 ]]
then
    mail -s "$SUBJECT" $TO < /tmp/load
    exit
fi

Categories : Bash

nodejs HTTP server can't handle large response on high load
Right now all strings are first converted to Buffer instances. This can put a heavy load on the garbage collector to clean up after each request. Run your application with --prof and examine the v8.log file with tools/*-tick-processor and you may see that. There is work being done to correct this so strings are written out to memory directly and then cleaned up when the request is complete. It has been implemented for file system writes in f5e13ae, but not yet for other cases (much more difficult to implement than it sounds). Also, converting strings to Buffers is very costly, especially for utf8 strings (which are the default). Where you can, definitely pre-cache the string as a Buffer and use it. Here is an example script:

var http = require('http');
var str = 'a';
for (var i = 0; i <

Categories : Node Js

Adding rails apps to nginx avoiding high load time on 1st access
Unicorn sounds like it might be a better fit for your deployment scenario. You can keep nginx up front, but instead of loading Rails itself, it will just connect to a Unicorn Unix socket. Further, you can reload your application with new code gracefully, while nginx stays up and Unicorn swaps out the backend quietly.

Categories : Ruby On Rails

Azure Cloud Computing high availability vs NEO4J high availability?
The short answer is probably yes. Windows Azure provides you infrastructure that allows you to build a high availability system; it won't make any system highly available by magic. As NEO4J is stateful, each node (with only one node Azure doesn't give you any SLA; your instance will be down) will need to share some state, and the way to do that depends on how NEO4J works, so you will need to rely on NEO4J's own mechanisms. I don't know how NEO4J works internally, but you won't be able to skip designing a highly available architecture around NEO4J using the Windows Azure infrastructure. Cloud may be a magic buzzword that makes things happen at the management level, but down at the hard real-world level Harry Potter's magic wand doesn't exist.

Categories : Neo4j

quicksand sorting price values low to high / high to low
val() gives you a string, so > and < comparisons are lexicographical (not numeric). Try wrapping the values in parseInt() or parseFloat(). Make sure to add appropriate error handling as well.

Categories : Jquery

Forking two processes results in multiple processes
The problem with your code is that after the child processes have slept, they return from fork_off and repeat everything the parent is doing.

void fork_off(int *proc_t, int *proc_i)
{
    int f = fork();
    if (f == 0) {
        fprintf(stderr, "Proc %d started ", getpid());
        usleep(5000000);
        exit(0);  /* exit() closes the entire child process
                     instead of simply returning from the function */
    } else if (f > 0) {
        /* Make sure there isn't an error being returned.
           Even though I've never seen it happen with fork(2),
           it's a good habit to get into */
        proc_t[*proc_i] = f;
        (*proc_i)++;
    } else {
        /* Adding to the aforementioned point, consi

Categories : C

In linux, all kernel processes share the same kernel stack, each user process has its own stack, correct?
Incorrect. There's one kernel address space, and no kernel processes. There are kernel threads, and there are user space threads that enter the kernel. These run in the kernel address space. Each of these has a separate stack, within the kernel address space.

Categories : Linux

Varnish High DB Connections In High Traffic
Mostly, spiking backend connections has little to do with your Varnish configuration but a lot to do with the cacheability of your site. Are there cookies that prevent you from caching efficiently? You can choose to strip them, or to remove all but chosen ones; there are examples of both on the Varnish site. Do a varnishstat and check your hit rates during peaks. Is it a good cache hit ratio? Is it the same as during low load? If it's the same or higher at low load, it's easy to work on improving it at any time. Do a varnishtop -i txurl to see which requests are most frequently sent to the backend servers. Maybe some URLs are just not cached due to faulty headers? Maybe some pages can be cached longer? Maybe some parts of the pages can be cached with ESI? Make sure your varnish is not

Categories : Mysql

load xml to mysql under debian linux
The XML file you are trying to import is not formatted using a schema that MySQL knows how to import. You will need to convert it yourself. The formats that MySQL will recognize are defined here: http://dev.mysql.com/doc/refman/5.5/en/load-xml.html

Categories : Mysql

Dynamically load class in Linux Java
Why not just put the connector into a lib sub-directory and then pass every jar it contains on the Java class path? Sample folder tree:

MyApp
  bin
    launcher.sh
    MyApp.jar
  lib
    myLib.jar

Here is the launcher.sh script:

#!/bin/sh
# Set basedir
LAUNCHER_DIR=$(cd $(dirname $0); pwd)
# Set Java Class-Path
CLASSPATH="$LAUNCHER_DIR/bin/MyApp.jar"$(find "$LAUNCHER_DIR" -name '*.jar' -printf ":%p")
# Launch application
java -cp "$CLASSPATH" com.company.MyApp $*

EDIT: It is not recommended to use File.toURL directly, as described in the documentation; you must do File.toURI().toURL().

Categories : Java

When to use low < high or low + 1 < high for loop invariant
If your invariant is that the target must lie in low <= i <= high, then you use while (low < high); if your invariant is that the target must lie in low <= i < high then you use while (low + 1 < high). [Thanks to David Eisenstat for confirming this.]
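As a concrete illustration of the first form (my own sketch, not part of the original answer): a lower-bound binary search whose invariant is that the answer index i always satisfies low <= i <= high keeps looping while (low < high):

#include <stdio.h>

/* Return the first index whose element is >= target (n if none).
   Invariant: the answer lies in [low, high], so the loop runs while low < high
   and terminates when the range has shrunk to the single index low == high. */
static int lower_bound(const int *a, int n, int target)
{
    int low = 0, high = n;
    while (low < high) {
        int mid = low + (high - low) / 2;
        if (a[mid] < target)
            low = mid + 1;   /* answer is strictly to the right of mid */
        else
            high = mid;      /* a[mid] >= target, so mid is still a candidate */
    }
    return low;
}

int main(void)
{
    int a[] = { 1, 3, 3, 5, 8 };
    printf("%d\n", lower_bound(a, 5, 3));   /* prints 1 */
    printf("%d\n", lower_bound(a, 5, 9));   /* prints 5 (not found) */
    return 0;
}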

Categories : Algorithm

MonoGame on linux failing to load effect assets
This is a very common problem when trying to load shaders into MonoGame. I tried, and failed, to load my custom shader into the MonoGame framework. You need to compile from the develop3d branch, not the official release. You also need to convert your HLSL shader into a MojoShader-compatible syntax. Then you need to either load the effect from the MonoGame content importer (which needs to be configured manually), or add your shader as an embedded resource and load it into your project in order to use it. I have never been able to actually pull this off myself. From my readings online, this particular part of the MonoGame framework is not quite ready for prime time yet. Here is some information on it. They really didn't provide much information on this as I suspect they k

Categories : C#

Linux service can't load library path in the /etc/ld.so.conf.d
Did you run ldconfig (as root) lately? There's a shared library cache that's updated by that program, and if you updated a file in /etc/ld.so.conf.d without running ldconfig, the cache data could be out of date.

Categories : Linux

Get linux executable load address (__builtin_return_address and addr2line)
You can use dl_iterate_phdr() on Linux to determine the load address of each dynamically loaded object:

#define _GNU_SOURCE
#include <stdio.h>
#include <link.h>

int callback(struct dl_phdr_info *info, size_t size, void *data)
{
    printf("%s @ %#lx\n", info->dlpi_name, (unsigned long)info->dlpi_addr);
    return 0;
}

int main()
{
    dl_iterate_phdr(&callback, NULL);
    return 0;
}

Categories : C

"Failed to load platform plugin "xcb" " while launching qt5 app on linux without qt installed
Since version 5, Qt uses a platform abstraction system (QPA) to abstract from the underlying platform. The implementation for each platform is provided by plugins. For X11 it is the XCB plugin. See Qt for X11 requirements for more information about the dependencies.

Categories : Linux

Why would a Java/Maven project created on Windows, not run on Linux - Failed to load Main-Class manifest attribute
This could happen because of encoding issues. Many IDEs use a platform-dependent encoding for their files unless you specify otherwise. It is a wise idea to use UTF-8 as the encoding to make sure all the files are encoded in a platform-independent fashion. How this is done depends on the IDE. It is also wise to specify the encoding in Maven, as described in the official FAQ:

<project>
  ...
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  ...
</project>

Categories : Java

Exec(Linux). How it functions internally? Linux executable attributes(rlimit)
RLIMIT_CORE is used to place a limit on the amount of info that a core dump is allowed to produce before it is aborted. Once this limit is hit, no more info is logged and the message Aborting Core is logged to the console. From the man page of core: A process can set its soft RLIMIT_CORE resource limit to place an upper limit on the size of the core dump file that will be produced if it receives a "core dump" signal. Use setrlimit() to configure RLIMIT_CORE to a larger value to obtain complete core dumps. The most common format of executables/shared objects is ELF. On Linux, the dynamic loading and linking of these shared objects is performed by ld.so. ld.so is loaded into the address space of a newly created process (by exec in this case) and executed first. This is possible as
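As a brief sketch of the setrlimit() suggestion (assuming you want the soft limit raised to whatever the hard limit allows):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* raise the soft core-dump size limit up to the current hard limit */
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;   /* RLIM_INFINITY if the hard limit is unrestricted */
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit");
        else
            printf("RLIMIT_CORE soft limit raised to the hard limit\n");
    } else {
        perror("getrlimit");
    }
    return 0;
}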

Categories : Android

Linux users', specifically Apache, permissions settings, [Linux noob :]
Usually any daemon will need to access a number of resources. It is therefore good practice to run each daemon under a dedicated user:group rather than nobody:nogroup. Traditionally (e.g. on Debian systems) Apache runs as www-data:www-data. Finally, user permissions take precedence over group permissions (which in turn take precedence over other permissions). This means that a directory where the user does not have write permission but the user's group can write is effectively read-only for that user (but not for other members of the group).

Categories : Linux

Purpose of Curly Brace Usage of C Code found in Linux (include/linux/list.h)?
This is a GNU language extension known as a statement expression; it's not standard C.
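For illustration (a minimal example of my own, not taken from list.h), a statement expression is a ({ ... }) block whose value is that of its last statement; it compiles with gcc but not with strict standard C:

#include <stdio.h>

int main(void)
{
    /* GNU extension: the ({ ... }) block is an expression,
       and its value is the value of its last statement */
    int x = ({
        int a = 2;
        int b = 3;
        a * b;
    });

    printf("%d\n", x);   /* prints 6 */
    return 0;
}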

Categories : C

Write C++ on Windows but use Linux System calls through a Linux emulator
You've already tagged your question with Cygwin. That seems like the best solution for what you want. Cygwin is basically a collection of programs which emulate a GNU/Linux environment through the use of a DLL (cygwin1.dll) which acts as a Linux API layer providing substantial Linux API functionality. Here's the link to the documentation for its API. Edit: Most of the Cygwin source code that I've looked at is written in C++ and makes system calls using the MS Windows API to provide the *nix emulation. The source is well written and very readable (even to a non-C++ programmer such as myself). I think using Cygwin would be a good transition from programming on Windows to a GNU/Linux environment.

Categories : C++

What's a good way to set up Closure Compiler on Linux? Or, where should Java .jar's live on a Linux?
If you're using the java command directly, then you'll have to provide a path to the jar in question. It's probably easier to place the jar in one place and create a shell script that handles the invocation and the jar path.

Categories : Java

How to turn a Linux application in C/C++ into a Desktop Environment for a Linux distro?
You either want to create a boot loader, or you want to replace the 'shell'. This would be governed by per-user or global xinit and Xsession files.

Categories : C


