.NET application memory usage - high unused .NET and unmanaged memory and fragmentation
As Alex already pointed out, a very good explanation of this problem class, large object heap fragmentation, can be found here: https://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/ The problem is well known to the .NET FX dev team and is continuously being worked on, so there is a good chance the symptoms fade with more recent framework releases. Starting with .NET 4.5.1 there is even a GC setting to compact the LOH: http://blogs.msdn.com/b/mariohewardt/archive/2013/06/26/no-more-memory-fragmentation-on-the-large-object-heap.aspx However, finding the root cause of the LOH fragmentation is far more efficient than simply compacting it away and paying for that in long garbage collection pauses. Let me know if you need further details on how to isolate such effects. Seb
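A minimal sketch of the .NET 4.5.1+ LOH compaction mentioned above (the property and enum are the real System.Runtime APIs; the wrapper method is only illustrative):

using System;
using System.Runtime;

static class LohCompaction
{
    // Request a one-time compaction of the large object heap on the
    // next blocking gen-2 collection (available from .NET 4.5.1).
    static void CompactLargeObjectHeapOnce()
    {
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();   // expensive: forces a full, blocking collection
    }
}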

Categories : Dotnet

.NET - high memory usage by clr.dll and native heaps
Use !dumpheap -stat at each stage; that should show you which type grows drastically from stage to stage. On instances of those types, use !gcroot <address> to find out which object is holding them and preventing them from being garbage collected.
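A sketch of that workflow in WinDbg with the SOS extension loaded (the type name and address are placeholders):

.loadby sos clr                    $$ load SOS for the .NET 4+ CLR
!dumpheap -stat                    $$ per-type counts and sizes - repeat at each stage and diff
!dumpheap -type MyApp.CachedItem   $$ list instances of the type that keeps growing
!gcroot 0000000012345678           $$ show the root chain keeping one instance alive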

Categories : Dotnet

Linux free shows high memory usage but top does not
Don't look at the "Mem" line, look at the one below it. The Linux kernel consumes as much memory as it can to provide the I/O cache (and other non-critical buffers, but the cache is going to be most of this usage). This memory is relinquished to processes when they request it. The "-/+ buffers/cache" line is showing you the adjusted values after the I/O cache is accounted for, that is, the amount of memory used by processes and the amount available to processes (in this case, 578MB used and 7411MB free). The difference of used memory between the "Mem" and "-/+ buffers/cache" line shows you how much is in use by the kernel for the purposes of caching: 7734MB - 578MB = 7156MB in the I/O cache. If processes need this memory, the kernel will simply shrink the size of the I/O cache.
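For illustration, this is roughly what the free -m output being described looks like; the numbers are reconstructed to match the figures quoted above:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          7989       7734        255          0        321       6835
-/+ buffers/cache:        578       7411
Swap:         2047          0       2047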

Categories : Linux

Why doesn't this item get removed from the cache when memory usage is high?
"I would expect the CacheItemRemoveCallback to be fired" Why do you expect that? I'd expect you to get an OutOfMemoryException fairly quickly on a 32-bit machine (*) - after a minute or so, by which time you'll have roughly 1.2GB in your list. (*) unless the OS is started with the /3GB switch, in which case it will behave similarly to a 32-bit process on a 64-bit machine. On a 64-bit machine, your request will time out after the default of 90 seconds, by which time it will have added 90 * 200 = 18,000 items = approx 1.8GB to your static list. A 64-bit process will handle this, and a 32-bit process would probably be able to do so on a 64-bit machine if it is LARGEADDRESSAWARE, which is definitely the case for IIS; not sure about Cassini. Also, IIS would probably recycle your application.

Categories : Asp Net

Excessively high memory usage in .NET MVC/Entity Framework application
We automatically assume that the problem is EF. It may or may not be - there are a lot of places to look, not just the data access infrastructure. On the data access side, since you are using EF, you can get a quick win with the simple .AsNoTracking() method for read-only queries. Adopt a service locator to help you manage your pool of contexts. You can also use Dapper instead of EF in read-only situations and, last but not least, use plain ADO.NET for the most complex queries where execution speed matters most. Refactoring your ActionFilters so you avoid a "BaseController" that all controllers inherit from is good practice as well. Check that your IDisposable classes are really being disposed (adopt the .Dispose(bool) pattern), and be sure you are not keeping cached variables alive longer than necessary.
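A small illustration of the .AsNoTracking() suggestion (MyDbContext, Order and the method are hypothetical names):

using System;
using System.Collections.Generic;
using System.Data.Entity;   // EF extension method AsNoTracking()
using System.Linq;

static List<Order> LoadRecentOrders(MyDbContext db, DateTime cutoff)
{
    // Read-only query: entities are never attached to the change tracker,
    // so the context keeps far less state alive per request.
    return db.Orders
             .AsNoTracking()
             .Where(o => o.CreatedAt >= cutoff)
             .ToList();
}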

Categories : Asp Net Mvc

Phonegap app uses high RAM because filereader() and change loaded pics to white instead because of memory usage
1000 images of those dimensions is a significant amount of data so will take a significant amount of RAM. Do you really need all 1000 to be in memory at the same time? Without knowing the user interface layout and use case requirements for your app I'm just speculating, but could you not, for example, load each image on demand asynchronously as it needs to be displayed? Or if the delay in reading the image from the file system creates an unacceptable delay in displaying it, you could pre-load just some of the images, for example if they are in a sequence, then just have a couple in memory either side of the currently displayed image.

Categories : Cordova

Python OpenCV extremely high CPU usage after 10 second runtime
You should run a profile of your code with cProfile and see what's chewing up your resources. The official docs on profiling are here: http://docs.python.org/2/library/profile.html
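Two common ways to invoke cProfile, assuming the script is called tracker.py and has a main() function (both names are illustrative):

# From the command line, sorted by cumulative time:
#   python -m cProfile -s cumulative tracker.py

# Or from inside the program:
import cProfile

cProfile.run('main()', sort='cumulative')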

Categories : Python

How can I monitor the memory usage of a shell script at runtime?
You can use the ps command and grep for your script or for the commands it runs:

ps aux | grep <scriptName>
ps aux | grep "curl -s example"
ps aux | grep "egrep -s 'sth'"
ps aux | grep "sleep 60"

The 3rd and 4th columns of the output are %CPU and %MEM (you can extract them with awk).
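For example, to pull just the %CPU and %MEM columns for a hypothetical monitor.sh while it runs (the square brackets keep grep from matching its own process):

ps aux | grep "[m]onitor.sh" | awk '{print $3 "% CPU  " $4 "% MEM"}'

# or sample it once a second:
while true; do
    ps aux | grep "[m]onitor.sh" | awk '{print $3, $4}'
    sleep 1
done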

Categories : Shell

Memory usage keep growing with Python's multiprocessing.pool
Use map_async instead of apply_async to avoid excessive memory usage. For your first example, change the following two lines:

for index in range(0, 100000):
    pool.apply_async(worker, callback=dummy_func)

to

pool.map_async(worker, range(100000), callback=dummy_func)

It will finish in the blink of an eye, before you can even see its memory usage in top; change the list to a bigger one to see the difference. Note, however, that map_async will first convert the iterable you pass to it into a list in order to calculate its length if it doesn't have a __len__ method. If you have an iterator over a huge number of elements, you can use itertools.islice to process them in smaller chunks. I had a memory problem in a real-life program with much more data and finally found the culprit was apply_async.
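A sketch of the itertools.islice chunking idea for an iterator too large to turn into a list (the worker and chunk size are illustrative):

import itertools
from multiprocessing import Pool

def worker(n):
    return n * n

if __name__ == '__main__':
    pool = Pool(4)
    huge_iter = iter(xrange(10 ** 8))          # pretend this cannot fit in a list
    while True:
        chunk = list(itertools.islice(huge_iter, 10000))
        if not chunk:
            break
        pool.map(worker, chunk)                # blocks per chunk, so only one chunk is in memory
    pool.close()
    pool.join()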

Categories : Python

What slows down Java if CPU usage low, hdd usage low, memory usage low?
You can test what the bottleneck is by taking different parts out of the equation. To take out the CPU, just read the files and don't do anything else. To take out the disk, keep reading the same file again and again, assuming it fits in your file cache. Compare each of these to the time for doing everything.

Categories : Java

Reduce RAM usage in Python script
What do you want the booklist for in the end? You should export each book at the end of the "for url in range" block (inside it) and do without the allbooks dict. If you really need a list, decide exactly which fields you will need instead of keeping full Book objects.

Categories : Python

Wordpress High CPU Usage
First of all, install the "WP Overview (lite) Footer Memory Usage" plugin on your server so you can check memory usage. Also install "W3 Total Cache" and cache your files and images on the server; this plugin can increase the speed of your site.

Categories : PHP

mongodb high cpu usage
Here's a summary of a few things to look into:

1. Observed a large number of connections and cursors (13k): make sure your connection pool size is appropriate. For reporting, at your current request rate, you only need a few connections at most. Also, I'm guessing you have an m1.small instance, which means you only have 1 core.

2. Review queries and indexes: run your queries with explain() to observe how they are executed. The right model normally results in queries that pull only a few documents and use an index.

3. Memory (compact and readahead setting): make the best use of memory; 1.6GB is low. Check how much free memory you have and compare it to what is reported as resident. Common causes of low resident memory are fragmentation and a readahead setting that is too high.
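For the second point, a hedged example of what that looks like in the mongo shell of that era (collection, field, and index names are made up):

// Run the query with explain() and look at the cursor type and nscanned:
db.reports.find({ accountId: 1234, day: "2013-05-01" }).explain()

// A healthy plan uses an index ("BtreeCursor ...") and scans few documents;
// "BasicCursor" with a large nscanned means a full collection scan.
// If needed, add an index covering the query:
db.reports.ensureIndex({ accountId: 1, day: 1 })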

Categories : Ruby On Rails

High CPU usage with SDL + OpenGL
Loops use as much computing power as they can get. The main problem is probably located here:

int delay = 1000 / 60 - (SDL_GetTicks() - now);

Your delay duration may be less than zero, in which case the loop never waits and just spins. You need to check the value of delay before sleeping. Moreover, in this link it is suggested that

SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);

can be used to enable vsync, so that the loop will not use all the CPU.
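A minimal sketch of clamping the delay, using the variables from the snippet above:

Uint32 now = SDL_GetTicks();

// ... render the frame ...

int delay = 1000 / 60 - (int)(SDL_GetTicks() - now);
if (delay > 0)
    SDL_Delay(delay);   // sleep only when we are ahead of the 60 FPS budget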

Categories : C++

High CPU Usage by Tomcat
There is not sufficient information / evidence to explain what is going on. This could be a direct result of having an excessive number of request threads, or it could be an underlying problem in your webapp that is exacerbated by the number of threads. The only (possible) clue I can pull out of this is that (maybe) the high TakeQueue value means something is doing a lot of internal request forwarding. I suggest: Reduce the number of threads by a factor of 10 or more to see if that makes any difference; having a huge number of threads active at the same time is bad for system performance. Use VisualVM to try to work out what the worker threads are doing. See if you can spot errors or unusual behaviour in the Tomcat logs and the request logs (turn the logging up if necessary).

Categories : Java

CPU usage goes high when read
If you're seeing memory pressure during reads, you're probably reading too many rows at once. Tracing the request can give more visibility into what's going on: http://www.datastax.com/dev/blog/tracing-in-cassandra-1-2

Categories : Cassandra

Memcached slow gets, high CPU usage
Well, I've found the problem! To get an idea of the requests per second I used the memcache.php file that's available out there. It told me that there were 350 requests per second. The thing is that there has been quite an increase in usage over the past few days, and that requests/second figure is really just an average over the entire uptime, calculated as (hits + misses) / uptime. After restarting memcached this average returns more accurate values, and there are actually 4000 requests per second. tl;dr: Wrong stats in first post. Correct stats: 4000 requests/second. I suppose my hardware simply can't cope with that.

Categories : PHP

Image viewer and high ram usage
Here is my take:

<Window x:Class="LargeJpeg.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Image x:Name="Image" Stretch="None"/>
</Window>

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        var bitmap = new BitmapImage();
        bitmap.BeginInit();
        bitmap.CacheOption = BitmapCacheOption.None;
        bitmap.UriSource = new Uri(@"C:\5x5.jpg", UriKind.Absolute);
        bitmap.DecodePixelWidth = (int)Image.ActualWidth;
        bitmap.EndInit();
        bitmap.Freeze();

        Image.Source = bitmap;
    }
}

Average memory usage: about 130 MB.

Categories : C#

High CPU usage when reading from console
I assume you have some kind of code that checks the status of readLine(); otherwise Java will continue to block.

String line = null;
while ((line = br.readLine()) != null) {
    // handle contents of line here
}

You may be better off using the Scanner class to read user input.

Scanner sc = new Scanner(System.in);
int num = sc.nextInt();
....

Categories : Java

Can High CPU usage be avoided in FileServer?
For TCP sockets function receive_data may not work correctly. The fact that it allocates a new local buffer suggests that this buffer gets destroyed when the function returns. This implies that receive_data cannot handle incomplete messages. A correct approach is to allocate a buffer for each socket once. Read from the socket into that buffer and then process and discard complete messages in the front of the buffer. Once all complete messages have been consumed, move the tail of the buffer that contains an incomplete message to the front and next time the socket is ready for reading append new bytes to the end of the incomplete message until it gets complete.
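A rough sketch of the one-buffer-per-socket approach, assuming newline-delimited messages (the framing, the handle_message callback, and all names are made up for illustration):

#include <map>
#include <string>
#include <sys/socket.h>

// Hypothetical application callback that processes one complete message.
void handle_message(int fd, const std::string& message);

// One receive buffer per socket, kept alive across reads.
std::map<int, std::string> recv_buffers;

// Called whenever poll()/select() reports the socket as readable.
void on_readable(int fd)
{
    char chunk[4096];
    ssize_t n = recv(fd, chunk, sizeof(chunk), 0);
    if (n <= 0)
        return;                        // connection closed or error: handle elsewhere

    std::string& buf = recv_buffers[fd];
    buf.append(chunk, n);              // new bytes go after any partial message

    // Consume complete messages from the front; keep the incomplete tail.
    std::string::size_type pos;
    while ((pos = buf.find('\n')) != std::string::npos)
    {
        std::string message = buf.substr(0, pos);
        buf.erase(0, pos + 1);
        handle_message(fd, message);
    }
}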

Categories : C++

Named pipes in service causes high CPU usage
Unless the service sleeps or awaits at some point, this is perfectly normal behavior. A while (true) {} loop will, by default, use 100% of the processing power of the core it runs on, and 25% sounds a lot like 1 of the 4 hardware threads available on your computer. You actually want to use 100% of the CPU when there is real work to do - why else pay for a faster computer? - but a loop that merely polls is not real work.
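If the loop is only waiting for pipe clients, a blocking wait removes the spinning entirely; a minimal sketch with System.IO.Pipes (the pipe name and HandleClient are hypothetical):

using System.IO.Pipes;

while (true)
{
    using (var server = new NamedPipeServerStream("my-service-pipe"))
    {
        // Blocks without burning CPU until a client connects.
        server.WaitForConnection();
        HandleClient(server);   // hypothetical request handler
    }
}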

Categories : C#

CPU usage is high when using opengl control class?
This needs more investigation, but you may have problems with your main loop. This is probably not an OpenGL problem but one of how the WinAPI is used: as you add textures, models and shaders, your CPU usage should stay similar. You use SetTimer(1, 1, 0), which means a 1 millisecond delay, as I understand it. Can you change it to around 33 milliseconds (roughly 30 FPS)? That way you will not kill the message pump of your MFC app. Note that this timer is very imprecise. Links: a basic MFC + OpenGL message loop using OnIdle(): http://archive.gamedev.net/archive/reference/articles/article2204.html A great tutorial about MFC + OpenGL + threading by @songho. http://gamedev.stackexchange.com/questions/8623/a-good-way-to-build-a-game-loop-in-opengl - a discussion of the main loop in GLUT.

Categories : Visual Studio 2010

Android game, battery usage is very high
Charging from a USB port in your PC is going to be very slow (the USB is also transferring data, not just power). I would think it is okay for a game to use more charge than it is receiving when connected to a PC USB port.

Categories : Android

Laravel Artisan Queues - high cpu usage
I had the same issue, but I found another solution. I used the artisan worker as-is but modified the 'watch' time. By default (in Laravel) this time is hardcoded to zero; I changed it to 600 (seconds). See the file vendor/laravel/framework/src/Illuminate/Queue/BeanstalkdQueue.php, in the function public function pop($queue = null). Now the worker listens to the queue for up to 10 minutes: when it does not get a job, it exits and supervisor restarts it; when it receives a job, it executes it, then exits, and supervisor restarts it. ==> No more polling! Notes: this does not work for iron.io queues or others, and it might not work when you want one worker to accept jobs from more than one queue.

Categories : Laravel

Python multiprocessing in function - memory usage increases with every function call
Try adding these lines to the end of calc():

for w in consumers:
    w.join()

Calling join() on your joinable queue blocks until everything on the queue has been consumed, but it doesn't guarantee that the subprocesses have been garbage collected. I suspect some objects in your subprocesses are lingering in memory because the processes haven't been joined.

Categories : Python

What is proper way to write a chrome extension with high I/O usage
Unfortunately, you cannot raise the quota. However, you can keep your settings in a global variable or in chrome.storage.local and only periodically write them to chrome.storage.sync.
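A small sketch of that pattern (the key name and flush interval are arbitrary):

// Write every change to the unthrottled local area immediately...
function saveLocal(settings) {
  chrome.storage.local.set({ settings: settings });
}

// ...and copy it to the sync area only once a minute, staying well under
// the MAX_WRITE_OPERATIONS_PER_HOUR quota of chrome.storage.sync.
setInterval(function () {
  chrome.storage.local.get('settings', function (items) {
    if (items.settings) {
      chrome.storage.sync.set({ settings: items.settings });
    }
  });
}, 60 * 1000);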

Categories : Javascript

MSMQ System.Messaging high resource usage
Why not host your queue reader process in a Windows service? It can continually poll the queue every 10 seconds. Then use the Windows scheduler to start and stop the service at the relevant times to create your service window. This way you won't need to do anything complicated in your scheduled task, and you won't be loading and unloading the reader all the time.

Categories : C#

Clock_gettime showing high usage during profiling of code
It's all relative to whatever else your program is doing, and keep in mind that if you're doing any I/O, the actual CPU time your program uses may be small, and gprof doesn't see anything else. So if some calls to timing routines get stuck in there, and they are called often enough, sure they can show a high percent. Why doesn't gprof show where they're being called from? For routines compiled with -pg, it tries to figure out who the caller is when any routine is entered. It tries, but that doesn't mean it succeeds. Anyway, that's gprof.

Categories : Misc

Python script fills memory
It could indeed be that there is some sort of leak in the psspy namespace. In order to understand more fully where the memory is being used you should use one of the Python profilers.

Categories : Python

memory leak in python script for blender
The only thing that I can see that would be causing the memory leak would be the ob object. You do something to that object 512^3 times. If ob.ray_cast stores some data on the ob object, then I could see that being a problem. Try replacing ob.ray_cast(orig,axis*10000.0) with 3 static values and see if the memory issue persists. There could be a memory leak on the C side of blender.

Categories : Python

CPU usage not maximized and high synchronization in server app relying on async/await
It sounds like your server is almost completely asynchronous (async MSMQ, async DB, async HttpClient), so I don't find your results surprising. First, there is very little CPU work to do, and I'd fully expect each of the thread pool threads to sit around most of the time waiting for work. Remember that no CPU is used during a naturally-asynchronous operation: the Task returned by an asynchronous MSMQ/DB/HttpClient operation does not execute on a thread pool thread; it just represents the completion of an I/O operation. The only thread pool work you're seeing is the brief amount of synchronous work inside the asynchronous methods, which usually just arranges the buffers for I/O. As far as throughput goes, you do have some room to scale (assuming your test was flooding the server with enough requests).

Categories : C#

How to get a file on a memory stick read into a python script?
Since you don't explicitly open the file yourself, the simplest thing to do in this case would be to just make sure that the path you pass to asciitable.read() is valid. Here's what I mean:

import asciitable
import os
from string import ascii_uppercase
import sys

PATH_TEMPLATE = '{}:/ECBGF/bg0809_protected.txt'

for drive in ascii_uppercase[:-24:-1]:  # letters 'Z' down to 'D'
    file_path = PATH_TEMPLATE.format(drive)
    if os.path.exists(file_path):
        break
else:
    print 'error, file not found'
    sys.exit(1)

x = asciitable.read(file_path, guess=False, delimiter=' ',
                    fill_values=[('', '-999')])

Categories : Python

Python Script with Gevent Pool, consumes a lot of memory, locks up
body = urllib2.urlopen("http://" + url, None, 5).read()

The line above reads the entire content into memory as a string. To prevent that, change fetch() as follows:

def fetch(url):
    try:
        u = urllib2.urlopen("http://" + url, None, 5)
        try:
            with open(outputDirectory + "/" + url, 'w') as outputFile:
                while True:
                    chunk = u.read(65536)
                    if not chunk:
                        break
                    outputFile.write(chunk)
        finally:
            u.close()
        print "Success", url
    except:
        print "Fail", url

Categories : Python

hadoop map reduce vs clojure pmap function
Lots of languages have map reduce capabilities, including Clojure. I'd say that Hadoop would be the hands-down winner because it manages it over clusters of machines. It's the potential for massive parallelization that would give it the clear edge over anything else that didn't have it built in.

Categories : Hadoop

High non-mapped virtual memory for mongodb
I investigated and had some strong indications that the issue was related to using rockmongo. I launched a private support ticket with 10gen and they found that the issue was indeed rockmongo related. Apparently it uses a lot of eval() calls which spawn a server side V8 javascript engine requiring a lot of memory. I filed a bug-report with rockmongo.

Categories : Mongodb

ThreadPoolExecutor to handle high Memory Tasks in Grails?
I think you can do this using GParsExecutorsPool:

GParsExecutorsPool.withPool() {
    Closure longLastingCalculation = { calculate() }
    // create a new closure, which starts the original closure on a thread pool
    Closure fastCalculation = longLastingCalculation.async()
    // returns almost immediately
    Future result = fastCalculation()
    // do stuff while the calculation runs ...
    println result.get()
}

For more details check this link: Use of ThreadPool - the Java Executors' based concurrent collection processor

Categories : Java

Android pdf writer APW high resolution images cause out of memory expection
First of all, the resolution you are mentioning is very high, and I have already covered the issues related to images in Android in this answer. Secondly, in case the first solution doesn't work for you, I would suggest a disk-based LruCache: store the chunks in that disk-based cache and then retrieve and use them. Here is an example of that. Hope this helps; if it doesn't, comment on this answer and I will add more solutions.

Categories : Java

High volume of data in Memory in azure worker role
The only issue I can see is that your lists' underlying arrays may end up on the large object heap, and since they reference your cached data, they will be garbage collected less often than normal objects. I can't imagine you'll run into trouble if you're not replacing the data very frequently, but it's something to keep an eye on. Also, .NET 4.5 has improvements in this area, so you'll want to try to run under that CLR.

Categories : C#

How to avoid TLB miss (and high Global Memory Replay Overhead) in CUDA GPUs?
You can only avoid TLB misses by changing your memory access pattern; a different layout of your data in memory can help with this. A 3D texture will not improve your situation, as it trades improved spatial locality in two additional dimensions against reduced spatial locality in the third dimension, so you would unnecessarily read data of neighbors along the Y axis. What you can do, however, is mitigate the impact of the resulting latency on throughput. In order to hide t = 700 cycles of latency at a global memory bandwidth of b = 250 GB/s, you need to have memory transactions for about b x t = 175 KB of data in flight at any time (or 12.5 KB for each of the 14 SMX). With a fully loaded memory interface and a high ratio of TLB misses, you will however find that latency gets closer to 2000 cycles.
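The arithmetic behind those figures, assuming a GPU clock of roughly 1 GHz so that 700 cycles correspond to about 700 ns (that clock assumption is mine; it is what makes the quoted numbers consistent):

in-flight data  =  bandwidth x latency
                =  250 GB/s x 700 ns
                =  250e9 B/s x 700e-9 s  =  175,000 B  ~  175 KB

per SMX (14 of them):  175 KB / 14  =  12.5 KB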

Categories : Caching

Memory usage in IE for gwt application
The garbage collector in IE is quite a strange thing; e.g. you can force it to run simply by minimizing the browser window. I guess the leaks in your case are images that weren't removed properly by the browser when you clear the container. Try to remove them using the JS "delete" operation, like this:

private native void utilizeElement(Element element) /*-{
    delete element;
}-*/;

Then change your manageContent a little:

if (contentPanel.getWidgetCount() > 0) {
    for (Iterator<Widget> it = contentPanel.iterator(); it.hasNext();)
        utilizeElement(it.next().getElement());
    contentPanel.clear();
}

Hope this helps.

Categories : Image


