Gnome-shell panel shadow |
...ok, I've found out where the problem is.
Unlike HTML, the shadow alpha is also affected by the panel background
alpha.
So a black shadow (alpha=1) on a fully transparent panel background results
in a transparent (thus invisible) shadow. So by setting
background-color: rgba(0,0,0,0.5);
font-weight: bold;
height: 1.86em;
box-shadow: 0px 3px 10px rgba(0,0,0,0.5);
this results in a 0.25 alpha shadow (0.5 * 0.5 = 0.25). That's why the
box-shadow effect from my question above is not shown.
As said, that's different behaviour from HTML, where a transparent
background on a div would not affect its box-shadow effect. Perhaps some
gnome-shell developer passing by here might want to reconsider this
implementation.
|
How to get Empathy contact list in GNOME shell extensions? |
Empathy is the reference implementation of the freedesktop.org "Telepathy"
specification. From Wikipedia:
How Telepathy works
Protocol implementations provide a D-Bus service called a connection
manager. Telepathy clients use these to create connections to
services. Once a connection is established, further communication
happens using objects called channels which are requested from the
connection. A channel might be used to send and receive text messages,
or represent the contact list, or to establish a VoIP call.
Applications
Empathy
[...]
Therefore, you can use the D-Bus interface to communicate with it.
GNOME Shell offers a GJS binding for shell extensions, which is hardly
explained anywhere, but example code is available:
Mailing list post
Test-file f
|
High CPU Usage by Tomcat |
There is not sufficient information / evidence to explain what is going on.
This could be a direct result of having an excessive number of request
threads, or it could be an underlying problem in your webapp that is
exacerbated by the number of threads.
The only (possible) clue I can pull out of this is that (maybe) the high
TakeQueue value means something is doing a lot of internal request
forwarding.
I suggest:
- Reduce the number of threads by a factor of 10 or more to see if that
makes any difference. It is a bad thing to have a huge number of threads
active at the same time. As in ... bad for system performance.
- Use VisualVM to try to work out what the worker threads are doing.
- See if you can spot errors or unusual behaviour in the Tomcat logs and
the request logs. (Turn the logging
|
High CPU usage with SDL + OpenGL |
Loops use as much computing power as they can. The main problem may be
located in:
int delay = 1000 / 60 - (SDL_GetTicks() - now);
your delay duration may be less than zero, in which case the loop just
spins without waiting at all. You need to clamp the value of the delay
variable so it never goes negative (see the sketch below).
Moreover, in this link it is proposed that
SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1); can be used to enable vsync so
that the loop will not use all the CPU.
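As a language-agnostic illustration of the clamping idea (sketched here in
Python; running() and render_frame() are hypothetical stand-ins for the real
loop condition and per-frame work):
import time

TARGET_FRAME = 1.0 / 60   # seconds per frame at 60 FPS

previous = time.monotonic()
while running():                                # hypothetical loop condition
    render_frame()                              # hypothetical per-frame work
    elapsed = time.monotonic() - previous
    delay = max(0.0, TARGET_FRAME - elapsed)    # never sleep a negative amount
    time.sleep(delay)
    previous = time.monotonic()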
|
CPU usage goes high when read |
If you're seeing memory pressure during reads, you're probably reading too
many rows at once. Tracing the request can give more visibility into
what's going on: http://www.datastax.com/dev/blog/tracing-in-cassandra-1-2
|
mongodb high cpu usage |
Here's a summary of a few things to look into:
1. Observed a large number of connections and cursors (13k):
- fix: make sure your connection pool size is appropriate. For reporting,
and your current request rate, you only need a few connections at most.
Also, I'm guessing you have an m1.small instance, which means you only have
1 core.
2. Review queries and indexes:
- run your queries with explain() to observe how they are executed. The
right model normally results in queries pulling only very few documents and
making use of an index.
3. Memory (compact and readahead setting):
- make the best use of memory. 1.6GB is low. Check how much free memory you
have, and compare it to what is reported as resident. A common cause of low
resident memory is fragmentation. I
|
Wordpress High CPU Usage |
First of all, install the "WP Overview (lite) Footer Memory Usage" plugin
on your server so that you can check the memory usage...
Also install "W3 Total Cache" and cache your files and images on the
server. You can increase the speed of your site with this plugin.
|
High CPU usage when reading from console |
I assume you have some kind of code that checks the status of readLine(),
otherwise Java will continue to block.
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String line = null;
while ((line = br.readLine()) != null) {
    // handle the contents of line here
}
You may be better off using the Scanner class to read user input.
Scanner sc = new Scanner(System.in);
int num = sc.nextInt();
....
|
Image viewer and high ram usage |
Here is my take:
<Window x:Class="LargeJpeg.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525">
<Image x:Name="Image" Stretch="None"/>
</Window>
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        var bitmap = new BitmapImage();
        bitmap.BeginInit();
        bitmap.CacheOption = BitmapCacheOption.None;
        bitmap.UriSource = new Uri(@"C:\5x5.jpg", UriKind.Absolute);
        // Decode only to the displayed width instead of the full resolution.
        bitmap.DecodePixelWidth = (int)Image.ActualWidth;
        bitmap.EndInit();
        bitmap.Freeze();
        Image.Source = bitmap;
    }
}
Average memory usage: 130 mb on a
|
Can High CPU usage be avoided in FileServer? |
For TCP sockets, the receive_data function may not work correctly.
The fact that it allocates a new local buffer suggests that this buffer
gets destroyed when the function returns. This implies that receive_data
cannot handle incomplete messages.
A correct approach is to allocate a buffer for each socket once. Read from
the socket into that buffer and then process and discard complete messages
in the front of the buffer. Once all complete messages have been consumed,
move the tail of the buffer that contains an incomplete message to the
front and next time the socket is ready for reading append new bytes to the
end of the incomplete message until it gets complete.
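A minimal sketch of that buffering scheme in Python (assuming
newline-delimited messages; handle_message() and the surrounding event loop
are hypothetical, and the original server may use a different language and
framing):
buffers = {}   # one persistent receive buffer per socket

def on_readable(sock):
    buf = buffers.setdefault(sock, bytearray())
    data = sock.recv(4096)
    if not data:                        # peer closed the connection
        buffers.pop(sock, None)
        sock.close()
        return
    buf.extend(data)
    while True:                         # consume complete messages at the front
        end = buf.find(b"\n")
        if end == -1:                   # the tail is still incomplete; keep it
            break
        message = bytes(buf[:end])
        del buf[:end + 1]
        handle_message(message)         # hypothetical application callback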
|
Memcached slow gets, high CPU usage |
Well, I've found the problem!
To get an idea of the requests per second I used the memcache.php file
that's available out there. It told me that there were 350 requests per
second. The thing is that there has been quite an increase in usage over
the past few days, and the requests/second figure is really just an average
over the entire uptime, calculated as (hits + misses) / uptime.
Now after restarting memcached this average returns more correct values and
there are actually 4000 requests per second.
tl;dr: Wrong stats in first post. Correct stats are: 4000 requests/second.
I suppose my hardware simply can't cope with that.
|
Android game, battery usage is very high |
Charging from a USB port in your PC is going to be very slow (the USB is
also transferring data, not just power). I would think it is okay for a
game to use more charge than it is receiving when connected to a PC USB
port.
|
.NET - high memory usage by clr.dll and native heaps |
Use
!dumpheap -stat
at each stage. You may be able to find which type is growing drastically
at each stage. On those objects use
!gcroot <address>
to find which object is keeping them from being garbage collected.
|
Named pipes in service causes high CPU usage |
Unless the service sleeps/awaits at some point, this is perfectly normal
behavior. By default, a while(true) {} loop will use 100% of the core it is
executing on, as sketched below. 25% sounds a lot like one of the 4 hardware
threads available on your computer being fully busy.
You actually want to use 100% of the CPU whenever you code, else why are
you paying for faster computers?...
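To make the difference concrete, here is a rough sketch in Python (the names
are illustrative, not the original service code): the first loop pegs one
core, the second yields the CPU while idle.
import time

def spinning_worker(has_work, do_work):
    while True:
        if has_work():
            do_work()
        # no blocking call and no sleep here -> one core stays at 100%

def polite_worker(has_work, do_work):
    while True:
        if has_work():
            do_work()
        else:
            time.sleep(0.01)   # or block on a pipe/queue read while idle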
|
Laravel Artisan Queues - high cpu usage |
I had the same issue.
But I found another solution. I used the artisan worker as is, but I
modified the 'watch' time. By default (from Laravel) this time is hardcoded
to zero; I've changed this value to 600 (seconds). See the file
'vendor/laravel/framework/src/Illuminate/Queue/BeanstalkdQueue.php'
and the function
'public function pop($queue = null)'.
So now the worker also listens to the queue for 10 minutes. When it does
not get a job, it exits, and supervisor restarts it. When it receives a
job, it executes it, exits afterwards, and supervisor restarts it.
==> No polling anymore!
Notes:
- it does not work for iron.io queues or others.
- it might not work when you want one worker to accept jobs from more than
one queue.
|
CPU usage is high when using opengl control class? |
This needs more investigation, but you may have problems with your
main loop.
This is probably not a problem with OpenGL, but with the usage of the Win32
API. When you add textures, models, shaders... your CPU usage should stay
similar.
You use SetTimer(1, 1, 0); that means a 1 millisecond delay, as I
understand it? Can you change it to 33 milliseconds (about 30 FPS)?
That way you will not kill the message pump in your MFC app. Note that this
timer is very imprecise.
Link to "Basic MFC + OpenGL Message loop", using OnIdle():
http://archive.gamedev.net/archive/reference/articles/article2204.html
Here is a great tutorial about MFC + OpenGL + threading - @songho
http://gamedev.stackexchange.com/questions/8623/a-good-way-to-build-a-game-loop-in-opengl
- discussion regarding the main loop in GLUT
|
Clock_gettime showing high usage during profiling of code |
It's all relative to whatever else your program is doing, and keep in mind
that if you're doing any I/O, the actual CPU time your program uses may be
small, and gprof doesn't see anything else.
So if some calls to timing routines get stuck in there, and they are called
often enough, sure they can show a high percent.
Why doesn't gprof show where they're being called from?
For routines compiled with -pg, it tries to figure out who the caller is
when any routine is entered.
It tries, but that doesn't mean it succeeds.
Anyway, that's gprof.
|
Python OpenCV extremely high CPU usage after 10 second runtime |
You should profile your code with cProfile and see what's chewing
up your resources. The official docs on profiling are here:
http://docs.python.org/2/library/profile.html
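For example (main() here is just a hypothetical stand-in for your
capture/processing entry point):
import cProfile
import pstats

cProfile.run("main()", "profile.out")            # profile one run of your code
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)   # show the 20 biggest offenders
You can also run the whole script unchanged with
python -m cProfile -s cumulative yourscript.py.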
|
MSMQ System.Messaging high resource usage |
Why not host your queue reader process in a Windows service? This can
continually poll the queue every 10 seconds.
Then use the Windows Task Scheduler to start/stop the service at relevant
times to create your service window.
This means you won't need to do anything complicated in your scheduled
task, and you won't be loading and unloading all the time.
|
What is proper way to write a chrome extension with high I/O usage |
Unfortunately, you cannot raise the quota. However, you can save your
settings in some global variable or chrome.storage.local and periodically
write them to chrome.storage.sync.
|
Linux free shows high memory usage but top does not |
Don't look at the "Mem" line, look at the one below it.
The Linux kernel consumes as much memory as it can to provide the I/O cache
(and other non-critical buffers, but the cache is going to be most of this
usage). This memory is relinquished to processes when they request it.
The "-/+ buffers/cache" line is showing you the adjusted values after the
I/O cache is accounted for, that is, the amount of memory used by processes
and the amount available to processes (in this case, 578MB used and 7411MB
free).
The difference of used memory between the "Mem" and "-/+ buffers/cache"
line shows you how much is in use by the kernel for the purposes of
caching: 7734MB - 578MB = 7156MB in the I/O cache. If processes need this
memory, the kernel will simply shrink the size of the I/O cache.
|
Can any one explain the usage of Shell::Source perl module or Shell::GetEnv module |
Reading and changing environment variables is built-in to Perl, you do not
need the modules you mentioned.
$ENV{UVM_HOME} = '/u/tools/digital/uvm/uvm-1.1a';
$ENV{VIPP_HOME} = '/u/tools/digital/vipcat_11.30-s012-22-05-2012';
$ENV{VIP_AXI_PATH} = "$ENV{VIPP_HOME}/vips/amba_axi/vr_axi/sv/";
|
Why doesn't this item get removed from the cache when memory usage is high? |
I would expect the CacheItemRemoveCallback to be fired
Why do you expect that?
I'd expect you to get an OutOfMemoryException fairly quickly on a 32-bit
machine (*) - after a minute or so, by which time you'll have approx 1.2GB
in your list.
(*) unless the OS is started with the /3GB switch, in which case it will
behave similarly to a 32-bit process on a 64-bit machine.
On a 64-bit machine, your request will time out at the default of 90
seconds, by which time it will have added 90*200 = 1800 items = approx
1.8GB to your static list. A 64-bit process will handle this, and probably
a 32-bit process would be able to do so on a 64-bit machine if it is
LARGEADDRESSAWARE, which is definitely the case for IIS; not sure about
Cassini.
Also, IIS would probably recycle your application do
|
Excessively high memory usage in .NET MVC/Entity Framework application |
We are automatically assuming that the problem is EF. It may be, it may not
be. There are a lot of points we should take care of, not only the data
access infrastructure.
On the data access side, since you are using only EF, you can get a quick
improvement by simply using the .AsNoTracking() method. Adopt a service
locator to help you manage your pool of contexts.
You can also use Dapper, instead of EF, in read-only situations.
And last, but not least, use pure ADO.NET for the more complex queries and
the fastest execution.
Refactoring your ActionFilters to avoid some "BaseController" that all
controllers inherit from is also good practice.
Check whether your IDisposable classes are truly suppressing finalization
(GC.SuppressFinalize), adopting the .Dispose(bool) pattern.
Be sure that you are not persisting cache variables fo
|
CPU usage not maximized and high synchronization in server app relying on async/await |
It sounds like your server is almost completely asynchronous (async MSMQ,
async DB, async HttpClient). So in that case I don't find your results
surprising.
First, there is very little CPU work to do. I'd fully expect each of the
thread pool threads to sit around most of the time waiting for work to do.
Remember that no CPU is used during a naturally-asynchronous operation.
The Task returned by an asynchronous MSMQ/DB/HttpClient operation does not
execute on a thread pool thread; it just represents the completion of an
I/O operation. The only thread pool work you're seeing are the brief
amounts of synchronous work inside the asynchronous methods, which usually
just arrange the buffers for I/O.
As far as throughput goes, you do have some room to scale (assuming that
your test was floodin
|
Phonegap app uses high RAM because filereader() and change loaded pics to white instead because of memory usage |
1000 images of those dimensions is a significant amount of data so will
take a significant amount of RAM. Do you really need all 1000 to be in
memory at the same time? Without knowing the user interface layout and use
case requirements for your app I'm just speculating, but could you not, for
example, load each image on demand asynchronously as it needs to be
displayed? Or if the delay in reading the image from the file system
creates an unacceptable delay in displaying it, you could pre-load just
some of the images, for example if they are in a sequence, then just have a
couple in memory either side of the currently displayed image.
|
How to spawn a new application from Gnome/GTK to a CLI application and read its output back into Gnome/GTK application? |
top and tail -f are long-running programs (in fact, they never exit), so
your strategy of reading the whole output and populating the text buffer
with it obviously won't work.
Instead, you need to create an IO channel that watches what's going on with
the pipe, hook the channel into the event loop, and append it to the text
buffer as new output arrives. The minimalistic change to your program would
be to rewrite the prepare function like this:
static gboolean data_ready(GIOChannel *channel, GIOCondition cond,
                           gpointer data)
{
    FILE *fp = data;
    char line[256];
    if (fgets(line, sizeof line, fp)) {
        gtk_text_buffer_get_end_iter (buffer, &iter);
        gtk_text_buffer_insert (buffer, &iter, line, -1);
        return TRUE;
    }
    else {
        fclose(fp);
        re
|
How can I monitor the memory usage of a shell script at runtime? |
You can use the ps command:
ps aux | grep <scriptName>
ps aux | grep "curl -s example"
ps aux | grep "egrep -s 'sth'"
ps aux | grep "sleep 60"
The 3rd and 4th columns are %CPU and %MEM (you can extract them using awk).
|
.NET application memory usage - high unused .NET and unmanaged memory and fragmentation |
As Alex already pointed out, a very nice explanation of this problem class
(large object heap fragmentation) is found here:
https://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/
The problem is well known to the .NET FX dev team and is continuously being
worked on.
There is a good chance that the symptoms fade away with more recent FX
releases.
Starting with .NET 4.5.1 there will be a GC call to compact even the LOH:
http://blogs.msdn.com/b/mariohewardt/archive/2013/06/26/no-more-memory-fragmentation-on-the-large-object-heap.aspx
However, finding the root cause of the LOH fragmentation would be far more
efficient than just wiping it off the heap and wasting tons of milliseconds.
Let me know, if you need further details how to isolate such effects.
Seb
|
Shell scripts: conventions to write usage text for parameters? |
POSIX defines a convention for utility argument syntax (oddly enough, they
seem to have forgotten to put spaces between a ']' and the following '['
grouping, like they do on actual utility description pages, for example for
command and find):
"Arguments or option-arguments enclosed in the '[' and ']' notation are
optional and can be omitted." Consequently, arguments that are not enclosed
are mandatory.
"Frequently, names of parameters that require substitution by actual values
are shown with embedded underscores. Alternatively, parameters are shown as
follows:
<parameter name>
"
"Arguments separated by the '|' vertical bar notation are
mutually-exclusive."
"Ellipses ( "..." ) are used to denote that one or more occurrences of an
option or operand are allowed."
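Putting these conventions together, a hypothetical synopsis could look like
this:
utility_name [-a] [-b option_argument] [-c|-d] operand...
Here -a and -b option_argument are optional, -c and -d are mutually
exclusive, and one or more operand arguments are required.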
|
Unable to find value using variable as field name in find() method within Mongo Shell |
Try the following:
var eType = 'CT';
var lang = 'English';
var varGenre = 'Action';
var gnrNameChk = {};
gnrNameChk[eType + '.' + lang + '.genre.name'] = varGenre;
print(gnrNameChk);
db.perscoll.find(gnrNameChk).count();
Hope that helps you!!!
|
Find double files using PHP with high performance |
You could try to find possible duplicates by only looking at the file size.
Then, only if multiple files have the same size do you need to hash them.
This is probably faster, since looking up file sizes is not much of an
effort.
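The question is about PHP, but here is a sketch of the size-first idea in
Python, just to illustrate the algorithm (find_duplicates() and its argument
are invented for the example):
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths):
    by_size = defaultdict(list)
    for path in paths:
        by_size[os.path.getsize(path)].append(path)

    by_hash = defaultdict(list)
    for group in by_size.values():
        if len(group) < 2:              # unique size -> cannot be a duplicate
            continue
        for path in group:              # hash only the remaining candidates
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            by_hash[digest.hexdigest()].append(path)

    return [files for files in by_hash.values() if len(files) > 1]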
|
Java : Matcher.find using high cpu utilization |
Avoid expressions with:
- multiline
- case insensitive
- etc.
Perhaps you can consider grouping your regular expressions and applying a
given group of expressions depending on the user input, as sketched below.
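The question is about Java, but the grouping idea itself is language
neutral; here is a small Python sketch (the pattern strings and input kinds
are invented for illustration):
import re

# Compile once, and bucket patterns by the kind of input they apply to, so
# each input is matched only against the few patterns that are relevant.
PATTERNS = {
    "email": [re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")],
    "date":  [re.compile(r"\d{2}/\d{2}/\d{4}")],
}

def first_match(kind, text):
    for pattern in PATTERNS.get(kind, []):
        match = pattern.search(text)
        if match:
            return match.group(0)
    return None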
|
Azure Cloud Computing high availability vs NEO4J high availability? |
The short answer is probably yes.
Windows Azure provides you with infrastructure that allows you to build a
highly available system; it won't make any system highly available by
magic.
As NEO4J is stateful, each node will need to share some state (with only
one node, Azure doesn't give you any SLA; your instance will be down), and
the way to do that depends on how NEO4J works. So you will need to rely on
NEO4J's own mechanisms for it.
I don't know how NEO4J works, but you won't be able to skip designing a
highly available architecture around NEO4J on Windows Azure infrastructure.
Cloud may be a magic buzzword that makes things happen at the management
level, but down at the hard real-world level Harry Potter's magic wand
doesn't exist.
|
NetBeans hangs on attempting to use "find usage" |
Just roll back the JDK packages:
sudo apt-get install openjdk-6-jdk=6b24-1.11.4-3ubuntu1
sudo apt-get install openjdk-6-jre=6b24-1.11.4-3ubuntu1
sudo apt-get install openjdk-6-jre-headless=6b24-1.11.4-3ubuntu1
sudo apt-get install openjdk-6-jre-lib=6b24-1.11.4-3ubuntu1
|
Java Regex find() vs match() usage |
Use String.split(String regex):
String line = "[0r(1)2[000p[040qe1w3h162[020t*882*11/11/2010*12:26*";
String[] parts = line.split("\\*");
String date = parts[2];
String time = parts[3];
System.out.println("date=" + date + ", time=" + time);
Output:
date=11/11/2010, time=12:26
|
How to find all WPF controls definition and usage examples? |
Start here: Windows Presentation Foundation for general WPF information.
For info specifically on controls: Controls by Category
The MSDN site is probably your best first option to find out about WPF.
|
How to find usage of an API method in all libraries in classpath? |
You can use Ctrl+Alt+Shift+F7 to adjust the scope of your Find Usages
search to "Project and Libraries".
In addition to your project files, it will search everything under External
Libraries in your project pane - i.e. Maven dependencies and the JDK.
However, for some reason I can't see any project libraries or global
libraries under External Libraries, and as such they seem to be excluded
from this search...
|
Find usage result in Intellij in Ubuntu |
You may be using a different keymap, please double check that Show Usages
action is what you are invoking, and not Find Usages:
|
Can't find the error in this usage of a collection's enumerator |
You've got it wrong. Enumerators are basically how C# deals with getting
all the elements of a collection, one by one. You don't need to worry about
those, as all collection classes have implemented them for you. What you
seem to be looking for is the Enumerable.Where extension method, located in
System.Linq. Here's the code with it:
public class EmployeeCollection : IEnumerable<Employee>
{
    public List<Employee> Employees { get; set; }

    public EmployeeCollection()
    {
        Employees = new List<Employee>();
    }

    public IEnumerator<Employee> GetEnumerator()
    {
        return Employees.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return Employees.GetEnumerator();
    }
}

public class X
{
    static void Main(string[] args)
|