Executing functions conditionally and in parallel |
How about this?
Dictionary<byte, Action> actions = new Dictionary<byte, Action>()
{
    { 1, MyFunction1 },
    { 2, MyFunction2 },
    ...
    { 6, MyFunction6 }
};
List<byte> actionList = new List<byte>() { 1, 3, 5, 6 };
Parallel.Invoke((from action in actionList select actions[action]).ToArray());
|
Executing different types of browsers in parallel in Selenium Grid |
Running browsers in parallel on one machine doesn't work well. I've tried
it many times, and they conflict with each other, sometimes making tests
fail... What I did was use a VM per browser, but that's costly.
|
Thread-safe warning when executing maven parallel build |
org.apache.maven.plugins:maven-clean-plugin:2.5 and
org.codehaus.mojo:cobertura-maven-plugin:2.5.2 are both threadsafe (from
https://maven.apache.org/plugins/)
Mostly this warning is nothing to worry about, but I recommend using a
threadsafe version for your plugins if you are using the parallel build
feature.
|
Driver behavior executing parallel TestNG selenium tests with dataprovider |
I had the same experience with dataProvider. In my case I used the
dataProvider's (parallel=true) attribute, though. There are two solutions to
your problem.
Use the dataProvider in your test class and use the factory annotation on
your constructor.
In the factory annotation's attribute, use dataProvider="Your
dataProvider's name".
In the testng.xml, instead of parallel=methods, use parallel=instances.
The drawback of the above approach is that in the report, whether it is
maven-surefire, the TestNG Eclipse report, or the ReportNG report, you do not
see the parameters passed up front. To overcome this, you can use the
following approach.
Create a factory class and instantiate your test class in the factory
method using a for loop. (Start the for loop from 0.) In the test class
define a constructor whi
|
Python thread timer, not executing or executing instantly |
You lost all the indentation in your code snippet, so it's hard to be sure
what you did.
The most obvious problem is responseTimer.start. That merely retrieves the
start method of your responseTimer object. You need to call that method to
start the timer; i.e., do responseTimer.start().
Then it will produce the output you expected, with a delay of about 2
seconds before the final "timeout!" is printed.
|
Python: Some things in package visible, others not |
It's not the right way, but you can force the loading of your packages:
# in your world/chip/__init__.py
from grass import *
from snow import *
from water import *
And then, when you import the chip package, you will load all the other
modules:
# Your structure dirs
$ tree
.
`-- world
|-- __init__.py
`-- chip
|-- __init__.py
|-- grass.py
|-- snow.py
|-- water.py
In your shell:
$ python
>>> dir()
['__builtins__', '__doc__', '__name__', '__package__', 'help']
>>> from world.chip import *
>>> dir()
['Grass', 'Snow', 'Water', '__builtins__', '__doc__', '__name__',
'__package__', 'grass', 'help', 'snow', 'water']
|
How do the for / while / print *things* work in python? |
No, you can't, not from within Python. You can't add new syntax to the
language. (You'd have to modify the source code of Python itself to make
your own custom version of Python.)
Note that the iterator protocol allows you to define objects that can be
used with for in a custom way, which covers a lot of the possible use cases
of writing your own iteration syntax.
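As an illustration (this Countdown class is made up, not from the question),
a minimal object that implements the iterator protocol:
class Countdown:
    """Iterable that yields n, n-1, ..., 1."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

for i in Countdown(3):
    print(i)   # prints 3, 2, 1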
|
Taking things out of one list and putting them in another (python) |
EDIT: Mine works but THIS one is cleaner.
def Hit(player_number, target_player, target_rank, pHands):
    try:
        while True:
            # move each card of target_rank from the target's hand to ours
            pHands[player_number].append(
                pHands[target_player].pop(pHands[target_player].index(target_rank)))
            print pHands[player_number]
            print pHands[target_player]
    except ValueError:
        pass  # value no longer in hand

pHands = [['a', '2', '3', '4', '4', '5', '6', '7', '7', 't'],
          ['2', 'q', '6', '9', '5', 'k', 'k', 'a', '3', '8'],
          ['j', '9', 't', 't', '2', 't', '7', 'j', '5', '9'],
          ['8', '8', 'a', 'q', 'k', '4', '6', '9', 'q', '2']]
Hit(0, 1, 'a', pHands)
|
Python 3.3 config file - dictionary getting filled with random things |
Maybe you can try:
exec(open("settings.cfg").read(), {}, settingscfg)
in order to add the local variables to your dict.
See the documentation of exec for more information:
http://docs.python.org/3.3/library/functions.html#exec
|
working in python console while executing a boost::python module |
You have two options:
start python with the -i flag; that will cause it to drop into the
interactive interpreter instead of exiting from the main thread
start an interactive session manually:
import code
code.interact()
The second option is particularly useful if you want to run the
interactive session in its own thread, as some libraries (like
PyQt/PySide) don't like it when they aren't started from the main thread:
from code import interact
from threading import Thread
Thread(target=interact, kwargs={'local': globals()}).start()
... # start some mainloop which will block the main thread
Passing local=globals() to interact is necessary so that you have access to
the scope of the module; otherwise the interpreter session would only have
access to the contents of the thread's scope.
|
logging while using parallel python |
I had a similar problem when I tried to write parallelised tests which
write results to a shared dictionary. The multiprocessing.Manager was the
answer:
# create shared results dictionary
manager = multiprocessing.Manager()
result_dict = manager.dict({})
so you can simply post the logs from processes to that common dictionary
and then process them.
or use LOG = multiprocessing.get_logger() as explained here:
https://docs.python.org/2/library/multiprocessing.html
and here: How should I log while using multiprocessing in Python?
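A minimal sketch of the shared-dictionary idea (the worker function and its
messages are placeholders):
import multiprocessing

def worker(task_id, log_dict):
    # each process records its own messages under its task id
    log_dict[task_id] = "processed task %d" % task_id

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    log_dict = manager.dict()
    procs = [multiprocessing.Process(target=worker, args=(i, log_dict))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for task_id, message in sorted(log_dict.items()):
        print(task_id, message)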
|
Python parallel threads |
You are executing the target function for the thread in the thread instance
creation.
if __name__ == "__main__":
    t1 = threading.Thread(name="Hello1", target=testForThread1())  # <<-- here
    t1.start()
This is equivalent to:
if __name__ == "__main__":
    result = testForThread1()  # == 'ok', this is the blocking execution
    t1 = threading.Thread(name="Hello1", target=result)
    t1.start()
It's Thread.start()'s job to run that function in the new thread. As you can
see, the previous format was executing the blocking function in the main
thread, preventing you from being able to parallelize (e.g. it would have to
finish that function execution before getting to the line where it calls the
second function).
The proper way to set the thr
|
python loop in parallel |
If you want to do things in parallel... well, you have to run them in
parallel. I think the reason you're only seeing a 20% speed boost is that
the cost of creating and joining processes is quite high and eats into the
performance benefit you get from multiprocessing. This degraded speedup is
to be expected, since there is a fixed cost associated with creating and
joining processes. Try something more time-consuming and you'll notice the
20% rise to a more "expected" amount.
Also, if your bottleneck is a CPU that isn't able to run multiple processes
at once (which shouldn't really happen nowadays with dual/quad core CPUs),
you might not even notice a speed boost, since the processes are still being
run serially at the hardware level.
|
Python: Compiling regexes in parallel |
As much as I love Python, I think the solution is: do it in Perl (see this
speed comparison, for example), or C, etc.
If you want to keep the main program in Python, you could use subprocess to
call a Perl script (just make sure to pass as many values as possible in as
few subprocess calls as possible to avoid overhead).
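A rough sketch of that batching idea, assuming a hypothetical match.pl script
that reads candidate strings from stdin and prints the matches:
import subprocess

candidates = ["foo123", "bar", "baz456"]

# one subprocess call for the whole batch, instead of one call per string
proc = subprocess.run(
    ["perl", "match.pl"],          # match.pl is a made-up script name
    input="\n".join(candidates),
    capture_output=True,
    text=True,
)
print(proc.stdout)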
|
Parallel data processing in Python |
You might want to look into the Python multiprocessing library. With
multiprocessing.Pool, you can give it a function and an array, and it will
call the function with each value of the array in parallel, using as many
or as few processes as you specify.
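For example, a minimal sketch (the square function and the pool size are
placeholders):
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(square, [1, 2, 3, 4, 5])
    print(results)  # [1, 4, 9, 16, 25]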
|
Running functions parallel in Python |
Just use threading and pass your thread either a function handle to call
when it has data ready, or the application handle in the case of GUI
applications, so that the thread can create an event with the data attached.
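A minimal sketch of that callback pattern (fetch_data and on_data_ready are
made-up names for illustration):
import threading

def fetch_data(on_data_ready):
    data = "some result"      # stand-in for slow work
    on_data_ready(data)       # hand the result back via the callback

def on_data_ready(data):
    print("got:", data)

t = threading.Thread(target=fetch_data, args=(on_data_ready,))
t.start()
t.join()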
|
Any python library for parallel and distributed tasks? |
Try using celery.
Celery is an asynchronous task queue/job queue based on distributed
message passing. It is focused on real-time operation, but supports
scheduling as well.
The execution units, called tasks, are executed concurrently on a
single or more worker servers using multiprocessing, Eventlet, or
gevent. Tasks can execute asynchronously (in the background) or
synchronously (wait until ready).
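A minimal Celery sketch, assuming a locally running Redis broker (the broker
URL and the task itself are placeholders):
from celery import Celery

# assumes a Redis broker on localhost; adjust the URL for your setup
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y

# calling add.delay(2, 3) from client code queues the task;
# a worker started with `celery -A tasks worker` picks it up and runs it.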
|
opening files in parallel using threading in Python |
It works for me in Python 3.3.
I guess you have an error in do_work that (1) is not being logged and (2)
means that task_done is not being called.
so change:
def worker():
    while True:
        item = q.get()
        do_work(item)
        q.task_done()
to
def worker():
    while True:
        item = q.get()
        try:
            do_work(item)
        except Exception as e:
            print(e)
        finally:
            q.task_done()
You don't need the except (it's just to print something that may help), but
the finally is critical, or the q.join() will never exit when you have an
error.
|
limits of python in parallel file processing |
Are you using the multiprocessing module (separate processes) or just using
threads for the parallel processing?
I doubt very much that the SSD is the problem. Or python. But maybe the csv
module has a race condition and isn't thread safe?
Also, check your code. And the inputs. Are the "bad" writes consistent? Can
you reproduce them? You mention GIGO, but don't really rule it out ("Almost
always, ...").
|
Python multiprocessing pool for parallel processing |
The docs are quite clear on this: each call to apply blocks until the
result is ready. Use apply_async.
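A small sketch of the difference (the work function is a placeholder):
import multiprocessing

def work(x):
    return x * x

if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        # apply blocks until each result is ready, so this loop runs serially
        serial = [pool.apply(work, (i,)) for i in range(4)]

        # apply_async returns immediately; collect the results afterwards
        handles = [pool.apply_async(work, (i,)) for i in range(4)]
        parallel = [h.get() for h in handles]

    print(serial, parallel)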
|
Making multiple API calls in parallel using Python (IPython) |
Without more information about what you are doing in particular, it is hard
to say for sure, but a simple threaded approach may make sense.
Assuming you have a simple function that processes a single ID:
import requests
url_t = "http://localhost:8000/records/%i"
def process_id(id):
    """process a single ID"""
    # fetch the data
    r = requests.get(url_t % id)
    # parse the JSON reply
    data = r.json()
    # and update some data with PUT
    requests.put(url_t % id, data=data)
    return data
You can expand that into a simple function that processes a range of IDs:
def process_range(id_range, store=None):
    """process a number of ids, storing the results in a dict"""
    if store is None:
        store = {}
    for id in id_range:
        store[id] = process_id(id)
ret
|
Print multiple scrolling lines in parallel using python |
A few simple remarks:
I don't see why you duplicate content_list as content1 and content2 and then
later pass content1 twice as an argument. Maybe your second thread should
use content2 and not content1:
thread2 = threading.Thread(target=scroll_text, args=(content2, 1, 70, 0.05))
You should close your file handles at the end, for example:
fd.close()
You probably use sys.stdout.write() on purpose; I would just like to
mention the following way to print to STDOUT:
from __future__ import print_function  # python3 compatible, changes the syntax a little bit
print(something)  # the newline is included, you don't need to add + '\n'
|
Parallel application in python becomes much slower when using mpi rather than multiprocessing module |
MPI is actually designed for inter-node communication, i.e. talking to other
machines over the network.
Using MPI on the same node can result in a big overhead for every message
that has to be sent, when compared to e.g. threading.
mpi4py makes a copy of every message, since it's targeted at distributed
memory usage.
If your OpenMPI is not configured to use shared memory for intra-node
communication, each message will be sent through the kernel's TCP stack, and
back, to get delivered to the other process, which again adds some
overhead.
If you only intend to do computations within the same machine, there is no
need to use MPI here.
Some of this is discussed in this thread.
Update
The ipc-benchmark project tries to make some sense out of how different
communication types perform on diffe
|
parallel installation of Python 2.7 and 3.3 via Homebrew - pip3 fails |
I was having the same problem as you were and I had
export PYTHONPATH="/usr/local/lib/python2.7/site-packages:$PYTHONPATH"
in my ~/.bash_profile. Removing that line solved the problem for me. If you
have that or something like it in your ~/.bashrc or ~/.bash_profile, try
removing it.
|
Haskell: sub-optimal parallel GC work balance, no speedup in parallel execution |
The variation is likely due to the fact that using +RTS -Nn leads to the
creation of one bound thread and n worker threads (cf. the output), so one
worker will share a physical core with the bound thread and interfere with
it. Hence, it is recommended to use a number lower than the total number of
available physical cores as the argument for +RTS -N.
Another potential issue is load balancing: you may need to split the work
differently if there is a load imbalance (a ThreadScope profile would help).
Have a look at this paper for more details on tuning.
|
Running 2+ distinct independent class methods in parallel in Python |
Doug Hellmann's "Module of the Week" often has good examples:
http://pymotw.com/2/multiprocessing/basics.html
His book on the Standard library is worth the money as well.
|
Sequential or parallel: what is the proper way to read multiple files in python? |
It partly depends on the type of storage medium they're on. A conventional
hard drive will crawl nearly to a halt due to seek activity. An SSD, OTOH,
is much less susceptible to random reads (though it isn't entirely
unaffected).
Even if you have an SSD, you might find that there's a point of diminishing
returns, though the default Pool size is probably fine, and you may even
find that the sweet-spot is much higher than cpu_count(). There are too
many factors to make any predictions, so you should try different pool
sizes.
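A rough sketch of how you might time different pool sizes (the file names
are placeholders and assumed to exist):
import multiprocessing
import time

def read_file(path):
    with open(path, "rb") as f:
        return len(f.read())

if __name__ == "__main__":
    paths = ["a.txt", "b.txt", "c.txt"]  # placeholder file names
    for size in (1, 2, 4, 8):
        start = time.time()
        with multiprocessing.Pool(size) as pool:
            pool.map(read_file, paths)
        print("pool size %d: %.2fs" % (size, time.time() - start))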
|
How to design my python application to work in parallel across multi-cores |
The very general approach, which I first saw in a tech talk on YouTube's
growth [1], is this: identify and fix your bottlenecks; drink, sleep, etc.;
then continue with the next bottleneck.
So, when you've got too much work and cores sit idle, well, run more
processes. Using the same approach you'll know when to stop or even shrink
the process pool.
[1] https://www.youtube.com/watch?v=ZW5_eEKEC28
|
Python Multiprocessing: Strange Behaviour Reading Single File in Parallel |
You are passing an open file object to another process. I don't like this;
it doesn't seem very clean. I would prefer to pass the filename to the
child process, and the child process would open the file, write to it, and
close it. This would be clean.
I guess that when the child process writes to the file object it does some
internal caching. Apparently the child process does not close the file, and
ends without flushing the cache. The out_file.seek(0) statement has the
side effect of flushing the cache. You can probably achieve the same with
out_file.flush().
But really, just pass the filename to the child process. Otherwise whatever
you achieve would differ between operating systems and Python versions.
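A rough sketch of the pass-the-filename approach (the function and file
names are made up for illustration):
import multiprocessing

def child(filename):
    # the child owns the file handle: open, write, close
    with open(filename, "w") as out_file:
        out_file.write("result from the child process\n")

if __name__ == "__main__":
    p = multiprocessing.Process(target=child, args=("out.txt",))
    p.start()
    p.join()
    with open("out.txt") as f:
        print(f.read())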
|
Python MySQL queue: run code/queries in parallel, but commit separately |
What probably happens is that you share the MySQL connection between the two
threads. Try creating a new MySQL connection inside each thread.
For program 2, look at http://www.celeryproject.org/ :)
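A minimal sketch of the connection-per-thread idea, assuming the PyMySQL
driver and placeholder credentials (any DB-API driver works the same way):
import threading
import pymysql  # assumption: any MySQL DB-API driver would do

def run_query(query):
    # each thread opens its own connection and commits independently
    conn = pymysql.connect(host="localhost", user="user",
                           password="secret", database="mydb")
    try:
        with conn.cursor() as cur:
            cur.execute(query)
        conn.commit()
    finally:
        conn.close()

t1 = threading.Thread(target=run_query, args=("UPDATE t1 SET x = 1",))
t2 = threading.Thread(target=run_query, args=("UPDATE t2 SET y = 2",))
t1.start(); t2.start()
t1.join(); t2.join()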
|
executing perl script from python |
Your Perl script is wrong:
You read the path from STDIN, not from the command line arguments.
You do not remove the newline after reading a line. You would be looking
for "foo
" instead of "foo".
You have no error checking whatsoever.
You do not load the Data::Dumper module.
And are you sure that you really want to execute the file at $path as Perl
code?
Cleaned up, I'd write the Perl script as
perl -MData::Dumper -e'do $ARGV[0];' -e'print Dumper \%some_global' input-file.pl >output
Or as
use strict; use warnings;
use Data::Dumper;
do $ARGV[0];
open my $fh, ">", "output" or die "Can't open output: $!";
print { $fh } Dumper \%some_global;
If you really want the filename from STDIN:
use strict; use warnings;
use Data::Dumper;
chomp(my $path = <STDIN>);
do $p
|
Executing an Excel macro from Python |
I've never done this with EnsureDispatch, but here's how you do it with
Dispatch:
from win32com.client import Dispatch
xlApp = Dispatch('Excel.Application')
result = xlApp.Run("<macro name here>", <the macro variables go here - separate each one with a comma>)
# for example:
# result = xlApp.Run("myMacro", "foo", "bar")
# and make sure you close the xl
xlApp.Quit()
Oh, and I'm also on Python 2.7 (but I doubt it will matter, as the
dependency is win32).
|
Executing a set of codes when python GUI is clicked |
Use the command option of a Button:
from Tkinter import Tk, Button
root = Tk()
def func():
    '''Place code to convert files in here'''
    print "Button has been pushed"
Button(text="Push me", command=func).grid()
root.mainloop()
func will only run when the button is pressed.
|
Executing a python file from PHP - Linux |
This is most likely a permission issue.
Try
echo exec("whoami");
This will let you know who php is running as. Then you need to verify this
user can run the python script. If the file was not created by the same
daemon that runs python, you will most likely be denied permission.
Update
This will let you know who owns all the files you are working with. The
file being written to needs to be writable by the user that is running
python. If you are running python from ssh, that is most likely not the
same user as when you run python from exec.
echo exec('whoami') . "<br>";
echo exec("ls -l test.txt") . "<br>";
echo exec("ls -l somefile.py") . "<br>";
Update 2
Because I constantly forget this exists.
passthru('python somefile.py 1 2>&1');
This will e
|
Python script executing SQL on Synology NAS |
You can install a MySQL connector from the official MySQL source:
http://dev.mysql.com/doc/connector-python/en/index.html. Bye.
|
Executing shell command from python |
You can call a subprocess as if you were in the shell by using Popen() with
the argument shell=True:
subprocess.Popen("nohup ./op.o > myout.txt &", shell=True)
|
Executing Tkinter Code Successfully (Python 2.7) |
It appears that you have some indentation issues. Python isn't free form,
so you need to pay attention to indentation:
from Tkinter import *

class App(Frame):
    def __init__(self, master):
        Frame.__init__(self, master)
        self.grid()
        self.create_widgets()

    def create_widgets(self):
        self.entryLabel = Label(self, text="Please enter a list of numbers (no commas):")
        self.entryLabel.grid(row=0, column=0, columnspan=2)
        self.listEntry = Entry(self)
        self.listEntry.grid(row=0, column=2, sticky=E)
        self.entryLabel = Label(self, text="Please enter an index value:")
        self.entryLabel.grid(row=1, column=0, columnspan=2, sticky=E)
        self.indexEntry = Entry(self)
        self.indexEntry.grid(row=1, column=2)
        self.
|
Python - When executing program via CMD, it just closes after the second input |
This has to do with how Windows handles the execution. The default is to
close the window right away after the program has terminated. There may be a
setting to change this, but a quick solution is to open Command Prompt, cd
to the directory, and execute your script directly.
|
Executing Fabric python code on windows 7 |
Fabric requires Python version 2.5 or 2.6. Fabric has not yet been tested
on Python 3.x and is thus likely to be incompatible with that line of
development.
It's not so simple to install Fabric on Windows, because it uses some
specific C libs that need to be compiled. Try the following on Windows:
pip install fabric # Failed!
easy_install fabric # Failed again!
If you don't want to install Visual Studio or Cygwin and compile the C code,
consider another, simpler way. As I have figured out, Fabric needs the
following Python libs to be installed on Windows:
PyCrypto
PyWin32
Both require compilation OR may be installed from the pre-built binary
packages (my choice!):
PyCrypto: click here
PyWin32: click here
Download and install these two and you will finally be able to do:
pip inst
|
Python Print Output Executing Out of Order |
You're not closing the file when you do this:
file.close
You're just referencing the close method as a value. What you wanted was to
call the close method:
file.close()
Eventually, the file gets garbage-collected, at which point all of its
buffered data gets flushed. But meanwhile, you've opened the same file
elsewhere and appended new data to it.
On many platforms, your left-over buffers would end up overwriting the
later-appended data, instead of getting added after them, making this even
harder to debug. You got lucky here. :)
If you ran this code through a linter, it would have warned you. With
pylint, I get "W0104: Statement seems to have no effect". However, note
that it can't possibly catch all such errors. For example:
import random
r = random.random
Here, you're setti
|