|Importing scipy.misc.pilutil in python using Komodo and Anaconda?|
The problem is that scipy.misc.__init__ deletes the pilutil module (see the
relevant code line in the source), so you cannot import it directly. But
before that happens, all functions from the pilutil module are added to the
misc module, and you can use them from there:
In : from scipy import misc
In : misc.fromimage
Out: <function scipy.misc.pilutil.fromimage>
In : misc.bytescale
Out: <function scipy.misc.pilutil.bytescale>
|scipy ImportError on travis-ci|
I found two ways around this difficulty:
As @unutbu suggested, build your own virtual environment and install
everything using pip inside that environment. I got the build to pass, but
installing scipy from source this way is very slow.
Following the approach used by the pandas project in this .travis.yml file
and the shell scripts it calls, force travis to use the system-wide
site-packages and install numpy and scipy using apt-get. This is much
faster. The key lines go in travis.yml before the before_install group,
followed by these shell scripts, and then finally
apt-get install py
|ImportError when importing certain modules from SciPY|
This problem can be solved by installing the numpy-MKL package instead of
the usual numpy distribution.
This package is available here.
Do remove the previous installation before going with the new one!
|Scipy - Sparse Library ImportError: DLL load failed: %1 is not a valid Win32 application|
Ultimately, this means that scipy.sparse itself, or something it imports,
either is, or depends on at load time, a .DLL or .pyd file that is broken
or built for the wrong architecture.
So, there are two steps to tracking this down.
First, you need to figure out which actual .pyd/.DLL file is raising this
exception. Unfortunately, Python 2.7 will not give you this information
You may be able to figure it out by looking at the traceback from the
ImportError—it should be something imported by the lowest module in the
chain. (If you don't understand the traceback, paste it into your answer,
and hopefully someone else can tell you.)
Failing that, you will have to walk through things manually. You can look
at the source to scipy/sparse/__init__.py in your site-packages or online
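As a sketch of that manual walk, you can try importing each candidate module one at a time and report the first one that fails. The scipy.sparse submodule names below are assumptions; substitute the imports actually listed in your scipy/sparse/__init__.py:

```python
import importlib

def find_broken_module(module_names):
    """Import each module in turn; return (name, error message) for the
    first one that fails to import, or None if they all load cleanly."""
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            return name, str(exc)
    return None

# Hypothetical candidate list -- replace with your __init__.py's imports.
print(find_broken_module(["scipy.sparse.csgraph", "scipy.sparse.linalg"]))
```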
|Pyqtgraph with anaconda python on MAC gives nib error|
The solution is to use python.app (or the equivalent pythonw) to run the
program, instead of plain python. If pyqtgraph installed any commands,
you'll need to edit them so that their shebang line calls python.app instead.
|Python importerror : pyqtconfig|
Sorry for my poor English. After some googling I found that in PyQt 4.10
and above, pyqtconfig.py seems to have been removed. Please check.
I added a pyqtconfig.py and sipconfig.py obtained from the web.
After that Eric4 starts (with some errors). I haven't done much in the
editor yet.
|Python ImportError: No module named cv|
It needs this package to function
|Python on OS X: ImportError: No module named fsevents|
I think you have not installed fsevents. Download it, extract the
repository, cd into it, and run python setup.py install.
|python ImportError "module named termios" on GAE|
It looks like there are a number of pyimgur libraries available. I would
make sure that you are using the correct one.
It looks like you are trying to use this library, based on the other
dependencies you have installed (decorator, oauth2, requests).
However, there is no auth module in that library, so line 28 of your
uploadimage.py file doesn't make any sense, because there is no auth to
satisfy
from pyimgur import auth
Try out the library I linked instead.
|Python 2.7 - ImportError: No module named Image|
Try to put the python(2.7) at your Windows path.
Do the following steps:
Open System Properties (Win+Pause) or My Computer and right-click then
Switch to the Advanced tab
Click Environment Variables
Select PATH in the System variables section
Add python's path to the end of the list (the paths are separated by semicolons)
|Python ImportError when running an installed script|
I just spent some time coming to grips with this problem on my own project
before I realized I was not properly respecting the python import path.
When the name of the script you are running matches the root of your
namespace, there will be namespace problems: opal.py conflicts with
opal/__init__.py. This can happen even when using setuptools as you are.
Assuming SITEPACKAGES=/usr/lib/pythonX.Y/site-packages your package will be
installed at $SITEPACKAGES/opal-$version.egg and your scripts will be
installed in $SITEPACKAGES/opal-$version.egg/EGG-INFO/bin/opal.py with a
file added to /usr/local/bin that loads the script that is placed in
EGG-INFO/bin/. When that script loads there is a small problem.
The python path will include $SITEPACKAGES/opal-$version.egg/EGG-INFO/bin/.
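A minimal demonstration of the shadowing problem described above; the opal name is just a stand-in for the installed package, and the sys.path.insert mirrors what Python does with the running script's directory:

```python
import os
import sys
import tempfile
import importlib

# Create a directory containing a module whose name collides with a
# package we want to import ("opal" here is hypothetical).
d = tempfile.mkdtemp()
with open(os.path.join(d, "opal.py"), "w") as f:
    f.write("VALUE = 'shadowing module'\n")

sys.path.insert(0, d)  # Python does this with the script's own directory
opal = importlib.import_module("opal")
print(opal.VALUE)  # the local opal.py wins over any installed opal package
```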
|Python ImportError while module is installed [Ubuntu]|
/host/Python27/Lib/site-packages is not a default python directory on linux
installations as far as I am aware.
The normal python installation (and python packages) should be found under
/usr/lib or /usr/lib64 depending on your processor architecture.
If you want to check where python is searching in addition to these
directories you can use a terminal with the following command:
If the /host/Python27/Lib/site-packages path is not listed, use the
following command and try it again:
If this works and you do not want to type it in a terminal every time you
want to use these packages, simply put it into a file called .bashrc in
your home folder (normally /home/<username>).
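As a sketch, the import search path can also be printed from Python itself, to check whether the directory in question is listed:

```python
import sys

# Print every directory Python searches when resolving imports; look for
# /host/Python27/Lib/site-packages in this list.
for directory in sys.path:
    print(directory)
```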
|Python ImportError: No module named 'xxxx'|
You need to use either an explicit relative import or an absolute import:
from wordpress_xmlrpc import base
from . import base
In python3 import base would only import an absolute package base, as
implicit relative imports are no longer supported.
|scipy 0.11.0 to 0.12.0 changes a linear scipy.interpolate.interp1d, breaks my constantly updated interpolator|
It looks like _y is just a copy of y that has been reshaped by
interp1d._reshape_yi(). It should therefore be safe to just update it
self.itpr._y = self.itpr._reshape_yi(self.itpr.y)
In fact, as far as I can tell it's only _y that gets used internally by the
interpolator, so I think you could get away without actually updating y at
all.
A much more elegant solution would be to make _y a property of the
interpolator that returns a suitably reshaped copy of y. It's possible to
achieve this by monkey-patching your specific instance of interp1d after it
has been created (see Alex Martelli's answer here for more explanation):
x = np.arange(100)
y = np.random.randn(100)
itpr = interp1d(x,y)
# method to get self._y from self.y
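A minimal sketch of that per-instance monkey-patch, using a stand-in class instead of interp1d and a plain copy in place of _reshape_yi (both are assumptions; substitute the real scipy objects):

```python
class Itpr:  # stand-in for scipy.interpolate.interp1d (hypothetical)
    def __init__(self, y):
        self.y = y

def add_y_property(obj, reshape):
    """Give this single instance a _y property computed from y, by
    swapping in a one-off subclass (Alex Martelli's per-instance trick)."""
    cls = type(obj)
    patched = type(cls.__name__ + "Patched", (cls,),
                   {"_y": property(lambda self: reshape(self.y))})
    obj.__class__ = patched
    return obj

# 'reshape' stands in for interp1d._reshape_yi; here it just copies.
itpr = add_y_property(Itpr([1, 2, 3]), reshape=list)
itpr.y.append(4)
print(itpr._y)  # recomputed from y on every access
```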
|Python 2.6 on MacOSX - ImportError: No module named _collections|
Django is attempting to issue the following command (or some variant of it)
and is failing. This could be because your installation is missing
components. You can verify it's not a django-related issue by doing the
following:
Pythoness-410:auth gfleche$ python
>>> import collections
If you get an error back, then it's likely to be an issue with your python
installation - since collections has been around for some time, and is part
of the standard install.
|ImportError: No module named transport (Paramiko, Python 3.2.5)|
It appears Paramiko is trying a relative import, which is not recognised in
this form in Python 3 anymore. See the changes in Python 3.
The import statements in Paramiko should be one of
from .transport import SecurityOptions, Transport
(note the leading dot), or
from paramiko.transport import SecurityOptions, Transport
You can either fix the paramiko source code or, as a workaround, add the
package directory to your PYTHONPATH. Neither option is ideal.
Did you run the 2to3 tool before you ran python3 setup.py install? I'm not
sure that would fix this, though, since the tool probably can't
distinguish between a relative and an absolute import in the way it is used.
Do check on the Paramiko forums (if there
|Python, install .pyd file: ImportError: DLL load failed|
There are only DLLs built for Python 2.6 on that site.
You cannot use a DLL built for Python 2.6 with Python 2.7.
Find the dll built for your system (platform, python version should match).
Or build it yourself.
|PySide for Python 2.7.2 ImportError DLL load failed on Win32|
I uninstalled everything python and qt related (including NINJA-IDE, which
uses qt, just in case).
I reinstalled python 2.7.5 (I was using 2.7.3).
I reinstalled PySide-1.2.0.win32-py2.7.exe.
I can now import PySide.QtCore.
Somewhere in the past I must have done something that made my interpreter
think it knew where to look for a qt file ('QtCore.pyd' perhaps?), and
uninstalling everything and reinstalling fixed this. That's my guess.
|ImportError: No module named bs4 because in wrong python folder|
Look, I suggest you first download the BeautifulSoup package from its
source at https://pypi.python.org/pypi/BeautifulSoup. Once the download has
completed, go to the folder containing the compressed file, decompress it,
and inside you will see a python file named setup.py. This is the install
file of the package; open a command prompt in that folder and execute the
following line:
python setup.py install
This is another way of installing Python packages without pip.
|"ImportError: No module named xlsxwriter" while converting python script to .exe|
I've only used cx_freeze a few times and was successful using these steps
(you were possibly missing something in this):
first create a setup.py:
from cx_Freeze import setup, Executable

exe = Executable(script="yourmodule.py")

setup(
    name="desiredname",
    version="1",
    description="example program",
    executables=[exe],
)
before running this, make sure that you have all non-default (not built-in)
modules and the setup.py in the same folder as "yourmodule.py", then
from the command line run "python setup.py build"
|PySide (1.1.2), cx_freeze, WinXP, Python 3.3: ImportError: DLL load failed|
Turned out there was a simple solution to the issue: A DLL from another
module in use by the application was missing; copying it into the root
build-out folder next to the frozen EXE easily solved the problem.
The best (and probably only) approach of attack to solve these issues
probably is to copy all DLLs from used modules into the build dir one by
one until the frozen build doesn't throw the error anymore. I couldn't find
another way as the stacktrace didn't point to a specific file that failed
If anyone runs into similar issues, I'd be happy to provide extra info.
|Matlab's gaussmf in Python/SciPy?|
Is this what you're looking for?
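Assuming the goal is Matlab's Gaussian membership function, a minimal pure-Python sketch would be:

```python
import math

def gaussmf(x, sigma, mean):
    """A sketch of Matlab's gaussmf(x, [sigma, mean]): the Gaussian
    membership function exp(-(x - mean)**2 / (2 * sigma**2))."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

print(gaussmf(0.0, 1.0, 0.0))  # → 1.0 at the mean
```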
|Constrained Spline Fit using Scipy in Python|
Reading the question you linked, I think you only need x to be monotonic.
If your data is a Series with x as the index, then just do
UnivariateSpline(s.sort()). If your data is a DataFrame, do
Perhaps you actually want a monotonic spline, in spite of the fact that
y(x) does not seem to be monotonic. I know of no way to introduce
constraints into UnivariateSpline directly, but we can constrain the data
before we fit the spline. Generate a "forced monotonically decreasing"
variant of your data like this:
Each element will be replaced with the smallest element seen so far,
suppressing any increases. Any spline fit to such data should also be
monotonically decreasing.
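A sketch of the running-minimum transform described above (with NumPy arrays, np.minimum.accumulate(y) does the same thing):

```python
def force_decreasing(ys):
    """Replace each element with the smallest value seen so far,
    suppressing any increases (a running minimum)."""
    out = []
    lo = float("inf")
    for v in ys:
        lo = min(lo, v)
        out.append(lo)
    return out

print(force_decreasing([5, 6, 4, 7, 3]))  # → [5, 5, 4, 4, 3]
```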
Finally, in general, for curve fitting with constraints, check out
|Python SciPy Possible cases of n choose k|
Any reason for using scipy and not itertools for this particular problem?
Looking into itertools.combinations or itertools.permutations may provide a
more adequate solution.
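For example, the possible cases of n choose k enumerate directly with itertools:

```python
from itertools import combinations

# All ways of choosing k = 2 items out of n = 4, order ignored:
# C(4, 2) = 6 possible cases.
cases = list(combinations(range(4), 2))
print(len(cases))  # → 6
```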
|ImportError: No module named ' ' while Import class in the __init__.py file Python|
You are using Python 3. Do
from .Courses import Courses
from .Fridge import Fridge
Python 2 would look for Courses module in the same dir, but Python 3 looks
for Courses module in site packages - and, obviously, it's not there.
P.S. "Phython" - sounds interesting ;)
|Python-magic installation error - ImportError: failed to find libmagic|
The original answer below is now outdated. Please simply follow the
instructions in the "dependencies" section.
I was able to solve this problem by moving the 3 files from the GNUWin32
project to a separate directory (not the system32 directory the docs
suggest) and adding that directory to the PATH environment variable.
|Boost.Python hello world tutorial: ImportError: ./hello.so: undefined symbol: _ZN3Num3setEf|
The error says that your Num::set member function is not defined. If you
change your Num struct to something like:
float get() const { return value; }
void set(float v) { value = v; }
it should work.
|Two Python Version conflict in Ubuntu Oneiric 11.10 issue: ImportError: No module|
Have you installed it using apt-get or built from sources?
If built from sources, are you sure the installation finished
successfully? Usually, to build Python from sources, one runs the
following steps in the python sources directory:
./configure
make
sudo make install
(sudo might not be required, but the make script will attempt to change
files in the /usr/ directory.) The last command, amongst others, copies
python to the /usr/ directory. If you want to have it installed somewhere
else, you'd have to pass --prefix=XXX to ./configure.
|voronoi and lloyd relaxation using python/scipy|
You have a region you know, a point you don't, and you know that
vor.point_region[point] == region. For a single region, you can figure out
the corresponding point as:
point = np.argwhere(vor.point_region == region)
You can also create a region_point indexing array to figure out multiple
points from an array of regions as:
region_point = np.argsort(vor.point_region)
points = region_point[regions-1]
|Using python scipy to fit gamma distribution to data|
Looking at the implementation of gamma.fit:
def fit(self, data, *args, **kwds):
    floc = kwds.get('floc', None)
    if floc == 0:
        xbar = ravel(data).mean()
        logx_bar = ravel(log(data)).mean()
        s = log(xbar) - logx_bar

        def func(a):
            return log(a) - special.digamma(a) - s

        aest = (3-s + math.sqrt((s-3)**2 + 24*s)) / (12*s)
        xa = aest*(1-0.4)
        xb = aest*(1+0.4)
        a = optimize.brentq(func, xa, xb, disp=0)
        scale = xbar / a
        return a, floc, scale
    return super(gamma_gen, self).fit(data, *args, **kwds)
If you put floc=None, it will call the fit function of the parent class
(which is rv_continuous) and you can fix the scale.
|How to put colours in dendograms of matplotlib - scipy in python?|
Look at the documentation; it looks like you could pass the
link_color_func keyword or the color_threshold keyword to get different
colors.
The default behavior of the dendrogram coloring scheme is, given a
color_threshold = 0.7*max(Z[:,2]) to color all the descendent links below a
cluster node k the same color if k is the first node below the cut
threshold; otherwise, all links connecting nodes with distances greater
than or equal to the threshold are colored blue [from the docs].
What the hell does this mean? Well, if you look at a dendrogram, you see
different clusters linked together. The "distance" between two clusters is
the height of the link between them. The color_threshold is the height
below which new clusters get their own colors. If all your clusters are
blue, you probably need to raise your color_threshold.
|Fitting data using UnivariateSpline in scipy python|
There are a few issues.
The first issue is the order of the x values. From the documentation for
scipy.interpolate.UnivariateSpline we find
x : (N,) array_like
1-D array of independent input data. MUST BE INCREASING.
Stress added by me. For the data you have given, the x values are in
reversed order.
To debug this, it is useful to use a "normal" spline to make sure
everything else works.
The second issue, and the one more directly relevant for your issue,
relates to the s parameter. What does it do? Again from the documentation
s : float or None, optional
Positive smoothing factor used to choose the number of knots. Number
of knots will be increased until the smoothing condition is satisfied:
sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <= s
If None (default), s = len(w), which should be a good value if 1/w[i]
is an estimate of the standard deviation of y[i].
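A minimal sketch of fixing the first issue, sorting the (x, y) pairs so x is increasing before fitting the spline:

```python
def sort_xy(x, y):
    """Sort the (x, y) pairs by x so that x is increasing, as
    UnivariateSpline requires."""
    pairs = sorted(zip(x, y))
    xs, ys = zip(*pairs)
    return list(xs), list(ys)

print(sort_xy([3, 1, 2], [30, 10, 20]))  # → ([1, 2, 3], [10, 20, 30])
```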
|python scipy.stats pdf and expect functions|
The cumulative distribution function might give you what you want.
Then the probability P of being between two values is
P(inf < area < sup) = cdf(sup) - cdf(inf)
There's a tutorial about probabilities here and here
They are all related. The pdf is the "density" of the probabilities: it
must be non-negative and integrate to 1. I think of it as indicating how
likely something is. The expectation is a generalisation of the idea of a
mean:
E[x] = sum(x.P(x))
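A sketch of the P(inf < area < sup) = cdf(sup) - cdf(inf) idea, using a standard normal CDF built from math.erf rather than scipy.stats (an assumption, to keep the example self-contained):

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(-1 < X < 1) = cdf(1) - cdf(-1), roughly 0.68 for a standard normal
p = normal_cdf(1.0) - normal_cdf(-1.0)
print(round(p, 4))
```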
|Convolution and Deconvolution in Python using scipy.signal|
rank(x) returns the rank of a matrix, in other words the number of
dimensions it contains. Please check whether the ranks of s and f are the
same before the call to signal.convolve(); otherwise you will receive the
exception you quote.
I have no idea why deconvolution may return something with more dimensions
than given input. This requires deeper investigation I don't have time for.
|Python scipy.weave not working anymore|
It is finding python27.lib (in the python folder) but is skipping it as
incompatible, most likely because it was built with Visual C++ and you are
using gcc. You probably had a gcc build of python on your path; if so, you
need to add it back, or set the linker path to include it, i.e. the current
|Python Scipy library: saving images|
Try using the full path when saving.
Your error is a permission error, so you probably do not have access to
write in the current directory. You can also change the current directory
using os.chdir(newpath).
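A sketch of the full-path suggestion, building an absolute path in a directory that is known to be writable (the result.png filename is hypothetical):

```python
import os
import tempfile

# Build an absolute path in a guaranteed-writable directory instead of
# relying on the current working directory.
out_dir = tempfile.gettempdir()
out_path = os.path.join(out_dir, "result.png")  # hypothetical filename
# scipy.misc.imsave(out_path, image)  # the save call from the question
print(os.path.isabs(out_path))
```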
|Quantifying the quality of curve fit using Python SciPy|
You can use the ODRPACK library instead of curve_fit. The result of
fitting with ODRPACK contains estimates of the uncertainties of all
fitting parameters in several different forms, including the standard
errors of the estimated parameters, which is exactly what you are looking
for.
I used to work with curve_fit, but I faced the same problem: the absence
of error estimates for the fitting parameters. So now I'm using the
|GAE Python bulk uploader is giving error: "ImportError: No module named model"|
Well, I think I found the answer, but it doesn't look like I'm going to be
able to use the bulk uploader anyways since my objects use ndb.
See these answers:
bulkloader not importing ndb.model
dowload app engine ndb entities via bulk exporter / bulk uploader
|Clustering in python(scipy) with space and time variables|
You'll need to define your own metric, which handles "time" in an
appropriate way. In the docs for scipy.spatial.distance.pdist you can
define your own function
Y = pdist(X, f)
Computes the distance between all pairs of vectors in X using the user
supplied 2-arity function f. [...] For example, Euclidean distance between
the vectors could be computed as follows:
dm = pdist(X, lambda u, v: np.sqrt(((u-v)**2).sum()))
The metric can be passed to any scipy clustering algorithm, via the metric
keyword. For example, using linkage:
scipy.cluster.hierarchy.linkage(y, method='single', metric='euclidean')
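A sketch of such a custom metric: plain Euclidean distance over (x, y, t) vectors, with the time coordinate down-weighted by a hypothetical time_scale factor (how to weight time is exactly the modelling choice you must make):

```python
import math

def spacetime_metric(u, v, time_scale=0.1):
    """Hypothetical 2-arity metric for pdist: Euclidean distance over
    (x, y, t), with the time coordinate scaled down by time_scale."""
    dx = u[0] - v[0]
    dy = u[1] - v[1]
    dt = (u[2] - v[2]) * time_scale
    return math.sqrt(dx * dx + dy * dy + dt * dt)

print(spacetime_metric((0, 0, 0), (3, 4, 0)))  # → 5.0
```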
|Numerical Accuracy with scipy.optimize.curve_fit in Python|
I believe this is to do with the minimisation algorithm used here, and the
maximum obtainable precision.
I remember reading about it in Numerical Recipes a few years ago; I'll see
if I can dig up a reference for you.
link to numerical recipes here - skip down to page 394 and then read that
chapter. Note the third paragraph on page 404:
"Indulge us a final reminder that tol should generally be no smaller
than the square root of your machine's floating-point precision."
And Mathematica's documentation mentions that if you want accuracy, you
need to go for a different method, and that they don't in fact use LMA
unless the problem is recognised as a sum-of-squares problem.
Given that you're just doing a one dimensional fit, it might be a good
exercise to try just implementing o