|Using SSH keys inside docker container|
In order to inject your SSH key into a container, you have multiple options:
1) Using a Dockerfile with the ADD instruction, you can inject it during
your build process.
2) Simply doing something like: cat id_rsa | docker run -i <image> sh
-c 'cat > /root/.ssh/id_rsa'
3) Using the docker cp command, which allows you to inject files while a
container is running.
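A minimal sketch of option 1, assuming the key file sits next to the Dockerfile (the file names and base image here are illustrative):

```shell
# Write a throwaway Dockerfile that bakes the key into the image (option 1).
# ADD works too; COPY is the more idiomatic choice for plain local files.
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
EOF
```

Keep in mind that with this option the key ends up stored in the image layers, so anyone who can pull the image can read it; options 2 and 3 avoid that.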
|How do I run puppet agent inside a docker container to build it out. How do I achieve this?|
Here is another solution: use the ENTRYPOINT Dockerfile instruction. With
it you can run the puppet agent and other services in the background before
the instruction from CMD (or the command passed via docker run) is executed.
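As a sketch of that pattern, an entrypoint script (the file name and the commented-out puppet invocation are illustrative) that starts background services and then hands control to whatever CMD or docker run supplies:

```shell
# Create an entrypoint script: background services first, then exec "$@"
# so the CMD / docker run command becomes PID 1's replacement.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
# Start background services here, e.g. (hypothetical):
#   puppet agent --verbose --no-daemonize &
# Then replace this shell with whatever command was passed in:
exec "$@"
EOF
chmod +x entrypoint.sh
./entrypoint.sh echo "command from CMD runs last"
```

In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] and a default CMD; docker run arguments still override the CMD part.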
|Install multiple packages on linux (like pip install -r requirements.txt)|
Put the list of packages in a text file (say test.txt), with package names
separated by spaces, like this:
python ruby foo bar
Then you can just install with apt-get like this:
sudo apt-get install $(cat test.txt)
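To see what the shell substitution expands to, here is a dry run with echo standing in for apt-get (file name as above):

```shell
printf 'python ruby foo bar\n' > test.txt
# $(cat test.txt) splits on whitespace, so this line becomes
# "sudo apt-get install python ruby foo bar":
echo sudo apt-get install $(cat test.txt)
# A newline-separated list works the same way, since the unquoted
# substitution also splits on newlines.
```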
|How install ruby with rbenv install 1.9.3-p448 ubuntu 12.04 vagrant box?|
Download the Ruby tarball using your browser from
ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p448.tar.bz2 and place it
in ~/.rbenv/caches/ruby-1.9.3-p448.tar.bz2, then run rbenv install
1.9.3-p448 again; rbenv will pick up the cached file instead of downloading it.
|Run a Docker Image as a Container|
The specific way to run it depends on whether you gave the image a tag/name
or not:
$ docker images
REPOSITORY   TAG     ID             CREATED        SIZE
ubuntu       12.04   8dbd9e392a96   4 months ago   131.5 MB (virtual 131.5 MB)
With a name (let's use ubuntu):
$ docker run -i -t ubuntu:12.04 /bin/bash
Without a name, just using the ID:
$ docker run -i -t 8dbd9e392a96 /bin/bash
See docker run --help for more information.
|Launch a container with Docker without specifying command|
You can build a Docker image that includes a run command and other
configuration, such that a docker run <image> will start the
container. The easiest way to do this is with CMD from the Docker
Builder. You'll need a recent version of Docker (> 0.4.6?).
Outside of using Docker Builder, check out the flags for docker commit and
docker run (where the command arguments are optional).
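A minimal sketch of the CMD approach (base image and default command are illustrative):

```shell
# A Dockerfile whose CMD lets "docker run <image>" work with no
# trailing command:
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
# CMD is the default command executed when the container starts.
CMD ["/bin/bash"]
EOF
# Build and run (requires a Docker daemon):
#   docker build -t myimage .
#   docker run -i -t myimage
```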
|Automatically Start Services in Docker Container|
I guess you can't. What you can do is create an image using a Dockerfile
and define a CMD in that, which will be executed when the container starts.
See the builder documentation for the basics
(https://docs.docker.com/reference/builder/) and see "Run a service
automatically in a docker container" for information on keeping your service
running.
You don't need to automate this using a Dockerfile. You could also create
the image via a manual commit, as you do, and run it from the command line.
Then you supply the command it should run (which is exactly what the
Dockerfile CMD does). You can also override a Dockerfile's CMD this way:
only the latest CMD is executed, and that is the command-line command if
you start the container with one. The basic docker run -i -t base /bin/bash
is an example of exactly that.
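As a sketch, a Dockerfile whose CMD launches a service in the foreground (image and service choice are illustrative), so the container keeps running with no command on the docker run line:

```shell
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get -y install memcached
# Only the last CMD in the file takes effect, and a command passed to
# "docker run" overrides it.
CMD ["memcached", "-u", "daemon", "-v"]
EOF
```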
|Running and Deploying Rails to Docker Container|
I'm no expert and I haven't even used Docker myself, but as I understand
it, your app sits inside a Docker container. Ideally you would deploy a
whole container, with your own Ruby version installed and so on.
The big benefit is that you can test exactly the same container in your
staging system that you're going to ship to production. So you're able
to test the complete system with all installed C extensions, the exact same
ls command and so on.
|Forward host port to docker container|
If MongoDB and RabbitMQ are running on the host, then the ports should
already be exposed, as they are not within Docker.
You do not need the '-p' option in order to expose ports from the container
to the host. By default, all ports are exposed. The '-p' option allows you
to expose a port from the container to the outside of the host.
So, my guess is that you do not need '-p' at all and it should be working
fine.
|What is the benefit of Docker container for a memcached instance?|
There seem to be two questions here...
1 - The benefit is as you describe. You can sandbox the memcached instance
(and configuration) in to separate containers so you could run multiple on
a given host. In addition, moving the memcached instance to another host is
pretty trivial and just requires an update to application configuration in
the worst case.
2 - docker run -m <inbytes> <memcached-image> would limit the
amount of memory a memcached container could consume. You can run as many
of these as you want under a single host.
|Run a complex series of commands in the same Docker container|
You posted 2 questions in one. Maybe you should put the second in a
different post; I will consider the first here.
It is unclear to me whether you want to spawn a new container for every
iteration (as you say first) or whether you want to "run a complex series of
commands as described above inside the same container" (as you say later).
If you want to spawn multiple containers, I would expect you to have a
script on your machine handling that.
If you need to pass an argument to your container (like i): there is work
being done on passing arguments currently. See
https://github.com/dotcloud/docker/pull/1015/files for the documentation
change (which is not online yet).
|(CoreOS) how to auto restart a docker container after a reboot?|
CoreOS uses systemd to manage long running services:
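A minimal unit file sketch in that style (the unit name, container name, and image are illustrative, modeled on the CoreOS documentation pattern of wrapping docker run in a service):

```ini
# /etc/systemd/system/myapp.service  (hypothetical name)
[Unit]
Description=My container
After=docker.service
Requires=docker.service

[Service]
# Restart the container whenever it stops; once the unit is enabled,
# systemd also starts it after a reboot.
Restart=always
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp myimage
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable myapp.service so it comes back up on boot.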
|out of memory issue in installing packages on Ubuntu server|
Extend your RAM by adding a swap file:
a swap file is a file stored on the computer's hard drive that is used
as a temporary location to store information that is not currently
being used by the computer's RAM. By using a swap file a computer can
use more memory than what is physically installed.
Log in as root (su -) or execute the commands with sudo in front:
dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
chown root:root /swapfile1
chmod 0600 /swapfile1
mkswap /swapfile1
swapon /swapfile1
Now the swap file is activated temporarily, but it will be gone after a
reboot; add a line like /swapfile1 none swap sw 0 0 to /etc/fstab to make
it permanent.
You should now have enough RAM for your install process.
|Docker command fails during build, but succeeds while executed within running container|
The working directory (pwd) is not persistent across RUN commands. You need
to cd and configure within the same RUN.
This Dockerfile works fine:
RUN apt-get update
RUN apt-get -y install libpcre3 libssl-dev
RUN apt-get -y install libpcre3-dev
RUN apt-get -y install wget zip gcc
RUN wget http://nginx.org/download/nginx-1.4.1.tar.gz
RUN gunzip nginx-1.4.1.tar.gz
RUN tar -xf nginx-1.4.1.tar
RUN wget --no-check-certificate
RUN unzip master
RUN cd nginx-1.4.1 && ./configure
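The effect is easy to reproduce outside Docker: each RUN behaves like a fresh sh -c invocation, so a cd in one RUN does not carry over into the next.

```shell
cd /                      # start somewhere known
sh -c 'cd /tmp'           # like "RUN cd /tmp": the cd dies with this shell
sh -c 'pwd'               # like the next RUN: prints /, not /tmp
sh -c 'cd /tmp && pwd'    # chaining with && keeps the cd: prints /tmp
```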
|Docker: Error starting container: Unable to load the AUFS module|
I ran apt-get purge lxc-docker and reinstalled it with curl
http://get.docker.io | sudo sh.
I got the following error, but the installation continued and finished.
Ensuring basic dependencies are installed...
Looking in /proc/filesystems to see if we have AUFS support...
Ahem, it looks like the current kernel does not support AUFS.
Let's see if we can load the AUFS module with modprobe...
FATAL: Module aufs not found.
Ahem, things didn't turn out as expected.
When I run docker run ubuntu echo hello the result is hello, so it looks
like everything is alright.
|how to install php 5.4 install on ubuntu desktop 12.04?|
Maybe this tutorial will help
|Cannot install two packages that use the same namespace|
After installing one of your packages and downloading the other…
You're not including testsuite/__init__.py and
testsuite/prettyprint/__init__.py in the source files.
The docs on namespace packages (the setuptools/pkg_resources way) explain:
Note, by the way, that your project's source tree must include the
namespace packages' __init__.py files (and the __init__.py of any parent
packages), in a normal Python package layout.
If you don't actually install these files, they don't do any good. You just
end up with a testsuite with nothing in it but prettyprint, and that has
nothing in it but outcomes, so testsuite and testsuite.prettyprint are not
packages at all, much less namespace packages.
|How to install atmosphere packages without meteorite?|
You can create a directory called packages in your project and then
manually install each package and its dependencies there. E.g. for
meteor-router:
git clone https://github.com/tmeasday/meteor-router.git
mv meteor-router router
git clone --recursive
mv meteor-page-js-ie-support page-js-ie-support
The second is a dependency of meteor-router, which you can see on the
package's Atmosphere page. The clone is recursive to make sure the
submodule page-js is also cloned.
As pointed out by thatjuan: once you do this, you just need to add the main
one to your project. You don't have to add the dependencies.
meteor add router
|CertificateError when trying to install packages on a virtualenv|
When I try to connect to pypi I get the following error:
pypi.python.org uses an invalid security certificate.
The certificate is only valid for the following names:
*.addvocate.com , addvocate.com
So either pypi is using the wrong ssl certificate or somehow my connection
is being routed to the wrong server.
In the meantime I have resorted to downloading directly from source URLs.
|How to install Chocolatey packages offline?|
Right now we don't have all of them set up. You can edit the package to
point the installer to the local resource and rebuild the package as a
workaround for now.
You can follow some of our feed about it here:
|How can I install Leiningen packages behind a firewall?|
In order to figure out which jars your project needs you can do:
$ lein deps :tree
which will show you the project's "dependency tree", listing every jar the
project needs.
Installing Jars with Lein
One simple way to install manually downloaded jars would be to use
$ lein localrepo install [-r repo-path] <filename> <groupId/artifactId> <version>
|Using install.packages with custom temp dir|
The documentation in help(tempdir) pretty clearly states that TMP, TMPDIR,
... are used:
By default, ‘tmpdir’ will be the directory given by ‘tempdir()’.
This will be a subdirectory of the per-session temporary directory
found by the following rule when the R session is started. The
environment variables ‘TMPDIR’, ‘TMP’ and ‘TEMP’ are checked in
turn and the first found which points to a writable directory is
used; if none succeeds ‘/tmp’ is used.
So if setting one alone does not help, maybe you want to set several, and
make sure the permissions on your replacement directory are permissive
enough, etc.
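For example, from a shell (the path here is illustrative), create the replacement directory and set all three variables before launching R:

```shell
# create the replacement temp directory with suitable permissions
mkdir -p /tmp/r-bigtmp
chmod 700 /tmp/r-bigtmp
# set all three variables the R startup rule consults, then launch R
# from this same shell; tempdir() in that session should now live
# under /tmp/r-bigtmp
export TMPDIR=/tmp/r-bigtmp TMP=/tmp/r-bigtmp TEMP=/tmp/r-bigtmp
# R --vanilla   (start R here)
```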
|Why does `pub install` keep creating `packages` link in all the subdirs of `web`?|
When Dart sees an import like:
import 'package:foo/foo.dart';
it translates it to:
import '<url of your entrypoint>/packages/foo/foo.dart';
So, say your app's entrypoint is in app/app.dart. If it has a "package:"
import like the above, it will remap it to app/packages/foo/foo.dart.
That means that for Dart to be able to find foo.dart, there needs to be a
packages directory inside app that contains foo/foo.dart. Part of pub's job
is to set that up for you.
This is definitely not the nicest part of working with Dart and pub.
Spewing symlinks everywhere is gross, but it deals with the limitations
that the language places on us. Over time, we're hoping to move away from
having to create these symlinks.
More details on this here.
|Packages installed by `pip install -r requirements.txt` are not found|
Are you doing sudo pip install django-mediasync or sudo pip install -r
requirements.txt? If so, it will install outside of the virtualenv. See
"How to install which programs requires "sudo" in virtualenv?".
Basically, because your user should own the virtualenv directory, you don't
need superuser privileges to install anything via pip. Do which pip and
sudo which pip and you will see they are different.
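A quick way to sanity-check this (the venv path is illustrative): inside an activated virtualenv, pip should resolve to the environment's own copy:

```shell
python3 -m venv /tmp/demo-venv     # build a throwaway virtualenv
. /tmp/demo-venv/bin/activate      # activate it in this shell
command -v pip                     # resolves inside /tmp/demo-venv/bin
deactivate
```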
The other possibility may be that your requirements.txt is not installing
correctly. It may output lines like the one you mention, but apparently
pip scans all the packages in requirements.txt before installing anything;
if there is any error, it will abort the install for all of them.
|Can't install basic packages on Debian EC2 with Aptitude/apt-get|
If you install a package using dpkg that doesn't have all its
dependencies, you can fix it by running:
apt-get install -f
Note that it will only find packages in your current repositories. If some
are still missing, try apt-get update. If that still doesn't work, you
will need to find an apt repo which hosts those packages, or install the
missing dependencies manually.
|Make apt install packages from a specific repository|
What I ended up doing is simply installing all the packages and then
removing the ones with an old build, i.e.:
sudo add-apt-repository ppa:marutter/c2d4u -y
sudo add-apt-repository ppa:marutter/rrutter -y
sudo apt-get update
sudo apt-get install r-bioc-*
sudo apt-get install r-cran-*
And then in R:
which(installed.packages()[,"Built"] < 3.0)
|What packages do I need in order to install scikit-learn on Debian|
The exact list of dependencies is written in the documentation.
scikit-learn is available in recent versions of Debian, so if you want to
install all the scikit-learn build dependencies at once you can just do:
sudo apt-get build-dep python-sklearn
Also, the NeuroDebian APT repository is updated at each scikit-learn
release, so you can get Debian packages for the latest versions from there.
|Pip doesn't install packages to activated virtualenv, ignores requirements.txt|
My usual workflow is to
pip freeze > someFile.txt
and then install with
pip install -r someFile.txt
So I'm certain that this should work just fine. Unfortunately I can't
really tell you anything besides: make sure to check that you really are
in the virtualenv that you think you are in (deactivate and re-activate it
just in case that matters), and make sure that pip is within your
virtualenv.
Sorry I can't give you a more concrete answer. I have to do this
semi-regularly and I've never had a problem with it skipping dependencies.
Best of luck!
|Can mvn install packages globally (e.g. command line tools like nutch)?|
Since Maven is Java-based and from Java you can do anything, and you can
write your own life-cycles and goals, the answer is: yes, it is possible.
Here you can find an example:
It is also possible to execute a custom script - or whatever you want -
with the Exec Maven Plugin. For more, see:
Or on Stack Overflow: "I want to execute shell commands from Maven's
pom.xml".
|How to install new packages for pyramid without getting a pkg_resources.DistributionNotFound: once a project has been created|
pip and setup.py develop should not be mixed. The latter uses
easy_install, which is not compatible with pip in the case of namespace
packages (these are packages that are installed as subpackages of another
parent, such as zope.sqlalchemy installing only the .sqlalchemy part of the
full zope.* package). Namespace packages will cause problems between pip
and easy_install. On the other hand, most other packages will work fine
with whatever system you choose, but it's better for you to be consistent.
Another thing to double check is that you are actually installing the
packages into your virtualenv. You should be able to open the Python CLI
in your virtualenv and import the package. If you can't, then the package
probably isn't installed into that virtualenv.
|AWS Elastic Beanstalk "composer install" fails to find packages|
I found out what the problem was: the composer.lock file was in .gitignore,
so composer on AWS didn't get it. See here for more details:
|Modify preseed file to automatically install packages in /pool/extra|
Why don't you create a metapackage which depends on all the packages you
want installed, and simply set up the preseed to install that?
equivs was designed for this sort of thing, although it's not very hard to
create a metapackage from scratch with the standard packaging tools.
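As a sketch, an equivs control file for such a metapackage (package name and dependency names are illustrative); feed it to equivs-build to get an installable .deb:

```
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: extra-packages
Version: 1.0
Depends: package1, package2
Description: metapackage that pulls in our extra pool packages
 Installing this package installs everything listed in Depends.
```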
To instruct the installer to install packages, you can include the line
d-i pkgsel/include string package1 package2
in your preseed file. If you just want to install all the *.deb files from
a directory (not a full Apt repository with Packages.gz etc.) then maybe
d-i preseed/late_command string in-target dpkg -i /pool/extras/*.deb
but I would actually prefer a full repo; basically it just takes a run of
dpkg-scanpackages to generate the Packages.gz.
|how to install viber in Ubuntu?|
You have to use Wine to get Viber working with its setup wizard on Ubuntu.
sudo apt-get install wine
You can see the steps with screenshots on:
|How to install mssql in ubuntu 12.04|
sudo apt-get install php5-odbc php5-sybase tdsodbc
# Define a connection to the MSSQL server.
# The Description can be whatever we want it to be.
# The Driver value must match what we have defined in /etc/odbcinst.ini
# The Database name must be the name of the database this connection will
# connect to.
# The ServerName is the name we defined in /etc/freetds/freetds.conf
# The TDS_Version should match what we defined in /etc/freetds/freetds.conf
Description = Microsoft SQL Server
Driver = freetds
Database = XXXXXXXXXX
ServerName = mssql
TDS_Version = 8.0
# Define where to find the driver for the Free TDS connections.
Description = MS SQL database access with FreeTDS
|Install Netbeans 7.3 with JDK 1.6 on Ubuntu|
Just install Java 7, then install Netbeans 7.3.
When running Netbeans you can still choose the version of the JDK and set
it to 1.6_24 for your projects.
You can install JDK 1.7 in another folder; you don't have to overwrite
your JDK 1.6 folder. For installation just choose the new installation, for
programming take your older JDK 1.6_24.
|How to install OGRE SDK in Ubuntu 13.04?|
Have a look at the Quick Start Guide. The source code download page is
currently pointing at OGRE 1.8.1 Source for Linux / OSX. Don't forget to
read through the Prerequisites!
|MySQL Won't Install on Ubuntu 11.04|
The asker of this question was having the same issue; it was solved by
uninstalling MySQL completely (before that he had tried to set the
password, which is what his question asks, and then worked it out in chat).
Maybe it will be solved when you try to set a password - just try.
Update: Type the following commands in your terminal in order to make a
complete remove for mysql.
sudo apt-get remove --purge mysql-server mysql-client mysql-common
sudo apt-get autoremove
sudo apt-get autoclean
Also you need to remove the /var/lib/mysql folder, if it exists, by typing
the following:
sudo rm -rf /var/lib/mysql
Then follow the official Ubuntu documentation to install, and don't forget
to set your MySQL password during installation.
Also see this question; it's similar to the error you got during the
installation.
|How can I install Zephir on Ubuntu?|
Please note Zephir is currently in alpha stage, and therefore bugs can be
expected.
You need to have certain packages installed:
In the command line type:
sudo apt-get install libjson0 libjson0-dev libjson0-dbg
sudo apt-get install re2c
Once you have the required packages installed, you can generate the parser.
Compile the extension (this is your code):
The code produced is placed in ext/; there you can perform the standard
PHP extension build steps.
|Can not install pyhdf on ubuntu 13.04|
You probably need to install libhdf4-dev package.
And instead of using sudo to install packages into your system, spend some
time to read about virtualenv.
|How to install Ruby 2 on Ubuntu without RVM|
sudo apt-get -y update
sudo apt-get -y install build-essential zlib1g-dev libssl-dev
# after downloading the ruby-2.0.0-p451 tarball from ruby-lang.org:
tar -xvzf ruby-2.0.0-p451.tar.gz
cd ruby-2.0.0-p451
./configure
make
sudo make install
from here: How do I install ruby 2.0.0 correctly on Ubuntu 12.04?
for ruby 2.1.5
sudo apt-get -y update
sudo apt-get -y install build-essential zlib1g-dev libssl-dev
# after downloading the ruby-2.1.5 tarball from ruby-lang.org:
tar -xvzf ruby-2.1.5.tar.gz
cd ruby-2.1.5
./configure
make
sudo make install
If you are still seeing an older Ruby, check your symlink:
ls -la /usr/bin/ruby (from hector)