python-requests - can't login
There are lots of options, but I have had success using cookielib instead of trying to "manually" handle the cookies.

    import urllib2
    import cookielib

    cookiejar = cookielib.CookieJar()
    cookiejar.clear()
    urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
    # ...etc...

Some potentially relevant answers on getting this set up are on SO, including: http://stackoverflow.com/a/5826033/1681480
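For comparison, the same cookie handling in Python 3 moves to http.cookiejar and urllib.request; a minimal sketch (no real URL is fetched here):

```python
import http.cookiejar
import urllib.request

# Python 3 names for the urllib2/cookielib pieces above
cookiejar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(cookiejar))
# opener.open(url) will now store cookies in cookiejar and
# resend them on later requests automatically
```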

Categories : Python

Using python requests library to login to website
@DanAlbert pointed out that I was telling you to use HTTP Auth, which isn't what you're trying to do. I assumed that, since everything looked correct, HTTP Auth might be what you needed. However, looking at the form, it seems you're just using the wrong variable name in your dict:

    import requests

    s = requests.session()
    login_data = dict(email='email', password='password')
    s.post('https://account.guildwars2.com/login', data=login_data)
    r = s.get('https://leaderboards.guildwars2.com/en/na/achievements/guild/Darkhaven%20Elite')
    print r.content

If you look at the form, it expects the variables "email" and "password"; you have "username" and "password". Anyway, HTH.

Categories : Python

Login and upload file using Python 'requests'
Make a session and then use that session to do your requests:

    sessionObj = requests.session()
    sessionObj.get(...)  # Do whatever
    ...

A session persists your cookies for future requests. Use POST form data for the username and password, since those parameters are what login.php requires to log in, not HTTP auth username/password. Also use the files parameter to upload files. So the final code is:

    import requests

    sessionObj = requests.session()
    url1 = 'http://www.abc.com/login.php'
    r = sessionObj.post(url1, data={'username': 'usernamehere', 'password': 'password here'})
    print r.status_code  # 200

    filehandle = open('./tmp.txt')
    url2 = 'http://www.abc.com/uploader.php'
    r = sessionObj.post(url2, data={}, files={'upload': filehandle})
    print r.text

Docs.

Categories : Python

python-requests returning unicode Exception message (or how to set requests locale)
You can try os.strerror, but it would probably return nothing or the same non-English string. This hard-coded English was scraped from here: http://support.microsoft.com/kb/819124

    ENGLISH_WINDOWS_SOCKET_MESSAGES = {
        10004: "Interrupted function call.",
        10013: "Permission denied.",
        10014: "Bad address.",
        10022: "Invalid argument.",
        10024: "Too many open files.",
        10035: "Resource temporarily unavailable.",
        10036: "Operation now in progress.",
        10037: "Operation already in progress.",
        10038: "Socket operation on nonsocket.",
        10039: "Destination address required.",
        10040: "Message too long.",
        10041: "Protocol wrong type for socket.",
        10042: "Bad protocol option.",
        10043: "Protocol not supported.",
        10044: "Socket type not supported.",

Categories : Python

How to limit download rate of HTTP requests in requests python library?
There are several approaches to rate limiting; one of them is the token bucket, for which you can find a recipe here and another one here. Usually you would want to do throttling or rate limiting around socket.send() and socket.recv(). You could play with socket-throttle and see if it does what you need. This is not to be confused with x-ratelimit response headers, which relate to a number of requests rather than a download / transfer rate.
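As a rough illustration of the token-bucket idea (the class name and parameters here are made up for this sketch), a download loop could spend one token per chunk it reads:

```python
import time

class TokenBucket:
    """Allows `rate` units per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, n=1):
        # Refill based on elapsed time, then try to spend n tokens.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(rate=2, capacity=4)
```

A throttled reader would call bucket.consume() before each socket.recv() or response chunk, and sleep briefly whenever it returns False.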

Categories : Python

Consecutive requests with python Requests.Session() not working
In the latest version of requests, the Session object has cookie persistence; see the requests Session objects docs. So you don't need to add the cookie artificially. Just:

    import requests

    s = requests.Session()
    login_data = dict(userName='user', password='pwd')
    ra = s.post('http://example/checklogin.php', data=login_data)
    print ra.content
    print ra.headers

    ans = dict(answer='5')
    r = s.post('http://example/level1.php', data=ans)
    print r.content

Just print the cookies to check whether you are logged in:

    for cookie in s.cookies:
        print (cookie.name, cookie.value)

Also, is the example site yours? If not, maybe the site rejects bots/crawlers. You can change your request's User-Agent so it looks like you are using a browser. For example:

    import requests

    s = requests.Session()
    headers
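The answer is cut off above; presumably it goes on to set a browser-like User-Agent on the session, along these lines (the header value is just an example):

```python
import requests

s = requests.Session()
# Overrides requests' default 'python-requests/x.y' User-Agent
s.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0',
})
```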

Categories : Python

mod_auth_cas redirects random requests to CAS login
We have just had the same problem. The solution is simple: configure Apache to use MPM Prefork. The problem happens when Apache is configured to use MPM Worker: when many requests arrive simultaneously (e.g. 20 requests), some of them are randomly redirected to the SSO server.

Categories : Javascript

Devise : How to redirect to login page after user requests new confirmation himself?
You set this path with the following code inside your confirmations_controller:

    def after_resending_confirmation_instructions_path_for
      login_path # or whatever you want
    end

However, the default is to redirect to new_session_path(resource_name), which does exactly what you want, and so it does for my app. Maybe it depends on the version of Devise.

Categories : Ruby On Rails

Perl: How can i test for a URL ( https ) accepting GET requests using "login" parameter
Sending the password in a cookie? Nope. Disallow GET for /login. POST the username and password to /login, over SSL. In CGI, GET vs. POST is indicated via the REQUEST_METHOD environment variable. You cannot stop determined people from issuing a GET request to your server, but you can refuse to process it like so (untested code - you have to fill in details):

    if ($ENV{REQUEST_METHOD} ne 'POST') {
        # issue a redirect to a suitable error page, then return.
    }
    my $q = CGI->new();
    my $user = $q->param('username');
    my $password = $q->param('password');
    my $encrypted_password = my_password_encryptor($password);
    unless ( can_log_in($user, $encrypted_password) ) {
        # issue an error message - redirect&return or fall-through...
    }
    else {
        $session->set_user_

Categories : Perl

python login to website, no "id" only "class" for login button
Use a session; otherwise cookies are not kept.

    import requests

    s = requests.session()  # <--
    url = 'http://company.page.com/member/index.php'
    values = {'username': 'myusernamehere', 'password': 'mypasswordhere'}
    r = s.post(url, data=values)

    # Now you have logged in
    url = "http://company.page.com/member/member.php"  # the resulting url after logging in
    result = s.get(url)  # Use session.get and do not specify cookies.
    print (result.headers)
    print (result.text)

Categories : Python

Requests library crashing on Python 2 and Python 3 with
This means that the server did not send an encoding for the content in the headers, and the chardet library was also unable to determine an encoding for the contents. You in fact deliberately test for the lack of encoding; why try to get decoded text if no encoding is available? You can leave the decoding up to the BeautifulSoup parser:

    if response.encoding is None:
        soup = bs4.BeautifulSoup(response.content)

There is no need to pass the encoding to BeautifulSoup, since if .text does not fail you are using Unicode, and BeautifulSoup will ignore the encoding parameter anyway:

    else:
        soup = bs4.BeautifulSoup(response.text)
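Alternatively, requests itself can fall back on a byte-level guess via its apparent_encoding property; a sketch using a hand-built Response object (normally you would get one from requests.get):

```python
import requests

resp = requests.models.Response()
resp._content = '<p>h\u00e9llo</p>'.encode('utf-8')  # pretend these came from the server
resp.encoding = None                                 # no charset in the headers

# apparent_encoding asks chardet/charset_normalizer to guess from the bytes
resp.encoding = resp.apparent_encoding
text = resp.text
```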

Categories : Python

Python complex subpackage importing
Did you try:

In A.__init__:

    import B
    import C

In B.__init__:

    import C, a, b, c

In C.__init__:

    import B, a, b, c

I tried this with some test files and it seemed to work fine.

    In [5]: import A

    In [6]: A.
    A.B  A.C

    In [6]: A.B.
    A.B.C  A.B.a  A.B.b  A.B.c

Categories : Python

Python : Soap using requests
It is indeed possible. Here is an example calling the Weather SOAP Service using the plain requests lib:

    import requests

    url = "http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL"
    #headers = {'content-type': 'application/soap+xml'}
    headers = {'content-type': 'text/xml'}
    body = """<?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:ns0="http://ws.cdyne.com/WeatherWS/"
                       xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/"
                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
       <SOAP-ENV:Header/>
       <ns1:Body><ns0:GetWeatherInformation/></ns1:Body>
    </SOAP-ENV:Envelope>"""

    response = requests.post(url, data=body, headers=headers)
    print

Categories : Python

What is the correct way to use python Requests
So, I've looked at the documentation and... I think it automatically keeps your session alive for you. Let me know if you have any problems with dying sessions, but assume that Requests will deal with that for you. I may have misinterpreted the docs, but I don't think you need to worry about it. From the documentation:

Keep-Alive

Excellent news — thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection! Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set stream to False or read the content property of the Response object.

Categories : Python

Complex if expression in list comprehension python
To omit (i,j,k) only when all of them are zero, use the condition (i,j,k) != (0,0,0):

    S = range(-3,3)
    x = [(i,j,k) for i in S for j in S for k in S
         if ((i+j+k == 0) and (i,j,k) != (0,0,0))]
    print(x)

prints

    [(-3, 1, 2), (-3, 2, 1), (-2, 0, 2), (-2, 1, 1), (-2, 2, 0), (-1, -1, 2),
     (-1, 0, 1), (-1, 1, 0), (-1, 2, -1), (0, -2, 2), (0, -1, 1), (0, 1, -1),
     (0, 2, -2), (1, -3, 2), (1, -2, 1), (1, -1, 0), (1, 0, -1), (1, 1, -2),
     (1, 2, -3), (2, -3, 1), (2, -2, 0), (2, -1, -1), (2, 0, -2), (2, 1, -3)]

To understand what went wrong with the original condition, (i!=0) and (j!=0) and (k!=0), consider what happens when i=0 and j=1:

    | i != 0 | j != 0 | (i!=0) and (j!=0) | (i,j) != (0,0) |
    | False  | True   | False             | True           |

(i!=0) and (j!=0) is

Categories : Python

Parsing complex text file in Python
This should do the work:

    with open(self.file, 'r') as f:
        self.result = {}
        for line in f.readlines():
            line = line.strip()
            if line.startswith("#"):
                parent = line[1:]
                self.result[parent] = {}
            if line.startswith("@"):
                child = line[1:]
                self.result[parent][child] = {}
            if '|' in line:
                key, value = line.split('|')
                self.result[parent][child][key] = value

Then print self.result:

    >>> {
        'some_other_line': {
            'and_another_line': {
                'original_string2': 'new_string2'
            }
        },
        'some_line': {
            'another_line': {
                'original_string1': 'new_string1'
            }
        }
    }
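The same logic run over an in-memory sample (made up here to match the format the answer assumes) shows the nesting it produces:

```python
import io

# Hypothetical input: '#' marks a parent, '@' a child, 'key|value' a mapping
sample = """#some_line
@another_line
original_string1|new_string1
"""

result = {}
for line in io.StringIO(sample):
    line = line.strip()
    if line.startswith("#"):
        parent = line[1:]
        result[parent] = {}
    elif line.startswith("@"):
        child = line[1:]
        result[parent][child] = {}
    elif '|' in line:
        key, value = line.split('|')
        result[parent][child][key] = value

# result == {'some_line': {'another_line': {'original_string1': 'new_string1'}}}
```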

Categories : Python

How to apply a complex formula using Pandas in Python?
You are probably looking for rolling_apply. This is an example from the documentation:

    mad = lambda x: np.fabs(x - x.mean()).mean()
    rolling_apply(ts, 60, mad).plot(style='k')
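Note that newer pandas versions replaced rolling_apply with the .rolling(...).apply(...) method; a small sketch of the same mean-absolute-deviation computation (series values are arbitrary):

```python
import numpy as np
import pandas as pd

ts = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
mad = lambda x: np.fabs(x - x.mean()).mean()

# Equivalent of rolling_apply(ts, 3, mad) in modern pandas
result = ts.rolling(3).apply(mad, raw=True)
```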

Categories : Python

Flatten complex directory structure in Python
Run recursively through the directory, move the files, and call move again for directories:

    import os
    import shutil

    def move(destination, depth=None):
        if depth is None:
            depth = []
        base = os.path.join(destination, *depth)
        for file_or_dir in os.listdir(base):
            path = os.path.join(base, file_or_dir)
            if os.path.isfile(path):
                shutil.move(path, destination)
            else:
                move(destination, depth + [file_or_dir])

Categories : Python

How to get the raw content of a response in requests with Python?
If you are using a requests.get call to obtain your HTTP response, you can use the raw attribute of the response. Here is the code from the requests docs:

    >>> r = requests.get('https://github.com/timeline.json', stream=True)
    >>> r.raw
    <requests.packages.urllib3.response.HTTPResponse object at 0x101194810>
    >>> r.raw.read(10)
    '\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'

Categories : Python

How does python know that you need to interface the requests module through api.py?
See here: https://github.com/kennethreitz/requests/blob/master/requests/__init__.py E.g. if 'requests' is a directory that has an __init__.py, Python executes this file each time it sees from requests import ... or import requests. See more in Modules.

Categories : Python

How to loop through API call with requests in python
I think you want to do something like this:

    for x in coords:
        loc = {'?contains': x, '&sets': 'a_parameter'}
        ...

This references the x variable, not the string 'x'.
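One caveat worth adding, hedged: requests inserts the '?' and '&' separators itself when it builds the query string, so the keys would normally be plain names. This can be checked offline with a PreparedRequest (the URL and coordinate values below are made up):

```python
import requests

coords = ['40.7,-74.0', '34.0,-118.2']   # illustrative values
for x in coords:
    params = {'contains': x, 'sets': 'a_parameter'}
    req = requests.Request('GET', 'http://example.com/api',
                           params=params).prepare()
    # req.url now carries ?contains=...&sets=a_parameter,
    # with separators added by requests itself
```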

Categories : Python

How to get response SSL certificate from requests in python?
This, although not pretty at all, works:

    import requests

    req = requests.get('https://httpbin.org')
    pool = req.connection.poolmanager.connection_from_url('https://httpbin.org')
    conn = pool.pool.get()
    # get() removes it from the pool, so put it back in
    pool.pool.put(conn)
    print(conn.sock.getpeercert())

Categories : Python

HTML 501 error using Python Requests
I looked through the site and it appears that you are using the GET HTTP method to retrieve the data when what you actually need is a POST. Typically an HTTP 501 is sent as a response when the web server does not understand the HTTP verb sent by the client in the request. Try changing the code:

    r = requests.get('https://venta.renfe.com/vol/inicioCompra.do', data=payload, cookies=cookies, headers=headers)

to something like

    r = requests.post('https://venta.renfe.com/vol/inicioCompra.do', data=payload, cookies=cookies, headers=headers)

Note: I have not used Requests, so you may want to double-check the function call parameters. For a quick reference see this link. Hope this helps - and here is a dump of my header as visible in Chrome. Observe that y

Categories : Python

Shutting off URL Encoding in Python Requests
It doesn't look like it can be done any longer. Every URL gets passed through requote_uri in utils.py. And unless I'm missing something, the fact that this API wants JSON with spaces in a GET parameter is a bad idea.

Categories : Python

Posting to CloudApp API (AWS) with Python Requests
After several days, I finally figured out the (simple) problem. The CloudApp API requires a GET request to the URL in the "Location" header of Amazon's response. Pycloudapp was working correctly because it properly authenticated the GET response with return json.load(self.upload_auth_opener.open(request)). I'm not sure why I was able to post correctly using Postman without any authentication - somehow it was properly following the GET without credentials, even though the CloudApp API specifies that following the redirect requires authentication. I was unable to follow the redirect properly with Requests because I was posting unauthenticated values (if I continued the Session() with s.post, the auth headers throw an error because Amazon doesn't expect them), and therefore the subsequent GET was

Categories : Python

How to log in to Google with Python Requests module?
I think you actually get more interesting data by grabbing the raw JSON that it uses to build the graphs. It includes the related headlines that don't come with the CSV download. This works for a few queries (5?) before you reach the quota.

    import json
    import re
    import requests

    _GOOGLE_TRENDS_URL = 'http://www.google.com/trends/trendsReport?hl=en-US&content=1&q=%s&hl=en-US&content=1'

    term = 'foo'
    response = requests.get(_GOOGLE_TRENDS_URL % term)
    if response.status_code == requests.codes.ok:
        data_line = [l for l in response.content.splitlines() if 'var chartData' in l][0]
        chart_data = re.sub(r'.*var chartData = (.*?);.*', r'\1', data_line)
        # Fix for date representation
        chart_data = re.sub(r'new Date\((\d+), (\d+), (\d+)\)', r'"\1-\2-\3"', chart_data)
        data = json.loads(

Categories : Python

Requests package python is not imported
This appears to be a bug in urllib3. Looking at the source, starting at line 33 of the file that raised the error:

    def __init__(self, user, pw, authurl, *args, **kwargs):
        """
        authurl is a random URL on the server that is protected by NTLM.
        user is the Windows user, probably in the DOMAIN\username format.
        pw is the password for the user.
        """

That \u in the middle of the string is illegal. I don't get this error from just import requests or even import requests.packages.urllib3, but if I import requests.packages.urllib3.contrib.ntlmpool, I get it too. I don't know why it's automatically importing ntlmpool for you, but that's not important; it's definitely a bug. The bug was fixed in urllib3 in change 1f7f39cb on 2013-05-22, and merged to requests in change 2ed976ea on

Categories : Python

Using Python's requests library instead of cURL
You're sending params, not data:

    p = requests.post(token_url, params = data)

When you pass a dictionary as the params argument, requests tries to send it as part of the query string on the URL. When you pass a dictionary as the data argument, requests will form-encode it and send it as the POST data, which is the equivalent of what curl's -F does. You can verify this by looking at the request URL. If print(p.url) shows something like http://api.instagram.com/oauth/access_token?client_id=xxxxxx&client_secret=xxxxx&…, that means your parameters ended up on the URL instead of in the POST data. See Putting Parameters in URLs and More complicated POST requests in the quick-start documentation for full details. For more complicated debugging, you may want to consider pointing bot
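The params-vs-data difference can also be seen offline with a PreparedRequest, no server needed (the URL and values here are placeholders):

```python
import requests

payload = {'client_id': 'xxx', 'client_secret': 'yyy'}

as_params = requests.Request('POST', 'http://example.com/token',
                             params=payload).prepare()
as_data = requests.Request('POST', 'http://example.com/token',
                           data=payload).prepare()

# params land in the URL; data is form-encoded into the body:
# as_params.url ends with '?client_id=xxx&client_secret=yyy'
# as_data.body is 'client_id=xxx&client_secret=yyy'
```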

Categories : Python

In what cases does Python complex exponentiation throw an OverflowError?
Python integer values can auto-promote to a long for arbitrary precision:

    >>> (10**300)**2
    1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

Float values overflow because of the limitation of IEEE floating point:

    >>> float(
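A short demonstration of where each numeric type gives up (the float and complex limits come from IEEE-754 doubles, roughly 1.8e308):

```python
# ints auto-promote to arbitrary precision, so this cannot overflow
big = (10 ** 300) ** 2
assert big == 10 ** 600

# floats are IEEE doubles and top out near 1.8e308
try:
    1e300 ** 2
    float_overflowed = False
except OverflowError:
    float_overflowed = True

# complex numbers store two doubles, so the same limit applies
try:
    (1e160 + 0j) ** 2
    complex_overflowed = False
except OverflowError:
    complex_overflowed = True
```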

Categories : Python

Python TypeError while using xml.etree.ElemenTree and requests
You're not feeding ElementTree the response text, but the requests Response object itself, which is why you get the type error: need string or buffer, Response found. Do this instead:

    r = requests.get(url)
    tree = ET.fromstring(r.text)

Categories : Python

Convert Curl "-I --user" to Python Requests
Use:

    requests.get(url, auth=(username, password))

See the section on Basic Authentication in the requests documentation.

Categories : Python

Python file upload from url using requests library
The only way to do this is to download the body of the URL so you can upload it. The problem is that a form that takes a file is expecting the body of the file in the HTTP POST. Someone could write a form that takes a URL instead, and does the fetching on its own… but that would be a different form and request than the one that takes a file (or, maybe, the same form, with an optional file and an optional URL). You don't have to download it and save it to a file, of course. You can just download it into memory:

    urlsrc = 'http://example.com/source'
    rsrc = requests.get(urlsrc)

    urldst = 'http://example.com/dest'
    rdst = requests.post(urldst, files={'file': rsrc.content})

Of course in some cases, you might always want to forward along the filename, or some other headers, like the Content-

Categories : Python

Python GAE Requests not returning cookies after get request
I tried urlfetch and it seems to be showing the cookie headers:

    import logging
    from google.appengine.api import urlfetch

    response = urlfetch.fetch(url)
    logging.info(response.headers)

Categories : Python

requests python lib adding in Accept header
As stated in the comments, it seems this is a problem with the requests library on Python 3.3. In requests there are default headers (which can be found in the utils folder). When you don't specify your own headers, these default headers are used. However, if you specify your own headers, requests instead tries to merge the headers together to make sure you have all the headers you need. The problem shows itself in the def request() method in sessions.py: instead of merging all the headers, it puts in its own headers and then chucks in yours. For now I have just done the dirty hack of removing the Accept header from the default headers found in util

Categories : Python

Constructing requests with URL Query String in Python
To perform GET requests with a URL query string:

    import requests

    params = {
        'action': 'subscribe',
        'callbackurl': '',
        'comment': '',
        'oauth_consumer_key': '',
        'oauth_nonce': '',
        # more key=value pairs as appeared in your query string
    }
    r = requests.get("http://wbsapi.withings.net/notify", params=params)

With that cleared, now you just need to follow the workflow documented on http://www.withings.com/en/api/oauthguide and implement it. Upon receiving your OAuth key and OAuth secret, perform a GET request against the following endpoint and query string, which will give you back a token:

    https://oauth.withings.com/account/request_token?
        oauth_callback=http%3A%2F%2Fexample.com%2Fget_access_token
        &oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf

Categories : Python

Python 3 requests library and json within a list
I'm going to fill out your code snippet a little more for future visitors with the same question. (Your snippet, by the way, will not print what you think it will.) It would look something like:

    import requests

    response = requests.get('http://example.com')
    actual_data = response.json()['data']['next_Data']['actual_data']

    for x in actual_data:
        print x            # prints on separate lines data1, data2, data3

    print actual_data      # prints ['data1', 'data2', 'data3']
    print actual_data[0]   # prints data1
    print actual_data[2]   # prints data3
    print actual_data[1]   # prints data2
    print actual_data[-1]  # prints data3
    print actual_data[-2]  # prints data2
    print actual_data[-3]  # prints data1

That's how you would access your array of data (actually a list as far as Python is concerned) once you have the ar

Categories : Python

Fork incoming requests using python flask
This happens only because you're using the included development server. Flask is a web framework, not a web server; serving concurrent requests is a task for web servers. You can use a WSGI server like uWSGI to serve your Flask application. For even more performance you can also delegate static file serving to NGINX, but for a pure REST server that is normally not needed. With uWSGI you can specify the number of workers (processes) handling requests in parallel. Keep in mind that there's no magic in serving a lot of requests: even if you use more processes or threads, you're still bound to that number of concurrently handled requests.
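For reference, a minimal uWSGI configuration along these lines might look like the following (the module name, port, and worker counts are placeholders to adapt):

```ini
[uwsgi]
module = myapp:app        ; the Flask application object
master = true
processes = 4             ; parallel worker processes
threads = 2               ; threads per worker
http = 0.0.0.0:8000       ; or `socket = ...` when sitting behind NGINX
```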

Categories : Python

Python requests arguments/dealing with api pagination
Read last_page and make a GET request for each page in the range:

    import requests

    r_sanfran = requests.get("https://api.angel.co/1/tags/1664/jobs").json()
    num_pages = r_sanfran['last_page']

    for page in range(2, num_pages + 1):
        r_sanfran = requests.get("https://api.angel.co/1/tags/1664/jobs", params={'page': page}).json()
        print r_sanfran['page']
        # TODO: extract the data

Categories : Python

different results with python requests module and curl
    res = requests.get(url, allow_redirects=False)

Without proxies=..., I got the following output:

    status_code: 302
    response_url: http://www.vevo.com/watch/kesha/crazy-kids/USRV81300226
    headers: {'access-control-allow-origin': '*', 'cache-control': 'max-age=0, no-cache, no-store',
     'connection': 'keep-alive', 'content-length': '159', 'content-type': 'text/html; charset=utf-8',
     'date': 'Mon, 17 Jun 2013 16:32:15 GMT', 'expires': 'Mon, 17 Jun 2013 16:32:15 GMT',
     'location': 'http://www.youtube.com/watch?v=xdeFB7I0YH4', 'pragma': 'no-cache',
     'server': 'Microsoft-IIS/7.0', 'vary': 'Accept-Encoding', 'x-aspnet-version': '4.0.30319',
     'x-aspnetmvc-version': '3.0', 'x-powered-by': 'ASP.NET'}
    history: []

requests used: 0.13.2. With requests 1.2.3, I got a similar result.

Categories : Python

Replicate Curl command with Python Requests
You must not be showing your exact code, because the curl command you provide and the code you provide work. See my output:

    ~ curl -F foo=bar -F file=@setup.py https://httpbin.org/post
    {
      "headers": {
        "Accept": "*/*",
        "Content-Length": "2370",
        "Connection": "close",
        "Host": "httpbin.org",
        "User-Agent": "curl/7.29.0",
        "Content-Type": "multipart/form-data; boundary=----------------------------959364026805"
      },
      "files": {
        "file": "#!/usr/bin/env python import sys import os import re kwargs = {} requires = [] packages = [ "github3", "github3.gists", "github3.repos", "github3.issues", ] try: from setuptools import setup kwargs['test_suite'] = 'run_tests.collect_tests' kwargs['tests_require'] = ['mock', 'expecter', 'coverage==3.5.2'

Categories : Python



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.