Python/Django REST API Architecture |
I think Tastypie will do what you want, and it's simple and easy. Check it
out: http://django-tastypie.readthedocs.org/en/latest/
|
IPC key chose 8 bits from st_dev and 16 bits from st_ino |
Each inode is distinct for each file on a device. Each device number is
unique for each device (partition). Since there are typically vastly more
files per device than devices per system, it makes sense to use more bits
from st_ino than from st_dev, if you are trying to reduce the chances of a
collision.
Unfortunately, since ftok does not guarantee uniqueness, any application
using it must be able to tolerate collisions anyway. This makes it
more-or-less useless, as far as I can tell.
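To make that layout concrete, here is a rough sketch, in Python, of how glibc's ftok composes the key (the exact layout is implementation-defined, so treat this as an illustration rather than a specification):

    import os

    def ftok_like(path, proj_id):
        st = os.stat(path)
        # low 8 bits of proj_id, low 8 bits of st_dev, low 16 bits of st_ino
        return ((proj_id & 0xFF) << 24) | ((st.st_dev & 0xFF) << 16) | (st.st_ino & 0xFFFF)

Two files whose inode numbers agree in their low 16 bits, on devices that agree in their low 8 bits, will collide when the same proj_id is used.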
|
Bits of precision for IEEE 754 floats (32 bits) between -1 and 1 |
You don't have to do too much work to figure out your answer. A 32-bit
IEEE 754 floating point number has 23 bits of mantissa; counting the
leading 1 gives 24 significant binary digits (watch out for denorms). By
doing some logarithms, or looking it up in the table on Wikipedia, you'll
see that this is about 7.22 decimal digits.
Let's take that fact and apply it to one of your examples. All of the
numbers you want (the ones with accuracy down to 0.000001) are therefore
representable in the range -1 to 1, since those numbers all have 7 or fewer
significant digits.
As to your other question about calculating the theoretical bits of
precision in a range: it's the same everywhere. Precision isn't related
to magnitude; you get the same number of significant digits everywhere
(again, watch out for denormals).
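The 7.22 figure is just 24 * log10(2); a one-liner if you want to check it:

    import math
    print(24 * math.log10(2))   # ~7.2247 decimal digits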
|
using com 32 bits library on 64 bits application |
It could be caused by registry virtualization. I've had problems like this
in the past. The biggest annoyance is that you can't see the values or keys
that the editor complains already exist; they actually exist in a
different part of the registry (likely the user's hive).
Good luck
|
Django and Pytz what is going wrong? |
It looks like a bug in the distribute package for Python 3, at the following
location:
pkg_resources.py -> get_resource_string
This method returns bytes, while StringIO expects a string.
I tried decoding it with decode("ISO-8859-1"), but then I got an error in
the pytz package.
However, I found a workaround for this:
Change "./pytz-2013.7-py3.3.egg" to "./pytz" in the
"site-packages/easy-install.pth" file.
Make sure you have an unzipped pytz directory in your site-packages
directory.
|
Something wrong when session in django is changed by JS |
It looks like location.reload() doesn't leave $.post enough time to complete:
$.post(url, data);
// sleep(2000) // own function with a timeout works only in Chrome for me
location.reload();
I asked myself whether the post had enough time to be sent, and found the
solution in the jQuery.post docs.
When I change the code to:
$.post(url, data).done(function(data) { window.location.reload(); });
it now works in every browser :)
|
Django url dispatcher - wrong function |
Your problem is that you are including your homepage urls twice. Remove the
second entry
url(r'^welcome/', include('homepage.urls')),
This is explained in the docs on including other URLconfs:
Whenever Django encounters include() (django.conf.urls.include()), it
chops off whatever part of the URL matched up to that point and sends the
remaining string to the included URLconf for further processing.
In your case, the 'welcome/' is removed from the url, which leaves '',
which is matched by the url pattern for the homepage.
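A minimal sketch of the situation (the file layout and pattern names here are assumptions, not your actual code):

    # project urls.py
    from django.conf.urls import include, url

    urlpatterns = [
        url(r'^', include('homepage.urls')),
        # This is the entry to remove: Django chops off 'welcome/' and passes ''
        # to homepage.urls, where the homepage pattern matches it again.
        url(r'^welcome/', include('homepage.urls')),
    ]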
|
Django filter the queryset of ModelChoiceField - what did i do wrong? |
In your views.py, you have this line:
form = AddGame(request.POST or None, instance=game)
So form is a Form object of the AddGame class (side note: you should rename
it to AddGameForm to avoid confusion).
Since home_team is a field declared on the AddGame class, it is not an
attribute of the form object itself. That's why you can't access it via
form.home_team.
However, the Django Form API provides a fields attribute on every form
object, a dict containing all of the form's fields. That's why you can
access form.fields['home_team'].
And finally, since home_team is a ModelChoiceField, it has a queryset
attribute, which is why you can access
form.fields['home_team'].queryset
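If your goal is to narrow the choices offered, a common pattern is to override that queryset after constructing the form. A sketch, where Team and the league filter are assumptions for illustration:

    form = AddGame(request.POST or None, instance=game)
    # restrict the home_team dropdown to teams in the game's league
    form.fields['home_team'].queryset = Team.objects.filter(league=game.league)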
|
manage.py runserver in virtualenv using wrong django version |
When you run manage.py on its own, Windows pulls it from the main Windows
PATH and runs it with the system-wide file association for Python, which
points at your default installed Python and is therefore outside your
virtualenv.
Inside your virtualenv, try running python manage.py runserver and see what
happens.
|
Django model returning values in the wrong columns |
Don't change the type signature of the model’s __init__ method to take a
name argument. It is called not only when you create an instance in the
shell, but whenever else an instance is initialised, including when you
fetch from the db.
It isn't clear why you need to override the __init__ method, when you
can use named arguments as follows:
b = Buffer(name='schwab')
If you want a convenience method for creating new instances,
consider creating a custom manager with a create method. Look at the
User.objects.create_user() method for example, which takes care of hashing
the password.
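A minimal sketch of such a manager, assuming a Buffer model with a name field as in your example:

    from django.db import models

    class BufferManager(models.Manager):
        def create_buffer(self, name, **extra_fields):
            # convenience creation, analogous to User.objects.create_user()
            buffer = self.model(name=name, **extra_fields)
            buffer.save(using=self._db)
            return buffer

    class Buffer(models.Model):
        name = models.CharField(max_length=100)
        objects = BufferManager()

Then Buffer.objects.create_buffer('schwab') creates and saves an instance without touching __init__.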
|
Django admin OSError (wrong Python Path) |
I had both mod_wsgi and mod_python installed. So, despite my configuration
for mod_wsgi, mod_python initialized first and made Apache use an older
Python version. This caused all the permission issues.
See this doc:
https://code.google.com/p/modwsgi/wiki/InstallationIssues#Python_Version_Mismatch
|
Django with Oracle showing wrong date in template |
In case someone else runs into this issue: it was happening because I was
using DateTimeField model fields. In Oracle, that translates to a TIMESTAMP
column, and I guess Django converts the value to the previous day. After
changing my model fields to DateField and running syncdb, everything now
shows the correct date, and Oracle uses a DATE column type instead of
TIMESTAMP.
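In model terms the change was simply this (the field name here is an assumption):

    from django.db import models

    class Event(models.Model):
        # was: models.DateTimeField()  -> Oracle TIMESTAMP, rendered as the previous day
        event_date = models.DateField()  # -> Oracle DATE, renders the expected date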
|
How to correct wrong num_times in django-taggit-templatetags? |
Seems the name of your app is 'notes', and Text and Note are models inside
this app.
If you want only tags used in model Text, you should use:
{% get_taglist as all_labels for 'notes.Text' %}
If you want only tags used in model Note, you should use:
{% get_taglist as all_labels for 'notes.Note' %}
|
Django - Integrating django-profiles (and django-registration) with django-facebook |
Here's how I suggest you do things. Do away with django-facebook and look
into django-allauth. It will handle accounts (registration, login,
connecting social accounts).
Also, django-profiles has had many issues with Django 1.5+ for me, so I
don't bother with it; instead I create my own profiles app and UserProfile
model with any additional fields that aren't handled by django-allauth.
An example from one of my implementations:
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    default_address = models.OneToOneField(Address, blank=True, null=True)
    default_tshirt_size = models.CharField(blank=True, null=True, choices=constants.tshirt_sizes, max_length=50)
    default_shoe_size = models.CharField(blank=True, null=True, choices=constants.shoe_sizes, max_length=50)
|
Django django-haystack cannot import CategoryBase from django-categories on the first run |
Finally managed to fix it!
The root cause was the from categories.models import Category in the videos
app: another app's models importing Category (which extends CategoryBase)
was creating a circular reference. To fix it, in the models.py of the
videos app, change the direct import to a lazy string reference, as below:
categories = models.ManyToManyField('categories.Category', null=True, blank=True)
Update:
The above fix only worked very briefly, and then I got other circular import
problems on other models; what finally fixed it was upgrading Haystack to
v2.1.0.
|
How to set specific bits? |
// Pack the five values into one 16-bit result:
finalvle = 0;
finalvle = (val1&0x01)<<15;
finalvle += (val2&0x07)<<12;
finalvle += (val3&0x0f)<<8;
finalvle += (val4&0xfe)<<1;
finalvle += (val5&0x01);
|
Concatenation bits in value |
You have to evaluate the rank (number of hex digits) of the rightmost value.
I believe something like this should work:
#include <stdio.h>

// Returns the number of hex digits in value (0 when value == 0)
int rank(int value)
{
    int result = 0;
    while (value > 0)
    {
        value /= 16;
        result++;
    }
    return result;
}

int main(int argc, char * argv[])
{
    int left = 0x12;
    int right = 0x34;
    int sum = (left << (4 * rank(right))) + right;
    printf("%x\n", sum);
    return 0;
}
|
Why 2 GB and not 4 GB on 32 bits limitation? |
It is due to the virtual address space organization. Part of the 4 GB
address space is reserved for the operating system (kernel space), so only
the remaining part (user space, usually 2 GB or 3 GB) is available to the
process itself. Memory-mapped files must fit into this limitation.
|
How to manipulate 64 bits? |
What is meant by manipulation in your case? I assume you are going to
test each and every bit of the variable x. In this example x holds the
maximum value, so every bit is set:
#include <iostream>

int main()
{
    unsigned long long x = 0xFFFFFFFFFFFFFFFFULL;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        // use a 64-bit constant; plain (1 << i) is undefined for i >= 32
        if ((1ULL << i) & x)
            ++cnt;
    }
    std::cout << cnt;
    return 0;
}
|
Getting the first and last 32 bits of a uint64 |
You can typecast() it into 'uint32' and convert to binary:
x64 = uint64(43564);
x32 = typecast(x64,'uint32');
x32 =
43564 0
dec2bin(x32)
ans =
1010101000101100
0000000000000000
|
Using DAO from 64 bits application? |
The only thing I found was to uninstall 32-bit Office and install 64-bit
Office, which feels wrong: I only needed 64 bits for some other concerns.
The data layer should strive to be context independent.
|
Wordpress uploads files in a wrong path (wrong date) |
Are you uploading to a post from 2009?
If so, this may explain (it's a feature not a bug):
"Using WP 2.8.4, when uploading new media files to a page or post that has
already been published, the new file is added in the
/UPLOAD_DIR/year/month/ directory that corresponds to the original
page/post publication date, rather than the file upload date."
From: http://core.trac.wordpress.org/ticket/10752
|
Why does printf pad an 8-bit char to 32-bits? |
A char is (on your platform) a signed 8-bit type. The "%x\n" format says to
print an integer, so the value of byte is sign-extended to an integer. Since
a char of 0xff is, in that context, the 8-bit value -1, printf is just
printing the hex integer value of -1, which is ffffffff.
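If it helps to see the arithmetic, here is the same sign extension sketched in Python (purely illustrative, not the C mechanism itself):

    byte = 0xff                                       # the 8-bit pattern in question
    signed = byte - 0x100 if byte & 0x80 else byte    # reinterpret as signed 8-bit: -1
    print(hex(signed & 0xFFFFFFFF))                   # 0xffffffff, which is what printf shows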
|
C# - structure of 4-bits values |
Unless there's already some class for this purpose I'm unaware of, your
best bet is probably to use a UInt16, as suggested by leppie, then have
properties to get and set the "sub-values", transparently performing
masking and shifting as needed.
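The masking/shifting idea itself is language-agnostic; here is a sketch in Python (names and layout are assumptions) of packing four 4-bit sub-values into one 16-bit integer, which maps directly onto C# properties over a UInt16:

    class FourNibbles:
        def __init__(self):
            self._value = 0                      # 16-bit backing storage

        def get(self, index):
            return (self._value >> (index * 4)) & 0xF

        def set(self, index, nibble):
            mask = 0xF << (index * 4)
            self._value = (self._value & ~mask) | ((nibble & 0xF) << (index * 4))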
|
Macro expansion to set bits |
It is not portable code, but it is illustrative of the power of C and its
ability to interface with memory-mapped I/O devices.
The cast ((AT91PS_PMC) 0xFFFFFC00) means that the memory address 0xFFFFFC00
will be treated as a pointer to a structure of the type AT91PS_PMC. Within
that structure, there is the Peripheral Clock Enable Register, the PMC_PCER
field, at some suitable offset from the address 0xFFFFFC00.
Thus, the assignment arranges to write 1024 (1 << 10) to the PMC_PCER
register, assuming that the base address 0xFFFFFC00 is correct.
This sets the register to a value containing a single set bit. It is not
the same as what most people would mean by 'setting a single bit'; that
would be done by a line such as:
AT91C_BASE_PMC->PMC_PCER |= 1 << AT91C_ID_PWMC;
Not
|
Shifting a 32 bit integer by 32 bits |
According to section 3.3.7 Bitwise shift operators in the draft of C89 (?)
standard:
If the value of the right operand is negative or is greater than or equal
to the width in bits of the promoted left operand, the behavior is
undefined.
Assuming int is 32-bit on the system that you are compiling the code in,
when n is 0, you are shifting 32 bits. According to the statement above,
your code results in undefined behavior.
|
Only 7 bits for Java char? |
The method Integer.toBinaryString() does not zero-pad the results on the
left; you'll have to zero-pad it yourself.
This value is converted to a string of ASCII digits in binary (base 2)
with no extra leading 0s.
|
Adding a hex value to a string of bits |
First, I think you should take advantage of the # flag, which adds the 0x
prefix for you, instead of adding it on your own. Also, I'm not sure I
understand the padding (the 4), but I'll leave that be:
proc dec2hex {dec_num} {return [format %0#4X $dec_num]}
I think your brackets and/or spacing were botched in your editing, but
here's the next line, fixed:
set lEndOfAddress [format %02X [expr { 0x64 + [dec2hex $aSourceAddress] }]]
And simplifying your last line,
set lCompareIpAddr "E8 00 00 $lEndOfAddress"
I get the results,
% set aSourceAddress 6
6
% proc dec2hex {dec_num} {return [format %0#4X $dec_num]}
% set lEndOfAddress [format %02X [expr { 0x64 + [dec2hex $aSourceAddress] }]]
6A
% set lCompareIpAddr "E8 00 00 $lEndOfAddress"
E8 00 00 6A
|
Detect 64-bits in C using size_t |
More importantly, is this test safe and reliable under all circumstances,
keeping in mind OS and compilers portability ?
There is no "portable way" to do this, because C standard let the
environment define SIZE_MAX as large as he wants (as long as it is greater
than 65535). But C standard doesn't define what are "32 bits" and "64 bits"
platforms neither.
However, on common memory models, size_t is 32 bits on 32 bits platforms
and 64 bits on 64 bits platforms.
I'm a bit worried that size_t might only be guaranteed for C99
environments.
size_t is in C89 too. So, as long as your environment is standard, it
should define size_t.
|
Shift a __m128i of n bits |
This is the best that I could come up with for left/right immediate shifts
with SSE2:
#include <stdio.h>
#include <emmintrin.h>
#define SHL128(v, n) \
({ \
    __m128i v1, v2; \
    if ((n) >= 64) \
    { \
        v1 = _mm_slli_si128(v, 8); \
        v1 = _mm_slli_epi64(v1, (n) - 64); \
    } \
    else \
    { \
        v1 = _mm_slli_epi64(v, n); \
        v2 = _mm_slli_si128(v, 8); \
        v2 = _mm_srli_epi64(v2, 64 - (n)); \
        v1 = _mm_or_si128(v1, v2); \
    } \
    v1; \
})

#define SHR128(v, n) \
({ \
    __m128i v1, v2; \
    if ((n) >= 64) \
    { \
        v1 = _mm_srli_si128(v, 8); \
        v1 = _mm_srli_epi64(v1, (n) - 64); \
    } \
    else \
    { \
        v1 = _mm_srli_epi64(v, n); \
        v2 = _mm_srli_si128(v, 8); \
        v2 = _mm_slli_epi64(v2, 64 - (n)); \
        v1 = _mm_or_si128(v1, v2); \
    } \
    v1; \
})
|
How to work with the bits in a byte |
You're using the wrong constructor (probably).
The one that you're using is probably this one, while you need this one:
var bitArray = new BitArray(new [] { myByte } );
|
How can I know if R is running on 64 bits versus 32? |
Your platform says x86_64-w64 in front of the mingw32. Your arch is
similarly x86_64. That means you're running 64-bit, on 64-bit Windows.
For reference, the corresponding arch for 32-bit R would be i386.
|
I am looking for an algorithm to shuffle the first 25 bits of a (32-bit) int |
First, for the sake of evenness, we can extend the problem to a 26-bit
shuffle by remembering that bit 25 will appear at the end of the
interleaved list, so we can trim it off after the interleaving operation
without affecting the positions of the other bits.
Now we want to interleave the first and second sets of 13 bits; but we only
have an algorithm to interleave the first and second sets of 16 bits.
A straightforward approach might be to just move the high and low parts of x
into more workable positions before applying the standard algorithm:
x = (x & 0x1ffe000) << 3 | x & 0x00001fff;
x = (x & 0x0000FF00) << 8 | (x >> 8) & 0x0000FF00 | x & 0xFF0000FF;
x = (x & 0x00F000F0) << 4 | (x >> 4) & 0x00F000F0 | x & 0xF00FF00F;
x = (x & 0x0C0C0C0C) << 2 | (x >> 2) & 0x0C0C0C0C | x & 0xC3C3C3C3;
x = (x & 0x22222222) << 1 | (x >> 1) & 0x22222222 | x & 0x99999999;
|
Get an array of bits that represent an int in c# |
Convert.ToString(value, base)
Converts the value of a 32-bit signed integer to its equivalent string
representation in a specified base. Specify 2 for the base.
|
Convert short bits into an int |
Use ByteBuffer in the java.nio package.
// Convert an unsigned short to bytes.
// Java has no unsigned short; char is the 16-bit unsigned equivalent.
char unsignedShort = 100;
// Endianness of the bytes. I recommend setting it explicitly for clarity,
// and it must be set before writing to the buffer.
ByteOrder order = ByteOrder.BIG_ENDIAN;
byte[] ary = ByteBuffer.allocate(2).order(order).putChar(unsignedShort).array();
// Get integers from 16 bytes:
byte[] bytes = new byte[16];
ByteBuffer buffer = ByteBuffer.wrap(bytes);
for (int i = 0; i < 4; i++) {
    int intValue = buffer.getInt();
}
Guava also has routines for primitive to byte conversion if you're
interested in an external library:
http://code.google.com/p/guava-libraries/
Also, I don't know your use-case, but if you're in the beginning stages of
your project, I'd use Google's ProtoBufs for exchanging protocol information.
|
Concat bits into one string |
Use a StringBuilder:
StringBuilder tmp = new StringBuilder(encoded.Count);
foreach (bool bit in encoded)
{
    tmp.Append(bit ? "1" : "0");
}
MessageBox.Show(tmp.ToString());
|
How to put bytes(bits) into byte |
Try this
byte a1=0;
byte a2=1;
byte a3=1;
byte a4=0;
byte b = (byte) ((a1 << 7) | (a2 << 6) | (a3 << 5) | (a4 << 4));
And see this documentation.
|
is it possible to do memcpy in bits instead of bytes? |
If you need to fill fields, you can use C bit-fields with a struct, like
this:
struct box_props {
unsigned first : 1;
unsigned second : 3;
unsigned : 4;
};
Here 1, for instance, means that the field is 1 bit long; the last
(unnamed) field is 4 bits of padding.
Define the struct, memcpy into it, and read the fields as if they were
unsigned. The same goes for writing.
NOTE: always pad the struct to a whole number of bytes, or memcpy could have
unwanted effects.
|
turned on bits counter |
You're making a binary adder. Try this...
Two black boxes for input with one input remaining:
7 6 5 4 3 2 1
| | | | | | |
------- ------- |
| | | | |
| H L | | H L | |
------- ------- |
| | | | |
Take the two low outputs and the remaining input (1) and feed them to
another black box:
L L 1
| | |
-------
| |
| C L |
-------
| |
The low output from this black box will be the low bit of the result. The
high output is the carry bit. Feed this carry bit along with the high bits
from the first two black boxes into the fourth black box:
H H C L
| | | |
------- |
| | |
| H M | |
------- |
| | |
The result should be the three-bit count: the high and middle bits come from
this last box, and the low bit from the third box.
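If it helps, here is the same wiring sketched in Python (names assumed); each box() is the "black box" that reports the high and low bits of how many of its inputs are 1:

    def box(a, b, c=0):
        total = a + b + c
        return total >> 1, total & 1              # (high/carry, low)

    def count_set_bits(b1, b2, b3, b4, b5, b6, b7):
        h1, l1 = box(b1, b2, b3)                  # first black box
        h2, l2 = box(b4, b5, b6)                  # second black box
        c, low = box(l1, l2, b7)                  # third box: low bit of the count
        high, mid = box(h1, h2, c)                # fourth box: high and middle bits
        return high * 4 + mid * 2 + low

    # e.g. count_set_bits(1, 0, 1, 1, 0, 1, 1) == 5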
|
reading bit mask and bits in the following example |
Well, 0x1 is just the hex value of 1, which in binary is represented as
001. When you apply a 0-bit shift to 0x1, the value is unchanged because
you haven't actually shifted anything. When you shift by 1, you're looking
at the representation 010, which in plain numbers is a 2, because you have a
1 in the twos column and zeros everywhere else.
Therefore, uint32_t i = 0x1 << 0; has a smaller value than uint32_t j
= 0x1 << 1;.
uint32_t i = 0x1 << 0;
uint32_t j = 0x1 << 1;
NSLog(@"%u",i); // outputs 1
NSLog(@"%u",j); // outputs 2
|