MongoDB rename collection fails with "exceeds maximum length of 32, allowing for index names"
I don't know if it's acceptable to have less readable index names, but you can specify the names yourself so they don't get so long (leaving more room for your collection names):
db.someReallyLongCollectionName.ensureIndex({ IncludeInLocationBasedSearch: 1, TotalBranches: 1, Categories: 1 }, { name: 'short1' });
You could even name indexes with single characters and keep a reference to their meanings somewhere else. The second argument ensureIndex takes is for options; full docs here: http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/
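A rough pymongo equivalent (field and collection names taken from the question; the actual index creation needs a live server, so it is wrapped in a function, while the pure-Python part just shows how much shorter the custom name is than the driver's default):

```python
# Sketch: give the compound index a short explicit name so that
# "<db>.<collection>.$<index name>" stays within the namespace length limit.
keys = [("IncludeInLocationBasedSearch", 1), ("TotalBranches", 1), ("Categories", 1)]

def create_short_index(collection):
    # collection is assumed to be a pymongo Collection; not executed here
    return collection.create_index(keys, name="short1")

# For comparison, the default name the driver would generate for this index:
default_name = "_".join(f"{field}_{order}" for field, order in keys)
print(default_name)  # IncludeInLocationBasedSearch_1_TotalBranches_1_Categories_1
```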

Categories : Mongodb

How to avoid: Warning: POST Content-Length of 47820076 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
One thing you can do is set a maximum size on the HTML form: <input type="hidden" name="MAX_FILE_SIZE" value="4194304" /> It's not bulletproof, but it has some impact. Why it is not ideal is explained in this comment: it is only checked by PHP, so the upload will start and continue until this limit is reached. My recommendation is to use it together with a JavaScript solution, as described here. That only works if JavaScript is enabled, of course.

Categories : PHP

Accumulo - Adding collection of mutations to batchwriter that exceeds buffer limit
An Accumulo BatchWriter will auto-flush its current batch when its buffer gets full. The API also lets you add Mutations via an Iterable, which is not the same as adding a collection: you're not adding the whole collection at once. Instead, you're providing a source feed of Mutations that are added to the current batch one at a time, so the batch may be flushed before the entire collection has been iterated.

Categories : Java

Indexing MongoDB collection in Java
You should construct a new DBObject that represents your index. See the code below:
DBObject index2d = BasicDBObjectBuilder.start("location", "2dsphere").get();
DBCollection collection = new Mongo().getDB("yourdb").getCollection("yourcollection");
collection.ensureIndex(index2d);

Categories : Mongodb

MongoDB lat long search with 2D sphere indexing in Java application
MongoDB does not care about the field names in loc; coordinates are always longitude first, then latitude. You have them the wrong way around. Instead you want this:
db.col.ensureIndex( { "location.loc": "2dsphere" } );
db.col.insert( { "merchant_id" : "W234FDHDF##234", "location" : { "loc" : { type: "Point", coordinates: [ 30.8, -58.4 ] }, "city" : "Cupertino" } } );
And then you can use a $geoNear query:
// location object
BasicDBObject myLoc = new BasicDBObject();
myLoc.append("type", "Point");
double[] loc = {-121.97679901123047, 37.557369232177734};
myLoc.append("coordinates", loc);
// command object
BasicDBObject myCmd = new BasicDBObject();
myCmd.append("geoNear", "col");
myCmd.append("near", myLoc);
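The same point, longitude before latitude, sketched in Python (pymongo is assumed for the actual command, so it sits in a function that is not executed; the coordinate checks below run standalone):

```python
# GeoJSON points are always [longitude, latitude]; the field names inside
# "loc" are irrelevant to MongoDB. Values taken from the answer above.
point = {"type": "Point", "coordinates": [-121.97679901123047, 37.557369232177734]}

def find_nearby(db):
    # Requires a 2dsphere index on "location.loc" and a live server; not run here.
    return db.command({"geoNear": "col", "near": point, "spherical": True})

lon, lat = point["coordinates"]
print(-180 <= lon <= 180, -90 <= lat <= 90)  # True True
```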

Categories : Mongodb

Indexing mongodb collection with 2dsphere indexes and IP address indexes
This should work: db.position.ensureIndex({"geoField.coordinates": "2dsphere", "ipaddress": 1}) Take a look at this article; I hope it helps.

Categories : Mongodb

Add field that is unique index to collection in MongoDB
Try creating a unique sparse index: db.users.ensureIndex({username:1},{unique:true,sparse:true}) As per the docs: You can combine the sparse index option with the unique index option so that mongod will reject documents that have duplicate values for a field, but ignore documents that do not have the key. Note, though, that this only ignores documents which lack the field entirely; a document where the field exists with a null value still counts as having the value null, so two such documents would violate the unique constraint.
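A sketch of the same in pymongo, plus a pure-Python illustration of the caveat (the index creation needs a live server, so it is wrapped in a function; the document list below is hypothetical):

```python
def create_username_index(users):
    # pymongo equivalent of the shell command above; not executed here
    return users.create_index([("username", 1)], unique=True, sparse=True)

docs = [
    {"_id": 1, "username": "alice"},
    {"_id": 2},                      # no field at all: skipped by a sparse index
    {"_id": 3},                      # also skipped, so no duplicate-key conflict
    {"_id": 4, "username": None},    # null IS a value: indexed, unique applies
]
indexed_values = [d["username"] for d in docs if "username" in d]
print(indexed_values)  # ['alice', None]
```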

Categories : Mongodb

MongoDB : alter an existing index on a collection
I am pretty sure you can't alter an index once it has been created, but you can easily create a new index that covers the new field. Adding indexes is very easy; from the shell you can type db.yourcollection.ensureIndex( { field1: 1, field2: -1 } ) Mongo will look at your indexes and work out which one is best to use for your query. You can see this by adding explain() to the end of your query in the shell; it will tell you whether an index was used and which one. This is also a good tool for working out what is slow about your query. See the Mongo documentation for further details: http://docs.mongodb.org/manual/core/indexes/

Categories : Mongodb

Pymongo Not creating collection in mongodb
The collection will not be created until you add data; collections (and even databases) in MongoDB are created lazily by default. If you wish to allocate a collection eagerly, use create_collection explicitly: http://api.mongodb.org/python/current/api/pymongo/database.html#pymongo.database.Database.create_collection
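A minimal sketch of the eager path. The FakeDB class is a stand-in invented here so the example runs without a server; with a real pymongo Database the same function works unchanged:

```python
def ensure_collection(db, name):
    # explicitly allocate the collection instead of waiting for the first insert
    if name not in db.list_collection_names():
        db.create_collection(name)
    return db[name]

class FakeDB:
    """Tiny stand-in for pymongo.database.Database, for illustration only."""
    def __init__(self):
        self._cols = {}
    def list_collection_names(self):
        return list(self._cols)
    def create_collection(self, name):
        self._cols[name] = []
    def __getitem__(self, name):
        return self._cols[name]

db = FakeDB()
ensure_collection(db, "events")
print(db.list_collection_names())  # ['events']
```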

Categories : Python

Creating MongoDB capped collection is too slow
Capped collections are preallocated when created, and on ext3 that preallocation will block. Ext4 or XFS is preferred [1] because they implement posix_fallocate, which lets MongoDB allocate large files quickly. [1] http://docs.mongodb.org/manual/administration/production-notes/

Categories : Database

Creating database index on django mongodb
Django-nonrel doesn't interact with MongoDB on its own; you need the MongoDB Python driver (pymongo) and/or an object-document mapper (ODM) such as mongoengine or django-mongodb-engine to interact with the database. Creating the indexes is the job of the ODM and driver, and the syntax depends on which one you use. See the relevant documentation for creating indexes in pymongo, django-mongodb-engine, or mongoengine.

Categories : Django

PHP Post-Content Length Exceeds Limit
That error is most probably sent by the server. The server will have settings in php.ini which can be configured, e.g.:
memory_limit = 32M
upload_max_filesize = 10M
post_max_size = 20M
The php.ini file is usually stored in /etc/php.ini, /etc/php.d/cgi/php.ini, or /usr/local/etc/php.ini, but most web hosts provide an option to edit this file from within the control panel.

Categories : PHP

Wordpress export exceeds memory limit
WordPress has its own default value for the maximum memory usage, and it sets it in admin.php with an ini_set call. See WordPress 3.5.2, /wp-admin/admin.php:109:
@ini_set( 'memory_limit', apply_filters( 'admin_memory_limit', WP_MAX_MEMORY_LIMIT ) );
To fix the problem, you'll need to adjust the value. I set mine to an arbitrarily high number to quickly get the export done:
@ini_set( 'memory_limit', apply_filters( 'admin_memory_limit', '4096M' ) );
A value of "-1" removes the memory limit entirely.

Categories : PHP

ASP.NET exception: The value's length for key 'initial catalog' exceeds it's limit of '128'
It is likely that this has to do with your connection string in web.config. Have a look for the part like: Initial Catalog=aspnet-AuctionWebsite-20130613142210; Chances are yours is over 128 characters.

Categories : C#

Display tooltip when a item of ComboBox exceeds the visible limit?
Try this:
' Get the longest element
For Each elem As String In CBox1.Items
    If elem.Length > auxCad.Length Then auxCad = elem
Next
' Get the size
iSize = CInt(CBox1.CreateGraphics.MeasureString(auxCad, CBox1.Font).Width) + 20
If iSize > CBox1.Width Then
    ' Show tooltip
End If

Categories : Dotnet

When backup size exceeds its limit triggers an email to the users
There's a detail or two missing: What language? What environment? Etc. But, as a possible example in PHP:
$backupFilePath = 'path to your backup file';
$limit = your limit in bytes;
if (filesize($backupFilePath) < $limit) {
    // log?
} else {
    $to = "someone@example.com";
    $subject = "Limit Exceeded";
    $message = "Limit of " . $limit . " bytes exceeded.";
    $from = "someonelse@example.com";
    $headers = "From: " . $from;
    mail($to, $subject, $message, $headers);
}
See: http://php.net/manual/en/function.filesize.php and http://www.w3schools.com/php/php_mail.asp

Categories : Email

maven-compiler-plugin argument string exceeds OS X Terminal limit
I used to get these all the time; I developed the habit of moving M2_REPO to something closer to the root, like /java/m2/r, by adding <localRepository>/java/m2/r</localRepository> to my ~/.m2/settings.xml.

Categories : Java

Suspected faulty error: DataGridView Column exceeds the MaxLength limit
When creating the "Notes" column, what type did you specify for it? I myself use varchar(maxStringLength). Also, to help you more efficiently, could you provide some more technical information about your DataGridView and database?

Categories : Vb.Net

editable div in table exceeds its percentage-width when content is very long
If you only wish for your text to wrap, so the width stays the same, you need to get rid of white-space: nowrap;. Otherwise add table-layout: fixed; to your table CSS. See the updated fiddle here: http://jsfiddle.net/k5Erc/1/

Categories : CSS

Running a macro from the command line on a file that exceeds Excel's row/column limit
You could load chunks of the .csv at a time, putting a prefix or suffix at the end of each import, run VBA code on each chunk, then add the results back together. Unfortunately the Workbooks.OpenText method supports importing text starting at a particular row, not a particular column, so you'd need to break the .csv file into manageable chunks outside of Excel before running VBA code.

Categories : Excel

Solution to use limit in mongodb find without using MongoCursor->limit()?
I'm not sure why you want to avoid limit(), but you can set query modifiers directly:
$cursor = $collection->find(array('type' => 'test'));
$cursor->addOption('$maxScan', 10);
Note that $maxScan caps the number of documents scanned, which is similar in effect to limit() for simple queries but not identical, and $orderby is the modifier behind sort(). Reference: http://docs.mongodb.org/manual/reference/operator/query-modifier/

Categories : PHP

Matlab error: Index exceeds matrix dimensions
As the error message says, you are trying to access a position in wave that does not exist. See this example:
a = rand(7,1);
step = 4;
1:step:7
ans = 1 5
When v = 5, you try to access positions v:v+step, i.e. 5 to 9, but a only has 7 elements. In your case, wave is defined up to length(wave), but on the last iteration you go out of bounds. To avoid it, one approach is to shift the end positions back by the length of the window:
pos = (1+w_length:w_length:length(wave)) - w_length
for v = pos
    % do stuff
end
However, you will be left with some unprocessed part at the end, which you will have to handle outside of the loop as a last iteration.
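The same fix sketched in Python, assuming a signal wave and window length w_length as in the question: choose the window start positions so that start + w_length never passes the end of the signal.

```python
wave = list(range(10))  # stand-in signal of length 10
w_length = 4

# start positions chosen so every full window fits inside the signal
positions = range(0, len(wave) - w_length + 1, w_length)
windows = [wave[p:p + w_length] for p in positions]
print(windows)  # [[0, 1, 2, 3], [4, 5, 6, 7]]

# the unprocessed tail is handled separately, as the answer suggests
tail = wave[len(windows) * w_length:]
print(tail)     # [8, 9]
```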

Categories : Matlab

Index Exceeds Matrix Dimensions - Canny Edge Detection
Your loop ranges are in the wrong order, leading to the error. If you modify them to this:
for u = 3 : r
    for v = 3 : c
        sum = 0;
        for i = -2 : 2
            for j = -2 : 2
                sum = sum + (ID(u+i, v+j) * filter(i+3, j+3));
            end
        end
        IDx(u,v) = sum;
    end
end
the problem is solved. My guess is that the code worked only for square images with c == r. Note that you are not making use of Matlab's vectorization capability, which lets you shorten the first steps to:
ID = [zeros(2,c+4) ; [zeros(r,2) IDtemp zeros(r,2)] ; zeros(2,c+4)];
filter = [2 4 5 4 2; 4 9 12 9 4; 5 12 15 12 5; 4 9 12 9 4; 2 4 5 4 2];
filter = filter/159;
for u = 1 : r
    for v = 1 : c
        IDx(u,v) = sum(reshape(ID(u+[0:4], v+[0:4]) .* filter, 25, 1));
    end
end

Categories : Image

Error: Index exceeds matrix dimension in simulating AR process
The indexing operation, cell addition, and cell/double multiplication operations as posted are not allowed. If a is a cell array (such as y1) generated as follows:
>> a = {1:256}
a =
    [1x256 double]
>> whos a
  Name    Size    Bytes  Class
  a       1x1      2108  cell array
Grand total is 257 elements using 2108 bytes
I cannot index into a(2) because it doesn't exist:
>> a(2)
??? Index exceeds matrix dimensions.
I cannot add one cell and another as follows:
>> a(1)+a(1)
??? Function 'plus' is not defined for values of class 'cell'.
and I cannot multiply a cell and type double as follows:
>> a*3
??? Function 'mtimes' is not defined for values of class 'cell'.
Error in ==> mtimes at 16

Categories : Matlab

Matlab using multiple containers.Map raises an Index exceeds matrix dimensions error
This line, which is totally equivalent to a.mapOfB('one').mapTest('one'), does not raise the error: builtin('_paren', a.mapOfB('one').mapTest, 'one') Therefore it's not a "real" error, but a limitation of MATLAB's syntax or of containers.Map's implementation of the subsref() operator. See also this popular question.

Categories : Matlab

Error "Index exceeds matrix dimensions." in MatLab using importdata with a text file
I often find it useful to use Matlab's built-in GUI for importing a data file, which can help you visualise how the data will be imported. There is an option there to produce the code required to replicate the options selected during the import, which will let you work out how to import the data dynamically. Just go to: File >>> Import Data...

Categories : Matlab

Xcode - Cannot disable indexing
I actually figured it out finally. The IDEIndexDisable boolean was missing from com.apple.dt.Xcode completely, so naturally the value couldn't be set from the command line. I added it manually by editing the plist file, adding the IDEIndexDisable boolean, and setting it to YES. Finally, no more indexing! EDIT: To edit the plist file: open it in Xcode (it's located at ~/Library/Preferences/com.apple.dt.Xcode.plist), find the IDEIndexDisable boolean, and change it to YES. In my case the boolean was missing entirely and I had to add it.

Categories : Xcode

mongodb - add column to one collection find based on value in another collection
In MongoDB, the simplest way is probably to handle this with application-side logic rather than trying to do it in a single query. There are many ways to structure your data, but here's one possibility:
user_document = { name : "User1", postsIhaveLiked : [ "post1", "post2", ... ] }
post_document = { postID : "post1", content : "my awesome blog post" }
With this structure, you would first query for the user's user_document. Then, for each post returned, you check whether the post's postID is in that user's postsIhaveLiked list. The main idea is that you get your data in two steps, not one. This is different from a join, but it is based on the same underlying idea of using one key (in this case, the postID) to relate two different pieces of data.
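The two-step lookup, sketched in plain Python with the documents from the answer (a second, hypothetical post is added so both outcomes show; in a real application the two lists would come from two separate queries):

```python
user_document = {"name": "User1", "postsIhaveLiked": ["post1", "post2"]}
posts = [
    {"postID": "post1", "content": "my awesome blog post"},
    {"postID": "post3", "content": "someone else's post"},  # hypothetical extra
]

# step 1: fetch the user's liked-post ids (here, read straight from the document)
liked = set(user_document["postsIhaveLiked"])
# step 2: annotate each fetched post by membership in that set
annotated = [dict(post, likedByMe=post["postID"] in liked) for post in posts]
print([p["likedByMe"] for p in annotated])  # [True, False]
```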

Categories : Mongodb

Using MongoDB Collections vs Indexing
I don't think that the document structure is really the issue here. Let's look at this from a different point of view: think of it like an accounting ledger.
line #, Date,       Description,   Count, Price each
1,      02/25/2013, Super widgets, 1000,  $1     <-- struck through
2,      02/25/2013, Super Widgets, 5000,  $0.50  <-- struck through
3,      02/26/2013, Super widgets, 3500,  $0.75
The final line, #3, is the current tally; by looking through the dates, you can see what was kept and what was not. The "transaction" evolves much like your account statement: just add line-item changes to the deal, and "strike out" the ones not kept (don't delete them, just mark them inactive). I think you have 3 collections:
Reseller (company)
Transactions (between reseller and you)
Trans

Categories : Mongodb

mongodb indexing subarray values
Can you explain more about what you're trying to do? The first schema design is not very good; you have a bunch of arrays that you really have no way to address except with the array operators, which can be very slow. It seems like you were on track with your second idea; your syntax is just a little off. If you insert documents instead of sub-arrays you'll find the schema much easier to deal with, and you can index the two values in each document as illustrated below:
> db.test.insert({field: [{a:1, b:2}, {a:3, b:4}]})
> db.test.ensureIndex({"field.a":1})
> db.test.ensureIndex({"field.b":1})
> db.test.getIndexes()
[
    {
        "v" : 1,
        "key" : { "_id" : 1 },
        "ns" : "test.test"

Categories : Mongodb

Indexing coordinates in MongoDB not working
The 2d and 2dsphere index types don't enforce the unique constraint at all. I've created a DOCS issue to clarify this in the documentation: https://jira.mongodb.org/browse/DOCS-1701

Categories : Ruby

Index on calculated column triggers 900 byte index size limit
Try CAST(CASE ... END AS VARCHAR(106)):
CAST(CASE WHEN city_id IS NULL
          THEN name
          ELSE name + REPLICATE(' ', 100 - LEN(name)) + city_id
     END AS VARCHAR(106)) COLLATE Modern_Spanish_CI_AI
Or simply ignore it; it's only a warning.

Categories : Sql Server

The difference between Legacy Indexing/Auto Indexing and the new indexing approach in Neo4j
Yes, auto-indexes are a type of legacy index. And yes, you can configure them for embedded mode; see an example here: Neo4j Embedded Fulltext Automatic Node Index. The new "schema indexes", defined on labels, are the favored way to create indexes; legacy indexes are the old way to do it. You can use both together if needed.

Categories : Java

How to solve memory limit issue in shared host
You can try to add your own php.ini file in which you specify: memory_limit = 512M To do so: create a php.ini file in your root folder and add memory_limit = 512M, then create a .htaccess file and add this line: SetEnv PHPRC /home/username/folder/php.ini This will load your custom instructions from your php.ini. To make sure it worked, just run <?php phpinfo(); ?>

Categories : PHP

MongoDB indexing multi-input search
$and won't work, as MongoDB can only use one index per query at the moment. So if you create an index on each field that you search on, MongoDB will select the best-fitting index for that query pattern; you can check with explain() which one was selected. Creating an index for each possible combination is probably not a good idea, as you'd need 6 * 5 * 4 * 3 * 2 * 1 = 720 indexes, and a collection can only have 64 indexes. You could pick the most likely combinations, but that won't help a lot. One solution could be to store your data differently, like:
{ properties: [
    { key: 'designer', value: "Designer1" },
    { key: 'store', value: "Store1" },
    { key: 'category', value: "Category1" },
    { key: 'name', value: "Keyword" },
    { key: 'gender', value: ... }
] }
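With that key/value layout, one compound index serves every attribute. A sketch (pymongo is assumed for the index creation, so it sits in a function that is not executed; the matching function below mimics $elemMatch in plain Python):

```python
def create_property_index(collection):
    # one compound index instead of hundreds; needs a live server, not run here
    return collection.create_index([("properties.key", 1), ("properties.value", 1)])

# the query shape this index supports
query = {"properties": {"$elemMatch": {"key": "designer", "value": "Designer1"}}}

doc = {"properties": [
    {"key": "designer", "value": "Designer1"},
    {"key": "store", "value": "Store1"},
]}

def matches(doc, key, value):
    # plain-Python equivalent of the $elemMatch query above
    return any(p["key"] == key and p["value"] == value for p in doc["properties"])

print(matches(doc, "designer", "Designer1"))  # True
print(matches(doc, "designer", "Designer2"))  # False
```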

Categories : Database

Do categorical fields need indexing? (MySQL or MongoDB)
An index on such a field is likely not to be useful in MySQL; in fact, such an index could make most queries worse. There is one case where an index is always faster: a query that only uses columns in the index, such as: select count(type) from food where type = 3; This is faster because reading the index is faster than reading the table, since the data is smaller (presumably, you could include all needed columns in the index). In other cases, MySQL uses an index on a table when one is available. The question you are asking is about the "selectivity" of an index. Consider your query: SELECT name FROM food WHERE type = 3 ; If all rows have type = 3, then you have to read all the matching records anyway (to get the value of name), and if there is roughly one matching record per page, going through the index still touches nearly every page, so a full scan is usually cheaper.

Categories : Mysql

How to make a Long out of two Bytes? 0x7 + 0x86 = 0x1c000
You have an issue with operator precedence: + is evaluated before <<, so buffer[0] << 8 + buffer[1] is parsed as buffer[0] << (8 + buffer[1]). (Conventionally | is used for combining.) long myLong = ((buffer[0] & 0xFF) << 8) | (buffer[1] & 0xFF);
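The precedence trap can be checked directly; Python parses + before << just as Java does. (The 0x1c000 in the title comes from Java additionally reducing an int shift amount mod 32, so the accidental 0x7 << 142 becomes 0x7 << 14.)

```python
buf = bytes([0x07, 0x86])

wrong = buf[0] << 8 + buf[1]               # parsed as buf[0] << (8 + buf[1])
right = ((buf[0] & 0xFF) << 8) | (buf[1] & 0xFF)

print(hex(right))                          # 0x786
print(wrong == buf[0] << (8 + buf[1]))     # True: + bound tighter than <<
# Java's result: int shift amounts are taken mod 32, and (8 + 0x86) % 32 == 14
print(hex(0x7 << ((8 + 0x86) % 32)))       # 0x1c000
```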

Categories : Java

RequireJs Backbone error Collection is not a constructor when creating new Collection
JavaScript is case sensitive. The variable names used in the definition files are local to the definition; they won't be available when required (if you do things correctly and don't write to the global namespace), and relying on the names you defined in your modules will probably lead to problems down the road. This means that CollectorCollection won't be available globally, and that in
require(["collectorCollection"], function (collectorCollection) { }
your collection is actually available as collectorCollection: note the lowercase c. So your require call could be written as:
require(["backbone", "underscore", "collectorCollection", "collectorRouter"], function (Backbone, _, CollectorCollection, CollectorRouter) {
    var collectors = new CollectorCollection();
    var router = new CollectorRouter();
});

Categories : Backbone Js

Error at BFS search .Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index
When the user enters something greater than 220 as a starting position, this line:
trace[start] = -1;
will throw an exception, since start is indeed out of the bounds of trace. Therefore you need to force the user to enter something you can handle. Like this:
Console.Write("Please Input the Starting Node : ");
starting_position = Convert.ToInt32(Console.ReadLine());
while (starting_position < 0 || starting_position >= trace.Count)
{
    Console.Write("Starting Node is invalid. Should be between 0 and 220. Please enter another one : ");
    starting_position = Convert.ToInt32(Console.ReadLine());
}
This is just an idea; the point is, you should think about validating user input so it does not break your program.

Categories : C#

Disable Google Indexing the HTTPS version of our website naked domain
GAE doesn't officially support naked domains. What you're seeing is a limitation of GAE, https://developers.google.com/appengine/kb/general#naked_domain

Categories : Google App Engine



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.