Reading Querying database with a Parameter to Display Results in Text Boxes |
Do it like this:
SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandText = "SELECT tblAssets.AssetID, tblAssets.Domain, tsysOS.OSname, tblAssets.SP,"
    + " tblAssets.Memory, tblAssets.Processor, tblAssetCustom.Manufacturer, tblAssetCustom.Model"
    + " FROM tblAssets"
    + " INNER JOIN tblAssetCustom ON tblAssets.AssetID = tblAssetCustom.AssetID"
    + " INNER JOIN tsysOS ON tblAssets.OScode = tsysOS.OScode"
    + " WHERE tblAssets.AssetName = @AssetName";
command.Parameters.Add("@AssetName", SqlDbType.NVarChar).Value = Aname;
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
    if (reader.Read())
    {
        TextBoxAssetID.Text = reader["AssetID"].ToString();
        // ...assign the remaining columns (Domain, OSname, SP, Memory,
        // Processor, Manufacturer, Model) to their text boxes the same way.
    }
}
|
Querying sub array with $where |
You don't need a $where operator to do this; just use a query object of:
{ "items.sku": "mmm" }
As for why your $where isn't working: the value of that operator is
executed as JavaScript, so it's not going to check each element of the
items array; it will treat items as a plain object and compare its sku
property (which is undefined) to "mmm".
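For example, straight in the shell (the collection name orders is assumed):
db.orders.find({ "items.sku": "mmm" })
This matches any document whose items array contains at least one element with sku equal to "mmm".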
|
JavaScript array querying |
You can try Underscore.js's find, and take a look at this answer: Filtering
through a multidimensional array using underscore.js
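A minimal sketch of both helpers, assuming Underscore is loaded and using made-up data:
var users = [{ id: 1, name: 'ann' }, { id: 2, name: 'bob' }];
// _.find returns the first element passing the test (or undefined)
var bob = _.find(users, function (u) { return u.name === 'bob'; });
// _.filter returns every element passing the test
var matches = _.filter(users, function (u) { return u.id > 1; });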
|
mongodb querying sub array |
Your query should look something like this:
db.alumnos111.find({ "materias.id_materia": { $in: ["1234", "5678"] } })
to match documents with id_materia="1234" OR id_materia="5678".
This example is JavaScript executed straight in the mongo shell.
|
Querying data results a lock on the querying table |
If I'm not wrong, SQL Server does not come with row versioning enabled by
default, which I think is why you're seeing this behavior.
What I suspect is that when you issue a select query and, before it
finishes, issue an update query, the update has to wait until the select
completes.
There are many approaches to this problem, one being to enable row
versioning. Another option is to use a less restrictive isolation level.
|
Querying more than one condition for sub array kind of document with $where |
Using $where you can write a for loop with an if that checks your
conditions; it returns true when the conditions are satisfied:
db.col_t.find({
    $where: "for (var i = 0; i < this.items.length; ++i) { if (this.items[i].sku === 'mmm' && this.items[i].qty > 5) return true; }"
}).pretty();
Or, shorter, passing the value of $where directly, like below:
db.col_t.find("for (var i = 0; i < this.items.length; ++i) { if (this.items[i].sku === 'mmm' && this.items[i].qty > 5) return true; }").pretty();
That said, I recommend using $elemMatch and $gt instead, this way:
Query:
{
items: {
$elemMatch: {
qty: { $gt: 5 },
sku: "mmm"
}
}
}
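Putting that query object to work in the shell (same collection as above):
db.col_t.find({ items: { $elemMatch: { qty: { $gt: 5 }, sku: "mmm" } } }).pretty();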
Warning: This query will not retrieve your document if qty is stored as a
string, because using $gt it will search for numeric values only.
|
Mongodb querying from part of object in array |
$elemMatch should do the trick. Note that you can't repeat the _id key in
one object literal (the second value silently overwrites the first in
JavaScript), so to match either id, use $in:
col.findOne({ user: { $elemMatch: { _id: { $in: [
    "14bfgdsfg0-3708-46ee-8164-7ee1d029a507",
    "aasdfa89-5cfe-4861-8a9a-f77428158ca9" ] } } } })
|
Querying an array inside a mongoose document |
You can use dot-separated paths in a query like so:
User.find({'devices.deviceRegistrationId': deviceRegistrationId}). If that
query doesn't match any documents, no user has that device. Note that mongo
is smart enough to test all members of the devices array when given a query
such as this. You can also add a user ID to your query conditions if you
want to check a specific user.
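A minimal sketch with the classic callback API (model and variable names assumed from the question):
User.find({ 'devices.deviceRegistrationId': deviceRegistrationId }, function (err, users) {
    if (err) return console.error(err);
    // users is an empty array when no user has that device registered
});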
|
Querying an array field that contains hashes in mongoDB? |
You are most probably looking for $elemMatch. Check out the docs.
db.collection.find({ "query": { $elemMatch: { "filterId": "5215b40c0ff5fa111e000001", "subfilterId": "60728003610375795" } } });
|
Querying Array of objects using javascript - Helper functions |
4 JavaScript helpers:
Remove an object from an array
Check whether an object exists in an array
Select an object from an array
Sum object values in an array
var JShelpers = {
    removeItemFromArray: function (myObjects, prop, valu) {
        return myObjects.filter(function (item) {
            return item[prop] !== valu;
        });
    },
    isExistInArray: function (myObjects, prop, valu) {
        var i = myObjects.length;
        while (i--) {
            if (myObjects[i][prop] == valu) {
                return true;
            }
        }
        return false;
    },
    getItemFromArray: function (myObjects, prop, valu) {
        var i = myObjects.length;
        while (i--) {
            if (myObjects[i][prop] == valu) {
                return myObjects[i];
            }
        }
        return null;
    },
    sumItemsInArray: function (myObjects, prop) {
        // fourth helper from the list above: total the `prop` values
        var total = 0;
        for (var i = 0; i < myObjects.length; i++) {
            total += myObjects[i][prop];
        }
        return total;
    }
};
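Usage, with made-up data:
var pets = [{ id: 1, name: 'rex' }, { id: 2, name: 'ace' }];
JShelpers.isExistInArray(pets, 'name', 'rex');   // true
JShelpers.getItemFromArray(pets, 'id', 2);       // { id: 2, name: 'ace' }
JShelpers.removeItemFromArray(pets, 'id', 1);    // [{ id: 2, name: 'ace' }]
JShelpers.sumItemsInArray(pets, 'id');           // 3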
|
querying array of objects using Java Mongodb driver |
The Java driver, along with the Python one, is among the most developed,
so you can check it in the driver DOCS. Usually the idea (the structure of
the commands) is the same as in the shell; you just need helpers to
construct the command.
In Java, this documentation gives some hints about how it works: DOCS
So, for
$push:
Mongoshell DOCS
example:
db.students.update(
{ name: "joe" },
{ $push: { scores: 89 } }
)
Where { name: "joe" } is a query identifying the right document to update,
scores is an array field, and 89 will be appended to it.
Java DOCS
example:
check out this question: (MongoDB Java) $push into array
$elemMatch:
Mongoshell DOCS
example:
check out this question: Covert MongoDB query into Java
$slice:
Mongoshell DOCS
|
Querying a PostgreSQL multi-dimensional array data type in Rails 4 |
I am using a PostgreSQL multi-dimensional array to mimic an array of
hashes
Those two things aren't really all that similar, and I wouldn't recommend
attempting to use multidimensional arrays to model nested hashes.
Pavel is quite right that hstore is probably a lot closer to what you want,
and it's indexable too. However, the current version of hstore (in Pg 9.3
and older) supports only single-level keys; it's a dictionary/hash that can
contain only scalar string values. A planned enhancement to hstore for
PostgreSQL 9.4 will hopefully bring multi-level nesting and JSON syntax
compatibility.
Ordinary tables
You can model arbitrary-depth key/value chains (and trees/graphs) using
edge lists and recursive CTEs, but this is probably rather more complexity
than you really want.
If you only need a single level of keys and values, plain hstore should
serve you well.
|
Rails 4 querying against postgresql column with array data type error |
I don't think this is related to Rails.
What if you do the following?
SELECT * FROM db_of_exercises WHERE 'Arms' = ANY (body_part) OR 'Chest' =
ANY (body_part)
I know that Rails 4 supports the PostgreSQL ARRAY datatype, but I'm not
sure whether ActiveRecord adds new methods for querying it. Maybe you can
use array overlap, I mean the && operator, and then do something like:
WHERE ARRAY['Arms', 'Chest'] && body_part
Or maybe take a look at this gem:
https://github.com/dockyard/postgres_ext/blob/master/docs/querying.md
And then do a query like:
DBOfExercise.where.overlap(:body_part => params[:body_parts])
|
Cassandra: Geospatial support |
The basic b-tree indexes are there in Cassandra in the form of wide rows.
PlayOrm has a complete SQL language on Cassandra, though with partitions,
so you are not joining billions of rows with billions of rows; instead you
join partitions. I imagine doing geospatial would be similar to what
PlayOrm has, but I am guessing. I mean, is geospatial doable with b-trees
and such? If so, it just means you have to do a bit of heavy lifting. I
know at one point PlayOrm was considering adding some geospatial features
but we never got around to it.....we would always accept work in that area
though.
later,
Dean
|
$near $unwind mongodb geospatial |
I found a solution. First I tried using $match with $near, without
success.
Then I read about the $geoNear aggregation stage and tried putting it as
the last stage of my pipeline; however, according to the MongoDB docs,
"You can only use $geoNear as the first stage of a pipeline."
I thought I would also need $unwind, but I found a parameter (includeLocs)
which handles the locations inside an array.
My shell code:
db.EducationalInstitution.aggregate(
    {$geoNear: {near: [-15.795758, -47.892312], maxDistance: 50/111.12,
        distanceField: "addresses.calculated", includeLocs: "addresses.locs",
        uniqueDocs: true}},
    {$project: {"name": 1, "addresses.state": 1, "addresses.locs": 1}}
);
My Java code:
public BasicDBList findByCoordinates(double longitude, double latitude) {
    // completion sketch, assuming the legacy driver's DB.command() API
    BasicDBObject cmd = new BasicDBObject("geoNear", "EducationalInstitution")
            .append("near", new double[] { longitude, latitude });
    return (BasicDBList) db.command(cmd).get("results");
}
|
Integration of talend geospatial component |
Download a fresh copy of the Talend Open Data zip
(http://talend.dreamhosters.com/tos/release/V5.3.0/TOS_DI-r101800-V5.3.0.zip).
Unzip the plugin into the plugins folder.
Start Talend.
Do not start Talend before the unzip step is complete.
|
SpatialIndexProvider in Neo4jclient for Geospatial Searches |
As the lead of the Neo4jClient project, I can say that this is not
currently on our roadmap.
There's no particular reason for that beyond the fact that I don't use it
personally and nobody has asked previously.
To make it happen, your best options are:
1) Create an issue on https://bitbucket.org/Readify/Neo4jClient/issues
2) Describe the expected impact
3) Even better, send a pull request
In the meantime, you can obviously do direct REST calls for the indexing
operations but keep everything else going via Neo4jClient.
Finally, it should be noted that our general direction is to support Cypher
more and more. It would be good to align to any Cypher+Spatial plans, if
they exist.
|
MongoDB - Geospatial intersection performance |
After tearing my hair out trying to figure out the best way to accomplish
better performance in MongoDB, I decided to try our existing standard DB,
SQL Server. I guess my low expectations for SQL Server's geospatial
functionality were unfounded. The query ran in < 12 seconds without an
index, and didn't scale up exponentially like MongoDB for larger drawn
polygons. After adding an index, most queries are in the 1 second range.
I guess I'll be sticking with what I know. I really had high hopes for
MongoDB, but geospatial performance is severely lacking (or severely
under-documented on how to improve it).
|
One-to-Many Geospatial Search Index Design in Solr |
The geospatial multi-valued data is handled easily via location_rpt in
Solr's out of the box schema.
The trickier part here is the weighted tags. As a first cut, I'd index 3
fields, tags05 tags10 tags15, each with 3 separate query-time boosts (via
edismax's qf param) of 0.5, 1.0, and 1.5 respectively. This is a
discretization approach in which you lose some of the weight fidelity
depending on how many buckets you have (3 shown here). If you can, avoid
Solr 4 JOIN queries; they are often quite slow. The IDF scores would be a
little bad due to the data being split up, so you might want to try a
different similarity implementation for these fields that don't consider
IDF, perhaps.
|
MongoDB: geospatial query with additional conditions |
I've tried the query and it seems to work as you intend with the $elemMatch
operator. I think the problem is that you have a typo in your query where
address is used instead of addresses. Your query should look like:
db.coll.find({
    'addresses.loc': { $near: [lat, lng] },
    addresses: { $elemMatch: { context: "office" } }
});
|
Optimizing Compound Mongo GeoSpatial Index |
I played with this for a number of days and got the result I was looking
for.
Firstly, given that action types other than "PLAY" cannot have a location,
the additional query parameter "actionType==PLAY" was unnecessary and was
removed. Straight away I flipped from a "time-reverse-b-tree" cursor to
"Geobrowse-polygon", and for my test search latency improved by a factor
of 10.
Next, I revisited the 2dsphere index as suggested by Derick. Again,
another latency improvement of roughly 5x. Overall, a much better user
experience for map searches was achieved.
I have one refinement remaining. Queries in areas where there are no plays
for a number of days have generally increased in latency. This is due to
the query looking back in time until it can find "some play". If
necessary, I will add in a time range to bound how far back the query
scans.
|
How to create an animation of geospatial / temporal data |
Your question is a bit vague, but I will share how I have done this kind of
animation in the past.
Create a function that plots all the subject locations for one time slice:
plot_time = function(dataset, time_id) {
    # make a plot with your favorite plotting package (e.g. `ggplot2`)
    # save it as a file on disk (e.g. using `ggsave`), under a regular name:
    # frame001.png, frame002.png, ...; see sprintf('frame%03d', time_id)
}
Call this function on each of your time slices, e.g. using lapply:
lapply(start_time_id:stop_time_id, function(i) plot_time(dataset, i))
leading to a set of graphics files on the hard drive called frame001 to
framexxx.
Use a tool to render those frames into a movie, e.g. ffmpeg.
This is a general workflow, which has already been implemented in the
animation package for R.
|
How to convert geospatial Point(lat,lon) hash back to lat lon values? |
The answer to your specific question is for you to index full-length
geohashes to the precision you desire. No matter what your programming
language of choice is, I'm sure you can find a library or code snippet to
convert back & forth. Index it as a string and facet on it.
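For example, in JavaScript with the ngeohash package (API from memory, so verify against its docs):
var geohash = require('ngeohash'); // npm install ngeohash
var point = geohash.decode('u4pruydqqvj'); // a sample full-length hash
console.log(point.latitude, point.longitude);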
You are then faced with how to plot what could be a ridiculous number of
points on a map in a scalable manner. You'll have to use spatial
clustering / heat-map. See http://wiki.apache.org/solr/SpatialClustering
|
Efficiently sorting the results of a mongodb geospatial query |
When there is a huge result set matching a particular box, the sort
operation is really expensive, so you definitely want to avoid it.
Try creating a separate index on the relevance field and using it (without
the 2d index at all): the query will be executed much more efficiently
that way - documents (already sorted by relevance) will be scanned one by
one and matched against the given geo box condition. When the top 10 are
found, you're good.
It might not be that fast if the geo box matches only a small subset of
the collection, though. In the worst-case scenario it will need to scan
through the whole collection.
I suggest you create 2 indexes (loc and relevance) and run tests on
queries which are common in your app (using mongo's hint to force the
needed index).
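A sketch of the relevance-index variant in the shell (collection, field names, and box coordinates are assumptions):
db.places.find({ loc: { $within: { $box: [[-74, 40], [-73, 41]] } } })
    .sort({ relevance: -1 })
    .limit(10)
    .hint({ relevance: -1 });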
Depending on your test results, you may even want to pick the index per
query, based on how selective the geo box is.
|
Using find() with geospatial coordinates in Mongoose (NodeJS+MongoDB) |
I had to use the Mixed type and added some custom validation to ensure
values were arrays and had a length of 2. I also checked for empty arrays
and converted them to nulls because this is required when using sparse
indices with 2dsphere. (Mongoose helpfully sets array fields to [] for
you, which is not a valid coordinate!)
var schema = new mongoose.Schema({
    location: { type: {}, index: '2dsphere', sparse: true }
});
schema.pre('save', function (next) {
    var value = this.get('location');
    if (value === null || value === undefined) return next();
    if (!Array.isArray(value)) return next(new Error('Coordinates must be an array'));
    if (value.length === 0) { this.set('location', undefined); return next(); }
    if (value.length !== 2) return next(new Error('Coordinates should be of length 2'));
    next();
});
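A usage sketch (the model name Place is made up):
var Place = mongoose.model('Place', schema);
new Place({ location: [-47.892312, -15.795758] }).save(function (err) {
    if (err) console.error(err); // errors from the pre-save hook land here
});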
|
bash command line arguments into an array and subset the array based on the parameter value |
Here's one way:
FIRST_SET=("${@:2:$1}")       # the next $1 arguments, starting at $2
REST=("${@:$(($1+2))}")       # everything after the first set
That works directly from the arguments, rather than using an intermediate
array. It would be easy to use the intermediate array, in more or less the
same way but remembering that array indexing starts at 0 while parameter
indexing effectively starts at 1 (because parameter 0 is the command name).
Note that the quotes are important: without them, the command line
arguments would be passed through glob expansion and word splitting an
extra time; in effect, you lose the ability to quote command line
arguments.
|
Why does a pointer to array need to be cast before being passed as parameter to a function with array type argument? |
This is an attempt to answer the question, after a long thought..:) Please
correct me if I am wrong.
The following line, p = (int (*)[])p;, has no effect on the type of p. p
is still of type void * (so your cast is redundant), and since void * is
compatible with any data pointer type, the function call is fine.
As for the first main() function, you have figured it right.
Look here (a good read to avoid confusion).
EDIT:
In short: you are trying to change the type of the lhs of the expression.
This is never the aim of typecasting.
In detail:
Converting an expression of a given type into another type is known as
type-casting.
So, let us analyse the line p = (int (*)[])p;
Consider the rhs of the expression: (int (*)[])p. It is a pointer to an
array of ints (as intended). But you want it to change the type of p
itself, which a cast never does: a cast only yields a converted value of
the expression.
|
PyMongo/MongoDB - Geospatial Query for both Origin and Destination LatLon |
Not sure about the actual MongoDB version that you're using but $within was
deprecated and now you should be using $geoWithin:
http://docs.mongodb.org/manual/reference/operator/geoWithin/
Also see the above link for further options. I'm not an expert on Python
but I hope that $geoWithin will solve your problem.
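A minimal sketch of the operator in the shell (collection, field, and radius are assumptions; 50/6378.1 is 50 km expressed in radians):
db.places.find({
    loc: { $geoWithin: { $centerSphere: [[-47.89, -15.79], 50 / 6378.1] } }
})
The same filter document works unchanged as the first argument to PyMongo's find().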
|
how pass an array as parameter, and the array is defined in the parameters in c++ |
You could rewrite the function to take std::initializer_list:
#include <initializer_list>
#include <iostream>
struct BasicFeature {
} query, keyword;
int DerivationFunc(int id, std::initializer_list<BasicFeature> args)
{
    std::cout << args.size() << " element(s) were passed.\n";
return id;
}
int main()
{
DerivationFunc(42, { query, keyword });
}
|
Does the MongoDB .Net driver version 1.8.2 support $geoWithin Geospatial Query Selector |
The MongoDB C# driver still uses the (deprecated) $within as opposed to
$geoWithin. Here is the relevant open JIRA ticket if you'd like to keep an
eye on it. :)
|
Array as out parameter in c++ |
You should use std::vector instead of raw c-style arrays, and
pass-by-reference using "&" instead of "*" here. Right now, you are not
properly setting your out parameter (a pointer to an array would look like
"short **arr_ptr" not "short *arr_ptr", if you want to be return a new
array to your caller -- this API is highly error-prone, however, as you're
finding out.)
Your getTrimmedList function, therefore, should have this signature:
ErrCode getTrimmedList(std::vector<short> &lst);
Now you no longer require your "count" parameters, as well -- C++'s
standard containers all have ways of querying the size of their contents.
C++11 also lets you be more specific about space requirements for ints, so
if you're looking for a 16-bit "short", you probably want int16_t:
ErrCode getTrimmedList(std::vector<int16_t> &lst);
|
php when to use array as a parameter/property |
This would be completely subjective; however, I will throw my two cents
into the conversation. My rule of thumb is that if there are more than 4
parameters, try to bundle them into arrays if they make sense together.
So, the example's padding and colors go together well as arrays.
Also, try to build the arrays in such a fashion that the user does not
have to worry about which index each element is in, for example by using
an associative array.
|
Array as constructor parameter |
public void Quicksort(int[] values){
this.number=values;
}
should be
public Quicksort(int[] values){
this.number=values;
}
Your constructor should not have a return type (in your case void);
otherwise it will be considered a method.
|
Two-parameter Array Sort |
You can sort the entire list of 196 elements by A, then lay out the
elements so that the first row contains the 14 smallest A values, the next
row contains the next smallest, etc. In this way, every element from the
ith row is smaller (according to A) than every element from the jth row if
i < j.
Then, go row by row and sort by B.
As a small example, let's do a 3x3 case with pairs (9,1) (8,2) ... (1,9).
The sort by A would yield (1,9) ... (9,1), which you lay out like this:
(1,9) (2,8) (3,7)
(4,6) (5,5) (6,4)
(7,3) (8,2) (9,1)
Then you sort each row by B. Reordering the elements within a row doesn't
break the core property of A, because every element in a given row is
greater (by A) than every element in the rows above it (for example, the
minimum A in the third row is 7 and the maximum A in the second row is 6).
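A quick sketch of the whole procedure in JavaScript (the item shape { a: ..., b: ... } and a column count are assumptions):
function twoParamSort(items, cols) {
    items.sort(function (x, y) { return x.a - y.a; });   // global sort by A
    var rows = [];
    for (var i = 0; i < items.length; i += cols) {
        var row = items.slice(i, i + cols);
        row.sort(function (x, y) { return x.b - y.b; }); // per-row sort by B
        rows.push(row);
    }
    return rows;
}
For the 196-element case in the question, call twoParamSort(items, 14).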
|
Why can't I push the function parameter into an array? |
test_array = [] is out of scope. def creates a new scope so you cannot
access the value of test_array from within the method.
One way around this is to make test_array an instance variable: @test_array
|
get parameter of array in twig loop |
Use Twig's attribute function (see the docs). Example:
{% for fol in followers %}
<pre> {{ dump(attribute(fol[0], 'follower')) }} </pre>
{% endfor %}
Please make sure that you have getters for follower in
TESTBundleBlogBundleEntityFollow, or that the follower attribute is public.
Or similarly print the value:
{% for fol in followers %}
<pre> {{ fol[0].follower }} </pre>
{% endfor %}
|
C++ array as parameter - why do you only need to specify "outer" dimension |
You actually need to specify all dimensions besides the first one. The
reason is that the compiler can't compute element offsets into the array
otherwise.
For a detailed answer, see the following question:
http://stackoverflow.com/a/2562111/1822214
|
Binding parameter as PostgreSQL array |
Try something like this (untested):
// conn is your connection object
Array inArray = conn.createArrayOf("integer", new Integer[][] {{1,10},{2,20}});
stmt.setArray(1, inArray);
Links:
Postgres and multi-dimensions arrays in JDBC
Passing Array from Java to Postgres
|
Pass an array as parameter in ruby |
Well, if your naming somehow reflects the objects you are dealing with,
maybe you should try (renaming the block variable so it doesn't shadow the
mail method):
@emails.each do |email|
  mail(:to => email, :subject => @message.subject, ...)
end
and why not:
User.all.each do |user|
  mail(:to => user, :subject => @message.subject, ...)
end
|
Validate Checkbox that has an array as name parameter |
If you are in fact using jQuery you could do the following. Make sure you
include jQuery in your document before you use this function, and that the
document is loaded before you try to run it.
function countMarketing() {
if( $('input[name="events[]"]').filter(':checked').length > 2 ) {
alert('Please choose only two');
}
}
|