How to perform an inner filter in Elasticsearch
To do everything in a filter:

curl -XGET "http://localhost:9200/hubware3/message/_search?pretty" -d'
{
  "filter" : {
    "and" : [{
      "bool" : {
        "should" : [
          { "bool" : { "must" : [
            { "term" : { "item_id" : "1" } },
            { "term" : { "name" : "John" } }
          ] } },
          { "bool" : { "must" :
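The snippet above is cut off; for reference, here is a self-contained sketch of the same and/bool filter shape, wrapped in a filtered query (the second should branch and its values are placeholders, not from the original answer; note that term filters match un-analyzed values):

curl -XGET "http://localhost:9200/hubware3/message/_search?pretty" -d'
{
  "query" : {
    "filtered" : {
      "filter" : {
        "bool" : {
          "should" : [
            { "bool" : { "must" : [
              { "term" : { "item_id" : "1" } },
              { "term" : { "name" : "John" } }
            ] } },
            { "bool" : { "must" : [
              { "term" : { "item_id" : "2" } },
              { "term" : { "name" : "Jane" } }
            ] } }
          ]
        }
      }
    }
  }
}'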

Categories : Elasticsearch

Elasticsearch jdbc river plugin with bettermap
Usually you have to explicitly map your field to be of type geo_point. Then there are multiple formats in which you can provide your data. More information about the mapping is here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-geo-point-type.html#mapping-geo-point-type Beware when providing the point as a string: you have to provide it as lat,lon. I would firs
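As an illustration, a minimal geo_point mapping might look like this (the index and type names are placeholders, not from the question):

curl -XPUT "http://localhost:9200/myindex/_mapping/mytype" -d'
{
  "mytype" : {
    "properties" : {
      "location" : { "type" : "geo_point" }
    }
  }
}'

A document can then be indexed with "location" as an object ({"lat": 52.37, "lon": 4.89}), as a string ("52.37,4.89" - lat,lon order), or as a GeoJSON array ([4.89, 52.37] - lon,lat order).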

Categories : Elasticsearch

Elasticsearch: Possible to process aggregation results?
A bit more complicated, but here it goes (only in 1.4 because of this type of aggregation):

{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "term": { "serviceId": 1 } }
    }
  },
  "aggs": {
    "executionTimes": {
      "scripted_metric": {
        "init_script": "_agg['values'] = new java.util.HashMap();",

Categories : Elasticsearch

ElasticSearch installed - but installing Kibana on localhost?
Not sure if you're still looking for an answer, but for future searchers: what you can do is download elasticsearch - http://www.elasticsearch.org/overview/elkdownloads/ Extract it and create a plugins subdirectory. Then, within the plugins directory, create a kibana/_site subdirectory. Then download kibana using the above-mentioned link, extract the archive, and edit config.js to point to
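A sketch of those steps as shell commands (the extraction paths and the kibana version are assumptions for illustration):

# assuming elasticsearch was extracted to ./elasticsearch
mkdir -p elasticsearch/plugins/kibana/_site

# extract the kibana archive and copy its contents into _site
cp -r kibana-3.1.2/* elasticsearch/plugins/kibana/_site/

# then edit elasticsearch/plugins/kibana/_site/config.js so that the
# elasticsearch: setting points at your cluster, e.g. "http://localhost:9200"

Kibana is then served by elasticsearch itself as a site plugin at http://localhost:9200/_plugin/kibana/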

Categories : Elasticsearch

Why not use maximum replication with Elasticsearch?
The only downsides I can see are:

When you get a new document you'll need to write to all nodes - but you say updates are infrequent, so this shouldn't be an issue.
If you add a node or need to restore a node, it will receive a lot of data (copies of all shards) - however, you can throttle this in the node config file.

So - generally it's a good idea to have replication to all nodes.
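If you go this route, the replica count can be changed on a live index; a minimal sketch (the index name is a placeholder):

curl -XPUT "http://localhost:9200/myindex/_settings" -d'
{
  "index" : { "number_of_replicas" : 5 }
}'

Setting number_of_replicas to one less than your node count gives every node a full copy of every shard.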

Categories : Elasticsearch

Elasticsearch find documents by another document
This is a text-book scenario for the "more like this" API. Quote from the docs:

The more like this (mlt) API allows to get documents that are "like" a specified document.

Here is an example:

$ curl -XGET 'http://localhost:9200/twitter/tweet/1/_mlt?mlt_fields=tag,content&min_doc_freq=1'

The API simply results in executing a search request with a moreLikeThis query (http parameters

Categories : Elasticsearch

Analyzing delays from log files using logstash and kibana
Assuming you can change the format of the logs, these tips might be useful for you: There is no way (as far as I know) to compute the latency of an operation from two different elasticsearch documents in Kibana 3.1 (the current stable version). Maybe in Kibana 4 it'll be possible. What you are trying to do would be trivial if your log entries contained the operation's elapsed time. For example: 2014-10

Categories : Elasticsearch

Elasticsearch - Create field using script if it doesn't exist
To check whether a field exists, use the ctx._source.containsKey function, e.g.:

curl -XPOST "http://localhost:9200/myindex/message/1/_update" -d'
{
  "script": "if (!ctx._source.containsKey(\"attending\")) { ctx._source.attending = newField }",
  "params" : { "newField" : "blue" }
}'

Note that the inner quotes around the field name must be escaped so the request body stays valid JSON.

Categories : Elasticsearch

search elasticsearch query with like function
I don't know your data model, but I will suggest 2 solutions for this case:

Solution 1 - Create a nested type for the URI:

{
  "nesttype" : {
    "properties" : {
      "uri" : {
        "type" : "nested",
        "properties" : {
          "host" : { "type" : "string" },
          "relativePath" : { "type" : "string" },

Categories : Elasticsearch

Elasticsearch Not Logging After Upgrade to 1.4
It seems that ES now reads any file in /etc/elasticsearch/ called logging.* So check if there are any logging.*.dpkg-new or similar files left there by packaging and move them elsewhere. Otherwise, try sudo su-ing to the elasticsearch user, starting it manually (look in the init.d script), and seeing if it reports any errors to stderr on startup. Bug reports: https://github.com/elasticsearch/elasticsearch/issues

Categories : Elasticsearch

Problems with eliminating null values from query results
I would combine the exists filter with a must_not. Here is a sample searching the field "authResult.address.state":

GET index1/type1/_search
{
  "size": 10,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            {
              "exists": {
                "field": "authResult.address.state"
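The sample is cut off; a complete query of the same shape might look like this (a sketch, assuming the goal is to keep only documents where the field is non-null):

GET index1/type1/_search
{
  "size": 10,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            { "exists": { "field": "authResult.address.state" } }
          ]
        }
      }
    }
  }
}

To exclude documents where the field is present, move the exists clause under must_not instead.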

Categories : Elasticsearch

Find actual matching word when using fuzzy query in elastic search
If you use highlighting, Elasticsearch will show the terms that matched:

curl -XGET http://localhost:9200/products/product/_search?pretty -d '{
  "query" : {
    "fuzzy" : { "value" : "tpad" }
  },
  "highlight": {
    "fields" : { "value" : {} }
  }
}'

Elasticsearch will return matching documents with the fragment highlighted:

{
  "took" : 31,
  "timed_out" : false,

Categories : Elasticsearch

ElasticSearch average aggregation in results
Top hits will find the ten documents that best match your query (i.e. score highest). The sort is then performed on this result set - so top_hits won't work for you. You also can't nest top_hits and avg inside each other - you can only nest inside a bucket aggregation (e.g. terms, histogram, range, etc.). To meet your requirements, you could run 2 queries: the first to find the matching documents (top ten sa

Categories : Elasticsearch

Score calculation for query of size = 0
Might not be the answer you are looking for, but still: it could well be that the score is still calculated. In your case I would use another solution: set the search type of the query to count:

?search_type=count

More information can be found here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-search-type.html#search-request-search-type
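For example (a sketch; the index name and query are placeholders):

curl -XGET "http://localhost:9200/myindex/_search?search_type=count" -d'
{
  "query" : { "match_all" : {} }
}'

With search_type=count no documents are fetched; the response carries only the total hit count (and any aggregations).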

Categories : Elasticsearch

logstash, syslog and grok
It is hard to know where the problem is without seeing an example event that causes it. I can suggest trying the grok debugger in order to verify the pattern is correct and to adjust it to your needs once you see the problem.
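For illustration, a typical syslog grok filter looks something like this (a sketch built from the stock grok patterns, not the asker's actual config):

filter {
  grok {
    match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
  }
}

Pasting one raw log line together with the pattern into the grok debugger shows exactly where matching stops.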

Categories : Elasticsearch

Elasticsearch mapping - different data types in same field
No - you cannot have different datatypes for the same field within the same type, e.g. the field index/type/value cannot be both a string and a date. A dynamic template can be used to set the datatype and analyzer based on the format of the field name - for example: set all fields with field names ending in "_dt" to type date. But this won't help in your scenario: once the datatype is set
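A sketch of such a dynamic template (the "_dt" suffix convention and the index/type names are illustrative only):

curl -XPUT "http://localhost:9200/myindex" -d'
{
  "mappings" : {
    "mytype" : {
      "dynamic_templates" : [
        {
          "dates_by_suffix" : {
            "match" : "*_dt",
            "mapping" : { "type" : "date" }
          }
        }
      ]
    }
  }
}'

Any previously unseen field whose name ends in _dt will then be mapped as a date the first time it is indexed.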

Categories : Elasticsearch

FOSElasticaBundle: prioritizing followers in user search
Ok, so after research it seems that the only way is to first run a find on the follow document, then eventually merge the results with a find on user. First I added indexing on the follow entity like:

follow:
    mappings:
        follower: ~
        followee:
            type: object
            properties:
                id: ~

Categories : Elasticsearch

Elasticsearch node not getting any shards - how to diagnose why?
Thanks to Alcanzar, in the comment above - I do believe the issue here is the one he saw: different versions. The node that will not accept shards is running one version earlier than the others. I will upgrade everything to 1.4 this weekend and likely see this go away. Makes total sense now.

Categories : Elasticsearch

How to perform "lowercase filter" along with "char_filter"?
You're correct in the sequence of steps:

CharFilter
Tokenizer
TokenFilter

However, the main purpose of the CharFilter is to clean up the data to make the tokenisation easier - for example, by stripping out XML tags or replacing a delimiter with a space character. So I would put misc_simplifications as a TokenFilter to be applied after the lowercase filter.

{
  "settings" : {
    "index"
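The settings block is cut off; a sketch of analyzer settings with that ordering might look as follows (the body of misc_simplifications is a placeholder - here a pattern_replace token filter purely for illustration, since the original definition was not shown):

curl -XPUT "http://localhost:9200/myindex" -d'
{
  "settings" : {
    "index" : {
      "analysis" : {
        "filter" : {
          "misc_simplifications" : {
            "type" : "pattern_replace",
            "pattern" : "&",
            "replacement" : "and"
          }
        },
        "analyzer" : {
          "my_analyzer" : {
            "tokenizer" : "standard",
            "filter" : [ "lowercase", "misc_simplifications" ]
          }
        }
      }
    }
  }
}'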

Categories : Elasticsearch

ElasticSearch matching no more than query terms
Use the query string query: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

{
  "query_string": {
    "query": "(w1 OR w2 OR w3 OR (w1 AND w2) OR (w1 AND w3) OR (w2 AND w3) OR (w1 AND w2 AND w3))"
  }
}

OR

{
  "query_string": {
    "query": "((w1 OR w2 OR w3) AND NOT w4)"
  }
}

OR use the bool query: http:/

Categories : Elasticsearch

Delete all of type across all indices
If your Logstash indices are named logstash-%{+YYYY.MM.dd}, try this:

DELETE /logstash*/error

or

DELETE /logstash*/_query?q=_type:error

Just to be on the safe side, make a backup of your data before trying this. In my tests, both of those queries deleted only the error type across all logstash indices.

Categories : Elasticsearch

Getting ElasticSearch Percolator Queries
This should return all percolator documents stored in your elasticsearch cluster:

POST _all/.percolator/_search

This searches _all indexes (every index you have registered) for documents of the .percolator type. It basically does what you describe above: "a match_all with a type filter". Yet it accomplishes it in a slightly different way. I have not played around with this much more than this

Categories : Elasticsearch

Elasticsearch spatial queries (within particular radius)
Your "latitude": 13.534991, "longitude": 80.015182 fields in your mapping need a different type to be usable in geo location computations. It needs to be a single field, not two, and it must be like this:

"location": { "type": "geo_point" }

And then, when indexing, you specify something like this:

{"location":{"lat":13.534991,"lon":80.015182}}

And then y
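The answer is cut off at the query step; the "within particular radius" part would typically be a geo_distance filter over such a field (a sketch; the index, type, and distance are placeholders):

curl -XGET "http://localhost:9200/myindex/mytype/_search" -d'
{
  "query" : {
    "filtered" : {
      "query" : { "match_all" : {} },
      "filter" : {
        "geo_distance" : {
          "distance" : "10km",
          "location" : { "lat" : 13.534991, "lon" : 80.015182 }
        }
      }
    }
  }
}'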

Categories : Elasticsearch

ElasticSearch and NEST - How do I construct a simple OR query?
Based on your mapping, you won't be able to search for exact matches on siteName because it's being analyzed with the standard analyzer, which is tuned for full-text search. This is the default analyzer applied by Elasticsearch when one isn't explicitly defined on a field. The standard analyzer breaks the value of siteName up into multiple tokens. For example, Myrtle Street
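The usual fix for exact matching in this situation is a not_analyzed sub-field; a sketch (the index and type names and the "raw" sub-field name are my own - only siteName comes from the question):

curl -XPUT "http://localhost:9200/myindex/_mapping/mytype" -d'
{
  "mytype" : {
    "properties" : {
      "siteName" : {
        "type" : "string",
        "fields" : {
          "raw" : { "type" : "string", "index" : "not_analyzed" }
        }
      }
    }
  }
}'

A term query or filter on siteName.raw then matches the exact, untokenized value.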

Categories : Elasticsearch

Logstash parsing unix time in milliseconds since epoch
UNIX_MS is marked on that page as a "special exception". You can see in the grok debugger that it doesn't work in a "match". %{NUMBER:timestamp} will give you the field.
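Having captured the raw number with grok, you can parse it with the date filter, which does accept UNIX_MS (a minimal sketch):

filter {
  grok {
    match => [ "message", "%{NUMBER:timestamp}" ]
  }
  date {
    match => [ "timestamp", "UNIX_MS" ]
  }
}

This sets @timestamp from the milliseconds-since-epoch value.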

Categories : Elasticsearch

Elasticsearch Nest and CopyTo
Attribute-based mapping in NEST doesn't support CopyTo. You need to use the fluent API. See my comment here for an explanation.

Categories : Elasticsearch

Seeing latest results in Kibana after the page limit is reached
To have kibana read the latest results, reload the query. To have more pages available (or more results per page), edit the panel. Make sure the table is reverse sorted by @timestamp.

Categories : Elasticsearch

Boosting in more like this elasticsearch
One way to achieve specific boosts - if a document is more like a particular doc, or if the match is on a particular field - is to use multiple mlt queries and wrap them in a should clause or a dis_max, depending on whether you want "max of" or "sum of" logic while scoring. An example using dis_max would be:

POST test_index/_search?explain=true
{
  "fields": [
    "field1",
    "field2"
  ],
  "q

Categories : Elasticsearch

Elastic search Multiple Aggregation
If you are using an ES version earlier than 1.4.0, you can make use of Filter Aggregations. The query for that is as below:

{
  "size": 0,
  "query": {
    "match": { "payment_type": "paypal" }
  },
  "aggs": {
    "daily_price_sum": {
      "sum": { "field": "price" }
    },
    "daily_post_sum": {
      "sum": { "field": "purchased_

Categories : Elasticsearch

Elasticsearch Hosting for Beginners
Our production site hosts the same data in both elasticsearch and postgres, and we pay half the price for elasticsearch. You can of course host it yourself to bring the price down. And here are some services under $50/mo:

Bonsai
SearchBox

Categories : Elasticsearch

ElasticSearch returns different results for timestamp and datetime
The timestamp and datetime provided are not equivalent - check using an epoch converter:

1414799400000 => Fri, 31 Oct 2014 23:50:00 GMT
1414799999000 => Fri, 31 Oct 2014 23:59:59 GMT

whereas the range query using datetime is:

2014-10-31T23:00:00
2014-10-31T23:59:59

In short, the two ranges are not the same.

Categories : Elasticsearch

Adding sort priority to documents matching certain condition
Why not let the default sorting in ES (sort by score) do the job for you, without custom ordering or custom scoring:

GET /my_index/media/_search
{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": [
            { "match": { "social_medias": "Linkedin" } },
            { "match_all": {} },
            { "query_string": {
                "default_field": "social_medias",

Categories : Elasticsearch

Route lines from file to persistent JMS queue: How to improve performance?
A couple of things to consider:

ActiveMQ producer connections are expensive - make sure you use a pooled connection factory.
Consider using the VM transport for an in-process ActiveMQ instance.
Consider using an external ActiveMQ broker over TCP (so it doesn't compete for resources with your test).
Set up/tune KahaDB or LevelDB to optimize persistent storage for your use case.

Categories : Elasticsearch

Build Kibana 4 executable jar from source in Github
You'll need to build it from source. The instructions are here: https://github.com/elasticsearch/kibana/blob/master/CONTRIBUTING.md

Categories : Elasticsearch

Query-Term and Filter-Term Return Zero Results on an Exact Match, but Query-Match Returns a Result. Why?
By default, using the standard analyzer, ES places your "Foo" in its index as "foo" (meaning, lowercased). When searching with term, ES doesn't use an analyzer, so it is actually searching for "Foo" (exact case), whereas its index contains "foo" (because of the analyzer). The value passed for match, instead, is analyzed, so ES actually searches for "foo" in its indices, not "Foo" as it does
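A quick way to see the difference (a sketch; the index, type, and field names are placeholders):

# term query: the value is not analyzed, so "Foo" never matches the indexed "foo" - zero hits
curl -XGET "http://localhost:9200/myindex/mytype/_search" -d'
{ "query": { "term": { "name": "Foo" } } }'

# match query: the value is analyzed to "foo" first - one hit
curl -XGET "http://localhost:9200/myindex/mytype/_search" -d'
{ "query": { "match": { "name": "Foo" } } }'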

Categories : Elasticsearch

Elasticsearch: query produced is invalid
You should surround the whole query with a "query" clause:

$query = new Elastica\Query\Builder('{
  "query": {
    "function_score": {
      "functions": [
        {
          "gauss": {
            "location": {
              "origin": "'.$latitude.', '.$longitude.'",
              "scale": "2km"
            }
          }
        }
      ],

Categories : Elasticsearch

Neighborhoods Geo Query
You can query it like this:

GET /my_index/landmark/_search
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "geo_shape": {
          "location": {
            "shape": {
              "type": "point",
              "coordinates" : [4.896863, 52.374409]
            }
          }
        }
      }
    }
  }
}

Categories : Elasticsearch

Kibana returns "Connection Failed"
I have faced a similar kind of issue. If you are using elasticsearch-1.4 with Kibana-3, then add the following parameters in the elasticsearch.yml file:

http.cors.allow-origin: "/.*/"
http.cors.enabled: true

Reference: https://gist.github.com/rmoff/379e6ce46eb128110f38

Categories : Elasticsearch

Elasticsearch filtered query not working
If your verb field has been indexed using the default analyzer, then most probably it ended up being indexed in lower case: "get" instead of "GET", "delete" instead of "DELETE". A term filter just takes the filtered value - GET in your query - and doesn't analyze it (this is how it works). So, basically, you are looking for the value GET in a list of documents containing get, delete, post etc. The

Categories : Elasticsearch
