Load all documents from RavenDB
I figured it out: I have to wait for non-stale results. If I change my query to session.Query<Model.Category>().Customize(cr => cr.WaitForNonStaleResults()).ToList(); it works just fine.

Categories : C#

Can RavenDB persist documents to a filesystem?
You don't need RavenDB for that. Just use System.IO.File and related concepts. Raven won't work with individual files on disk: it keeps its own set of files for its index and data stores, and access from other programs is not allowed. What happens in RavenDB, stays in RavenDB. Most people store big files on disk and then just store a file path or URL reference to them in their RavenDB database. Raven also supports the concept of attachments, where the file is uploaded into Raven's database - but then it wouldn't be available as a single file on disk the way you are thinking.

Categories : File

Solrcloud duplicate documents with id field
In the "/conf/schema.xml" file there is an XML element called "<uniqueKey>", which is "id" by default... that is supposed to be your "key". However, according to the Solr documentation (http://wiki.apache.org/solr/UniqueKey#Use_cases_which_do_not_require_a_unique_key), you do not always need a "unique key" if you do not need to incrementally add new documents to an existing index... maybe that is what is happening in your situation. But I also had the impression you always needed a unique ID.

Categories : Solr

Mongo remove last documents
You should be able to use the _id to sort on last inserted, as outlined in the answer here: db.coll.find().sort({_id:-1}).limit(100); It looks like using limit on the standard mongo remove operation isn't supported though, so you might use something like this to delete the 100 documents: for(i=0;i<100;i++) { db.coll.findAndModify({query :{}, sort: {"_id" : -1}, remove:true}) } See the docs for more on findAndModify.

Categories : Mongodb

Elastica or ElasticSearch remove field from all documents
You can do this via the HTTP API: curl -XPOST 'http://localhost:9200/goods_city_1/meirong/552899/_update' -d '{ "script" : "ctx._source.remove(\"text\")" }' Or via the Java API: StringBuilder sb = new StringBuilder(); for (String stringField : stringFields) { sb.append("ctx._source.remove(\"").append(stringField).append("\");"); } updateRequestBuilder.setScript(sb.toString()); I've tried this and it works (at 0.90.2 at least). If you need it to work for every document in an index, you should put all the update actions (or batches of, say, 5000 actions) into one BulkRequest and send that to the server. Alternatively, "elasticsearch-reindex" (https://github.com/karussell/elasticsearch-reindex) could help, if you use an alias to access your data.

Categories : Elasticsearch

remove first line if there is a duplicate
Add a sequential field to the beginning of each line using paste (1, 2, 3, ...), then reverse the list based on that field, and run uniq ignoring the field. Then sort by the field again to ensure the lines remain in the right order. Finally, remove the field using cut or colrm.
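The same keep-the-last-copy idea can be sketched without shell tools; here is a hedged Python illustration (the sample lines are made up):

```python
def keep_last_occurrence(lines):
    """Keep only the last occurrence of each duplicate line,
    preserving the relative order of the surviving lines."""
    seen = set()
    result = []
    for line in reversed(lines):  # walk backwards so the last copy wins
        if line not in seen:
            seen.add(line)
            result.append(line)
    result.reverse()  # restore the original order
    return result

print(keep_last_occurrence(["a", "b", "a", "c", "b"]))  # ['a', 'c', 'b']
```

This mirrors the pipeline: the reversal plays the role of the reverse step, and the set plays the role of uniq.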

Categories : Unix

Can't Remove Duplicate Rows
You can use the listagg() function to get rid of your multiple-rows problem; however, it will not put each description in a separate column as you have described in your question. Instead it will put all the descriptions in a single column, as a string separated by the character(s) you specify. The example below separates the descriptions with a comma and a space: SELECT person.id, badge.bid, person.first_name, person.last_name, person.type, listagg(person_user.description, ', ') within group (order by person_user.description) FROM person, badge, person_user WHERE person.id = badge.id AND person.id = person_user.person_id AND badge.bid NOT LIKE '111%' AND badge.access_date >= 20130401 GROUP BY person.id, badge.bid, person.first_name, person.last_name, person.type

Categories : SQL

Add and Remove duplicate Classes
There is a syntax error $(this).find('h3').removeClass('grey'); //missing . Demo: Fiddle Ex: $('.cta ').hover(function(){ $(this).removeClass('light-grey-bg', 800).addClass('light-grey-bg-fade', 400).find('h3').removeClass('grey'); }, function(){ $(this).addClass('light-grey-bg', 400).removeClass('light-grey-bg-fade', 800).find('h3').addClass('grey'); });

Categories : Jquery

Remove duplicate elements from the BST
The brute-force method that you have mentioned has a worst-case complexity of O(n^2). A way of achieving better complexity is to use a hash table. You could do a traversal of the BST and store the count of each value in the hash table. Then do another pass (in any order that suits you) and delete the nodes whose count is greater than 1, keeping one copy of each. This method has an improved complexity of O(n log n), assuming the tree is balanced.
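A minimal Python sketch of the counting pass (the Node shape and the sample tree are illustrative; the second pass would use the standard BST-delete routine on the extra copies):

```python
from collections import Counter

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def count_values(node, counts=None):
    """First pass: in-order traversal tallying each value's frequency."""
    if counts is None:
        counts = Counter()
    if node is not None:
        count_values(node.left, counts)
        counts[node.value] += 1
        count_values(node.right, counts)
    return counts

# Values with a count above 1 are the duplicates to delete in pass two.
tree = Node(5, Node(3, Node(3)), Node(8, Node(5)))
dups = [v for v, c in count_values(tree).items() if c > 1]
print(dups)  # [3, 5]
```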

Categories : C++

How to remove duplicate nodes in an XML document in C#?
Here, I've written a little function for you. It's quite ugly, but it gets the job done. public static void RemoveDuplicates(string filePath) { XmlDocument reader = new XmlDocument(); reader.Load(filePath); bool foundApplicable = false; ArrayList removeNodes = new ArrayList(); foreach(XmlNode node in reader.GetElementsByTagName("transaction")) { if (node.FirstChild != null && node.FirstChild.Attributes["type"].Value == "SetActiveLocale") { if (node.SelectSingleNode("action/inputparams/param") != null && node.SelectSingleNode("action/inputparams/param").InnerText == "en") { if (foundApplicable) {

Categories : C#

How do I remove the duplicate for rounded corner
Just add the class to the link: <li class="info roundcorder"><a href="#">Information</a></li> But, I would write just border-radius, without vendor prefixes; http://caniuse.com/border-radius

Categories : HTML

CSS remove duplicate code in both classes?
Like this:
.my_button .left, .my_button .right { float: left; width: 10px; height: 19px; background: url("./images/bl.png") left center no-repeat; }
.my_button .right { background: url("./images/br.png") left center no-repeat; }
and for hover:
.my_button:hover .left, .my_button:hover .right { float: left; width: 10px; height: 19px; background: url("./images/bl-active.png") left center no-repeat; }
.my_button:hover .right { background: url("./images/br-active.png") left center no-repeat; }

Categories : CSS

Jquery remove duplicate code
function saveToDB() {
    var value = pageValidation();
    var type = "update";
    if ($('#jobid').val() == " ") {
        // set the flag
        type = "insert";
    }
    if (value != false) {
        var data = {
            "type": type, // see this, insert/update, check this on the server side
            "names": $('#names').val(),
            "os": $('#OS').val(),
            "browser": $('#browsers').val(),
            "version": $('#version').val(),
            "scripttype": $('#testscripts').val(),
            "server": $('#server').val()
        };
        if (type == "update") {
            data.jobid = $('#jobid').val();
        }
        $.ajax({
            type: 'post',
            url: "/insertJobs",
            dataType: "json",
            data: data,
            success: function (response) { /* ... */ }
        });
    }
}

Categories : Jquery

Remove duplicate in linq list?
var results = source.GroupBy(x => new { x.Name, x.Age }) .Select(g => g.First()) .ToList(); Or you can use DistinctBy from moreLINQ library.

Categories : C#

Remove duplicate on JSON iteration
You don't want to remove the duplicate entries, you want to merge them per story name. var stories = {}; for (var i=0; i<QueryResults.Results.length; i++) { var result = QueryResults.Results[i], name = result.StoryName; if (!(name in stories)) stories[name] = {}; stories[name][result.Name] = result.State; } /* console.log(stories): { "FB Integration":{"Tech Design":"Completed","Development":"In-Progress","QA Testing":"Not Started","Front End Development":"Completed"}, "Twitter Integration":{"Tech Design":"Not Started","Development":"Not Started"} } */ Now you can build a table from that. var keys = []; for (var i=0; i<QueryResults.Results.length; i++) { var n = QueryResults.Results[i].Name; if (keys.indexOf(n) == -1) keys.push(n); }

Categories : Javascript

XSLT to remove duplicate while concating
If the XML gets bigger, a key-based solution (with XSLT 1.0) would be better. But to keep the solution mostly as it is and only have unique Tank values, you can add two templates: <xsl:template match="item" mode="tank" /> <xsl:template match="item[not(Material = preceding::item/Material and Tank = preceding::item/Tank)]" mode="tank" > <xsl:value-of select="Tank"/> <xsl:text>||</xsl:text> </xsl:template> This ignores any item which has the same Material and the same Tank value. Then apply these templates as a replacement for the for-each loop in <Value>: <xsl:apply-templates select="$materials[Material=current()/Material ]" mode="tank" /> Therefore try this: <?xml version="1.0" encoding="utf-8" standalone="no"?> <xsl:stylesheet

Categories : Xslt

remove duplicate value but keep rest of the row values
Try this (note: you need a header row above the data, which it sounds like you already have): =IF(A2<>A1,A2,IF(D2<>D1,A2,"")) =IF(A2<>A1,B2,IF(D2<>D1,B2,"")) =IF(A2<>A1,C2,IF(D2<>D1,C2,"")) and so on, in the top data row, then drag down. Edit: noticed you needed an additional condition.

Categories : Excel

remove duplicate nodes from xml file using xsl
One way to do this via XSLT 1.0 is by utilizing the Muenchian Grouping methodology to output only unique <property> elements (based on their @name attribute). For example, when this XSLT: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output omit-xml-declaration="no" indent="yes"/> <xsl:strip-space elements="*"/> <xsl:key name="kPropertyByName" match="property" use="@name"/> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <xsl:template match="property[ not( generate-id() = generate-id(key('kPropertyByName

Categories : Xml

Remove duplicate numbers from C array
You are moving the number before it is checked; that's why you are missing it. I suggest sorting the array first and then checking for duplicates by comparing adjacent elements (current == prev). Other techniques would work too.
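The sort-first approach is language-neutral; a small Python sketch of the current != prev check (the sample numbers are made up):

```python
def dedupe_sorted(values):
    """Sort first, then keep each element only when it differs from
    the previous one, mirroring the current == prev suggestion above."""
    values = sorted(values)
    result = []
    prev = object()  # sentinel that compares unequal to any input value
    for current in values:
        if current != prev:
            result.append(current)
        prev = current
    return result

print(dedupe_sorted([3, 1, 3, 2, 1]))  # [1, 2, 3]
```

Note that sorting changes the element order, which is fine when order doesn't matter.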

Categories : C

Remove duplicate value from dictionary without removing key
This does it: myDict_values = list(myDict.values()) # snapshot once, better than recounting each time (in Python 3, values() returns a view with no count() method, hence the list() call) for key in myDict: if myDict_values.count(myDict[key]) > 1: myDict[key] = "" This won't guarantee that key5 will be blanked instead of key1, because dictionaries are not ordered.
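If the repeated count() scans bother you, collections.Counter tallies each value once up front; note that this sketch, like the loop above, blanks every copy of a duplicated value:

```python
from collections import Counter

def blank_duplicate_values(d):
    """Blank every value that occurs more than once in the dict,
    counting each value a single time instead of once per key."""
    counts = Counter(d.values())
    return {k: ("" if counts[v] > 1 else v) for k, v in d.items()}

print(blank_duplicate_values({"key1": "a", "key2": "b", "key5": "a"}))
# {'key1': '', 'key2': 'b', 'key5': ''}
```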

Categories : Python

Remove duplicate string in an array
If the order doesn't matter, you can first sort the array with std::sort, then use std::unique to remove the duplicates. std::sort(std::begin(exampleArray), std::end(exampleArray)); auto it = std::unique(std::begin(exampleArray), std::end(exampleArray)); Here, it points to one past the end of the new, unique range. Note that since you started with a fixed-size array, you cannot reduce its size to the number of unique elements. You would need to copy the elements to a container whose size can be determined at runtime; std::vector<std::string> is an obvious candidate. std::vector<std::string> unique_strings(std::begin(exampleArray), it); Note that if you had started with a std::vector<std::string> instead of a fixed-size array, you would be able to avoid the copy and erase the duplicates in place with the erase-remove idiom.

Categories : C++

Remove duplicate words from the sentence in C
Disclaimer: this will not fix your algorithm; see @Rohan's answer for the fix to your algorithm. To fix your code, do the following: *output = (char*) malloc(text_size); ... should be: char *output = malloc(text_size); ... and change: *output[0] = '\0'; ... to: output[0] = '\0'; ... do not cast a malloc'ed block of memory (you can read more about this here). Observe that output[0] implies *(output + 0). Next, change: strcpy(*output, element); ... to: strcpy(output, element); ... then change: if (strstr(*output, element) == NULL) { strcat(*output, " " ); strcat(*output, element ); } ... to: if (strstr(output, element) == NULL) { strcat(output, " " ); strcat(output, element ); } ... notice that output is already a pointer; using the * as you do dereferences it.

Categories : C

How to remove the duplicate value in the store extjs
I don't think there is anything built-in. You would either need to enforce it at the model level - making that field unique and handling the errors - or load the store, sort by that field, and go through all the records comparing each one with the next.

Categories : Javascript

Remove duplicate values from HashMap in Java
The ConcurrentModificationException happens because you are removing from the map while iterating over it: if (value.equals(nextValue)) { map.remove(key); } You have to remove through the iterator instead (note that Iterator.remove() takes no argument): if (value.equals(nextValue)) { keyIter.remove(); } As for the duplicate-entry issue, it's pretty simple: Find duplicate values in Java Map?

Categories : Java

How to remove duplicate dictionaries from a list in Python?
This answer is not correct for the now-disambiguated problem. Do the dicts all have the same keys? If so, write a function like the_keys = ["foo", "bar"] def as_values(d): return tuple(d[k] for k in the_keys) unique_values = unique_everseen(list_of_dicts, key=as_values) where unique_everseen is defined at http://docs.python.org/2/library/itertools.html If the dicts are not so consistent, use a more general key, such as the FrozenDict I posted to http://stackoverflow.com/a/2704866/192839
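Putting the recipe together as a runnable sketch (with a local copy of unique_everseen so the snippet stands alone; the key names and sample dicts are invented for illustration):

```python
def unique_everseen(iterable, key=None):
    """Yield items whose key has not been seen before (itertools recipe)."""
    seen = set()
    for item in iterable:
        k = item if key is None else key(item)
        if k not in seen:
            seen.add(k)
            yield item

the_keys = ["foo", "bar"]

def as_values(d):
    # Project each dict onto a hashable tuple of its interesting values.
    return tuple(d[k] for k in the_keys)

list_of_dicts = [{"foo": 1, "bar": 2}, {"foo": 1, "bar": 2}, {"foo": 3, "bar": 4}]
unique_dicts = list(unique_everseen(list_of_dicts, key=as_values))
print(unique_dicts)  # [{'foo': 1, 'bar': 2}, {'foo': 3, 'bar': 4}]
```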

Categories : Python

SQL - Find duplicate values and remove in a field
SELECT DISTINCT ArticleCategories FROM Article or SELECT ArticleCategories FROM Article GROUP BY ArticleCategories And this deletes the duplicate values: DELETE FROM Article WHERE ArticleCategories NOT IN ( SELECT MAX(ArticleCategories) FROM Article GROUP BY ArticleCategories )

Categories : SQL

Remove duplicate rows from mysql table
You can make url a unique index; the following query adds the unique index and removes the duplicates: alter ignore table urls add unique index(url); If you don't want to add a unique index, the alternative is to create a temp table with a unique index, copy the data over (dropping the duplicates), and transfer it back to your original table.

Categories : Mysql

Remove 'duplicate' element from ggplot2 legend in R
Here's the method I was referring to in my comment: dat <- rbind(gmm.m,gmm.b) dat$variable <- as.character(dat$variable) dat$var1 <- dat$variable dat$var1[dat$var1 %in% c('upper','lower')] <- '99% CI' ggplot(dat,aes(x = x,y = value)) + geom_line(aes(colour = var1,group = variable)) (I only convert to character because it's a bit simpler to manipulate than factors, IMHO.) As I said, use one variable to delineate the grouping (for drawing the lines) and another variable to delineate which lines get the same color.

Categories : R

Remove Duplicate value from report mysql query
You're mixing join styles (ANSI joins and comma joins) in your query. Don't do that; try to always use ANSI JOIN notation, as it's more descriptive. That being said, I believe your query should look like SELECT c.lname, c.fname, c.mname, s.date, b.fee, p.amount payment FROM sell s JOIN customer c ON s.cid = c.pid JOIN billing b ON s.cid = b.cid AND s.completion = b.completion LEFT JOIN payments p ON b.completion = p.completion AND b.cid = p.cid WHERE s.date BETWEEN ? AND ? ORDER BY s.date The query obviously has not been tested. If that doesn't solve your problem, edit your question and post relevant sample data (a few rows) for all the tables involved, enough to produce the desired output you've already posted.

Categories : Mysql

Remove duplicate words after stripping punctuation
This can work: $ tr -d "[[:punct:]]" < file | sort -u VSDmaMapInfo VSPortErr Explanation: tr -d "[[:punct:]]" removes the punctuation characters from the file, and sort -u keeps only the unique lines. Update From your comment: I just had an observation: if the input contains VSDmaMapInfo::callMe it is removing the punctuation but joining the next word, like VSDmaMapInfocallMe. Is it possible to have the output as VSDmaMapInfo only, without the next word getting appended? We can do the following: $ cat file VSDmaMapInfo VSDmaMapInfo:: VSDmaMapInfo; VSDmaMapInfo;asdfs VSPortErr VSPortErr, VSPortErr:: $ awk -F"[,:;]" '{print $1}' file | sort -u VSDmaMapInfo VSPortErr That is, make awk print the first word before any ',', ':' or ';', then sort with the -u parameter to keep unique lines.

Categories : Regex

How to remove duplicate words from same string in csv column
The problem with this: ' '.join(set(col.split())) … is that you're calculating the result but not doing anything with it. If you want to replace line1['Name'] with the result, you have to do this: outline1['Name'] = ' '.join(set(col.split())) Meanwhile, set returns values in arbitrary order. So, once you fix that, you're going to end up randomly scrambling the words. Worse, it may appear to work as expected with small sets on your system, but then fail with larger sets or on another machine… You can use the OrderedSet recipe linked from the collections docs. However, there's another alternative that seems cleaner: the unique_everseen function from the itertools recipes. While we're at it, I don't understand why you're doing outline = dict(line1), when line1 is already a dict.
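An order-preserving alternative worth noting: dict.fromkeys keeps first-seen order (guaranteed from Python 3.7 on), so it can replace the plain set here (the sample words are made up):

```python
def dedupe_words(text):
    """Drop repeated words, keeping the first occurrence of each
    in its original position."""
    return " ".join(dict.fromkeys(text.split()))

print(dedupe_words("acme widget acme deluxe widget"))  # acme widget deluxe
```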

Categories : Python

Remove duplicate phrases in a foreach loop
Use array_unique: $str = "Cars,Bikes,Stuff,Gadget,Cars,Bikes,Bikes,Stuff,Gadget"; $r = explode(",", $str); $unique = array_unique($r); $new_str = implode(",", $unique);

Categories : PHP

Remove duplicate text from multiple strings
def common_prefix_length(*args)
  first = args.shift
  (0..first.size).find_index { |i| args.any? { |a| a[i] != first[i] } }
end

def magic(*args)
  i = common_prefix_length(*args)
  args = args.map { |a| a[i..-1].reverse }
  i = common_prefix_length(*args)
  args.map { |a| a[i..-1].reverse }
end

a = "This is Product A with property B and propery C. Buy it now!"
b = "This is Product B with property X and propery Y. Buy it now!"
c = "This is Product C having no properties. Buy it now!"

magic(a,b,c)
# => ["A with property B and propery C",
#     "B with property X and propery Y",
#     "C having no properties"]
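The same strip-the-common-affixes idea, ported to Python as an illustrative sketch (os.path.commonprefix works on arbitrary strings, not just paths):

```python
import os

def strip_common_affixes(strings):
    """Remove the shared prefix and shared suffix from every string."""
    prefix = os.path.commonprefix(strings)
    # Reversing each string turns the shared suffix into a shared prefix.
    suffix_len = len(os.path.commonprefix([s[::-1] for s in strings]))
    return [s[len(prefix):len(s) - suffix_len] for s in strings]

a = "This is Product A with property B and propery C. Buy it now!"
b = "This is Product B with property X and propery Y. Buy it now!"
c = "This is Product C having no properties. Buy it now!"
print(strip_common_affixes([a, b, c]))
# ['A with property B and propery C', 'B with property X and propery Y', 'C having no properties']
```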

Categories : Ruby

Excel - Remove duplicate row if column value is null
There are a number of ways you could do this including a vba procedure. However, one easy way without needing VBA would be to use the next available column to mark rows for delete. If this was column D using the example above then you would paste the following formula into cell D2: =AND(COUNTIF(A$2:A2,A2)>1, C2="null") This can then be pasted down the remaining rows. The A$2 reference will remain the same (because of the dollar) and the other A2 references will change relative to the cell they are pasted to. You can then set auto filter, to only the true records, delete these rows and then unfilter. Let me know if you would prefer an automated solution as the VBA for this would also be pretty straight forward.

Categories : Excel

Python: Best Way to remove duplicate character from string
Using itertools.groupby: >>> foo = "SSYYNNOOPPSSIISS" >>> import itertools >>> ''.join(ch for ch, _ in itertools.groupby(foo)) 'SYNOPSIS'

Categories : Python

How do I remove duplicate objects from two separate ArrayLists?
Create a Set and addAll from both the ArrayLists. Note that the variable must be backed by a Set implementation, not an ArrayList, and that Person needs sensible equals()/hashCode() for deduplication to work: Set<Person> set = new HashSet<Person>(); http://docs.oracle.com/javase/6/docs/api/java/util/Set.html

Categories : Java

How to remove Duplicate Values from a list in groovy
How about: session.ids = session.ids.unique( false ) Update The difference between unique() and unique(false): the second one does not modify the original list. Hope that helps. def originalList = [1, 2, 4, 9, 7, 10, 8, 6, 6, 5] //Mutate the original list def newUniqueList = originalList.unique() assert newUniqueList == [1, 2, 4, 9, 7, 10, 8, 6, 5] assert originalList == [1, 2, 4, 9, 7, 10, 8, 6, 5] //Add duplicate items to the original list again originalList << 2 << 4 << 10 // The items we just appended show up in newUniqueList too! This is because // they are the SAME list (we mutated originalList, and newUniqueList refers to // the same object). assert originalList == newUniqueList //Do not mutate the original list def secondUniqueList = originalList.unique( false )

Categories : Groovy

XSLT to remove duplicate values while concating
Ok, I see your problem. A key-based solution (Using Keys to Group: the Muenchian Method) will work for this. Try this: <?xml version="1.0" encoding="utf-8" standalone="no"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output encoding="UTF-8" indent="yes" method="xml" version="1.0"/> <xsl:key name="kMaterial" match="item[Tank!='RECV' and Tank!='PARK' and Quantity &gt; 500]" use="Material"/> <xsl:key name="kMaterialTank" match="item[Tank!='RECV' and Tank!='PARK' and Quantity &gt; 500]" use="concat(Material,'|', Tank)"/> <xsl:template match="/"> <Rowsets> <Rowset> <xsl:variable name="materials" select=".//item[Tank!='RECV' an

Categories : Xslt

How to show one result and to remove the duplicate category
One quick trick is to store them in an associative array. Just do the following instead of echoing out the item the moment you have it: $clothes[$item->category_title] = 0; An entry is created, and when you hit a duplicate the entry is simply recreated. Now you can foreach ( $clothes as $item => $count ) { echo $item; } As an added tidbit, you could increment the value instead of setting it to 0, in case you need to know how many times the item appears in the list.

Categories : PHP

Oracle SQL -- remove partial duplicate from string
It seems to me that you might be pushing SQL beyond what it is capable of and designed for. Is it possible for you to handle this situation programmatically in the layer above the data layer, where this type of thing can be more easily handled?

Categories : SQL



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.