How to find with lookahead in Visual Studio 2012 Regex Find and Replace
You need to add (?s), which turns on single-line mode (so . also matches newlines), and also escape the period in using Example.Foo. The regex should be something along the lines of:

    (?s)using Example\.Foo;(?=.*BaseClass<SomeClass>)
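As a quick illustration of why the (?s) matters, here is the same pattern in Python (the file contents are made up for the example); without (?s) the lookahead could not scan past the first newline:

```python
import re

# Hypothetical file contents used only for illustration.
source = """using Example.Foo;

class SomeClass : BaseClass<SomeClass> { }
"""

# (?s) makes '.' match newlines, so the lookahead can scan the rest of the file.
pattern = r"(?s)using Example\.Foo;(?=.*BaseClass<SomeClass>)"

match = re.search(pattern, source)
print(match.group(0) if match else "no match")  # using Example.Foo;
```

Note the lookahead consumes nothing, so the match itself is just the using line.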

Categories : Regex

How to find the child project of a parent project which is passed in the 'WHERE' clause and also, to find the duplicate objects
This is most of the answer, as we determined in the interactive chat:

    SELECT A.projectName AS PARENT,
           COUNT(A.projectName) AS PARENTPROJECTCOUNT,
           B.projectName AS CHILD,
           COUNT(B.projectName) AS CHILDPROJECT
    FROM psProjectItem A
    INNER JOIN psProjectItem B
        ON A.objecttype   = B.objecttype
       AND A.objectid1    = B.objectid1
       AND A.objectvalue1 = B.objectvalue1
       AND A.objectid2    = B.objectid2
       AND A.objectvalue2 = B.objectvalue2
       AND A.objectid3    = B.objectid3
       AND A.objectvalue3 = B.objectvalue3
       AND A.objectid4    = B.objectid4
       AND A.objectvalue4 = B.objectvalue4
    WHERE A.projectName IN (SELECT ProjectName
                            FROM psProjectDefn
                            WHERE lastupdoprid <> 'pplsoft')
      AND A.projectName <> B.projectName
      AND A.projectName = 'AAAA_JOB_KJ'
    ORDER BY B.projectName

Categories : SQL

Find duplicate values in an array which creates a new array with key as duplicate keys and values as the dupes
This should do what you want. Loop over the array, and see if the value was already in there. If so, add it to the result.

    $arr_duplicates = array();
    foreach ($array as $k => $v) {
        // array_search returns the 1st location of the element
        $first_index = array_search($v, $array);
        // if our current index is past the "original" index, then it's a dupe
        if ($k != $first_index) {
            $arr_duplicates[$k] = $first_index;
        }
    }

DEMO: http://ideone.com/Kj0dUV

Categories : PHP

multiple Regex line changes: duplicate, replace and delete parts
Use the hold space:

    sed -e 's%/path/to/delete/%%;h;s%/%-%g;x;G;s/\n/ /'

The h copies the pattern space (the name after the leading path is deleted) to the hold space. Replace the slashes with dashes in the pattern space. Exchange (x) the pattern and hold spaces. Concatenate the hold space after the pattern space with a newline in between (G). Replace the newline with a space. That replaces one slash too many... but the fix is 'trivial' if you know about branching in sed too:

    sed -e 's%/path/to/delete/%%;h;: redo;s%/\(.*/.*\)%-\1%g;t redo;x;G;s/\n/ /'

The difference is in the presence of : redo, which creates a label redo; the t redo, which jumps to the label redo if a substitute operation changed anything since the last test; and the more complex regex, which matches a slash only when at least one more slash follows it (so the last slash is left alone).

Categories : Regex

Remove duplicate rows from csv file based on 2 columns with regex in Python
You need a dictionary to track the matches. You do not need a regular expression; only the first 5 characters need to be tracked. Store rows by their 'key', composed of the first column and the first 5 characters of the second, and add a count. You need to count first, then write out the collected rows and counts. If ordering matters, you can replace the dictionary with collections.OrderedDict(), but otherwise the code is the same:

    import csv

    rows = {}
    with open(inputfilename, 'rb') as inputfile:
        reader = csv.reader(inputfile)
        headers = next(reader)  # collect first row as headers for the output
        for row in reader:
            key = (row[0], row[1][:5])
            if key not in rows:
                rows[key] = row + [0]
            rows[key][-1] += 1  # count

    with open('myfilewithoutduplicates.csv', 'wb') as outputfile:
        writer = csv.writer(outputfile)
        writer.writerow(headers + ['count'])
        writer.writerows(rows.values())

Categories : Python

Regex - Find all groups where you can find a given number
You can try this regex:

    [^:]*(?<=[-:])2(?=[-:])[^:]*

[^:] matches any character except :
[^:]* matches 0 or more characters except :
2(?=[-:]) matches 2 only if it is followed by - or :
(?<=[-:])2 matches 2 only if it is preceded by - or :

Or simply:

    [^:]*2[^:]*
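Here is that pattern tried out in Python on some sample data (the input format is an assumption: groups separated by ':', numbers inside a group by '-'):

```python
import re

# Assumed format: groups separated by ':', numbers inside a group by '-'.
text = "1-2-3:4-5-6:12-20:2-9"

# Match any group that contains the standalone number 2; the lookarounds
# keep it from matching the 2 inside 12 or 20.
matches = re.findall(r"[^:]*(?<=[-:])2(?=[-:])[^:]*", text)
print(matches)  # ['1-2-3', '2-9']
```

Note the group "12-20" is correctly skipped, since its 2s are part of larger numbers.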

Categories : Regex

Find duplicate values in R
This will give you the duplicate rows:

    vocabulary[duplicated(vocabulary$id),]

This will give you the number of duplicates:

    dim(vocabulary[duplicated(vocabulary$id),])[1]

Example:

    vocabulary2 <- rbind(vocabulary, vocabulary[1,]) # creates a duplicate at the end
    vocabulary2[duplicated(vocabulary2$id),]
    #            id year    sex education vocabulary
    # 21639 20040001 2004 Female         9          3
    dim(vocabulary2[duplicated(vocabulary2$id),])[1]
    # [1] 1   # = 1 duplicate

EDIT: OK, with the additional information, here's what you should do: duplicated has a fromLast option which allows you to get duplicates from the end. If you combine this with the normal duplicated, you get all duplicates. The following example adds duplicates to the original vocabulary object (line 1 is duplicated twice a...

Categories : R

Is there a way to find duplicate words?
I don't know if you can do this with formulas in Excel unless you know what word you are looking for within the cell. You could try either a UDF or a regular expression. My question and answer, with links, might get you started: StackOverflow: formula to see if a surname is repeated within a cell, and maybe: VBA Express. Once you've posted your Excel worksheet with data, we'll see if I've got it wrong!

Categories : Vba

SQL Server Find Duplicate dates with the same ID
If you need to return the rows, then you want to use a window function:

    select [uniqueID], [requirementId], [number], [description], [dtmexecuted], [amount]
    from (select t.*,
                 count(*) over (partition by requirementid, dtmexecuted) as cnt
          from MyTable t
         ) t
    where cnt > 1

Categories : SQL

Find duplicate items in csv file
Read it in line by line and treat it like a plain text file. Parse each line using string.Split on ','. Use one List to track the IDs, using .Contains. Use custom data object structures for the data itself, and make two lists: one for the unique entries and one for the duplicates (a total of 3 lists). If you want actual code examples, please give a list of the things you have tried for me to debug, along with the errors.
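A rough sketch of those steps (in Python rather than C#, just to show the flow; the sample lines and the use of the first column as the ID are assumptions):

```python
import io

# Stand-in for the file, read line by line; the first column is the ID.
data = io.StringIO("1,alpha\n2,beta\n1,gamma\n")

seen_ids = set()        # tracks IDs already encountered (the .Contains check)
unique_rows = []        # first occurrence of each ID
duplicate_rows = []     # later occurrences

for line in data:
    row = line.strip().split(",")   # parse each line on ','
    if row[0] in seen_ids:
        duplicate_rows.append(row)
    else:
        seen_ids.add(row[0])
        unique_rows.append(row)

print(unique_rows)      # [['1', 'alpha'], ['2', 'beta']]
print(duplicate_rows)   # [['1', 'gamma']]
```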

Categories : C#

Find Duplicate Rows/Records from Table
GROUP BY will collapse the results on the field you're grouping, in this case content, which is why you only see two results. If you want to keep the GROUP BY technique, you can also use GROUP_CONCAT(niche) to pull a comma-separated list of each niche for a given content value:

    SELECT content, GROUP_CONCAT(niche) AS niche, COUNT(content) AS TotalCount
    FROM table_name
    GROUP BY content
    HAVING COUNT(content) >= 2;

You can then use PHP's explode(',', $row['niche']) to get each distinct value and use those to determine which one you want to delete:

    foreach ($array as $row) {
        $niches = explode(',', $row['niche']);
        foreach ($niches as $niche) {
            echo $row['content'] . " - " . $niche . " - " . $row['TotalCount'] . "<br />";
        }
    }

Categories : PHP

Ignore duplicate filenames in 'find' command
Here is a solution using awk:

    find /home/ -type f -name "*.html" |
    awk -F/ '{a[$NF]=$0} END {for (i in a) print a[i]}' |
    zip -j all-html-files -@

If multiple files with the same name are found, the last file found will be stored in the zip file.
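The awk trick is just "key an array by basename, last one wins". The same idea in Python, with hypothetical paths, looks like this:

```python
import os

# Key a dict by basename so that, as in the awk version, the last file
# found with a given name wins.
paths = [
    "/home/a/index.html",
    "/home/b/index.html",
    "/home/b/about.html",
]

by_name = {}
for p in paths:
    by_name[os.path.basename(p)] = p  # later entries overwrite earlier ones

unique_paths = sorted(by_name.values())
print(unique_paths)  # ['/home/b/about.html', '/home/b/index.html']
```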

Categories : Unix

find duplicate value in multi-dimensional array
    $input = array("a" => "green", "red", "b" => "green", "blue", "red");
    $result = array_unique($input);
    print_r($result);

Or use this code:

    $array = array(0 => 'blue', 1 => 'red', 2 => 'green', 3 => 'red');
    $key = array_search('green', $array); // $key = 2;

Categories : PHP

SQL - Find duplicate values and remove in a field
    SELECT DISTINCT ArticleCategories FROM Article

or:

    SELECT ArticleCategories FROM Article GROUP BY ArticleCategories

And this deletes the duplicate values:

    DELETE FROM Article
    WHERE ArticleCategories NOT IN (
        SELECT MAX(ArticleCategories)
        FROM Article
        GROUP BY ArticleCategories
    )

Categories : SQL

how to return unique value and also find duplicate values? SQL
I'm not entirely sure if this is what you want, but it sounds like it might be:

    select min(t2.order_no), t2.field1, t2.field2, t2.field3, t1.cnt
    from table_name t2,
         (select field1, field2, field3, count(*) as cnt
          from table_name
          group by field1, field2, field3
          having count(*) > 1
         ) t1
    where t1.field1 = t2.field1
      and t1.field2 = t2.field2
      and t1.field3 = t2.field3
    group by t2.field1, t2.field2, t2.field3, t1.cnt

For each record returned by your deduplicating subquery, the outer query attaches the smallest "order number" that matches the given combination of fields. If this isn't what you're looking for, please clarify; some sample data and sample output would be helpful. EDIT: From your posted sample data, it looks like you're looking to just retur...

Categories : SQL

How to find duplicate set of mysql ids from bash script
How are you displaying the table above? You might have some success with GROUP_CONCAT (see "Can I concatenate multiple MySQL rows into one field?"):

    SELECT group_uid, GROUP_CONCAT(product_uid SEPARATOR ','), COUNT(*)
    FROM <tab>
    GROUP BY group_uid
    HAVING COUNT(*) > 1

I'm not sure how it would order the strings, as I don't have MySQL at present.

Categories : Mysql

Most efficient way to find a duplicate key in a list of objects
Since your id is unique, why not use something like a map? You can create and save a separate

    var map = {};

Then every time a new object comes in, you mark it with map[new_id] = true;. More completely:

    if (!map[new_id]) {
        map[new_id] = true;
        your_array.push({"value": "3", "id": "235"});
    } else {
        // do what you want... maybe update the value
    }

This way, you won't push any objects with an existing id.

Categories : Javascript

How to find duplicate entries in database using Propel ORM?
Well, this question suggests using counts... you could replicate that query in Propel (I think) with this:

    $results = TableNameQuery::create()
        ->select(array("id", "field", "COUNT(*)"))
        ->groupBy("field")
        ->having("COUNT(*) > ?", 1)
        ->find();

Of course, this gets a little hairy, so you might just want to use straight SQL if Propel fails you. (For reference, here's the SQL:)

    SELECT field, COUNT(*)
    FROM table_name
    GROUP BY field
    HAVING COUNT(*) > 1

Categories : PHP

How to find duplicate email within a mysql table
If you want to output the data exactly as shown in your question, use this query:

    SELECT email, COUNT(*) AS count
    FROM table
    GROUP BY email
    HAVING count > 0
    ORDER BY count DESC;

Categories : Mysql

How should I find the rows with a duplicate field in a big table?
You need to have an index on the email column. Otherwise, the query has to scan the entire table to count the number of rows for each email. There's no way to make it faster other than with an index.
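For a quick illustration, here is the effect in miniature using SQLite from Python (the table and column names are made up): the duplicate-finding query groups on email, which an index on that column can serve without a full table scan.

```python
import sqlite3

# In-memory database with a small, made-up users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "a@x.com"), (2, "b@x.com"), (3, "a@x.com")],
)

# The index that lets GROUP BY email avoid scanning the whole table.
conn.execute("CREATE INDEX idx_users_email ON users (email)")

dupes = conn.execute(
    "SELECT email, COUNT(*) FROM users GROUP BY email HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('a@x.com', 2)]
```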

Categories : Mysql

Find count of duplicate elements and create arrays of them
Irrespective of your solution, if I understood your question correctly, your input is an array (which contains duplicates) and your output is a list of the duplicate arrays. I have a simple approach to this problem: a Map where the Integer is the key and a List is the value. Here's a little snippet (Java 1.4 compatible):

    Map map = new HashMap();
    int[] array = {121, 122, 121, 122, 123, 121, 122};
    for (int i = 0; i < array.length; i++) {
        Integer key = new Integer(array[i]);
        if (map.get(key) == null) {        // no entry available yet
            List list = new ArrayList();
            list.add(key);
            map.put(key, list);
        } else {                           // entry is already available
            ((List) map.get(key)).add(key);
        }
    }

I know you have a constraint on the Java version, though this could be easier with the Google collections library (Guava).

Categories : Java

Find duplicate items in list based on particular member value
How can I find duplicate items in a list based on a particular value and group the duplicated items together? Sounds like GroupBy to me. You've already done that grouping in the code in the question; you just need to use the results. The result of GroupBy is a sequence of groups, where each group is a key and a sequence of values sharing that key. For example:

    foreach (var group in Customers.GroupBy(x => x.emailaddress))
    {
        Console.WriteLine("Customers with email address {0}", group.Key);
        foreach (var customer in group)
        {
            Console.WriteLine("  {0}", customer.Name); // Or whatever
        }
    }

Categories : C#

Find rows that have a duplicate field, where the field type is blob
Probably you can compare a prefix of the column, using substr() or left(). How much of the column you should take depends on your data distribution, i.e. on how unique the prefixes are. To check the uniqueness you can run:

    select count(distinct left(answer, 128))/count(*),
           count(distinct left(answer, 256))/count(*)
    from answers;

This will give you the selectivity (data distribution) of the column's prefixes. If, say, 128 gives you an answer of 1, i.e. all values are unique in their first 128 bytes, then take that amount of data from each row and work with it. Hope it helps.

Categories : Mysql

Optimizing mysql query to find all duplicate entries
It looks like you are trying to find all duplicates on field2 in tableA. The first step would be to move the IN subquery to the FROM clause:

    SELECT DISTINCT a.`field1`, a.`field2` AS field2Alias, a.`field3`,
           b.`field4` AS field4Alias, a.`field6` AS field6Alias
    FROM tableA a
    LEFT JOIN tableC c ON c.`idfield` = a.`idfield`
    JOIN `tableB` b ON b.`idfield` = a.`idfield`
    JOIN (SELECT field2
          FROM tableA
          GROUP BY field2
          HAVING COUNT(*) > 1
         ) asum ON asum.field2 = a.field2
    ORDER BY a.field2

There may be additional optimizations, but it is very hard to tell. Your question ("find duplicates") and your query ("join a bunch of tables together and filter them") don't quite match. It would also be helpful to know what tables have w...

Categories : Mysql

Linq to Find and eliminate duplicate file names
Use the GroupBy method:

    IEnumerable<FileData> dats = FastDirectoryEnumerator
        .EnumerateFiles(myDirectory.FullName, "*.zip", SearchOption.AllDirectories)
        .Where(f => f.Size / 1024 > 750)
        .Where(f => !f.Name.EndsWith(".reg.zip"))
        .Where(f => f.Name.StartsWith("2001"))
        .GroupBy(f => f.Name)
        .Select(g => g.First());

Or in query syntax:

    IEnumerable<FileData> dats =
        from f in FastDirectoryEnumerator.EnumerateFiles(…)
        where f.Size / 1024 > 750
           && !f.Name.EndsWith(".reg.zip")
           && f.Name.StartsWith("2001")
        group f by f.Name into g
        select g.First();

This will return the first FileData with each name. If you want to get just the unique Name values, it's actually a bit easier...

Categories : C#

mysql find duplicate values across columns in different tables
Since MySQL doesn't have FULL OUTER JOIN, you have to use a UNION:

    SELECT COUNT(*) AS unique_id
    FROM (SELECT id FROM A WHERE id = '$id'
          UNION ALL
          SELECT id FROM B WHERE id = '$id'
          UNION ALL
          SELECT id FROM C WHERE id = '$id') x

Categories : Mysql

Powershell to find and rename duplicate values in list items
This should do it:

    #Add-PSSnapin microsoft.sharepoint.powershell
    $web = Get-SPWeb -Identity "siteURL/"
    $list = $web.Lists["Products"]
    $AllDuplicateNames = $list.Items.GetDataTable() |
        Group-Object SAPMaterial |
        ? { $_.Count -gt 1 } |
        % { $_.Name }
    foreach ($duplicate in $AllDuplicateNames)
    {
        $dupsaps = $list.Items | ? { $_["SAPMaterial"] -eq $duplicate }
        $count = 1
        foreach ($sap in $dupsaps)
        {
            $sap["SAPMaterial"] = $duplicate + "_" + $count
            $sap.Update()
            $count++
        }
    }

Edit: just found a bug; it should work now, but I don't have a SharePoint site to test, so let me know if it works. You should probably back up before running this, to be safe.

Categories : Powershell

The best way to find the starting index of each occurrence of a duplicate word in a string with javaScript
The indexOf method does not work with regexes; you would need the search method to find a match index. However, to make multiple matches you need to exec the regex:

    function findIndexes(s, kw) {
        var result = [],
            re = new RegExp('\\b' + kw + '\\b', 'ig'), // case-insensitive and global flags
            r;
        while (r = re.exec(s)) {
            result.push(r.index);
        }
        console.log.apply(console, result);
        return result;
    }

Categories : Javascript

Find duplicate items in the specific column and update the item which has MIN row number with '-'
The fundamental problem is that the value you want to compare on, to generate the appropriate '-' in the stop-time column, is obtained by looking forward one row. You can do that by using a join with the right criteria: if it works to use RN+1, you can JOIN the table to itself where a.RN+1 = b.RN. Then you can CASE on a.StopTime = b.StartTime to determine whether or not you should use the '-'.

    SELECT a.RN
          ,CASE WHEN CONVERT(VARCHAR, a.StartTime, 120) = CONVERT(VARCHAR, a.StopTime, 120)
                THEN '-'
                ELSE CONVERT(VARCHAR, a.StartTime, 120)
           END StartTime
          ,CASE WHEN CONVERT(VARCHAR, a.StopTime, 120) = CONVERT(VARCHAR, b.StartTime, 120)
                THEN '-'
                ELSE CONVERT(VARCHAR, a.StopTime, 120)
           END StopTime
    FROM CTE a
    LEFT JOIN CTE b ON a.RN + 1 = b.RN

Categories : Sql Server

To find duplicate files on a hard disk by technique other than calculating hash on each file
You may want to use multiple levels of comparison, with the fast ones coming first to avoid running the slower ones more than necessary. Suggestions:

Compare the file lengths.
Then compare the first 1K bytes of the files.
Then compare the last 1K bytes of the files. (The first and last parts of a file are more likely to contain signatures, internal checksums, modification data, etc., that will change.)
Compare the CRC32 checksums of the files. Use CRC rather than a cryptographic hash, unless you have security concerns; CRC will be much faster.
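The staged comparison above can be sketched in Python like this (a rough sketch, not a polished implementation; the function name is made up):

```python
import os
import zlib

def probably_same(path_a, path_b, chunk=1024):
    """Staged duplicate check: cheap tests first, CRC32 last."""
    size = os.path.getsize(path_a)
    if size != os.path.getsize(path_b):
        return False                      # lengths differ: not duplicates
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        if fa.read(chunk) != fb.read(chunk):
            return False                  # first 1K differs
        if size > chunk:
            fa.seek(-min(chunk, size), os.SEEK_END)
            fb.seek(-min(chunk, size), os.SEEK_END)
            if fa.read() != fb.read():
                return False              # last 1K differs

    def crc32_of(path):
        value = 0
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                value = zlib.crc32(block, value)
        return value

    # Last resort: CRC32 over the whole file (faster than a crypto hash).
    return crc32_of(path_a) == crc32_of(path_b)
```

For a full deduplicator you would bucket files by size first and only run the byte-level checks within each bucket.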

Categories : File

[only equal operator] What are the fast algorithms to find duplicate elements in a collection and group them?
I am not 100% sure this is what you want, but if you want a good algorithm, try building a binary search tree: duplicates compare equal, so by the BST's properties you can easily group the elements. For example (pseudocode):

    BST()
    {
        count = 0;
        if (element inserted)
            count = 1;
        if (new element == already inserted element)
        {
            count++;
            put element in array up to count value;
        }
    }

I hope this explanation helps.
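One caveat: a BST needs an ordering, not just ==. If the elements truly support only equality, as the question's title suggests, the fallback is pairwise comparison. A sketch of that equality-only grouping (in Python, for illustration):

```python
# Equality-only duplicate grouping: O(n^2) comparisons, but it relies on
# nothing except the == operator.
def group_duplicates(items):
    groups = []                      # each entry is a list of equal items
    for item in items:
        for group in groups:
            if group[0] == item:     # only == is ever used
                group.append(item)
                break
        else:
            groups.append([item])
    return [g for g in groups if len(g) > 1]

print(group_duplicates([1, 3, 1, 2, 3, 1]))  # [[1, 1, 1], [3, 3]]
```

With an ordering you can do O(n log n) via sorting or a BST; with hashing, O(n) on average via a hash map.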

Categories : C++

Find duplicate data in sheet1 and if it is found in sheet2 then it has to highlight with some specific design(Color)
ws1 and ws2 are declared as Workbooks, but they should be declared as Worksheets. Change the first line of your code to:

    Dim ws1 As Worksheet, ws2 As Worksheet

Furthermore, you have to remove the double quotes when referring to the worksheets:

    Set ws1 = Sheets(s1)
    Set ws2 = Sheets(s2)

Categories : Vba

Python : How to find duplicates in a list and update these duplicate items by renaming them with a progressive letter added
Considering the list is sorted, this will modify the list in place. If the list is not sorted, you can sort it first using lis.sort():

    >>> from string import ascii_uppercase
    >>> from itertools import groupby
    >>> from collections import Counter
    >>> lis = ['T1', 'T2', 'T2', 'T2', 'T2', 'T3', 'T3']
    >>> c = Counter(lis)
    >>> for k, v in groupby(enumerate(lis), key=lambda x: x[1]):
    ...     l = list(v)
    ...     if c[k] > 1:
    ...         for x, y in zip(l, ascii_uppercase):
    ...             lis[x[0]] = x[1] + y
    ...
    >>> lis
    ['T1', 'T2A', 'T2B', 'T2C', 'T2D', 'T3A', 'T3B']

Categories : Python

Using Regex to Find and Replace
Your example code has an extra '(' that does not belong. This is what the line should be:

    new_line = re.sub(r'\d{4}\+\d{2}', FeetFramesToTimecode(found.group()), line)

This produced output like you want for me.

Categories : Python

Regex Help to find matches
Depending on the regex implementation:

    '!<([^>]+)>!'

or without delimiters:

    '<([^>]+)>'

With lookarounds:

    '!(?<=<)[^>]+(?=>)!'

Categories : C#

Why can't Eclipse find regex on Mac OSX?
On Mac you have to use the flag -stdlib=libc++, and even then I believe generally only clang is updated enough (so use clang instead of gcc), if you've just been using the Xcode updates. You should also make sure that your Xcode command line tools are updated, because I would guess that is the compiler Eclipse is using.

Categories : Regex

Regex to find non matches
You can check that with Notepad++:

    search: <a class="web" type="fig(\d+)">Fig (?!\1)\d+</a>

And you can do a replace-all:

    search:  (<a class="web" type="fig)(\d+)(">Fig (?!\2)(\d+)</a>)
    replace: $1$4$3

Or you can do a blind search/replace that replaces the attribute with the content in all cases:

    search:  (<a class="web" type="fig)\d+(">Fig (\d+))
    replace: $1$3$2

Categories : Regex

How to `find` with `-regex` in shell on a Mac
You're confusing regexps and shell globbing. Replace the * with .* (or better, with \d{2}, i.e. two digits), the . with \. (as . means any character), and so on. Reading some regexp documentation would be a good idea.

Categories : Shell

RegEx find and replace
Try using:

    ^\(\d+,$

for the find, and nothing for the replace.

EDIT: As per the update, try using this:

    (?<=\()\d+,\s*

and replace with nothing.

Categories : Regex

find and replace with regex C#
You know that in |d| the | means "or"? Try escaping it:

    @"\|d\|"

Your regular expression was equivalent to "" (the empty string). This is because you must read it as (match of empty string) or (match of letter d) or (match of empty string), always matched from left to right, stopping at the first positive match. As we will see, the first alternative (match of empty string) is positive everywhere, so the parser never even gets to the other alternatives. Now, the regex parser will try to match your regex at the start of the string. It is at "/" (the first character): does it match? Is "" part of "/"? Yes (at least in the "mind" of the regex parser). Now the funny thing is that Regex.Replace will replace the matched part of the string with 17... but the matched part of "/" is "", so it will replace an imaginary...

Categories : C#


