Excel 2010 comparing multiple columns (2 columns to 2 other columns)
Here is a User Defined Function that performs a two-column lookup. Treat it like a VLOOKUP. LookupPair is a pair of cells, (C2:D2) in your example; anything other than a pair of side-by-side cells will cause an error. LookupRange is the columns that include both the matching pair columns and the return column, something like (I1:K101) in your example; LookupRange must contain at least two columns or an error is generated. ReturnCol is the column number in LookupRange that contains the value to be returned (the same as Col_index_num in a VLOOKUP). The function only does exact matches.

```vba
Function DoubleColMatch(LookupPair As Range, LookupRange As Range, ReturnCol As Integer) As Variant
    Dim ReturnVal As Variant
    Dim Col1Val As Variant
    Dim Col2Val As Variant
    Dim x As Long
    If LookupPair.R
```

Categories : Excel

Matching data from two csv files based on two columns and creating a new csv file with selected columns
This is a quite common question on SO, and my answer is the same: for a medium-term solution, import the files into a database, then perform a query using a JOIN. Try a search: http://stackoverflow.com/search?q=combining+csv+python
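The import-then-JOIN approach the answer suggests can be sketched with the standard library alone; the table and column names (a, b, id, name, score) and the in-memory CSVs are hypothetical stand-ins for real files.

```python
import csv
import io
import sqlite3

# Hypothetical in-memory CSVs sharing an "id" column; swap in open("file.csv") for real files.
csv_a = io.StringIO("id,name\n1,alice\n2,bob\n")
csv_b = io.StringIO("id,score\n1,90\n2,85\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE b (id INTEGER, score INTEGER)")

reader_a = csv.reader(csv_a)
next(reader_a)  # skip the header row
conn.executemany("INSERT INTO a VALUES (?, ?)", reader_a)
reader_b = csv.reader(csv_b)
next(reader_b)
conn.executemany("INSERT INTO b VALUES (?, ?)", reader_b)

# JOIN on the shared column and select only the columns the merged file should keep.
rows = conn.execute(
    "SELECT a.id, a.name, b.score FROM a JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()
print(rows)  # [(1, 'alice', 90), (2, 'bob', 85)]
```

From here, writing the joined rows back out with csv.writer gives the new CSV.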

Categories : Python

Ruby on rails How to remove added columns and insert new columns through a migration file
You should put your remove / add column changes in separate migration files.

```ruby
class FooMigration < ActiveRecord::Migration
  def down
    remove_column :account_number, :transaction_reference_number, :information_to_account_owner
  end

  def up
    add_column :mt940_batches, :created_by, :updated_by, :integer
  end
end
```

Please note that your up and down methods should be idempotent: you should be able to go from one to the other when calling rake db:migrate:down and rake db:migrate:up. That is not the case here. It seems that you want to achieve two different things in a single migration; if you want to add AND remove columns, consider moving each one into a different migration file. Please read here for more details. You would end up with two migration files like this:

```ruby
class Rem
```

Categories : Ruby On Rails

Mysql query to dynamically convert rows to columns on the basis of two columns
If you had a known number of values for both order and item, then you could hard-code the query:

```sql
select id,
       max(case when `order` = 1 then data end) order1,
       max(case when `order` = 2 then data end) order2,
       max(case when `order` = 3 then data end) order3,
       max(case when item = 1 then price end) item1,
       max(case when item = 2 then price end) item2,
       max(case when item = 3 then price end) item3,
       max(case when item = 4 then price end) item4
from tableA
group by id;
```

See Demo. But part of the problem you are going to have is that you are trying to transform multiple columns of data. My suggestion for getting the final result would be to unpivot the data first. MySQL does not have an unpivot function, but you can use a UNION ALL to convert the multiple pairs of columns into
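The MAX(CASE ...) pattern above (conditional aggregation) can be tried out against SQLite from Python; the table and values here are hypothetical, just enough to show rows turning into columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE tableA (id INTEGER, "order" INTEGER, data TEXT)')
conn.executemany('INSERT INTO tableA VALUES (?, ?, ?)',
                 [(1, 1, 'a'), (1, 2, 'b'), (2, 1, 'c')])

# One MAX(CASE ...) per known value turns rows into columns.
rows = conn.execute('''
    SELECT id,
           MAX(CASE WHEN "order" = 1 THEN data END) AS order1,
           MAX(CASE WHEN "order" = 2 THEN data END) AS order2
    FROM tableA
    GROUP BY id
    ORDER BY id
''').fetchall()
print(rows)  # [(1, 'a', 'b'), (2, 'c', None)]
```

Missing combinations come back as NULL (None), which is why the hard-coded variant needs a branch per known value.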

Categories : Mysql

MySQL: get distinct values from two columns and check whether they occur in two other columns of the same table
It is quite easy: you only need to check whether a child has become a parent. Because each family has two children, you need to do it twice (once for child one and once for child two), and you get the list of all children who became parents. Once you have them in the list, you only need to count them; no need to complicate what is easy ;)

```sql
select count(*) from (
    select child1 from fam
    where child1 in (select parent1 from fam union select parent2 from fam)
    union
    select child2 from fam
    where child2 in (select parent1 from fam union select parent2 from fam)
) t;
```

Note that MySQL requires the derived table to have an alias (t here), and that count (*) with a space before the parenthesis is a syntax error unless the IGNORE_SPACE SQL mode is set.
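The same query runs unchanged against SQLite, which makes it easy to sanity-check; the family data below is made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fam (parent1 TEXT, parent2 TEXT, child1 TEXT, child2 TEXT)")
conn.executemany("INSERT INTO fam VALUES (?, ?, ?, ?)", [
    ("ann", "bob", "cal", "dee"),   # cal and dee are children here...
    ("cal", "eve", "fay", "gus"),   # ...and cal appears again as a parent
])

(n,) = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT child1 FROM fam
        WHERE child1 IN (SELECT parent1 FROM fam UNION SELECT parent2 FROM fam)
        UNION
        SELECT child2 FROM fam
        WHERE child2 IN (SELECT parent1 FROM fam UNION SELECT parent2 FROM fam)
    )
""").fetchone()
print(n)  # 1 -- only cal shows up both as a child and as a parent
```

The outer UNION also deduplicates, so a child appearing in both child columns is counted once.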

Categories : Mysql

How to restrict other columns data getting updated while updating specific columns of a table
In your code you're merging a new object, with all fields null except remarks, into an object already persistent in the DB. That's not the right way. From the Hibernate reference manual: "A straightforward way to update the state of an entity instance is to find() it, and then manipulate it directly, while the persistence context is open" (Modifying persistent objects). If you need to modify a persistent object, first load it, then edit only the fields you need to update, and then save it. So you could just load the Project entity from the DB, update only the fields you need, and then save() it. The save() actually isn't necessary, as Hibernate detects that the object has been modified and saves it automatically.

```java
@Test
public void updateProject() {
    try {
```

Categories : Hibernate

Excel VBA: hide/unhide columns in day-over-day tracker, skipping weekend columns
If your data has a row of dates, you can indeed use Weekday(Cells(r, c)) to determine whether it's a weekend, then use Select Case to hide/unhide columns. The current date can be retrieved with Date. Then put the code in Sub Workbook_Open() in the ThisWorkbook object, so that when the file is opened it runs the code and today plus the 9 previous days are not hidden. EDIT: Add these two subs to the "ThisWorkbook" object, changing the sheet name, row and column as appropriate. If you want the 9 previous weekdays instead, you will have to change the Case or use a different approach to determine whether a column should be hidden. This is as far as I will go. Good luck!

```vba
Sub Workbook_Open()
    ShowTodayPlusPrevious
End Sub

Private Sub ShowTodayPlusPrevious()
    ' Assuming Row 1 contains the dates, stating from co
```

Categories : Vba

InvalidRequestException(why:No indexed columns present in by-columns clause with "equals" operator)
Yes... the reason you get the exception is that the newly added column is not indexed, so it cannot be used with equality or similar operations. When altering the column family you have to write, e.g.:

```
column_metadata = [{column_name : 'username', validation_class : UTF8Type, index_name : 'username_idx', index_type : 0}]
```

So your final alter query would look like this:

```
UPDATE COLUMN FAMILY your_CF_name
with column_type = 'Standard'
and comparator = 'UTF8Type'
and default_validation_class = 'UTF8Type'
and key_validation_class = 'UTF8Type'
and column_metadata = [
    {column_name : 'your_all_existing_columns', validation_class : your_all_existing_validation_class, index_name : 'user_defined_name', index_type : 0},
    .
    .
    .
    {
```

Categories : Java

How to retrieve specific columns rather than auto generate all columns or attributes from Entity
You are telling JPA to fetch Project and its referenced Employees, so it needs to return complete entities otherwise it corrupts the context. I don't know how/if Hibernate supports lazy basic mappings, but that might be one option - mark the attributes in Employee that you don't always want as lazy. This will affect both requestedBy and empNumber relationships equally though. If you do not want complete Employee data, you might have the query return only the data you want. Leave the query to fetch the empNumber but make the requestedBy relationship lazy, and have the query return Project and the Project_.requestedBy data that you want. Linda has a great example of using multiselect to return data at https://blogs.oracle.com/ldemichiel/entry/the_typing_of_criteria_queries .

Categories : Hibernate

Comparing two csv files using given columns and build a third one using specific columns from the matching lines
I would read both csv files into lists of lists so that you have csv1 and csv2. Then, to loop over all of them:

```python
for e1 in csv1:
    for e2 in csv2:
        # using a function call to your distance formula
        distance = d(e1[0], e1[1], e2[0], e2[1])
```

To save the results you might use a dictionary, so that you can output them later in a simple fashion. When saving a new entry:

```python
output_dict[(e1[0], e1[1])] = [e1[3], e2[3]]
```
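Putting the pieces together, here is a minimal self-contained sketch. The row layout, the 5.0-unit threshold, and the Euclidean distance function d are all assumptions standing in for the asker's real data and formula.

```python
import math

# Hypothetical rows: the first two fields are coordinates, the fourth an id/label.
csv1 = [[0.0, 0.0, "x", "A1"], [3.0, 4.0, "x", "A2"]]
csv2 = [[0.0, 0.0, "y", "B1"], [6.0, 8.0, "y", "B2"]]

def d(x1, y1, x2, y2):
    """Plain Euclidean distance; substitute your own formula."""
    return math.hypot(x2 - x1, y2 - y1)

# Pair up rows whose points lie within the threshold of each other,
# keyed by the coordinates of the row from the first file.
output_dict = {}
for e1 in csv1:
    for e2 in csv2:
        if d(e1[0], e1[1], e2[0], e2[1]) <= 4.0:
            output_dict[(e1[0], e1[1])] = [e1[3], e2[3]]

print(output_dict)  # {(0.0, 0.0): ['A1', 'B1']}
```

Note that a plain dict keeps only the last match per key; collect into a list of lists instead if one row can match several.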

Categories : Python

Convert rows in table to columns for displaying into Header columns for Gridview
Assuming you are working with MS SQL server you need to use the PIVOT keyword. Link for the same: PIVOT AND UNPIVOT

Categories : C#

VBA. Comparing values of columns names in two separate worksheets, copy nonmatching columns to third worksheet
If you are going to have this code within the same workbook every time, try using ThisWorkbook.Sheets("Top_Bottom") instead of Workbooks("Complete_Last_Six_Months_Q_Results.xlsx").Sheets("Top_Bottom") replicate that through your code and see if that fixes the problem.

Categories : Excel

Does columns order in multiple columns unique constraint make any difference? Is it justifiable to have duplicate indexes?
Your question is: would it be justified, in certain situations, to create two indexes, one multiple-column index on (index_id, index_date) and a second single-column index on (index_date)? The answer is "yes". The first index will be used to satisfy queries with conditions like:

- filtering on index_id in the where clause
- filtering on index_id and index_date in the where clause
- filtering on index_id in the where clause and ordering by index_date

The second index would not be used in these circumstances. It would be used for:

- filtering on index_date in the where clause

And the first index would not be used in this case. The ordering of columns in indexes is important: they are used from left to right. So these two indexes are useful. However, a third ind

Categories : SQL

How do I combine two boolean columns into one varchar column in a SELECT query?
Try this:

```sql
select CASE
         WHEN master = 0 and edition = 0 THEN 'u'
         WHEN master = 1 and edition = 0 THEN 'm'
         WHEN master = 0 and edition = 1 THEN 'e'
         ELSE '???' -- when either are one???
       END
from myTable
```
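The CASE mapping can be verified against SQLite in a few lines; the three sample rows are hypothetical, one per expected label.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (master INTEGER, edition INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?, ?)", [(0, 0), (1, 0), (0, 1)])

# Each (master, edition) combination maps to a single character.
rows = conn.execute("""
    SELECT CASE
             WHEN master = 0 AND edition = 0 THEN 'u'
             WHEN master = 1 AND edition = 0 THEN 'm'
             WHEN master = 0 AND edition = 1 THEN 'e'
             ELSE '???'
           END
    FROM myTable
""").fetchall()
print([r[0] for r in rows])  # ['u', 'm', 'e']
```

The ELSE arm catches the (1, 1) combination the original answer left open.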

Categories : Mysql

read two columns from a text file and sort both columns in java
You can go for this approach: create a List<Item> where Item is a type containing the two column values (x1 and x2). Then write a compareTo(Item o) which compares the x1 values of the two Item objects and returns the result, so that sorting the list orders by the first column:

```java
public class Item implements Comparable<Item> {
    private Integer int1;
    private Integer int2;

    @Override
    public int compareTo(Item o) {
        // compareTo must return an int, not a boolean
        return int1.compareTo(o.int1);
    }
}
```

Hope this helps.

Categories : Java

After converting MySQL key columns to a FOREIGN Columns the site slowed down
The foreign key constraint means that any insert to the FK column has to check if the value exists in the referenced column of users. There could be some overhead to this, but it's an index lookup by definition (probably a PK lookup) so the cost shouldn't be high. Foreign keys also create a shared lock on the parent table during some updates on the child table. This can get in the way of concurrent updates against that table, and make it seem like the system has slower performance. See http://www.mysqlperformanceblog.com/2006/12/12/innodb-locking-and-foreign-keys/ The foreign key also implicitly created an index on the FK column, if no index already exists. Every insert, update, delete has to modify all the indexes of a table at the time of the change, so there is a bit of overhead.

Categories : Mysql

transpose a subset columns in dataframe (not groupby, need to create new columns)
Although this feels a little hacky, you could use a groupby:

```python
In [11]: df
Out[11]:
   site_index state
0           1     a
1           1     b
2           1     a
3           2     a
4           2     a
5           2     b

In [12]: g = df.groupby('site_index')

In [13]: g.apply(lambda x: x.state.reset_index(drop=True).T)
Out[13]:
            0  1  2
site_index
1           a  b  a
2           a  a  b
```

This may also be possible using unstack...

Categories : Python

How to select columns and sum of columns using group by keyword from data table in c#
```csharp
DtTest
    .AsEnumerable()
    .GroupBy(x => new
    {
        BNO = x.Field<int>("BNO"),
        INO = x.Field<int>("INO"),
        Desp = x.Field<string>("Desp"),
        Rate = x.Field<decimal>("Rate")
    })
    .Select(x => new
    {
        x.Key.BNO,
        x.Key.INO,
        x.Key.Desp,
        Qty = x.Sum(z => z.Field<int>("Qty")),
        x.Key.Rate,
        Amount = x.Sum(z => z.Field<decimal>("Amount"))
    });
```

Categories : C#

Turn vector output into columns in data.table along with other columns?
Try this:

```r
featuresDT <- quote(cbind(list(x = mean(X), y = mean(Y), z = mean(Z)),
                          as.data.table(t(quantile(X)))))
```

Categories : R

aggregation: sum columns by ID and average columns by id (gender and location unchanged)
You can get the result by using the aggregate functions sum() and avg():

```sql
select id,
       sum(cash) SumCash,
       sum(charge) sumCharge,
       sum(total) sumTotal,
       avg(proportion) avgProportion
from yt
group by id;
```

See SQL Fiddle with Demo. Edit: with the new columns that you added you can still get the result by using the aggregate functions; you will just need to include the gender and location columns in the GROUP BY clause:

```sql
select id,
       sum(cash) SumCash,
       sum(charge) sumCharge,
       sum(total) sumTotal,
       avg(proportion) avgProportion,
       gender,
       location
from yt
group by id, gender, location;
```

See SQL Fiddle with Demo.
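Mixing SUM and AVG in one GROUP BY query works the same in SQLite, so the pattern can be checked from Python; the yt rows below are invented sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yt (id INTEGER, cash REAL, charge REAL, total REAL, proportion REAL)")
conn.executemany("INSERT INTO yt VALUES (?, ?, ?, ?, ?)", [
    (1, 10.0, 5.0, 15.0, 0.5),
    (1, 20.0, 5.0, 25.0, 0.7),
    (2, 30.0, 0.0, 30.0, 1.0),
])

# Different aggregates can be mixed freely in the same SELECT list.
rows = conn.execute("""
    SELECT id, SUM(cash), SUM(charge), SUM(total), AVG(proportion)
    FROM yt
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)
```

For id 1 the sums are 30.0, 10.0, 40.0 and the average proportion is 0.6; id 2 passes through unchanged.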

Categories : Sql Server

Comparing Two Columns with Two Different Columns in the Same Sheet and Highlighting Which Is Not Matched
Select columns C, D, H, and I (by using Ctrl) and then apply this conditional format formula:

```
=AND($C1<>"",$D1<>"",$H1<>"",$I1<>"",NOT(AND(SUM(COUNTIF($H1,"*"&TRIM(MID(SUBSTITUTE($C1," ",REPT(" ",255)),255*(ROW(INDIRECT("1:"&LEN($C1)-LEN(SUBSTITUTE($C1," ",""))+1))-1)+1,255))&"*"))>0,SUM(COUNTIF($I1,"*"&TRIM(MID(SUBSTITUTE($D1," ",REPT(" ",255)),255*(ROW(INDIRECT("1:"&LEN($D1)-LEN(SUBSTITUTE($D1," ",""))+1))-1)+1,255))&"*"))>0)))
```

Example workbook here: https://docs.google.com/file/d/0Bz-nM5djZBWYX0EwMk1GN1NjMmc/edit?usp=sharing

Categories : Excel

copy columns and rows based on two criteria in two columns
Although I am not sure I understand your question, you can filter on columns H and AC by changing the field references in the two Autofilter statements to Field:=8 and Field:=29, respectively.

Categories : Excel

How to select only specific columns from a DataFrame with MultiIndex columns?
You can use either loc or ix; I'll show an example with loc:

```python
data.loc[:, [('one', 'a'), ('one', 'c'), ('two', 'a'), ('two', 'c')]]
```

When you have a MultiIndexed DataFrame and you want to filter out only some of the columns, you have to pass a list of tuples that match those columns. So the itertools approach was pretty much OK, but you don't have to create a new MultiIndex:

```python
data.loc[:, list(itertools.product(['one', 'two'], ['a', 'c']))]
```
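The itertools.product call is what builds the tuple list; it can be inspected on its own without pandas installed:

```python
import itertools

# The list of (level0, level1) tuples that selects the MultiIndex columns.
cols = list(itertools.product(['one', 'two'], ['a', 'c']))
print(cols)  # [('one', 'a'), ('one', 'c'), ('two', 'a'), ('two', 'c')]

# With pandas available, this list is what gets passed to the selector:
#     data.loc[:, cols]
```

Because product preserves the order of its inputs, the columns come back grouped by the first level, matching the hand-written list in the answer.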

Categories : Python

Mapping a few numerical columns into a new columns of tuples in Pandas
I don't recommend this, but you can force it:

```python
In [11]: df2.apply(lambda row: pd.Series([(row[0], row[1])]), axis=1)
Out[11]:
         0
0  (10, 2)
1  (10, 1)
2  (20, 2)
```

Please don't do this. Two columns will give you much better performance, flexibility and ease of later analysis. Just to update with the OP's experience: what was wanted was to count the occurrences of each [0, 1] pair. In a Series they could use the value_counts method (with the column from the above result). However, the same result could be achieved using groupby, which was found to be 300 times faster (for the OP):

```python
df2.groupby([0, 1]).size()
```

It's worth emphasising (again) that [11] has to create a Series object and a tuple instance for each row, which is a huge overhead compared to that of groupby.
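The pair-counting goal has a tiny pure-Python analogue that makes the point without pandas; the four sample rows are hypothetical.

```python
from collections import Counter

# Hypothetical two-column data; each inner list is one row of df2.
rows = [[10, 2], [10, 1], [20, 2], [10, 2]]

# Counting occurrences of each pair directly, the pure-Python analogue of
# df2.groupby([0, 1]).size(): no tuple-valued column is ever materialised
# as a separate data structure per row beyond the dict keys themselves.
counts = Counter((a, b) for a, b in rows)
print(counts[(10, 2)])  # 2
```

The same idea is why groupby wins: it aggregates in one pass instead of constructing a Series per row.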

Categories : Python

merging data of different columns into specific columns in oracle
If you are willing to have them in one column rather than separate columns, then the code is pretty easy:

```sql
select name, number,
       ((case when reg_type1 = 'Y' then 'reg_type1 ' else '' end) ||
        (case when reg_type2 = 'Y' then 'reg_type2 ' else '' end) ||
        (case when reg_type3 = 'Y' then 'reg_type3 ' else '' end) ||
        (case when reg_type4 = 'Y' then 'reg_type4 ' else '' end) ||
        (case when reg_type5 = 'Y' then 'reg_type5 ' else '' end) ||
        (case when reg_type6 = 'Y' then 'reg_type6 ' else '' end) ||
        (case when reg_type7 = 'Y' then 'reg_type7 ' else '' end)
       )
from t;
```

If you really want them in separate columns, you can do something like this:

```sql
select name, number,
       substr(regtypes, 1, 10) as level1,
       substr(regtypes, 11, 10)
```

Categories : SQL

update two columns failures - failed to set two columns to null
There are any number of reasons why that might fail. col3 may have a NOT NULL constraint (a), col2+col3 may be the composite primary key, there may be a trigger on the table that disallows both being NULL and so on. Short of seeing all the database setup (table definition, triggers and so on) and the actual error you're getting when you try, it's a little hard to be definitive. (a) Keeping in mind that the statement is atomic - either both will be set to NULL or neither will be changed, there is no halfway state possible in a proper transactional database.

Categories : SQL

Operating on selected columns of dataframe in R, without affecting other columns
Just use [<-; it is vectorised. E.g.:

```r
set.seed(123)
df <- data.frame(V1 = sample(5), V2 = sample(5), V3 = sample(5), V4 = sample(5))

  V1 V2 V3 V4
1  2  1  5  5
2  4  3  2  1
3  5  4  3  4
4  3  2  4  3
5  1  5  1  2

df[, c(1, 4)] <- df[, c(1, 4)] + 10

  V1 V2 V3 V4
1 12  1  5 15
2 14  3  2 11
3 15  4  3 14
4 13  2  4 13
5 11  5  1 12
```

Using column numbers is generally thought of as bad practice. What if the ordering changes in future file versions? Better to use names, e.g. c("V1", "V4"); then the ordering does not matter.

Categories : R

Merge common columns in two tables into a single set of columns
You LEFT JOIN both tables and then use CASE on Type to select from the appropriate table. Here is a sample for a few columns; you can do it for all the columns you need. (The invoice alias is written inv here because IN is a reserved word and cannot be used as a table alias.)

```sql
SELECT c.ID, c.Name, c.Type, inv.Month, inv.[Total Amount],
       CASE WHEN c.Type = 'ITS' THEN its.Price ELSE tp.Price END AS Price, -- construct for common columns
       CASE c.Type WHEN 'ITS' THEN its.Volume WHEN 'TP' THEN tp.Volume ELSE NULL END AS Volume, -- or something like this
       CASE WHEN c.Type = 'ITS' THEN COALESCE(its.Description, dits.Name) ELSE dtp.Name END AS Description -- this for description
FROM Customer c
LEFT JOIN Invoice inv ON inv.CustomerID = c.ID
LEFT JOIN InvoiceItemITS its ON its.InvoiceID = inv.ID
LEFT JOIN InvocieItemTP tp ON tp.InvoiceID = inv.ID
LEFT JOIN [Descr
```

Categories : SQL

R - Summing columns based on multiple other factor columns
First, I'd convert the data frame to a long form with three columns: value, location, case. case indicates which case (e.g. row) the data came from; order doesn't matter. Your data frame will look something like:

```
Value  Loc    Case
20     East   1
20     South  2
...
10     East   1
```

and so forth. One way to do that is to stack your values and locations, and then manually (and easily) add case numbers. Suppose your original dataframe is called df, and has values in columns 2, 4 and locations in columns 3, 5:

```r
v.col  = stack(df[, c(2, 4)])[, 1]
v.loc  = stack(df[, c(3, 5)])[, 1]
v.case = rep(1:nrow(df), 2)
long.data = data.frame(v.col, v.loc, v.case)  # not actually needed, but just so you can view it
```

Now use tapply to create the columns you need:

```r
s = tapply
```

Categories : R

How to convert the query result from multiple columns to two columns?
Try this for your query (note there must be no semicolon before a UNION; a semicolon ends the whole statement):

```sql
SELECT 'Success' as transactionType, count(*) from user_transaction where transaction_status = 'success'
UNION
SELECT 'In Process' as transactionType, count(*) from user_transaction where transaction_status = 'inprocess'
UNION
SELECT 'Fail' as transactionType, count(*) from user_transaction where transaction_status = 'fail'
UNION
SELECT 'Cancelled' as transactionType, count(*) from user_transaction where transaction_status = 'Cancelled';
```

This uses the union operator to combine the result sets from multiple queries into a single result set.
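A trimmed version of the same union-of-counts query can be run against SQLite; the statuses and row counts below are hypothetical, and UNION ALL is used since the labelled rows can never be duplicates of each other.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_transaction (transaction_status TEXT)")
conn.executemany("INSERT INTO user_transaction VALUES (?)",
                 [("success",), ("success",), ("fail",), ("inprocess",)])

# One labelled COUNT(*) per status, glued together into a single result set.
rows = conn.execute("""
    SELECT 'Success' AS transactionType, COUNT(*) FROM user_transaction
        WHERE transaction_status = 'success'
    UNION ALL
    SELECT 'In Process', COUNT(*) FROM user_transaction
        WHERE transaction_status = 'inprocess'
    UNION ALL
    SELECT 'Fail', COUNT(*) FROM user_transaction
        WHERE transaction_status = 'fail'
""").fetchall()
print(rows)
```

A GROUP BY transaction_status would give the same counts in one scan, but it cannot relabel or include zero-count statuses the way the UNION form can.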

Categories : PHP

Sticky header columns do not match table columns
This solved my question:

```css
/* apply a natural box layout model to all elements */
*, *:before, *:after {
    -moz-box-sizing: border-box;
    -webkit-box-sizing: border-box;
    box-sizing: border-box;
}
```

Categories : Javascript

Hibernate set index on multiple columns while one of the columns is the ID
See these posts (googled "java hibernate jpa index"):

- JPA: defining an index column
- Specifying an index (non unique key) using JPA
- http://www.objectdb.com/java/jpa/entity/index

Then see this (googled "java hibernate jpa index composite"):

- How to define index by several columns in hibernate entity?

Categories : Java

Select Columns IF condition else return as columns
No, you cannot change the output columns in the middle of a query. One option is to return ALL columns and fill them based on your conditions (note that each CASE expression needs an END before the column alias):

```sql
select CLP.Segment_GUID as CLPSegmentGuid,
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM102 ELSE NULL END as [INSURED_Entity_Type_Qualifier],
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM103 ELSE NULL END as [INSURED_Entity_Last_Name],
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM104 ELSE NULL END as [INSURED_Entity_First_Name],
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM105 ELSE NULL END as [INSURED_Entity_Middle_Name],
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM108 ELSE NULL END as [INSURED_Entity_Identification_Code_Type],
       CASE WHEN NM1.nm101 = 'IL' THEN NM1.NM109 ELSE NULL END as [INSURED_Entity_Identification_Code],
       CASE WHEN NM1.nm101 = '74' THEN NM1.
```

Categories : SQL

Binning and Naming New Columns with Mean of Binned Columns
```r
colnames <- c("599.773", "599.781", "599.789", "599.797", "599.804", "599.812", "599.82", "599.828")
mat <- matrix(scan(), nrow = 4, byrow = TRUE)
0 0 0 0 0 2 1 4
0 0 0 0 0 1 0 3
0 0 0 0 2 1 0 1
3 0 0 0 3 1 0 0

colnames(mat) = colnames
rownames(mat) = LETTERS[1:4]

sRows <- function(mat, cols) rowSums(mat[, cols])
sapply(1:(dim(mat)[2]/4), function(base) sRows(mat, base:(base+4)))
  [,1] [,2]
A    0    2
B    0    1
C    2    3
D    6    4

accum <- sapply(1:(dim(mat)[2]/4), function(base) sRows(mat, base:(base+4)))
colnames(accum) <- sapply(1:(dim(mat)[2]/4),
                          function(base) mean(as.numeric(colnames(mat)[base:(base+4)])))
accum
#-------
  599.7888 599.7966
A        0
```

Categories : R

subtracting two columns after first adding two columns together
It looks like Dreamweaver is somehow rewriting your query, appending this WHERE 0 = 1. If it is appended directly after the SELECT clause, that is obviously an SQL syntax error. Assuming your is_dead column has only values 0 and 1, you can do the maths a little more simply and include a FROM clause, which should lead Dreamweaver to produce correct syntax:

```sql
SELECT SUM(is_dead) - (SUM(bandit_kills) + SUM(survivor_kills))
FROM survivor
```

Still, this is no explanation of why WHERE 0 = 1 is appended. See http://sqlfiddle.com/#!2/336f4/2 for playing around.
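The aggregate arithmetic itself is easy to check against SQLite; the three survivor rows below are made-up sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survivor (is_dead INTEGER, bandit_kills INTEGER, survivor_kills INTEGER)")
conn.executemany("INSERT INTO survivor VALUES (?, ?, ?)",
                 [(1, 0, 0), (0, 1, 0), (1, 0, 1)])

# SUM over a 0/1 flag counts the deaths; the kill sums are subtracted from it.
(diff,) = conn.execute(
    "SELECT SUM(is_dead) - (SUM(bandit_kills) + SUM(survivor_kills)) FROM survivor"
).fetchone()
print(diff)  # 2 - (1 + 1) = 0
```

Summing a 0/1 flag is the usual trick for counting matching rows inside arithmetic like this.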

Categories : Mysql

Melt the data frame with 4 columns to three columns
I just tried it using the reshape2 package and I get the same error. The problem seems to be recycling; that is, columns of class POSIXlt/POSIXt don't seem to recycle. Let me explain in detail. First type reshape2:::melt.data.frame to have a look at this function. Now type:

```r
debugonce(reshape2:::melt.data.frame)
melt(newdata, id.var = "date")
```

You'll be in debug mode. Keep hitting enter until you see this output:

```
debug: df <- data.frame(ids, variable, value, stringsAsFactors = FALSE)
Browse[2]>
```

Before you hit enter, check what ids looks like: it's a data.frame with one column that's exactly the same as your first column (date), i.e. its dimensions are 336*1. Now, if you hit enter one more time, you'll see the error message you posted: Error in data.frame(ids,

Categories : R

How to split a cell into columns so they have the same width as columns below them in a row
It is very easy. All you need to do is use the rowspan attribute on the first th and add another row to the table. Like so: http://jsfiddle.net/skip405/CPSs9/1/

Categories : HTML

Easier way to convert Access Columns to SQL Columns
You are using the appropriate tools, so performance for these operations should be as designed by MS. There is a "not so good / not advised" trick you can try: implicit conversion by SQL Server. If you know that there will be no truncation when a column is shortened, then there will be no error; no error means you can remove that conversion. SSIS will complain with a warning, but will proceed. The same goes for the Unicode transformation: if you know that there are no Unicode symbols in Access, remove that conversion. SSIS will flag it with a warning as well, but will keep running until it encounters a Unicode character. To speed things up you can install SSIS on the same machine as SQL Server, although some DBAs do not like managing complex installations and prohibit this. Copy the Access db to the same machine as SSI

Categories : Sql Server

SqlBulkCopy Column Mappings 500 Columns Plus New Columns
I ended up using this and it works great:

```csharp
// SqlBulkCopy does a blind insert into the db table.
// We have to add a column mapping property to tell
// SqlBulkCopy how to map each column from source to destination correctly.
foreach (string sColumn in columnNamesNew)
{
    string sNewColumn = sColumn.Replace(' ', '_');
    bulkcopy.ColumnMappings.Add(sColumn, sNewColumn);
}
```

Categories : SQL

Getting columns from join columns with EF query
You definitely should post what you've tried before asking the question... It should look like this:

```csharp
var results = from j1 in context.users
              join j2 in context.points on j1.UserId equals j2.UserId
              join j3 in context.addresses on j1.UserId equals j3.UserId
              select new { j1.Username, j2.points, j3.address1 };
```

It will give you a collection of anonymous objects with your 3 column values.

Categories : C#



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.