In Django, how do I make a model that queries my legacy SQL Server and Oracle databases and sends that to the view?
You can query different databases in your ORM calls by leveraging the using() queryset method: https://docs.djangoproject.com/en/1.5/ref/models/querysets/#using This allows you to set up as many database definitions as you need in settings.py, then specify which DB to query at the view level. That way, you wouldn't have to change your model definition should you decide to consolidate your databases, etc.
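A minimal sketch of the idea, assuming hypothetical database aliases (legacy_mssql, legacy_oracle) and a made-up Invoice model; the engine names and connection details are illustrative assumptions, not from the question:

```python
# settings.py -- each legacy database gets its own alias (details assumed)
DATABASES = {
    "default": {"ENGINE": "django.db.backends.sqlite3", "NAME": "app.db"},
    "legacy_mssql": {"ENGINE": "sql_server.pyodbc", "NAME": "LegacyERP"},
    "legacy_oracle": {"ENGINE": "django.db.backends.oracle", "NAME": "LEGACY"},
}

# views.py -- the same model class, pointed at a different database per query
from django.shortcuts import render
from myapp.models import Invoice  # hypothetical model

def legacy_report(request):
    rows = Invoice.objects.using("legacy_mssql").filter(paid=False)
    return render(request, "report.html", {"rows": rows})
```

The model itself never names a database, so consolidating later only means changing the alias passed to using().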

Categories : Django

Good Coding Practice with Databases: One connection / query vs one connection / all queries
It is recommended to keep connections 'open' only for as long as they are needed, i.e. as long as the unit of work requires. As you have suggested, when you 'close' them they are not actually closed, but just returned to the pool for re-use by other threads/applications. This applies to any ADO.NET data provider. If your application is single-threaded, then you probably won't notice pooling occurring, but in situations where many threads require data access, the quicker connections are returned to the pool, the quicker they can be re-used by other threads.
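The open-late/close-early pattern and the "close just returns it to the pool" behaviour can be illustrated with a toy pool (Python here for brevity; a stand-alone sketch, not the actual ADO.NET pooling implementation, and all names are made up):

```python
import queue

class ConnectionPool:
    """Toy pool: 'closing' a connection just returns it for re-use."""
    def __init__(self, size):
        # LIFO so the most recently released (warm) connection is handed out next
        self._free = queue.LifoQueue()
        for n in range(size):
            self._free.put(f"conn-{n}")

    def acquire(self):
        return self._free.get()      # blocks if every connection is in use

    def release(self, conn):
        self._free.put(conn)         # returned to the pool, not destroyed

pool = ConnectionPool(size=2)

def run_query(sql):
    conn = pool.acquire()            # open as late as possible
    try:
        return f"{conn}: {sql}"      # stand-in for real execution
    finally:
        pool.release(conn)           # release as soon as the unit of work is done

first = run_query("SELECT 1")   # uses conn-1 (top of the stack)
second = run_query("SELECT 2")  # re-uses the same connection
```

Because each unit of work releases its connection immediately, two sequential queries share one underlying connection instead of tying up two.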

Categories : C#

SQL copy unique records between two equal databases in two separate databases
Try this, referencing a.recipeId and b.recipeId:

    SELECT a.*
    FROM ADatabaseCX.dbo.Recipes AS a
    LEFT JOIN ADatabaseRH.dbo.Recipes AS b ON a.recipeId = b.recipeId
    WHERE b.recipeId IS NULL

Or this would also work, using NOT IN:

    SELECT *
    FROM ADatabaseCX.dbo.Recipes
    WHERE recipeId NOT IN (SELECT recipeId FROM ADatabaseRH.dbo.Recipes)

Categories : SQL

Causes of MySQL error 2014 Cannot execute queries while other unbuffered queries are active
I am hoping for a better answer than the following. While some of these solutions might "fix" the problem, they don't answer the original question regarding what causes this error.

- Set PDO::ATTR_EMULATE_PREPARES => true (I don't wish to do this)
- Set PDO::MYSQL_ATTR_USE_BUFFERED_QUERY (didn't work for me)
- Use PDOStatement::fetchAll() (not always desirable)
- Use $stmt->closeCursor() after each $stmt->fetch() (this mostly worked; however, I still had several cases where it didn't)
- Change the PHP MySQL library from php-mysql to php-mysqlnd (probably what I will do if no better answer turns up)

Categories : PHP

Is it possible to combine MySQL queries to multiple tables into a single query based on the results from one of the queries?
You want to look at MySQL joins. Something like this may do what you're after, but it will almost certainly need debugging! Note that each join needs its own alias:

    SELECT DISTINCT s.ownerid, s.message
    FROM statusupdates s
    LEFT JOIN friends f1 ON ($userid = f1.requestfrom)
    LEFT JOIN friends f2 ON ($userid = f2.requestto)
    ORDER BY s.createddate;

Categories : PHP

externalize hibernate queries or sql queries in properties file in spring
If I understood correctly, this sounds like regular Spring usage. You might have a class like:

    class UserDao {
        String findActiveUsers;

        // ... getters and setters

        public List<User> findActiveUsers() {
            return getCurrentSession().createQuery(findActiveUsers).list();
        }
    }

Your application context would then look like:

    <bean id="userDao" class="my.package.UserDao">
        <property name="findActiveUsers" value="FROM User u WHERE u.active=true"/>
    </bean>

Categories : Java

'PDOException' with message 'SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active
It took a bit of fiddling, but I found that when I removed ATTR_EMULATE_PREPARES=false (the default is to emulate), it worked:

    <?php
    $db = new PDO($cnstring, $user, $pwd);
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    //$db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $db->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);
    $st = $db->query("CREATE TRIGGER `CirclesClosureSync` AFTER INSERT ON Circles
        FOR EACH ROW BEGIN
            INSERT INTO CirclesClosure (ancestor, descendant)
                SELECT ancestor, NEW.ID FROM CirclesClosure WHERE descendant=NEW.Parent;
            INSERT INTO CirclesClosure (ancestor, descendant) VALUES (NEW.ID, NEW.ID);
        END;");
    $st->closeCursor();
    ?>

Hope this helps someone.

Categories : PHP

mssql multiple queries insert queries and results
I think you are talking about the OUTPUT clause of the INSERT statement: http://msdn.microsoft.com/en-us/library/ms177564.aspx

    CREATE TABLE #t (id int identity(1, 1), f1 nvarchar(20)) -- the table that has the identities
    CREATE TABLE #ids (id int)                              -- the table to store the ids inserted into #t

    INSERT INTO #t (f1) OUTPUT INSERTED.id INTO #ids SELECT N'AAAA'
    INSERT INTO #t (f1) OUTPUT INSERTED.id INTO #ids SELECT N'BBBB'
    INSERT INTO #t (f1) OUTPUT INSERTED.id INTO #ids SELECT N'CCCC'

    SELECT * FROM #t
    SELECT * FROM #ids

Another way is to use @@IDENTITY or SCOPE_IDENTITY() (see the SQL Authority link discussing/comparing them):

    CREATE TABLE #t (id int identity(1, 1), f1 nvarchar(20))
    CREATE TABLE #ids (id int)

    INSERT INTO #t (f1) SELECT N'AAAA'
    INSERT INTO #ids SELECT @@ID

Categories : Sql Server

Generate UPDATE queries from results of SELECT queries
The closest thing in pgAdmin is the query tool (see http://www.pgadmin.org/docs/1.16/query.html). This would not take your SELECT statements and turn them into UPDATE queries, but you can build queries graphically if you don't want to parse and concatenate. If this is going to be a big, repetitive task, I would look at writing a Perl script to parse each query and rewrite it as needed. This would require some inside knowledge. It isn't clear what you want to do regarding updating the values, so you'd have to design your solution around that. More likely you would want to write a functional API (a UDF) to do what you want, and then write calls to that, probably not in a config file directly (since it is not clear you can trust that) but through an interface.
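If you do go the scripting route, the core transformation is small enough to sketch. This is Python rather than Perl, with naive quoting for illustration only; real code should escape values properly or go through the UDF interface suggested above (the table and column names are invented):

```python
def rows_to_updates(table, key, rows):
    """Turn SELECTed rows (as dicts) into UPDATE statements."""
    stmts = []
    for row in rows:
        sets = ", ".join(
            f"{col} = '{val}'" for col, val in row.items() if col != key
        )
        stmts.append(f"UPDATE {table} SET {sets} WHERE {key} = '{row[key]}';")
    return stmts

rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
stmts = rows_to_updates("users", "id", rows)
# stmts[0] is: UPDATE users SET name = 'Alice' WHERE id = '1';
```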

Categories : Postgresql

Optimizing sub-queries, making two queries become one
The main thing you seem to be checking is the last name, with a leading % in the LIKE. This renders the index on that column useless, and your SQL searches for it twice. I am not 100% sure what you are trying to do. Your SQL appears to get all the members whose name matches the one required, then get the last registration_history record for those. The one you get could be from any one of the matching members, which seems strange unless you only ever expect a single match. If that is the case, the following minor tidy-up (removing an IN and changing it to a JOIN) may slightly improve things:

    SELECT COALESCE(NULLIF(Registration_History.RegYear, ''), NULLIF(Registration.Year, '')) AS RegYear,
           COALESCE(NULLIF(Registration_History.RegNumber, ''), NULLIF(Registr

Categories : Mysql

Where should I do database queries and why am I not getting the results of the queries?
That's because of the asynchronous nature of Node. Everything that involves networking (DB queries, web services, etc.) is async. Because of this, you should refactor your selectalldate() method to accept a callback. From that callback you'll be able to render your template with the data fetched from the DB. In the end, it'll be something like this:

    exports.selectalldate = function(callback) {
        connection.query('SELECT * FROM date', function (err, rows, fields) {
            if (rows.length > 0) {
                for (i = 0; i < rows.length; i++) {
                    rows[i].date = dateFormat(rows[i].date, "yyyy-mm-dd");
                }
            }
            callback(err, rows);
        });
    }

    app.get('/dashboard', function(req, res) {
        db.selectalldate(function(err, datee) {
            if (err) {
                // Handle the error in s

Categories : Node Js

jax-ws client distributed in a jar
Since you didn't mention any specific app server, I'll just answer from a WAS perspective (since that's what I know and what I guess you're using). For a service client to be configured using Policy Sets and Bindings, the client must be injected (@WebServiceRef annotation) into some container-managed component. This can be a servlet, portlet, EJB, etc. Each application then needs to apply the policy set and binding to its instance of the client. So essentially whoever is using it needs to configure it. You can't just do it once and have everyone share it. At least not how you're trying to. I ran into this problem on a project before. My solution was to use an EJB as the container-managed component that would have the client instance injected into it. I configure the client once for the EJB comp

Categories : Java

How to implement an API for a distributed map in c++?
I'd say something like option 3 is best. You could emulate it using one of the standard smart pointer types introduced in C++11, so you still create a pointer, but the user doesn't have to free it. Something like:

    std::unique_ptr<Person> person = map.get("sample");
    if (person) {
        person->makeMeASandwitch();
    }

Categories : C++

Distributed Testing with Jmeter
I have resolved this issue: I replaced localhost:8080 with computerName:8080 and started the application server via start.bat instead of run.bat. Now it is working fine.

Categories : Testing

How does one run Spring XD in distributed mode?
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis. Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when it has not:

    13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]

Categories : Spring

how does the query get distributed in Hive?
Hive generates a MapReduce job or several MapReduce jobs based on the query that you submit. MapReduce jobs are then distributed by the Hadoop JobTracker according to the algorithms that Hadoop uses to distribute tasks of the MapReduce jobs. Hope this helps.

Categories : Hadoop

Add Gem to Distributed Cache in Hive
The best way I have found to do this is to manually add the gem's files to the distributed cache. Here is an example using the browser Ruby gem. I download and unzip browser-master.zip from GitHub, then add the entire unzipped folder to the distributed cache:

    ADD FILE /home/user/browser-master

In the Ruby script that I am using in Hive, I have to tell Ruby where to find the needed files from the gem:

    $:.push File.expand_path("../browser-master/lib", __FILE__)
    require "browser"

Categories : Ruby

AMQP 1.0 and distributed transactions
Unfortunately you are correct in that the specification for AMQP 1.0 distributed transactions is not complete and consequently there is no support for distributed transactions over AMQP 1.0 in Apache Qpid or Azure Service Bus. Regards, Dave. (Service Bus team)

Categories : Misc

Gradient in css not distributed evenly
Try using this code:

    background: linear-gradient(bottom, #D6D6D6 0%, #FFFFFF 50%);
    background: -o-linear-gradient(bottom, #D6D6D6 0%, #FFFFFF 50%);
    background: -moz-linear-gradient(bottom, #D6D6D6 0%, #FFFFFF 50%);
    background: -webkit-linear-gradient(bottom, #D6D6D6 0%, #FFFFFF 50%);
    background: -ms-linear-gradient(bottom, #D6D6D6 0%, #FFFFFF 50%);

The gradient property works as linear-gradient(gradient starting position, color & offset, color & offset). So in your code the color #D6D6D6 starts at 0% and moves upwards, then the color #FFFFFF starts at 100% because its offset is set to 100% (and it ends there too). To get a consistent flow from one color to the other, you should set the offset of the second color to 50%. Check this link to better understand the CSS gradient property. Regards

Categories : HTML

Efficient distributed counting
My guesses: Cassandra supports counters; I think I saw an incr operation that should work concurrently. Using a free-running counter for your event, you just need to set up something that samples all counters at specified intervals (5 min?), and then you can give estimates between any two samples (http://wiki.apache.org/cassandra/Counters). Cassandra can also time out (expire) a column; I never really used it, but it might be worth a try.
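The sampling idea in the first suggestion comes down to simple arithmetic between two reads of a free-running counter (the numbers below are invented):

```python
def estimated_rate(sample_a, sample_b):
    """Estimate events/second between two (timestamp, counter_value) samples."""
    (t1, c1), (t2, c2) = sample_a, sample_b
    return (c2 - c1) / (t2 - t1)

# Counter read 1000 at t=0s and 1600 at t=300s (a 5-minute interval):
rate = estimated_rate((0, 1000), (300, 1600))  # 2.0 events per second
```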

Categories : Algorithm

Distributed locking mechanism .NET
If you are OK using Windows Azure Storage, then you can use a Windows Azure blob and acquire a lease on it with a timeout (the maximum you can use is 1 minute). The server that acquires the lease can run the task, and it can keep renewing the lease before it expires. Other servers will continue trying to acquire the lease, in case the leader process dies. How often you poll depends on how critical it is for other processes to resume immediately, but polling can add some extra cost (which might not be marginal if it is too frequent, depending on how many servers are competing for that lease).

Categories : Dotnet

Distributed sessions with GlassFish
Obviously you want to have some sort of Single Sign-On. Have a look here for a description using an authentication realm in GlassFish: User Authentication for Single Sign-on The blog post Session Sharing between Java Web Applications explains how to not violate standards (Servlet, Java EE) and advises to use some backend like Memcached.

Categories : Java

Using Hibernate in a distributed system
Approach #1: For us, scalability is important, and we use your approach #1. We deploy isolated instances of Hibernate (as a service behind Tomcat servers in EC2). When demand increases and more instances are needed, it's easy to add them. Benefits of this approach:

- encapsulation (not in the OO sense), as each instance is configured in isolation
- easier configuration, as only a single instance image is needed
- scalability

About transaction management: this is done by the DB engine itself, not by Hibernate, and not by EC2 as you mentioned. You can have one large MySQL instance as an RDS and it can handle transactions from several "clients" (Hibernate, JDBC, SQL consoles, etc.). Hibernate in this case 'doesn't matter'. The DB will also gracefully handle concurrency and deadlo

Categories : Hibernate

Distributed Transaction on mysql
You can google two-phase commit; it is a very famous and useful protocol for distributed transactions. Here is the Wikipedia article: Two-phase commit
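A toy sketch of the protocol's two phases (not a real transaction manager; the class and method names are invented): the coordinator asks every participant to prepare, then commits only on a unanimous yes vote.

```python
class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):                 # phase 1: participant votes
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):          # phase 2: apply coordinator's decision
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]  # phase 1: collect votes
    decision = all(votes)                        # commit only if unanimous
    for p in participants:
        p.finish(decision)                       # phase 2: broadcast decision
    return decision

a, b = Participant(), Participant(can_commit=False)
committed = two_phase_commit([a, b])  # one "no" vote aborts everyone
```

A real implementation also has to log each phase durably and handle coordinator failure, which is exactly what the protocol literature spends most of its time on.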

Categories : Mysql

HBase distributed mode
Add the IP and hostname of the HMaster into the /etc/hosts file of the RS and restart the HBase daemons. One possible reason could be that your HMaster assumes the RS has the IP 127.0.0.1 (which implies localhost) and hence resolves to its own localhost. And yes, JD is absolutely correct: hbase.master is an obsolete property now.

Categories : Hadoop

How to connect to a database using a distributed API?
Are the database servers running on these machines? Are you starting the database server programmatically? If you try to connect to a database server with:

    jdbc:derby://localhost:1527/KempDB

this server needs to be up and running. For your case you should use an embedded database. If your database is already an embedded database, then you can try using this URL:

    jdbc:derby:KempDB

instead of:

    jdbc:derby://localhost:1527/KempDB

Have a look at this one.

Categories : Java

What does the CSMA/CD method being deterministic and distributed mean?
CSMA/CD is not deterministic, while Token Ring is. This means that with Token Ring you are sure you will be able to send your packet at some point (it might take a long time, but it will happen for sure). With CSMA/CD, if you are very unlucky, every time you try to send your packet someone else tries at the same time, so you can't be sure that you will ever be able to send it. Of course, the more you try, the higher the probability of sending the packet gets, though it is never certain (you would need infinite time for that).
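"Never certain, but increasingly likely" can be made concrete. If each attempt collides independently with probability p, the chance of getting through within n attempts is 1 - p^n; it approaches 1 as n grows but never reaches it (the collision probability below is an invented example):

```python
def p_success_within(n, p_collision):
    """Probability a frame gets through within n attempts, assuming each
    attempt independently collides with probability p_collision."""
    return 1 - p_collision ** n

# With a 50% collision chance per attempt, 10 attempts almost certainly
# succeed, yet the probability stays strictly below 1.
p10 = p_success_within(10, 0.5)  # 1 - 0.5**10 = 0.9990234375
```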

Categories : Networking

SSE subscribers - distributed across multiple servers?
There is presently no free lunch / silver bullet for multi-node parallelism. Most people use an SOA approach (REST, queues, etc.) to parallelize their application. Of course, at scale you lose the ability to coordinate access to resources, and the work-arounds can be hackish. I have heard good things about Immutant's (JBoss's) XA transactions, which automatically apply to how their caching and messaging work, but have not used this approach personally. Another tool you may find useful is Storm, which allows you to set up a topology for distributed processing, adding some declarative abstraction in place of tedious manual development and provisioning of an equivalent service architecture.

Categories : Clojure

Azure distributed cache for IaaS VMs
If you're using IaaS with Windows Server, you may want to explore Server AppFabric Caching. This gives you dedicated resources, co-hosted in your Azure VMs, at a lower latency and without the quotas imposed by the shared caching service.

Categories : Azure

Is the filesystem on Azure Websites distributed?
The file system used by Windows Azure Web Sites sits on top of Windows Azure BLOB storage. They expose the space as something similar to an SMB share that the instances running the websites point to. So, as far as replication goes, there are three copies of the data, just as there are three copies of everything in BLOB storage. The storage accounts that hold this data are maintained and owned by Microsoft; they aren't in your own subscription, which is one of the reasons you are capped at 10GB of space across all of the websites in a subscription. I'm not sure if they have geo-replication turned on for their storage accounts, but my guess is that they do. Like anything, though, I'd highly recommend a process to back up anything you put into this space. You have access to it

Categories : Performance

distributed architecture or flow needed
Every change to the database must be tracked just like any other part of your application (the most obvious "other part" being your code base, of course). Log every change made to the database in a SQL script. This script will be applied on the target environments as part of your standard upgrade procedure; applying a patch then means "install new code and apply the database upgrade script", and this process can be easily scripted. If you foresee the possibility of having to install new instances of your application in the future, you will need to be able to create new empty databases. For this purpose, you will also need to maintain an "initialisation script" that creates the default structures of your application. This includes tables, triggers, stored routine creation scripts, and poss

Categories : Mysql

Central vs. Distributed Authentication and Authorization
OAuth 2.0 is a specification for authorization, but NOT for authentication. RFC 6749, 3.1. Authorization Endpoint explicitly says as follows: "The authorization endpoint is used to interact with the resource owner and obtain an authorization grant. The authorization server MUST first verify the identity of the resource owner. The way in which the authorization server authenticates the resource owner (e.g., username and password login, session cookies) is beyond the scope of this specification." Authentication deals with information about "who one is". Authorization deals with information about "who grants what permissions to whom". An authorization flow contains authentication as its first step, which is the reason people are often confused. There are many libraries and services that use

Categories : Api

How is processor speed distributed across threads?
These are all interesting questions, but there is, unfortunately, no straightforward answer, because the answer will depend on a lot of different factors. Most modern machines are multi-core: in an ideal situation, a four-thread process has the ability to scale up almost linearly in a four-core machine (i.e. run four times as fast). Most programs, though, spend most of their time waiting for things: disk or database access, the memory bus, network I/O, user input, and other resources. Faster machines don't generally make these things appreciably faster. The way that most modern operating systems, including Windows, Unix/Linux, and MacOS, use the processor is by scheduling processor time to processes and threads in a more-or-less round-robin manner: at any given time there may

Categories : Multithreading

How to connect a distributed database in a web application?
One option is to simply use a centralized database. Why is this not an option? Another option is to use a database at each location, and have them synchronize to a central database (master-slave), or with each other (master-master). Look at replication for your database of choice, or db sync tools like SymmetricDS.

Categories : Database

Architecture for a globally distributed Neo4j?
The distribution mechanism in the Neo4j Enterprise edition is indeed master-slave style. Any write request to the master is committed locally and synchronously transferred to the number of slaves defined by push_factor (default: 1). A write request to a slave will synchronously apply it to the master, to itself, and to enough machines to fulfill push_factor. The synchronous slave-to-master communication might hurt performance; that's why it's recommended to redirect writes to the master and distribute reads over slaves. The cluster communication works fine on high-latency networks. In a multi-region setup I'd recommend having a full (i.e. minimum 3-instance) cluster in the 'primary region'. Another 3-instance cluster is in a secondary region, running in slave-only mode. In case the primary reg

Categories : Neo4j

Re-use files in Hadoop Distributed cache
DistributedCache uses reference counting to manage the caches. org.apache.hadoop.filecache.TrackerDistributedCacheManager.CleanupThread is in charge of cleaning up the CacheDirs whose reference count is 0. It checks every minute (the default period is 1 minute; you can set it via "mapreduce.tasktracker.distributedcache.checkperiod"). When a job finishes or fails, the JobTracker sends an org.apache.hadoop.mapred.KillJobAction to the TaskTrackers. When a TaskTracker receives a KillJobAction, it puts the action into tasksToCleanup. In the TaskTracker there is a background thread called taskCleanupThread, which takes actions from tasksToCleanup and does the cleanup work. For a KillJobAction, it will invoke purgeJob to clean up the job. In this method, it will decrease the reference count used

Categories : Hadoop

Unique keys in distributed RDBMS
The canonical solution is to use uuid() (see here) rather than an integer for such a unique identifier. This is guaranteed to be unique in space as well as time. A more "hacked" solution is to use two-part primary keys. Have the first be an identifier of "what system am I on" and the second be an auto-incremented number, unique to that system. Another "hacked" solution is to give each system ranges. Say you are using big integers, then 1,000,000,000 might start the value on one system, 2,000,000,000 on another, and so on. I would not recommend that you actually try to implement an auto-incremented number across a distributed system. This would basically entail having a single system that maintained the most recent number, and having the other systems ask it for the next number. How
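The three options above, sketched in Python (the system ids and the range size are arbitrary examples):

```python
import uuid

# Option 1: UUIDs -- unique across systems with no coordination needed
row_id = uuid.uuid4()

# Option 2: two-part primary key (which_system, local_auto_increment)
def make_key(system_id, local_counter):
    return (system_id, local_counter)

# Option 3: per-system ranges carved out of a big-integer space
def range_key(system_id, local_counter, range_size=1_000_000_000):
    return system_id * range_size + local_counter

key_on_system_1 = range_key(1, 42)  # 1000000042
key_on_system_2 = range_key(2, 42)  # 2000000042 -- same counter, no clash
```

Note the trade-off: option 1 needs no coordination at all, while options 2 and 3 only need each system to know its own id or range up front.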

Categories : Mysql

CSS: Randomly distributed background image?
There's not really a way to do exactly what you're asking in pure CSS. I have however seen people introduce "noise" into a site's background using multiple images. Here's an example of using multiple backgrounds with CSS. Here's a stackoverflow question regarding noise in gradients. Hopefully this gives you some ideas to get a feel for what you want on your site.

Categories : CSS

Is it possible to deploy a distributed system on cloud?
Absolutely - depending on how your distributed application works. The Azure Auto-Scaling Block is the tool we use to scale (add additional Cloud Services) when memory or CPU reaches certain thresholds.

Categories : Azure

Azure WebSites and Distributed caching
Well the easiest way is to just use Azure Shared Caching for distributed caching. You will have to provision the caching in the Azure Management Portal but then after that the API is pretty straightforward. Here is a link to the Nuget package for the libraries: Windows Azure Caching The prices for distributed caching are pretty outrageous still: Azure Cache Pricing

Categories : Azure



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.