Google Drive API - copyFile working only with permission for accessing all files in Drive (but just need to copy)
According to the documentation, Files.copy() requires at least one of the following three scopes:

https://www.googleapis.com/auth/drive: "View and manage the files and documents in your Google Drive" - the one you want to avoid.

https://www.googleapis.com/auth/drive.file: "View and manage Google Drive files that you have opened or created with this app." This means you can freely create files but only open the files your app created: you can copy a file your app created, but you cannot copy any other file, even a public one.

https://www.googleapis.com/auth/drive.appdata: "View and manage its own configuration data in your Google Drive" - this only covers your application-specific appdata folder, which is probably not what you want.

In short, you can only copy an arbitrary file with the full drive scope; the narrower scopes let you copy only files your app created.
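For illustration, a minimal sketch of such a copy with the Drive v2 Java client, assuming a Drive instance already authorized with the full drive scope (the file id and title are placeholders):

    import java.io.IOException;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;

    // Sketch: copy an arbitrary file. files().copy() succeeds for files you
    // don't own only when the token carries the full drive scope (with
    // drive.file it works only for files your app created).
    class CopyExample {
        static File copyFile(Drive drive, String originFileId, String copyTitle) throws IOException {
            File copiedFile = new File();
            copiedFile.setTitle(copyTitle);
            return drive.files().copy(originFileId, copiedFile).execute();
        }
    }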

Categories : Javascript

Google Drive: Get (root) files and folders with JavaScript API (get only deleted files on drive)
Same situation as in this question: google drive api, javascript list files returns nothing. I needed to add the 'drive' scope at authorization; I had only drive.file, which is intended for creating/editing files only. The weird thing is that the API returns deleted files when you don't have permission to view files - I think it's a bug in the API (a serious one). Bug posted on the Google Drive forum: https://productforums.google.com/forum/#!searchin/drive/security$20issue$20permission$20view/drive/pupjKxTz9FU/cUarGIl_Ah0J

Categories : Javascript

Error while retrieving files from Google Drive using Drive API: "Calling this from your main thread can lead to deadlock"
You should run any code that could take a long time, such as I/O or network activity, on a different thread. In your situation the file retrieval is best done in an AsyncTask. The reason for this warning is that while your code is retrieving the file from Drive, it blocks until the retrieval has completed; nothing else in the program can respond, i.e. the GUI will not update, and input from the user trying to interact with the GUI will not be registered. After a few seconds, if the file retrieval still isn't complete, the user is presented with an ANR (Application Not Responding) dialog, which gives them the choice of waiting, if they're patient, or, most likely, force-closing your app.
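A minimal sketch of that pattern (downloadFromDrive() is a hypothetical stand-in for your blocking retrieval code):

    // import android.os.AsyncTask;
    private class RetrieveFileTask extends AsyncTask<String, Void, java.io.File> {
        @Override
        protected java.io.File doInBackground(String... fileIds) {
            // Runs on a worker thread, so blocking network I/O is safe here.
            return downloadFromDrive(fileIds[0]);
        }

        @Override
        protected void onPostExecute(java.io.File result) {
            // Back on the UI thread: safe to update views here.
        }
    }

    // Usage, e.g. from an Activity:
    new RetrieveFileTask().execute(fileId);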

Categories : Android

Google Drive Javascript API: Detect drive changes - return changes only
Start with zero on your first call. The response includes the current largest change id, which you need to store and use on the next request; in your code it materialises as "resp.largestChangeId".
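The same flow with the Drive v2 Java client, as a sketch ("drive" is an authorized Drive instance; how you persist the stored id is up to you):

    import java.io.IOException;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.Change;
    import com.google.api.services.drive.model.ChangeList;

    // Poll for changes since the last stored id and return the new largest id.
    static Long pollChanges(Drive drive, Long storedLargestChangeId) throws IOException {
        Drive.Changes.List request = drive.changes().list();
        if (storedLargestChangeId != null) {
            request.setStartChangeId(storedLargestChangeId + 1); // only changes after the last poll
        }
        ChangeList changes = request.execute();
        for (Change change : changes.getItems()) {
            System.out.println("changed file: " + change.getFileId());
        }
        return changes.getLargestChangeId(); // persist and reuse on the next call
    }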

Categories : Javascript

Monitoring changes in Google Drive files for whole domain using Drive API
Domain-wide delegation simplifies the authentication portion of your app, but you'll still need to authenticate as each user to get their files and changes. It may be possible for your app to "grab" all files and make itself (or a special user) the owner, so you have only one user account to scan, but the "grab" process would still need to run at regular intervals to find new files created or uploaded by end users. I believe most apps that need to scan content at this level do it via cron jobs or App Engine task queues, which make it easy to chunk up the scans throughout the day. Generally they expect to scan each user account once every 24 hours or so.
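A rough sketch of that per-user authentication step with the Java client, assuming domain-wide delegation has already been granted to your service account; the service-account id, key file, and application name below are placeholders, and you would loop this over every user you need to scan:

    import java.io.File;
    import java.util.Collections;
    import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
    import com.google.api.client.http.javanet.NetHttpTransport;
    import com.google.api.client.json.jackson2.JacksonFactory;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.DriveScopes;

    // Build a Drive client that impersonates one domain user.
    static Drive driveFor(String userEmail) throws Exception {
        GoogleCredential credential = new GoogleCredential.Builder()
            .setTransport(new NetHttpTransport())
            .setJsonFactory(new JacksonFactory())
            .setServiceAccountId("12345@developer.gserviceaccount.com")
            .setServiceAccountScopes(Collections.singletonList(DriveScopes.DRIVE))
            .setServiceAccountUser(userEmail) // the user being impersonated
            .setServiceAccountPrivateKeyFromP12File(new File("key.p12"))
            .build();
        return new Drive.Builder(credential.getTransport(), credential.getJsonFactory(), credential)
            .setApplicationName("domain-scanner")
            .build();
    }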

Categories : Google App Engine

Upload file from my virtual machine to another virtual machine using hadoop hdfs
You might find the WebHDFS REST API useful. I have used it to write content from my local FS to HDFS and it works fine. Being REST-based, it should work just as well from the local FS of a remote machine, provided the two machines are connected.
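For illustration, a minimal Java sketch of the two-step WebHDFS create: the NameNode answers the first PUT with a 307 redirect to a DataNode, and the second PUT carries the data. Host, port, user, and paths are placeholders for your cluster:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class WebHdfsPut {
        public static void main(String[] args) throws Exception {
            URL nn = new URL("http://namenode:50070/webhdfs/v1/user/me/file.txt"
                    + "?op=CREATE&user.name=me&overwrite=true");
            HttpURLConnection c1 = (HttpURLConnection) nn.openConnection();
            c1.setRequestMethod("PUT");
            c1.setInstanceFollowRedirects(false);  // we need the Location header ourselves
            String dataNodeUrl = c1.getHeaderField("Location");
            c1.disconnect();

            // Second PUT, to the DataNode URL returned by the NameNode.
            HttpURLConnection c2 = (HttpURLConnection) new URL(dataNodeUrl).openConnection();
            c2.setRequestMethod("PUT");
            c2.setDoOutput(true);
            try (OutputStream out = c2.getOutputStream()) {
                out.write(Files.readAllBytes(Paths.get("local.txt")));
            }
            System.out.println("HTTP " + c2.getResponseCode()); // 201 Created on success
        }
    }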

Categories : Hadoop

SQL Server Backup fails in drive where OS is installed
Not sure whether this will work, but you can try to run the program as administrator through code. There is a good article on how to run a program with admin rights: how to force my C# Winforms program run as administrator on any computer?

Categories : C#

Mapping drive on a remote machine in powershell
You need to pass the variable $password2 into the script block, otherwise its value will be empty: Invoke-Command -Computer $name ... -ScriptBlock { net use \\host\share $args[0] /user:otherdomain\otheruser } -ArgumentList $password2

Categories : Powershell

[Qt][Linux] List drive or partitions
You need to use platform-specific code. And, please, read the docs! QDir::drives(): "Returns a list of the root directories on this system. On Windows this returns a list of QFileInfo objects containing "C:/", "D:/", etc. On other operating systems, it returns a list containing just one root directory (i.e. "/")."

Categories : C++

Windows: How to symlink drive to another drive?
No, mklink isn't going to do it for you. What you need to do is create a virtual hard drive (VHD) and copy the client's data to it. (Or modify the export script, which is the best thing to do.) I used Windows 7 to test my instructions below (accept all defaults; I'm not doing anything special):
Start -> Run -> diskmgmt.msc
From the menu bar select Action -> Create VHD.
Choose the location, name the file (which will be the VHD), specify the size, and click OK.
Right-click on the Disk # (underneath it will say Unknown, the size, and "Not Initialized"). Select "Initialize Disk" and click OK.
Right-click on the black bar of the unallocated disk space and select "New Simple Volume". A wizard opens, and on the second page it lets you assign the drive letter.
Complete the wizard.

Categories : Windows

How do I create & edit a spreadsheet file on google drive using Google Drive APIs only from my Android app?
You should take a look at the Google Spreadsheet API 3.0. It supports Java, which you can use in your Android application.
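A minimal sketch with the gdata Java client for that API, listing the spreadsheets an account can see; the credentials are placeholders, and on Android this must run off the main thread:

    import java.net.URL;
    import com.google.gdata.client.spreadsheet.FeedURLFactory;
    import com.google.gdata.client.spreadsheet.SpreadsheetService;
    import com.google.gdata.data.spreadsheet.SpreadsheetEntry;
    import com.google.gdata.data.spreadsheet.SpreadsheetFeed;

    public class SheetsList {
        public static void main(String[] args) throws Exception {
            SpreadsheetService service = new SpreadsheetService("my-app");
            service.setUserCredentials("user@gmail.com", "password");
            URL feedUrl = FeedURLFactory.getDefault().getSpreadsheetsFeedUrl();
            SpreadsheetFeed feed = service.getFeed(feedUrl, SpreadsheetFeed.class);
            for (SpreadsheetEntry entry : feed.getEntries()) {
                System.out.println(entry.getTitle().getPlainText());
            }
        }
    }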

Categories : Java

Google Drive sdk for android, Getting error 400 Bad Request when uploading file to Google Drive
You seem to be setting the mime type on the File object, but you need to set it on the FileContent object. When you create your file content, you can pass the type in the constructor - new FileContent("image/jpeg", localFile); - or use setType("image/jpeg"), but on your mediaContent instance, not on your body instance.
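Put together, a minimal sketch of the corrected upload ("drive" is an authorized Drive v2 instance):

    import java.io.IOException;
    import com.google.api.client.http.FileContent;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;

    // The mime type goes into the FileContent (media) object, not the File metadata.
    static File uploadJpeg(Drive drive, java.io.File localFile) throws IOException {
        File body = new File();
        body.setTitle(localFile.getName());
        FileContent mediaContent = new FileContent("image/jpeg", localFile);
        return drive.files().insert(body, mediaContent).execute();
    }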

Categories : Android

How can we use Linux from a small storage pen drive? Does it work on micro-controllers also?
"I generally hear that Linux OS can be downloaded on flash, pen drive (floppy disk?) etc. How can we do that?" If you can't get it to work on your own, you can buy a ready-made Linux on a USB drive from a site like http://www.osdisc.com or http://www.cheapbytes.com. Not all PCs, especially older PCs, can boot from a USB drive. Even some newer PCs are beginning to ship with security features that can interfere with booting code. When it does work, you have to find out the proper way to boot the USB drive. You might have only a few seconds during reboot to enter the right key, or it will boot Windows (if Windows is installed). The key to get to the BIOS boot menu might be Delete or Escape or F10 or some other key (it varies with the PC motherboard manufacturer). A message on the screen during startup usually indicates which key to press.

Categories : Linux

How to remove file permissions on linux from an external hard drive?
To make the partition accessible under Linux, run: sudo chmod -R 777 /media/<drive-name>/ - this should make the Mac drive accessible for you. Changing ownership would achieve about the same, but you need not worry about it.

Categories : Linux

Using Subclipse with share drive specifically an office-like share drive
You should start with the SVN book, chapter 1: http://svnbook.red-bean.com/en/1.7/svn.basic.html You need an SVN server and repository; that is what you will "share" with. All users who want to collaborate on the versions of a file will need to use an SVN client so that they can check out and edit files stored in your repository and commit their changes back.

Categories : Eclipse

Running tests on IE driver with Jenkins installed on Linux machine
You can do this by using the hub option on your main Selenium server: java -jar selenium-server-standalone-2.25.0.jar -role hub -hubHost localhost -hubPort 4444 And then on your Windows machine do something like this (this is a Chrome example, as I am on my Mac): java -jar selenium-server-standalone-2.25.0.jar -role node -hubHost <ip of hub> -hubPort 4444 -browser "browserName=chrome,maxInstances=2,platform=MAC" -Dwebdriver.chrome.driver="driver/chromedriver" You will also need to download the IEDriver to make it work (see the Selenium downloads page). You then connect to Selenium through port 4444 (by convention) on the main Selenium server, and it places requests accordingly.

Categories : Linux

Determine list of non-OS packages installed on a RedHat Linux machine
yum provides some useful information about when and from where a package was installed. If you have the system installation date, then you can pull out packages that were installed after that, as well as packages that were installed from different sources and locations. Coming at it from the other direction, you can query rpm to find out which package provides each of the binaries in /sbin, /lib, etc. (for example, rpm -qf /sbin/ifconfig prints the package that owns that binary) - any package that doesn't provide a "system" binary or library is part of your initial set for consideration.

Categories : Linux

Listing of all folders of Google Drive through IOS Google Drive SDK
Assuming this code works, there is a problem in the query: multiple conditions should be combined with and. query.q = @"mimeType='application/vnd.google-apps.folder' and trashed=false"; For more examples of queries, take a look at Search for Files in the official documentation. Also, in case this code doesn't work, you want to use Files.list() with the query above; check the link - there is sample code for Objective-C you might want to use.

Categories : Objective C

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop/dfs/name is in an inconsistent state
This can be resolved by pointing your NameNode directory to a different location in hdfs-site.xml in your Hadoop configuration. By default it uses file://${hadoop.tmp.dir}/dfs/name, so after every reboot the /tmp directory is cleared and the NameNode data is gone.
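For example, a minimal hdfs-site.xml entry; the property name is dfs.name.dir on Hadoop 1.x and dfs.namenode.name.dir on 2.x, and the path below is a placeholder:

    <!-- hdfs-site.xml: keep NameNode metadata out of /tmp -->
    <property>
      <name>dfs.namenode.name.dir</name>  <!-- dfs.name.dir on Hadoop 1.x -->
      <value>file:///var/lib/hadoop/dfs/name</value>
    </property>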

Categories : Hadoop

Issue when both apache2 server and apache tomcat server installed in my linux machine
Change the default port for Tomcat to something else, for example 8181. Current versions of web browsers treat port 8080 like 80, which is why you are forwarded to 80 or the port is cut from the URL. To change the Tomcat port, open the server config file server.xml, search for "8080" (the port currently in use), replace it with something else (make sure the new port is not in use), save, and restart Tomcat.
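For example, the HTTP connector entry in conf/server.xml after the change might look like this (the attributes other than the port are the stock Tomcat defaults):

    <!-- server.xml: HTTP connector moved from 8080 to 8181 -->
    <Connector port="8181" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />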

Categories : Java

C# :Does Client machine need SQL Server installed on it while connecting to other machine having SQL Server installed on it (the Server machine)
The pieces needed to connect to SQL Server are built into the .NET Framework. As long as you use those when coding the client piece, you can connect to a SQL database without MSSQL installed on the client: using System.Data; using System.Data.SqlClient; (see SqlClient). "PS: I am asking beforehand to avoid the last minute hassle on the day of installation." You should always test locally first. You can probably get your hands on a spare test machine to verify that everything is good to go.

Categories : C#

Executing command ls from windows computer to linux computer
I'm certain you could do something with sockets and system calls, but it would probably be easier to use the built-in facilities and/or programs available for each. If both are on the same network, you could run an FTP or SSH server program on the Ubuntu computer and connect to it via an FTP/SSH client, like PuTTY. Sending 'ls' through PuTTY would then yield what you want. See OpenSSH and vsFTPd.

Categories : C#

Difference between HBase and Hadoop/HDFS
Hadoop is basically two things: a file system (the Hadoop Distributed File System) and a computation framework (MapReduce). HDFS allows you to store huge amounts of data in a distributed (providing faster read/write access) and redundant (providing better availability) manner, and MapReduce allows you to process this huge data in a distributed and parallel manner - though MapReduce is not limited to HDFS. Being a file system, HDFS lacks random read/write capability; it is good for sequential data access. This is where HBase comes into the picture: it is a NoSQL database that runs on top of your Hadoop cluster and provides random, real-time read/write access to your data. You can store both structured and unstructured data in Hadoop, and in HBase as well. Both of them provide multiple mechanisms to access the data.
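To make the contrast concrete, here is a minimal sketch of the random read/write access HBase adds on top of HDFS, using the classic HBase client API; the table "users" with column family "info" is assumed to already exist:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseRandomAccess {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "users");

            Put put = new Put(Bytes.toBytes("row1"));  // random write by row key
            put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            table.put(put);

            Result result = table.get(new Get(Bytes.toBytes("row1")));  // random read
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
            table.close();
        }
    }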

Categories : Hadoop

Is it viable to use Hadoop with MongoDB as Database rather than HDFS
It is totally viable to do that, but it mainly depends on your needs - basically, on what you want to do once you have the data. That said, MongoDB is definitely a good option. It is good at storing unstructured, deeply nested documents, like the JSON in your case. You don't have to worry too much about nesting and relations in your data, nor about the schema; schema-less storage is certainly a compelling reason to go with MongoDB. On the other hand, I find HDFS more suitable for flat files, where you just pick up the normalized data and start processing. But these are just my thoughts; others might have a different opinion. My final suggestion would be: analyze your use case well and then finalize your store. HTH

Categories : Mongodb

Copy file to hadoop hdfs using scala?
Scala can invoke the Hadoop API directly. For example:

    import java.io.PrintWriter
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val conf = new Configuration()
    val fs = FileSystem.get(conf)
    val output = fs.create(new Path("/your/path"))
    val writer = new PrintWriter(output)
    try {
      writer.write(firstRow)
      writer.write("\n")
      writer.write(restData)
    } finally {
      writer.close()
    }

Categories : Scala

saving json data in hdfs in hadoop
You can use Hadoop's OutputFormat interfaces to create custom formats that write the data however you wish. For instance, if you need the data to be written as a JSON object, you could do this:

    public class JsonOutputFormat extends TextOutputFormat<Text, IntWritable> {

        @Override
        public RecordWriter<Text, IntWritable> getRecordWriter(TaskAttemptContext context)
                throws IOException, InterruptedException {
            Configuration conf = context.getConfiguration();
            Path path = getOutputPath(context);
            FileSystem fs = path.getFileSystem(conf);
            FSDataOutputStream out = fs.create(new Path(path, context.getJobName()));
            return new JsonRecordWriter(out);
        }

        // The original answer is truncated here; a minimal RecordWriter that
        // emits one {"word":count} object per record might look like this:
        private static class JsonRecordWriter extends RecordWriter<Text, IntWritable> {
            private final FSDataOutputStream out;
            JsonRecordWriter(FSDataOutputStream out) { this.out = out; }

            @Override
            public void write(Text key, IntWritable value) throws IOException {
                out.writeBytes("{\"" + key.toString() + "\":" + value.get() + "}\n");
            }

            @Override
            public void close(TaskAttemptContext context) throws IOException {
                out.close();
            }
        }
    }

Categories : Java

Can I run Hadoop streaming applications without setting up HDFS?
Streaming scripts read from stdin and write to stdout. The pipeline below makes the streaming scripts read from the local file system. Note that it doesn't work in a distributed fashion and is mainly used for unit testing the scripts: cat ./input.txt | ./word_count_map.py | sort -k1,1 | ./word_count_reduce.py > output.txt

Categories : Hadoop

How to copy only file permissions and user:group from one machine and apply them on another machine in linux?
How about this?

    #!/bin/bash
    user="user"
    host="remote_host"

    while read file
    do
        permission=$(stat -c %a "$file")  # retrieve permission
        owner=$(stat -c %U "$file")       # retrieve owner
        group=$(stat -c %G "$file")       # retrieve group

        # just for debugging
        echo "$file@local: p = $permission, o = $owner, g = $group"

        # copy the permission
        ssh $user@$host "chmod $permission $file" < /dev/null

        # copy both owner and group
        ssh $user@$host "chown $owner:$group $file" < /dev/null
    done < list.txt

I am assuming that the list of files is saved in list.txt. Moreover, you should set the variables user and host according to your setup. I would suggest configuring ssh for automatic (key-based) login; otherwise you will have to enter the password twice per loop iteration.

Categories : Shell

R+Hadoop: How to read CSV file from HDFS and execute mapreduce?
Use mapreduce(input = path, input.format = make.input.format(...), map = ...). from.dfs is for small data, and in most cases you won't use from.dfs in the map function - the map function's arguments already hold a portion of the input data.

Categories : R

Sentiment analysis on JSON tweets in Hadoop HDFS
This example should get you started: https://github.com/cloudera/cdh-twitter-example Basically, use a Hive external table to map your JSON data and query it using HiveQL.

Categories : Java

Hadoop: How do blocks get deleted after running HDFS benchmarks?
The run() method of the Mapper class (provided by the Hadoop framework) calls the cleanup method:

    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);
    }

Categories : Hadoop

Hadoop - streaming data from HTTP upload (PUT) into HDFS directly
The feasible options I can think of right now are:
HttpFS
WebHDFS
FTP client over HDFS
HDFS over WebDAV
Choosing the "best" one is entirely up to you, based on your convenience and ease of use.

Categories : Hadoop

Error on starting HDFS daemons on hadoop Multinode cluster
Make sure your NameNode is running fine. If it is already running, see if there is any problem in the connection: your DataNode is not able to talk to the NameNode. Make sure you have added the IP and hostname of the machine to the /etc/hosts file of your slave. Try telnet to 192.168.0.1:54310 and see whether you are able to connect or not. Showing us the NN logs would be helpful.
Edit: See what the wiki has to say about this problem: You get a TCP No Route To Host Error - often wrapped in a Java IOException - when one machine on the network does not know how to send TCP packets to the machine specified. Some possible causes (not an exclusive list):
The hostname of the remote machine is wrong in the configuration files.
The client's host table /etc/hosts has an invalid IP address for the target machine.

Categories : Hadoop

Google Drive SDK for OCR
I just finished making the Google Drive Quickstart example from the documentation work. The code is outdated and didn't even compile at first, and once that succeeded I had some other issues when running the app on a device. I have listed the changes required to get this working and committed them to the following GitHub project. It's an Eclipse ADT project, so feel free to check it out and compare it with your code. I've tested with the OCR option enabled and verified the result. https://github.com/hanscappelle/more-android-examples/tree/master/DriveQuickstart The readme file has an overview of all the required changes.
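For reference, a minimal sketch of an image upload with the OCR option enabled, using the Drive v2 Java client ("drive" is an authorized Drive instance; the image file is a placeholder):

    import java.io.IOException;
    import com.google.api.client.http.FileContent;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;

    // Upload an image and ask Drive to run OCR on it.
    static File uploadWithOcr(Drive drive, java.io.File image) throws IOException {
        File body = new File();
        body.setTitle(image.getName());
        FileContent media = new FileContent("image/jpeg", image);
        return drive.files().insert(body, media)
                .setOcr(true)  // enables the OCR option for this upload
                .execute();
    }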

Categories : Android

Searching a drive with vb.net
You need recursion here which handles each folder. EDIT: As requested by the OP, a little example:

    Public Sub DirSearch(ByVal sDir As String)
        Try
            For Each dir As String In Directory.GetDirectories(sDir)
                For Each file In Directory.GetFiles(dir, "yourfilename.exe")
                    lstFilesFound.Items.Add(file)
                Next
                DirSearch(dir)
            Next
        Catch ex As Exception
            Debug.WriteLine(ex.Message)
        End Try
    End Sub

Also take a look at the following answers:
Looping through all directory's on the hard drive (VB.NET)
How to handle UnauthorizedAccessException when attempting to add files from location without permissions (C#)

Also note, if you have enough access rights, you could simplify your code to this (the answer breaks off here; presumably a DirectoryInfo one-liner along these lines):

    Dim di As New DirectoryInfo(sDir)
    lstFilesFound.Items.AddRange(di.GetFiles("yourfilename.exe", SearchOption.AllDirectories))

Categories : Vb.Net

JS Google Drive SDK
If you want a separate app that edits the spreadsheet externally, you can use Javascript in conjunction with the Spreadsheet API at https://developers.google.com/google-apps/spreadsheets/ Alternatively you can embed Javascript within your spreadsheet using Apps Script as described here https://developers.google.com/apps-script/reference/spreadsheet/ It really depends what you are trying to achieve.

Categories : Javascript

Hadoop/HDFS source code reading: what Class.method() tells where to write?
When a client wants to write to a DataNode it contacts the NameNode. The NameNode in turn - with the help of the block location map generated from the block reports sent by the DataNodes - tells the client which particular DataNode has free blocks where data can be written. The client then starts writing to that node directly, without having to interact with the NameNode. So this is effectively random, based on the availability of space; it can be any one of the n nodes in the cluster. As a particular DataNode accumulates a considerable amount of data, it starts pushing the data to other nodes to create replicas (based on your replication factor), so a DataNode might be both reading and writing at the same time. The class org.apache.hadoop.hdfs.server.namenode.BlocksMap maintains the map from a block to its metadata, including the DataNodes on which it is stored.

Categories : Hadoop

Not able to access the directory created in HDFS after all the Hadoop daemons are stopped and restarted again
This error can occur for multiple reasons. I have been playing around with Hadoop, got this issue multiple times, and the cause was different each time:
the main daemons are not running -> check the logs
the proper IP is not mentioned in the hosts file (after setting the hostname, provide its IP in /etc/hosts so that other nodes can access it)

Categories : Hadoop

Hadoop HDFS Copy Files from multiple folders to one destination folder
There is DistCp, a MapReduce job that copies files from one or multiple source folders to one target folder in a parallel manner (e.g. hadoop distcp /src/a /src/b /dest). However, it does not merge files - but maybe you could use filecrush to do that! (Let me know how this goes!)

Categories : Hadoop

Any possible way to re-program a USB drive's microcontroller?
"What I was thinking was if one could reprogram them to, say, start an application upon insertion into a USB port, then it could be useful for a multitude of applications (such as copying files automatically)." You are mistaken. When you reprogram a USB flash microcontroller, you cause a program to run inside the USB flash stick. You do NOT cause any program to run on the computer's main processor. The USB flash stick's processor only interacts with the main computer by responding to USB transactions initiated by the USB host controller in the main computer, which is under the control of the OS. In effect, your capabilities are limited to changing what kind of USB device it is reported as (mass storage, imaging, network, ...) and changing the content of the data returned when the computer reads from the device.

Categories : Assembly


