ffmpeg Getting image or thumbnail from video error
You did not give ffmpeg a name :-) so you tried to execute a \bin folder!
$ffmpeg = "C:\Ffmpeg\ffmpeg-20130605-git-3289670-win64-static\bin";
You forgot ffmpeg.exe:
$ffmpeg = "C:\Ffmpeg\ffmpeg-20130605-git-3289670-win64-static\bin\ffmpeg";
I do it for an .avi with the following command:
ffmpeg -i Echo2012.avi -r 1 -s 1024x576 -f image2 -vframes 1 foo-001.jpg
Also, don't execute your command twice! In your code shell_exec() is called once and then again inside the if:
$command = "$ffmpeg -i $video -an -ss $second -s $size -vcodec mjpeg $image"; echo $command; shell_exec($command); if(shell_exec($command)){
Store the result of a single shell_exec() call and test that instead.
EDIT: your command string is ffmpeg -i upload.tmp -an -ss 12 -s 150x90 -vcodec mjpeg manu.jpg
-vcodec codec (output): sets the video output codec. That switch is for an output video, but you want an image as output. -an: disables audio.

Categories : PHP

How to get video frames from mp4 video using ffmpeg in android?
Compile ffmpeg.c and invoke its main() via JNI. For more details see "How to use FFMPEG on Android". Also refer to this GitHub project, and this tutorial will help you as well.
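For illustration only, here is a minimal sketch of that JNI bridge idea in C. It assumes you have renamed main() in ffmpeg.c to ffmpeg_main() when compiling it into your shared library; the Java class name and file paths are made up:

#include <jni.h>

int ffmpeg_main(int argc, char **argv);   /* the renamed main() from ffmpeg.c */

JNIEXPORT jint JNICALL
Java_com_example_FFmpegBridge_extractFrames(JNIEnv *env, jclass clazz)
{
    /* runs ffmpeg as if it were called from the command line */
    char *argv[] = { "ffmpeg", "-i", "/sdcard/in.mp4", "/sdcard/frames/out%03d.jpg" };
    return ffmpeg_main(4, argv);
}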

Categories : Android

ffmpeg + play video loop
From the documentation (http://ffmpeg.org/doxygen/trunk/group_lavf_decoding.html#ga4fdb3084415a82e3810de6ee60e46a61), av_read_frame() returns 0 if OK, < 0 on error or end of file. So if the return value is < 0, call av_seek_frame() to return to the start of the file (same doc).
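As a minimal sketch of that loop (not from the original answer; it assumes fmt_ctx and video_stream_index were already set up with avformat_open_input()/avformat_find_stream_info()):

#include <libavformat/avformat.h>

static void play_in_loop(AVFormatContext *fmt_ctx, int video_stream_index)
{
    AVPacket pkt;
    for (;;) {
        int ret = av_read_frame(fmt_ctx, &pkt);
        if (ret < 0) {
            /* error or end of file: seek back to the start and keep reading */
            av_seek_frame(fmt_ctx, video_stream_index, 0, AVSEEK_FLAG_BACKWARD);
            /* you will usually also want to flush your decoder here */
            continue;
        }
        /* ... decode/display pkt ... */
        av_packet_unref(&pkt);   /* av_free_packet() on older ffmpeg versions */
    }
}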

Categories : C++

Watermark in video with ffmpeg Center-Top
ffmpeg -i input1 -i input2 -filter_complex "overlay=main_w/2-overlay_w/2" out
The x expression main_w/2-overlay_w/2 centers the overlay horizontally, and since y is not given it defaults to 0, i.e. the top of the frame. Use -filter_complex when you have multiple inputs and/or outputs; this option also allows you to omit the movie source filter. See the overlay video filter documentation for more info.

Categories : PHP

Video Trimming via ffmpeg twice crash
Your problem may be related to the discussion here. The linked thread is old, but the symptom there was that an Android/JNI call to ffmpeg worked the first time but not the second. As noted in the thread, the solution is to explicitly unload/reload the library between successive calls to ffmpeg. You could try that.

Categories : Android

Video player for OS X (QTKit, ffmpeg)
You can easily compile ffmpeg with a regular "./configure && make" and add the built static libraries to your Xcode project. A more sophisticated approach is to create Xcode projects for each ffmpeg library (libavcodec/format/util/etc.) and add them as nested projects. Is it possible to add external codecs to QTKit? Check the Perian project: http://trac.cod3r.com/perian/browser http://perian.org/

Categories : Osx

using ffmpeg to display video on iPhone
You should not be using ffmpeg in an iOS app for a number of reasons. First, there are real license issues that put including ffmpeg in a legal grey area when it comes to iOS apps. Second, performance will be very poor. Third, iOS already includes APIs that have access to the h.264 hardware on the device. You can find my example Xcode project at AVDecodeEncode; it is an example of using my library to decode from h.264 and then encode back to h.264.

Categories : Iphone

Android encode video with ffmpeg while it is still recording
Your real-time requirement may lead you away from ffmpeg toward WebRTC and/or HTML5. Some resources: http://dev.w3.org/2011/webrtc/editor/getusermedia.html (section 5), https://github.com/lukeweber/webrtc-jingle-client, and Ondello, which offers an API. Rather than going native and trying to get at the video stream, or getting at the framebuffer to acquire a copy of what is in the video buffer and then duplicating that stream and managing a connection (socket or chunked HTTP), you may want to look at API-type alternatives.

Categories : Android

Concat video files of different resolutions using FFmpeg
For merging videos you need to work with the same resolution, so you should either scale up the 640x480 video or downscale the 1280x720 one; it is up to you. I would recommend downscaling the bigger one, it is faster. Moreover, for merging you need media files that have both an audio and a video part. You can create a silent audio track with the same duration as your video and then add it to the video, so that every file to be merged carries audio:
ffmpeg -ar 48000 -t 60 -f s16le -acodec pcm_s16le -i /dev/zero -ab 128K -f mp2 -acodec mp2 -y silence.mp2
ffmpeg -i video_without_audio.mpg -i silence.mp2 video_to_merge.mpg
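Once both files are same-resolution MPEG-PS files with audio, one way to do the actual merge (a hedged sketch; "other_video.mpg" and "merged.mpg" are placeholder names) is ffmpeg's concat protocol, which joins MPEG program streams without re-encoding:
ffmpeg -i "concat:video_to_merge.mpg|other_video.mpg" -c copy merged.mpg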

Categories : Android

C api on capturing video stream from webcam using ffmpeg
You are basically repeating the question you are referring to. FFmpeg is the name of both the project and the ready-to-use command-line tool. Its back end is a set of libraries: libavcodec, libavformat, libswscale, etc. There is no comprehensive documentation for these libraries; instead there are samples, a mailing list, and other sparse resources. The libraries are also open source, and all of this is quite usable once you get the pieces together. Specific questions on ffmpeg are asked and answered on Stack Overflow as well.
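As a starting point, here is a minimal sketch (not taken from any official sample) of opening a webcam with libavformat/libavdevice and dumping its stream info. The input format "video4linux2" and the device "/dev/video0" are assumptions for a Linux box; on Windows you would use "dshow" and a device string instead:

#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt_ctx = NULL;

    av_register_all();        /* needed on the ffmpeg versions of that era */
    avdevice_register_all();  /* registers the capture "formats" (v4l2, dshow, ...) */

    AVInputFormat *ifmt = av_find_input_format("video4linux2");
    if (!ifmt || avformat_open_input(&fmt_ctx, "/dev/video0", ifmt, NULL) < 0)
        return 1;             /* device not found or could not be opened */

    if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
        return 1;

    av_dump_format(fmt_ctx, 0, "/dev/video0", 0);   /* print the stream details */
    /* from here on, av_read_frame() gives you raw packets from the camera */

    avformat_close_input(&fmt_ctx);
    return 0;
}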

Categories : C

Convert Video in html5 with ffmpeg on aruba
If ffmpeg is installed then you should be able to convert using ffmpeg on your server. Log on, change to the directory where video1.mpg is located and run: ffmpeg -i video1.mpg video.m4v

Categories : Misc

FFmpeg - C - Encoding video - Set aspect ratio
To set the output aspect ratio you can use the -aspect option; see the ffmpeg documentation:
-aspect[:stream_specifier] aspect (output, per-stream)
Set the video display aspect ratio specified by aspect. aspect can be a floating point number string, or a string of the form num:den, where num and den are the numerator and denominator of the aspect ratio. For example "4:3", "16:9", "1.3333", and "1.7777" are valid argument values. If used together with -vcodec copy, it will affect the aspect ratio stored at container level, but not the aspect ratio stored in encoded frames, if it exists.
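For example (a hedged command-line illustration; the filenames are placeholders):
ffmpeg -i input.mp4 -aspect 16:9 output.mp4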

Categories : C

Using ffmpeg to watermark a capturing video on Android
Here is an example of how to capture webcam video on my Windows system and draw a counting timestamp. To list all devices that can be used as input:
ffmpeg -list_devices true -f dshow -i dummy
To use my webcam as input and draw the timestamp (with -t 00:01:00, one minute is recorded):
ffmpeg -f dshow -i video="1.3M HD WebCam" -t 00:01:00 -vf "drawtext=fontfile=Arial.ttf: timecode='00:00:00:00': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -an -y output.mp4
The font file Arial.ttf was located in the folder my terminal was in. (Sources: http://trac.ffmpeg.org/wiki/How%20to%20capture%20a%20webcam%20input and http://trac.ffmpeg.org/wiki/FilteringGuide) I hope it may help. Have a nice day ;)

Categories : Android

can i use ffmpeg with actionscript to convert a webcam video flv into mp4?
The only option I can think of is to use AIR and NativeProcess, as described here: http://www.purplesquirrels.com.au/2013/02/converting-video-with-ffmpeg-and-adobe-air/ Otherwise I'm afraid this cannot be done unless someone ports FFMPEG to Flash. The easiest way to do the video conversion you mention is to install FFMPEG on a server, send it a request, and get the result back later.

Categories : Actionscript

Plugin for creating image and video galleries
You might want to take a look at Flexslider by WooThemes: http://www.woothemes.com/flexslider/ Best damn slider ever, period.

Categories : Wordpress

how to get width/ height of video file using S3FS, FFMPEG in ec2
I would recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs. This project is an "s3fs" alternative; the main advantages compared to "s3fs" are simplicity, the speed of operations and bug-free code. Currently the project is in the "testing" state, but it's been running on several high-loaded fileservers for quite some time. We are seeking more people to join the project and help with the testing. From our side we offer quick bug fixes and will listen to your requests for new features. Regarding your issue: RioFS is able to read "requested" blocks of files. For example, I tried to get the properties of a video file stored on S3 after mounting RioFS with:
./riofs -c ../riofs.conf.xml http://s3.amazonaws.com bucket_name
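A possible follow-up step (my sketch, not part of the original answer): once the bucket is mounted, you can query the width/height with ffprobe over the mounted path, assuming ffprobe is installed and that the mount point and filename below are placeholders:
ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 /path/to/mountpoint/video.mp4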

Categories : Amazon

Android FFMPEG command line for video filter
Your ffmpeg build is too old to support the curves video filter with presets. Presets were added on 2013-03-25 (lavfi/curves: add presets support) bumping libavfilter to version 3.48.103. You will need to update your ffmpeg.

Categories : Android

Fastest way to extract a specific frame from a video (PHP/ffmpeg/anything)
Of course you could code up some C/C++ and link to -lav*, basically creating a simplified version of ffmpeg just for extracting frames, and maybe even do it as a PHP extension (though I wouldn't run it as the same user, let alone in the same process). But the result is very unlikely to be faster: you would only avoid some forking and setup overhead, and your likely bottleneck is actually the decoding, which would stay the same. Instead, you should first look into using ffmpeg in fast seeking mode (or fast/accurate hybrid mode). Their wiki states about fast seeking that the -ss parameter needs to be specified before -i:
ffmpeg -ss 00:03:00 -i Underworld.Awakening.avi -frames:v 1 out1.jpg
This example will produce one image frame (out1.jpg) somewhere around the third minute of the video.
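The same wiki also describes combining fast and accurate seeking by using -ss twice, once before and once after -i; a hedged illustration (the exact offsets are arbitrary) would be:
ffmpeg -ss 00:02:30 -i Underworld.Awakening.avi -ss 00:00:30 -frames:v 1 out2.jpg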

Categories : PHP

Playing videos on iPhone after video conversion using ffmpeg
I just wrote a blog post, encoding_h264_for_ios, that shows examples of h.264 encoding that do and do not work on iOS hardware. The specific command line that I use is this:
ffmpeg -y -i INPUT.mov -c:v libx264 -pix_fmt yuv420p -preset:v slow -profile:v baseline -crf 23 OUTPUT.m4v
If I were you, I would try to encode the videos on the command line and test them on iOS hardware without the web interface. Then go step by step through the specific command line options until you find the one that breaks iOS playback.

Categories : Android

how to change the frame rate for a part of video using ffmpeg
I found a piece of software named OpenShot. It perfectly matched my requirements. It is built on top of ffmpeg. I guess this is the best video editing and mixing tool on Linux for lame users. It can help us do things very fast.

Categories : Ubuntu

How do I create an Rx sequence by running tasks over original sequence's values?
This seems to work for me so far:
public static IObservable<U> Select<T, U> (
    this IObservable<T> source,
    Func<T, CancellationToken, Task<U>> selector)
{
    return source
        .Select (item => Observable.Defer (() =>
            Observable.StartAsync (ct => selector (item, ct))
                .Catch (Observable.Empty<U> ())
        ))
        .Concat ();
}
We map a deferred, task-based, exception-swallowing observable onto each item, and then concat them. My thought process went like this: I noticed that one of the SelectMany overloads does almost exactly what I wanted and even has exactly the same signature. It didn't satisfy my needs though: it creates tasks as the original items come up, whereas I needed each task to start only after the previous one had completed, which is what the Defer/Concat combination above provides.

Categories : C#

recording live video stream from tv card using ffmpeg in windows
First, be sure that the video label you use is really the label returned by:
ffmpeg -list_devices true -f dshow -i dummy
More info here. Another solution would be to use the old "Video for Windows" (VFW) interface. To try that, list your devices with:
ffmpeg -y -f vfwcap -i list
and use your device number as the value of the -i option:
ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4
And if you are finally able to record your stream, there are different options, but for your case everything is clearly described here:
ffmpeg -y -f vfwcap -r 25 -i 0 -f image2 -vf fps=fps=1 out%d.jpg

Categories : Windows

How to split accurately a LONG GOP video (h264/XDCAM...) with FFMPEG?
Please refer to the ffmpeg documentation. You will find an option -frames. That option can be used to specify, for a given stream (in the following, the specifier 0:0 means the first input file, first video stream), the number of frames to record. It can be combined with other options to start somewhere inside the input file (time offset, etc.).
ffmpeg -i intput.ts -frames:0:0 100 -vcodec copy test.ts
That command demuxes and remuxes only the first 100 frames of the video (no re-encoding). As said, you can combine it with a jump: using -ss offset (as an input option) you can specify a "frame accurate" position, i.e. frame 14 after 1 minute 10 seconds = 0:1:10:14. That option should be used before the input, like below:
ffmpeg -ss 00:00:10.0 -i intput.ts -frames:0:0 100 -vcodec copy test.ts

Categories : Bash

Unable to concatenate video files using FFMPEG, Paperclip, Rails
Thanks for the reply :) There is no error message. I have the ffmpeg command in the segment.rb file, but I am not even sure if that is the right place? In segment.rb I have:
class Segment < ActiveRecord::Base
  attr_accessible :name, :source_video, :the_other_video
  has_attached_file :source_video
  has_attached_file :the_other_video
end

def append_to_video(the_other_video, output_file)
  system "ffmpeg -i \"concat:#{the_other_video.source_video.path}|#{self.source_video.path}\" -c copy #{output_file}"
end

and in _form.html.erb:
<%= form_for(@segment) do |f| %>
  <% if @segment.errors.any? %>
    <div id="error_explanation">
      <h2><%= pluralize(@segment.errors.count, "error") %> prohibited this segment from being saved:</h2>
      <ul>

Categories : Ruby On Rails

ffmpeg generate thumbnail with aspect ratio, but X seconds into video
You can do that with a command line similar to: ffmpeg -i inputVideo -vf scale='min(300,iw)':-1 -ss 00:00:05 -f image2 -vframes 1 thumbnail.jpg So in your script, add -vf (video filter) before scale and reorder input and output parameters like below: $cmd = "/usr/local/bin/ffmpeg -i ".$this->getUploadRootDir()."/".$fname." -vf scale='min(300, iw):-1' -ss 00:00:5 -f image2 -vframes 1 ".$this->getUploadRootDir()."/thumb_".$output;

Categories : PHP

Encode x264 video with ffmpeg for Android with starting offset
Upgrading to ffmpeg 1.2.1 fixed the compatibility issue. After that, my Android phone was able to play the videos just fine. The -ss option is trickier than it first looks. It has a different meaning based on whether it is before or after -i. It turns out that, to make it work, you have to use both. You put the main offset before -i, which makes ffmpeg skip to that point in the stream. But then you also need a small non-zero offset AFTER -i to make it seek to that point within the stream so audio and video will be in sync. For reference, the final working command is:
ffmpeg -ss 00:03:52.00 -i in.mp4 -ss 0.1 -t 01:28:33.00 -c:v libx264 -preset medium -crf 20 -maxrate 400k -bufsize 1835k -c:a libvorbis -sn out.mkv

Categories : Android

Image IO memory keeps growing
You should at least wrap the background processing into an autorelease pool:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ul), ^(void) {
    @autoreleasepool {
        NSData *data = [NSData dataWithContentsOfFile:path];
        UIImage *image = [UIImage imageWithData:data];
        dispatch_async(dispatch_get_main_queue(), ^{
            imageView.image = image;
        });
    }
});
This way you make sure any autoreleased objects on the background thread go away as fast as possible.

Categories : IOS

Is android ffmpeg library able to play video located in the assets folder
Assets are read-only parts of your APK, and there is no way another APK can access those assets. You can use the native Android classes to play the video(s) you embed in your APK (use mp4 format). If you insist on ffmpeg-assisted playback, you can either unpack the asset to a private file, or write your own I/O handlers for ffmpeg that use AAsset_read().
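If you go the custom I/O route, here is a minimal sketch (names are illustrative, not a drop-in solution) of feeding ffmpeg from an AAsset through an AVIOContext, assuming the AAsset* was already opened with AAssetManager_open():

#include <android/asset_manager.h>
#include <libavformat/avformat.h>

static int asset_read(void *opaque, uint8_t *buf, int buf_size)
{
    int n = AAsset_read((AAsset *)opaque, buf, buf_size);
    return n > 0 ? n : AVERROR_EOF;          /* tell ffmpeg when the asset ends */
}

static AVFormatContext *open_asset_input(AAsset *asset)
{
    const int buf_size = 32 * 1024;
    uint8_t *buf = av_malloc(buf_size);
    AVIOContext *avio = avio_alloc_context(buf, buf_size, 0 /* read-only */,
                                           asset, asset_read, NULL, NULL);
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    fmt_ctx->pb = avio;                      /* use our custom I/O instead of a path */
    if (avformat_open_input(&fmt_ctx, "android_asset", NULL, NULL) < 0)
        return NULL;
    return fmt_ctx;
}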

Categories : Android

ffmpeg commands to concatenate different type and resolution videos into 1 video and can be played in android
You can use concat to append the videos one by one after converting them to a single format. You can use a command like the one below to convert a differently formatted video to that common format:
./ffmpeg -i 1.mp4 -acodec libvo_aacenc -vcodec libx264 -s 1920x1080 -r 60 -strict experimental 1_converted.mp4
Convert everything to mp4 and then follow the instructions given in the link above. This will let you join all the videos into a single file.
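If all the converted pieces share the same codecs and parameters, one way to do the final join (a hedged sketch; list.txt and joined.mp4 are placeholder names) is ffmpeg's concat demuxer, where list.txt contains one line per input such as file '1_converted.mp4':
ffmpeg -f concat -i list.txt -c copy joined.mp4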

Categories : Android

How to create unpublished page post for video using Ads api and what are the params required for create video unpublished pagepost?
Change the service call from https://graph.facebook.com/ to https://graph-video.facebook.com/ and then it will work fine.

Categories : Facebook

Auto Growing Div in other Auto Growing Div with dynamic floating content
You can't force floats to line up unless they have a set width. That's kind of the point of floats. They will "float" depending on the size of the elements. You could try using the min-width property. This would force the element to be at least a certain size. If you set this to the minimum width needed for the elements to line up as you want then it should work whatever the content. Of course, if your elements get too big then the layout might change again so you may well need to set max-width too. I would also suggest using percentages for these values to make things flexible. Note that you will need to set these properties on the parent elements too for it to work.

Categories : HTML

how to write multiple video files together to create one video file
You can't just glue the video files together and expect to produce a single working video. Try using video manipulation libraries such as the ones mentioned in this question: Libraries / tutorials for manipulating video in java

Categories : Java

Create a video from another video by taking some frames only but without writing them onto a file
Create a capture object with cap = cv2.VideoCapture(file_name). Get the width and height of your movie with:
h = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
w = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
Create a numpy array frames = np.zeros((h, w, 3, number_of_frames), np.uint8) and save the frames you want to keep to this array:
error, frame = cap.read()
frames[:,:,:,i] = frame
If you don't know how many frames you have beforehand, just collect them in a Python list instead. Then display your frames.

Categories : Python

ffmpeg capture image two different sizes
Yes, it is possible to convert/capture one source to multiple destinations; the FFmpeg documentation clearly describes that: http://ffmpeg.org/ffmpeg.html The syntax you should use is:
$cmd = "ffmpeg -i $video_source -an -ss $second -t 00:00:01 -r 1 -y -vcodec mjpeg -f mjpeg -s 40x25 " . escapeshellarg($video_thubmnail_destinaion) . " -r 1 -y -vcodec mjpeg -f mjpeg -s 128x192 " . escapeshellarg($video_big_thubmnail_destinaion) . " 2>&1";

Categories : PHP

HTML5 Video Chrome - ffmpeg - mp4 working in all but Chrome
The solution is to use the parameter "-pix_fmt yuv420p":
ps> ffmpeg.exe -i $input$file -y -strict experimental -acodec aac -ac 2 -ab 160k -vcodec libx264 -s 640x480 -pix_fmt yuv420p -preset slow -profile:v baseline -level 30 -maxrate 10000000 -bufsize 10000000 -b 1200k -f mp4 -threads 0 $output$file.iphone.mp4
(ffmpeg version N-46936-g8b6aeb1) Hope this helps you get Chrome-compatible videos. Update 1: see this reference, it may help you.

Categories : HTML

Android FFMPEG: Could not execute the ffmpeg from Java code
Do you have root on the device? Mount '/data' and then enter your same 'ffmpeg' command in the shell and see whether the error is the same. Try using the shell to test out different command expressions. Try 'ffmpeg' alone and with just one input file. See whether those commands produce expected output. My wild guess would be that there is an issue with calling 'ffmpeg.main()' that relates to the details of your build.

Categories : Android

How to put an embedded youtube video over the top of an image (as a frame around the video)
You can use CSS to apply a margin or padding to the Youtube video HTML element (probably an iframe tag if you are embedding it) So, your CSS may look something like this, depending on your HTML structure: iframe { margin-top: 10px; } You'll need to play around with the value of margin-top to make it align the way you like. If you post the relevant HTML, I could give you specific CSS to use.

Categories : HTML

create multiple movie thumbnails using ffmpeg (one at a time) failing
Doh! Really dumb on my part. I was using the same file name for each thumbnail, and a prompt was coming up on the command line asking if I wanted to overwrite the existing thumb image. Once I changed the filename to be dynamically created, everything was peachy.

Categories : C#

FFMPEG using wrong arguments when referring to image files
%0 in a batch file expands to the path of the batch file itself. In your second code sample you have "img%02d.jpg" -- inside a batch file that expands to "img" + the path to your batch file + "2d.jpg". (In a batch file a literal percent sign has to be doubled, so the pattern would need to be written as img%%02d.jpg.) When you can't figure out what a batch file is doing (especially when you have long command lines), I find it helpful to echo the command line instead of (or as well as) calling it directly. That makes it much easier to spot the problem yourself. For example:
for %%a in ("*.rm") do (
  echo ffmpeg -f image2 -r 1/5 -i "img%02d.jpg" -i "%%a" -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 48k -b:v 16k "newfiles\%%~na.mp4"
)
That prints out the command line instead of calling it directly, and you will immediately see where the command line is not what you expect it to be. Tweak the code until the echoed command looks right, then remove the echo.

Categories : Windows

Building FFMpeg Error in IOS6.1 unable to create an executable file
You are using...
--cc=/applications/xcode.app/contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc
That's wrong for armv7; you should be doing...
--cc=/applications/xcode.app/contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1
Change the above to whatever arm-apple-darwin10-gcc-x.x.x version you have in that bin folder. As well, change the gas-preprocessor's target compiler to the same:
--as='gas-preprocessor/gas-preprocessor.pl /applications/xcode.app/contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1'
Additionally, don't put -arch armv7 in your --extra-cflags; you don't need it and you may get an error: unrecognized command line option "-arch"

Categories : Iphone


