CoreAudio, iOS: Failed to use mono Input and stereo output with RemoteIO
I seem to have figured it out: I was using the signed-integer sample format, and with that format the AudioBufferList *ioData argument of the render callback contains only one AudioBuffer holding interleaved audio samples (the two output channel buffers concatenated into one), i.e. AudioBufferList::mNumberBuffers is one. Its single AudioBuffer member in turn has an attribute mNumberChannels which reflects the true channel count; in my case that field is two. An additional finding supports the above: the signed-integer format cannot be non-interleaved (tested with Xcode 4.6 on OS X Mountain Lion), i.e. the flag kAudioFormatFlagIsNonInterleaved cannot be combined with kAudioFormatFlagIsSignedInteger when setting the format flags of the ASBD. If using float plus non-interleaved samples, the buffer list instead carries one AudioBuffer per channel.
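For reference, here is a minimal sketch (mine, not part of the original answer) of an ASBD for the interleaved signed-integer case described above, assuming a 44.1 kHz stereo RemoteIO output:

#include <AudioToolbox/AudioToolbox.h>

static AudioStreamBasicDescription make_stereo_int16_asbd(void)
{
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    // interleaved: kAudioFormatFlagIsNonInterleaved deliberately absent
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 2;                  // stereo
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = 2 * sizeof(SInt16); // L and R samples side by side
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    return asbd;
}

// With this format, the render callback sees ioData->mNumberBuffers == 1
// and ioData->mBuffers[0].mNumberChannels == 2, as described above.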

Categories : IOS

Sending sine wave values from array to audio output
You have configured the sample format SND_PCM_FORMAT_U8, but the actual buffer contains 32-bit floating-point samples. Use SND_PCM_FORMAT_FLOAT, or define the buffer as an array of unsigned char. Furthermore, you have confused the loop that initializes the buffer with the loop that plays the data, mixed up several byte/frame counts, and fs is wrong; you need to use something like this:

for (i = 0; i < BUFFER_LEN; i++)
    buffer[i] = sin(2 * M_PI * f / 48000 * i); // sine wave value generation

for (i = 0; i < 10 * 48000 / BUFFER_LEN; i++) { // 10 seconds
    frames = snd_pcm_writei(handle, buffer, BUFFER_LEN);
    if (frames < 0)
        frames = snd_pcm_recover(handle, frames, 0);
    if (frames < 0) {
        printf("snd_pcm_writei failed: %s\n", snd_strerror(frames));
        break;
    }
}

Categories : C

Load wave into array + Subtract channels + Save as wave/mp3
If you are giving a raw audio file as input, or reading raw audio samples from an audio device file, you can do the following:
1. Open the raw audio file in binary mode and read the raw data into a buffer (or read raw audio samples from the audio device file into a buffer).
2. In interleaved stereo PCM, the left channel sample is always followed by the right channel sample, so you can simply separate the two channels. For example, if your device delivers 16-bit PCM samples, the first 16 bits belong to the left channel and the next 16 bits to the right channel.
3. You can turn an ordinary binary file into a wav file by writing a wav header at the start of the file, followed by the audio data.
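As an illustration of step 3, here is a minimal sketch (mine, not from the original answer) of writing the canonical 44-byte wav header for 16-bit PCM; it assumes a little-endian host, since the RIFF fields are stored little-endian:

#include <stdio.h>
#include <stdint.h>

/* Write a 44-byte PCM WAV header; the raw samples follow immediately after. */
static void write_wav_header(FILE *f, uint32_t sampleRate,
                             uint16_t channels, uint32_t dataBytes)
{
    uint16_t bitsPerSample = 16;
    uint16_t blockAlign    = channels * bitsPerSample / 8;
    uint32_t byteRate      = sampleRate * blockAlign;
    uint32_t riffSize      = 36 + dataBytes;
    uint32_t fmtSize       = 16;
    uint16_t audioFmt      = 1; /* 1 = uncompressed PCM */

    fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f); fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); fwrite(&fmtSize, 4, 1, f);
    fwrite(&audioFmt, 2, 1, f);   fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f); fwrite(&bitsPerSample, 2, 1, f);
    fwrite("data", 1, 4, f); fwrite(&dataBytes, 4, 1, f);
}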

Categories : C

Can ffmpeg be used to output to bytes
Yes, it is. You have to learn how to use the other protocols FFmpeg supports. Input/output to a file is just one particular protocol; you can also output, for example, to a socket, to FTP/HTTP, and so on. It's fairly easy to create a new protocol (in C, of course) and register it with FFmpeg. I don't think FFmpeg has a built-in solution for writing to a memory buffer, but it's certainly possible; I've done it once.
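As a concrete example (an untested sketch of mine, not from the original answer), the built-in pipe protocol already lets the command-line tool emit encoded bytes on stdout for another process to read; -f is required because FFmpeg cannot infer the container format from a pipe, and my_consumer is a hypothetical program reading the byte stream:

ffmpeg -i input.mp4 -f mpegts pipe:1 | my_consumer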

Categories : C#

Building ffmpeg with an executable output
Consider using Scratchbox to statically cross-compile FFmpeg for ARM (and test it) to your requirements on your desktop (still inside Scratchbox). Once you're happy, make enough space on your droid for the larger-than-otherwise binary and adb-push that executable up there. Don't forget to chmod +x.

Categories : C++

Android FFMPEG: Could not execute the ffmpeg from Java code
Do you have root on the device? Mount '/data' and then enter the same 'ffmpeg' command in the shell and see whether the error is the same. Try using the shell to test out different command expressions. Try 'ffmpeg' alone and with just one input file. See whether those commands produce the expected output. My wild guess would be that there is an issue with calling 'ffmpeg.main()' that relates to the details of your build.

Categories : Android

FFMpeg - Merge multiple rtmp stream inputs to a single rtmp output
Copy the video stream and merge the two mono streams. Try the amerge audio filter:

ffmpeg -i rtmp://ip:1935/live/micMyStream7 -i rtmp://ip:1935/live/MyStream7 -codec:v copy -filter_complex "[0:a][1:a]amerge" -codec:a aac -strict -2 -f flv rtmp://ip:1935/live/bcove7

...or simply use -ac 2:

ffmpeg -i rtmp://ip:1935/live/micMyStream7 -i rtmp://ip:1935/live/MyStream7 -codec:v copy -ac 2 -codec:a aac -strict -2 -f flv rtmp://ip:1935/live/bcove7

I added -codec:v copy to stream copy the video instead of re-encoding it. I am unable to try the commands right now, so they are untested, and I will probably not be able to reply until Monday.

Categories : Java

Float to 16bit, Stereo. Improve?
It would need testing, but I would probably try with some unsafe:

fixed (byte* sourcePtr = e.Buffer)
fixed (byte* targetPtr = newArray16Bit)
{
    float* sourceTyped = (float*)sourcePtr;
    short* targetTyped = (short*)targetPtr;
    int count = e.BytesRecorded / 4;
    for (int i = 0; i < count; i++)
    {
        targetTyped[i] = (short)(sourceTyped[i] * short.MaxValue);
    }
}

To show that working identically:

using System;
static class Program
{
    static void Main()
    {
        byte[] raw1 = new byte[64 * 1024];
        new Random(12345).NextBytes(raw1); // 64k of random data
        var raw2 = (byte[])raw1.Clone();   // just to rule out corruption
        var result1 = OriginalImplFromTopPost(raw1, raw1.Length - 20);
        var result2 = MyImpl(raw2, raw2.Length - 20);

Categories : C#

AVAssetExportSession merge videos with stereo
I found that you can replace AVAssetExportSession with SDAVAssetExportSession. You can then specify audio settings as you would for an AVAssetWriter while leveraging the benefits of the AVAssetExportSession. I had to change

__weak typeof(self) wself = self;

to

__weak SDAVAssetExportSession * wself = self;

on line 172 of SDAVAssetExportSession.m.

Categories : IOS

How to convert audio from stereo to mono in Android?
Do you load .wav files, i.e. PCM data? If so, then you can simply read each sample of each channel, superpose them, and divide by the number of channels to get a mono signal. If you store your stereo signal as interleaved signed shorts, the code to calculate the resulting mono signal might look like this:

short[] stereoSamples; // get them from somewhere
// output array, which will contain the mono signal
short[] monoSamples = new short[stereoSamples.length / 2];
// length of the .wav-file header -> 44 bytes = 22 shorts
final int HEADER_LENGTH = 22;
// additional counter
int k = 0;
for (int i = 0; i < monoSamples.length; i++) {
    // skip the header and superpose the samples of the left and right channel
    if (k > HEADER_LENGTH) {
        monoSamples[i] = (short) ((stereoSamples[k] + stereoSamples[k + 1]) / 2);
    }
    k += 2;
}

Categories : Android

no stereo audio in speakers and in one earphone using waveout
You should change the nChannels parameter from 1 to 2:

format.wFormatTag = 1      ' PCM
format.nChannels = 1       ' 1 = mono, 2 = stereo <<<<<< change this to 2
format.nSamplesPerSec = 8000  ' or 12000
format.wBitsPerSample = 16

Categories : Misc

How to PAN (alter the sound balance) the stereo channel of an flv file
Decided to change tack and use multiple mp3 files (one for each language) with a video playing that has no audio. I am testing the selectLang variable and playing different mp3 files depending upon its value. Start the process by clicking the video button; then you can stop, pause, or use the scrubber bar to move the video and mp3 to the appropriate synced position.

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        title="HomeView"
        creationComplete="view1_creationCompleteHandler(event)">
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <fx:Script>
        <![CDATA[
            import mx.core.SoundAsset

Categories : Actionscript

Windows phone 8 media element Left/Right Stereo
You can use the MediaElement.Balance property. Setting it to -1 sends 100% of the volume to the left speaker, and 1 moves the volume to the right. 0 is center, so the volume is evenly distributed to both speakers (this is the default). Say you want to move all the sound to the left speaker; you can set it in XAML like this:

<MediaElement x:Name="MySound" Balance="-1" Source="/sound/haha.mp3" Visibility="Collapsed"></MediaElement>

Or from the code-behind like this:

MySound.Balance = -1.0;

Categories : Windows Phone 8

How to convert 44100 stereo to 11025 mono programmatically?
Sample-by-sample averaging is correct for stereo-to-mono conversion, but for sample-rate conversion, why not use a library like libsamplerate? If even that is too heavy: averaging is indeed fast, but it is not the correct method for rate conversion. Whether it is acceptable or not depends on your application. An alternative method is described in my answer to this SO post: How to convert pcm samples in byte array as floating point numbers in the range -1.0 to 1.0 and back?
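For illustration, here is the naive approach as a hedged sketch of mine (not from the original answer); note that plain decimation without a low-pass filter aliases, which is exactly why a resampling library is the better choice:

#include <stddef.h>
#include <stdint.h>

/* Naive 44100 Hz stereo -> 11025 Hz mono: average L/R, keep every 4th frame.
   No anti-aliasing filter, so expect aliasing artifacts.
   out must have room for inFrames / 4 samples. */
size_t stereo44k_to_mono11k(const int16_t *in, size_t inFrames, int16_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 3 < inFrames; i += 4) {
        int mono = ((int)in[2 * i] + (int)in[2 * i + 1]) / 2; /* average L and R */
        out[o++] = (int16_t)mono;                             /* keep 1 frame in 4 */
    }
    return o; /* number of output frames */
}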

Categories : Android

how can we compute rotation and translation of two stereo cameras to use in opencv StereoRectify (R, T arguments)
Why don't you use triangulatePoints? You said you have extrinsic and intrinsic calibration for both images, which means that you have their projection matrices. And that's all you need as parameters for triangulatePoints. http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#triangulatepoints

Categories : C++

Wave to Caf format using AVAssetWriter
Try this:

NSString *wavFilePath = [[NSBundle mainBundle] pathForResource:@"sampleaudio" ofType:@"wav"];
NSURL *assetURL = [NSURL fileURLWithPath:wavFilePath];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:songAsset error:&assetError];
if (assetError) {
    NSLog(@"error: %@", assetError);
    return;
}
AVAssetReaderOutput *assetReaderOutput = [AVAssetReaderAudioMixOutput
    assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
    audioSettings:nil];
if (![assetReader canAddOutput:assetReaderOutput]) {
    NSLog(@"can't

Categories : IOS

Play wave file every 500 ms
You should not do that in the GUI thread, since it blocks nearly everything. That is the reason why your app does not respond anymore. Start a new thread like this:

public class ViewBeat1 extends MasterView implements Runnable {
    ...
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ...
        // instead of calling run() directly, start in new thread
        Thread thread = new Thread(this);
        thread.start();
    }

    public void run() {
        ...
    }
}

Categories : Android

Playing a wave file in QNX
The following should work:

wave /full/path/to/wavefile.wav

The same utility is present in your development environment under ${QNX_TARGET}/armle-v7/usr/bin/wave and includes usage information:

bash-3.2$ use ./wave
wave [Options] *
Options:
    -a[card#:]<dev#>       the card & device number to play out on
    -f<frag_size>          requested fragment size
    -v                     verbose
    -c<args>[,args ..]     voice matrix configuration
    -n<num_frags>          requested number of fragments
    -p<volume in %>        volume in percent
    -m<mixer name>         string name for mixer input
Args:
    1=<hw_channel_bitmask> hardware channel bitmask for application voice 1
    2=<hw_channel_bitmask> hardware channel bitmask for application voice 2
    3=<h

Categories : Misc

.wav questions and python wave
.wav files are actually RIFF files under the hood. The WAVE section contains both the format information and the waveform data. Reading the codec, sample rate, sample size, and sample polarity from the format information will allow you to play the waveform data assuming you support the codec used.

Categories : Python

CSS border-style: wave
No. There is no support for the border-style: wave value (along with the dot-dash and dot-dot-dash values) in any browser. If you want a wavy border, a solution could be to utilise the border-image property with an image of a wavy border of your choice. Be sure to check out @David Starkey's jsFiddle for a nice example of that. One thing to note, however, is that the border-image property isn't supported in any version of IE. A good workaround for getting IE6-IE9 to support border-image is to use CSS3Pie.

Categories : CSS

Drawing a wave java
To modify the wavelength, you could use this formula: F(x) = a * sin((1/b) * x), where a is the amplitude and b scales the wavelength (the period becomes 2*pi*b). Looking at your code, you already have the amplitude in there; you just need a new parameter to specify b.

Categories : Java

DirectSound Wave format not being set
Unless it's a typo, the following line is a possible problem: WaveFormat.nSamplesPerSec = 44,100; It should be WaveFormat.nSamplesPerSec = 44100L;

Categories : Misc

Can't apply FFT on a simple cosine wave
Plotting just the real component of the FFT output is not very meaningful - plot the magnitude instead: sqrt(re*re + im*im). Even better, plot the log magnitude in dB: 10*log10(re*re + im*im), which is the same as 20*log10 of the magnitude.
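As a quick illustration (my own sketch, not from the original answer), given separate real and imaginary FFT output arrays of length n:

#include <math.h>

/* Convert complex FFT bins to log-magnitude in dB.
   A small floor avoids log10(0) for empty bins. */
void fft_magnitude_db(const double *re, const double *im, double *db, int n)
{
    for (int i = 0; i < n; i++) {
        double power = re[i] * re[i] + im[i] * im[i];
        db[i] = 10.0 * log10(power + 1e-20);
    }
}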

Categories : C++

Making a SINE WAVE program
I'll begin with 2, which is basically a duplicate of "What is the behavior of integer division in C?". In short: the division operator takes both operands and returns a value of a type that is big and precise enough to hold either operand. Therefore, integer division will always yield an integer result, because an integer is precise enough to hold an integer. Since, as you are aware, the division may mathematically result in a real value, at least one of the operands must be real. For example:

#define TWOPI (2*(22.0/7))      // implicit conversion

or

#define TWOPI (2*((float)22/7)) // explicit conversion

This explains the square-shaped graph. C offers two types of container variables: arrays and structs. Arrays are collections of values having the same type, and structs are collections of values having arbitrary types.

Categories : C

Carrier Wave Version Store URL
You can probably do this by overriding the full_filename and full_original_filename methods as shown on this carrierwave wiki page. Here's the example they show for changing filenames from version_foo.jpg to foo_version.jpg; customize it to suit your needs.

module CarrierWave
  module Uploader
    module Versions
      def full_filename(for_file)
        parent_name = super(for_file)
        ext = File.extname(parent_name)
        base_name = parent_name.chomp(ext)
        [base_name, version_name].compact.join('_') + ext
      end

      def full_original_filename
        parent_name = super
        ext = File.extname(parent_name)
        base_name = parent_name.chomp(ext)
        [base_name, version_name].compact.join('_') + ext
      end
    end
  end
end

Categories : Ruby On Rails

jQuery wave animation on row of icons
You can use .stop() before the animations to stop the current animation, or .stop(true) to cancel all animations in the queue. http://jsfiddle.net/nZqLy/9/

$('#icons > li').hover(function() {
    $(this).stop(true).animate({ 'top': (-1 * hover_distance) }, hover_speed);
}, function() {
    $(this).animate({ 'top': 0 }, hover_speed);
});

Categories : Javascript

Water 2D wave effect in JavaFX
In this tutorial you can find how to use custom GLSL/HLSL pixel shaders with JavaFX. And here is the code for a simple procedural distortion wave in screen space, in HLSL form:

uniform extern texture ScreenTexture;
sampler ScreenS = sampler_state { Texture = <ScreenTexture>; };

float wave;         // pi/.75 is a good default
float distortion;   // 1 is a good default
float2 centerCoord; // 0.5,0.5 is the screen center

float4 PixelShader(float2 texCoord: TEXCOORD0) : COLOR
{
    float2 distance = abs(texCoord - centerCoord);
    float scalar = length(distance);

    // invert the scale so 1 is centerpoint
    scalar = abs(1 - scalar);

    // calculate how far to distort for this pixel
    float sinoffset = sin(wave / scalar);
    sinoffset = clamp(sinoffse

Categories : Java

One sine-wave vector with different frequencies
If I reverse-engineered your code correctly, it seems you wanted to generate a chirp. It would be more efficient to do it as follows:

fr = linspace(2.0118e4, 1.9883e4, 784); % frequency content
%fr = linspace(2e4, 1e4, 784);          % try this for a wider chirp
fs = 48e3;
phi = cumsum(2*pi*fr/fs);
s1 = sin(phi);
spectrogram(s1, 128, 120, 128, fs);     % view the signal in time vs frequency

Categories : Arrays

What chart type do I need for a wave line
Here's an example that works for me:

var data = new List<Tuple<double, double>>();
for (double x = 0; x < Math.PI * 2; x += Math.PI / 180.0)
{
    data.Add(Tuple.Create(x, Math.Sin(x)));
}
chart1.ChartAreas.Add("area1");
var series = chart1.Series.Add("series1");
series.ChartType = SeriesChartType.Line;
series.ChartArea = "area1";
series.XValueMember = "Item1";
series.YValueMembers = "Item2";
chart1.DataSource = data;

Categories : C#

Mixed wave stream to mp3 using Lame
An NAudio WaveStream is a stream of sample data in a specified format, not a RIFF WAV file as expected by LAME. To convert it to a RIFF WAV file you need to add the RIFF headers and so on; the NAudio.Wave.WaveFileWriter class does this. If you're working with smallish output files that aren't going to blow your memory, you can do something simple like this (assuming Yeti's LAME wrapper or similar; code updated 19-Aug-2013):

public byte[] EncodeMP3(IWaveProvider ws, uint bitrate = 128)
{
    // Setup encoder configuration
    WaveLib.WaveFormat fmt = new WaveLib.WaveFormat(ws.WaveFormat.SampleRate,
        ws.WaveFormat.BitsPerSample, ws.WaveFormat.Channels);
    Yeti.Lame.BE_CONFIG beconf = new Yeti.Lame.BE_CONFIG(fmt, bitrate);

    // Encode WAV to MP3
    int blen = ws.WaveFormat.AverageBytesPe

Categories : C#

Dumping wave audio to stdout using Windows API
The handle returned by mmioOpen is not an "output handle"; it is only useful for passing to other mmio functions. But why are you using mmioOpen at all? It is for reading a WAV file. To get real-time audio data, use waveInOpen and the related waveIn... functions.
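A minimal sketch of that flow (mine, not from the original answer); it captures one second of mono PCM with a single buffer and polls for completion instead of using a callback, with error handling mostly omitted:

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#include <io.h>
#include <fcntl.h>
/* link with winmm.lib */

int main(void)
{
    /* raw PCM must not go through stdout's text-mode CRLF translation */
    _setmode(_fileno(stdout), _O_BINARY);

    WAVEFORMATEX wfx = {0};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;       /* mono */
    wfx.nSamplesPerSec  = 44100;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    HWAVEIN hwi;
    if (waveInOpen(&hwi, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    static char buf[44100 * 2];    /* one second of audio */
    WAVEHDR hdr = {0};
    hdr.lpData = buf;
    hdr.dwBufferLength = sizeof(buf);
    waveInPrepareHeader(hwi, &hdr, sizeof(WAVEHDR));
    waveInAddBuffer(hwi, &hdr, sizeof(WAVEHDR));
    waveInStart(hwi);

    while (!(hdr.dwFlags & WHDR_DONE)) /* poll until the buffer fills */
        Sleep(10);

    fwrite(hdr.lpData, 1, hdr.dwBytesRecorded, stdout); /* dump PCM to stdout */

    waveInUnprepareHeader(hwi, &hdr, sizeof(WAVEHDR));
    waveInClose(hwi);
    return 0;
}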

Categories : C++

Merging two WAVE files on Android (concatenate)
Try this code for concatenating the wav files:

public class ConcateSongActivity extends Activity {
    Button mbutt;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mbutt = (Button) findViewById(R.id.button_clickme);
        mbutt.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View arg0) {
                try {
                    FileInputStream fis1 = new FileInputStream("/sdcard/MJdangerous.wav");
                    FileInputStream fis2 = new FileInputStream("/sdcard/MJBad.wav");
                    SequenceInputStream sis = new SequenceInputStream(fis1, fis2);
                    FileOut

Categories : Java

how to change sample rate in wave file using c#
To change the sample rate with the ACM codec (which is what WaveFormatConversionStream uses), you must not change anything else at the same time. Your new format has a bit depth of 8, which looks suspicious. Also, you have specified two channels, so the input file must be stereo for this to work.

Categories : C#

Rect Function/Square wave in MatLab
So, I would use a rect function to set up a slit, like this:

x = linspace(-400,400,10000);
width = 83.66;

% create a rect function
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));

% create the time domain slit function
rt = rect(x/83.66);
plot(x, rt);

% change it to a causal rect
x0 = width/2 + 20; % move the left edge to be 20 units to the right of the origin
plot(x, rect((x-x0)/width))

Categories : Matlab

No update_attribute method on carrier wave direct
I got it to work by calling update_attribute on the model and not on the uploader. In the snippet below, @uploader is the subclass of CarrierWave::Uploader::Base and @video is the model.

def upload
  @uploader = Video.new.asset
  @uploader.success_action_redirect = videos_upload_successful_url
end

def upload_successful
  @video = Video.new
  @video.update_attribute :key, params[:key] # different than documentation!!
  @video.save
end

This seems to be contrary to the documentation, where it is documented the way you tried it.

Categories : Ruby On Rails

JProgressBar displaying weird orange wave
I didn't look deeply into it, but it might be a bug in the Nimbus LaF. Anyway, in order for the orange blocks to stop moving (when the value is set to 100), you also seem to need to call:

prog.setIndeterminate(false);

If you want to "automate" this, you could subclass JProgressBar, e.g.:

prog = new JProgressBar(0, 100) {
    public void setValue(int newValue) {
        super.setValue(newValue);
        if (newValue >= this.getMaximum()) {
            this.setIndeterminate(false);
        }
    }
};
prog.setValue(0);
...

Categories : Java

create sine wave audio with as3 - sweep up and down frequency
I believe the fault is with the approach: the sweep needs to be gradual. When you step the frequency like that, you create an abrupt change in the sound wave, which is heard as a short high-frequency signal - a pop or click. The way I'd recommend doing this modulation is inside the callback loop: set a destination frequency (dF) and a current frequency (cF), and instead of making an abrupt change, set cF = cF*0.8 + dF*0.2 inside the loop. This removes the abrupt change and spreads it over several samples.
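A small sketch of that idea (mine, not from the original answer), with assumed names SAMPLE_RATE, cF, and dF; the running phase accumulator keeps the waveform continuous while cF glides toward dF:

#include <math.h>

#define SAMPLE_RATE 44100.0

static double phase = 0.0; /* running phase, keeps the wave continuous */
static double cF = 440.0;  /* current frequency */
static double dF = 880.0;  /* destination frequency of the sweep */

/* Fill one callback buffer with a smoothly swept sine. */
void fill_buffer(float *out, int numSamples)
{
    for (int i = 0; i < numSamples; i++) {
        cF = cF * 0.8 + dF * 0.2;  /* glide toward the target */
        phase += 2.0 * M_PI * cF / SAMPLE_RATE;
        out[i] = (float)sin(phase);
    }
}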

Categories : Actionscript

How to save sound produced from text as mp3 or wave in python
In general, when issuing a subprocess.call you are doing exactly the same as typing the command at a prompt in the directory where your Python code runs. You need to be able to cope with things like:
- The other program is not installed
- It is not on the path
- It has not been installed to the standard location
- etc.

Categories : Python

How to get the duration of a .WAV file that is not supported by the wave module in python?
Looking at the comments, this works (I made a few changes for my own readability). Thanks @Aya!

import os

path = r"c:\windows\system32\loopymusic.wav"
f = open(path, "rb")

# read the ByteRate field from the file (see the Microsoft RIFF WAVE file format)
# https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
# ByteRate is located at byte offset 28
f.seek(28)
a = f.read(4)
f.close()

# convert the string a into an integer value
# a is little endian, so proper conversion is required
byteRate = 0
for i in range(4):
    byteRate += ord(a[i]) * pow(256, i)

# get the file size in bytes
fileSize = os.path.getsize(path)

# the duration of the data, in milliseconds (the data follows the 44-byte header)
ms = ((fileSize - 44) * 1000) / byteRate
print "File duration in milliseconds : %d" % ms

minutes, msRem = divmod(ms, 60000)
hours, minutes = divmod(minutes, 60)
seconds, msRem = divmod(msRem, 1000)
print "File duration in H,M,S,mS : %d:%d:%d:%d" % (hours, minutes, seconds, msRem)

Categories : Python

How to generate and play a 20Hz square wave with AudioTrack?
You should use pulse-code modulation (PCM). The linked article has an example of encoding a sine wave; a square wave is even simpler. Remember that the maximum amplitude is encoded by the maximum value of short (32767), and that the "effective" frequency depends on your sampling rate.
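For illustration, here is the sample generation as a sketch of mine (not from the original answer), assuming a 44100 Hz sampling rate; the same loop translates directly to the short[] you would hand to AudioTrack.write():

#include <stdint.h>

#define SAMPLE_RATE 44100
#define FREQ        20 /* 20 Hz square wave */

/* Fill a buffer with one second of a full-scale 16-bit square wave:
   +32767 for the first half of each period, -32767 for the second. */
void make_square(int16_t *buf)
{
    int period = SAMPLE_RATE / FREQ; /* 2205 samples per cycle */
    for (int i = 0; i < SAMPLE_RATE; i++)
        buf[i] = (i % period) < period / 2 ? 32767 : -32767;
}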

Categories : Android


