OpenGL Geometry Extrusion with geometry Shader |
If the geometry really does change every frame, you should do it on the GPU.
Keep in mind, though, that any other solution that doesn't rely on immediate
mode will be faster than what you have right now, so you might not even have
to do it on the GPU.
But maybe you want to use shadow mapping instead, which is more efficient
in some cases. It will also make it possible to render shadows for alpha
tested objects like grass.
But it seems like you really need the resulting shadow geometry, so I'm not
sure if that's an option for you.
Now back to the shadow volumes.
Extracting the shadow silhouette from a mesh using geometry shaders is a
pretty complex process. But there's enough information about it on the
internet.
Here's an article by Nvidia, which explains the process in detail:
http://http.develo
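For orientation, here is a heavily simplified skeleton of the usual starting
point - a hedged sketch of mine, not code from the article: a geometry shader
fed with triangles_adjacency whose job is to find the silhouette edges of the
centre triangle with respect to the light. The actual extrusion of the volume
quads and the caps are left out; the article covers those.
const char* shadowVolumeGS = R"GLSL(
#version 150
layout(triangles_adjacency) in;   // 6 vertices: the triangle plus its 3 neighbours
layout(triangle_strip, max_vertices = 12) out;
uniform vec3 lightPos;            // light position, in the same space as the input positions

bool facesLight(vec3 a, vec3 b, vec3 c) {
    return dot(cross(b - a, c - a), lightPos - a) > 0.0;
}

void main() {
    vec3 v[6];
    for (int i = 0; i < 6; ++i) v[i] = gl_in[i].gl_Position.xyz;

    if (facesLight(v[0], v[2], v[4])) {
        // Edge v[0]-v[2] is a silhouette edge if its neighbour faces away from the light.
        if (!facesLight(v[0], v[1], v[2])) {
            // ...emit the quad extruded from edge v[0]-v[2] away from the light here...
        }
        // Repeat the test for edges v[2]-v[4] (neighbour v[3]) and v[4]-v[0] (neighbour v[5]).
    }
}
)GLSL";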
|
DirectX Compile Shader from Memory? |
The D3DX APIs are deprecated; you should use the D3DCompile APIs from
D3DCompiler.h instead. They are mostly the same calls with D3DX11 replaced by
D3D, so the transition is simple.
Edit your question to include at least a call stack or an output log, because
without more information it is hard to give a more specific answer.
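For reference, a minimal sketch of compiling HLSL from memory with D3DCompile
(shaderSource, the entry point name and the target profile below are
placeholders, not values from your project):
#include <windows.h>
#include <d3dcompiler.h>
#include <cstring>
#pragma comment(lib, "d3dcompiler.lib")

ID3DBlob* code = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompile(shaderSource, strlen(shaderSource),
                        "myshader.hlsl",      // name used only in error messages
                        nullptr, nullptr,     // no defines, no include handler
                        "PSMain", "ps_5_0",   // entry point and target profile
                        0, 0, &code, &errors);
if (FAILED(hr) && errors)
    OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));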
|
How to pass textures to DirectX 9 pixel shader? |
To use a texture in a pixel shader, you can follow the steps below.
Create the texture in your C/C++ file with D3DXCreateTextureFromFile or a
similar function.
if( FAILED( D3DXCreateTextureFromFile( g_pd3dDevice, "FaceTexture.jpg",
&g_pTexture ) ) )
return E_FAIL;
Declare a D3DXHANDLE and associate it with the texture parameter in your
effect file (you should have compiled your effect file before this step;
effects_ here is a pointer to ID3DXEffect):
texture_handle = effects_->GetParameterByName(0, "FaceTexture");
Set the texture in your render function:
effects_->SetTexture(texture_handle, g_pTexture);
Declare a texture in your shader file:
texture FaceTexture;
Declare a sampler in your shader file:
// Face texture sampler
sampler FaceTextureSampler = sampler_state
{
    Texture = <FaceTexture>;
};
|
GLSL How to show normals with Geometry shader? |
As it seems, you're doing it all in a single pass and you actually emit 6
vertices per incoming triangle. This is not what you want.
Either do it in two passes, i.e. one pass for the mesh, the other for the
normals, or try to emit the original triangle and a degenerate triangle for
the normal. For simplicity I'd go for the two-pass version:
Inside your render loop:
render terrain
if and only if debug geometry is to be rendered
enable your debug normals shader
render the terrain mesh a second time, passing POINTS to the vertex shader
To make this work, you'll need a second program object, set up like in the
blog post you previously linked to, consisting of a simple pass-through
vertex shader, the following geometry shader, and a fragment shader for
coloring the lines representing the normals.
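A rough sketch of what that geometry shader could look like - hedged: the
GLSL version, the vNormal varying name and the assumption that positions
arrive in view space are mine, not from the original post (the shader source
is embedded here as a C++ string):
const char* normalsGS = R"GLSL(
#version 150
layout(points) in;
layout(line_strip, max_vertices = 2) out;
in vec3 vNormal[];                  // view-space normal from the pass-through vertex shader
uniform mat4 projection;
const float normalLength = 0.5;
void main() {
    vec4 p = gl_in[0].gl_Position;  // assumed to already be in view space
    gl_Position = projection * p;
    EmitVertex();
    gl_Position = projection * (p + vec4(normalize(vNormal[0]) * normalLength, 0.0));
    EmitVertex();
    EndPrimitive();
}
)GLSL";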
|
Pixel shader with SharpDX and DirectX toolkit outputting pure red color |
If I'm not wrong, you need to get the pixel data in BGRA format, not RGBA.
Could you check if it works for you?
You can check this article.
Creating a Lens application that uses HLSL effects for filters
|
Geometry shader invocations input layout qualifier |
It's instancing within the shader; it works in a manner almost exactly like
instancing with glDrawArraysInstanced and such. The same input primitive is
processed by num_instances invocations of the GS. Each invocation is
completely separate, just like each instance in instanced rendering is
completely separate.
The only way to tell the difference between one GS invocation and another for
the same input primitive is gl_InvocationID, which will be different for each
invocation within the same primitive.
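As a hedged illustration (a sketch of mine, not from the question), a
geometry shader that declares three invocations per input triangle and uses
gl_InvocationID to route each copy, for example to a different layer of a
layered framebuffer:
const char* instancedGS = R"GLSL(
#version 400 core
layout(triangles, invocations = 3) in;
layout(triangle_strip, max_vertices = 3) out;
uniform mat4 viewProj[3];                 // one matrix per invocation (illustrative)
void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Layer    = gl_InvocationID;    // pick the render-target layer for this copy
        gl_Position = viewProj[gl_InvocationID] * gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
)GLSL";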
|
Passing WebRTC video into geometry with GLSL Shader |
The primary problem here is that you are using the old GLSL reserved words
that were intended for programmable / fixed-function interop. In OpenGL ES
2.0, things like gl_MultiTexCoord0 and gl_TextureMatrix[n] are not defined,
because ES completely removed the legacy fixed-function vertex array baggage
that regular OpenGL has to deal with. Those reserved words exposed the
per-texture-unit matrix and vertex array state; that was their purpose in
OpenGL, and the state they refer to simply does not exist in OpenGL ES.
To get around this, you have to use generic vertex attributes (e.g.
attribute vec2 tex_st) instead of having a 1:1 mapping between texture
coordinate pointers and texture units. Likewise, there is no texture matrix
associated with each texture unit. To duplicate the functionality of texture
matrices, you need to pass your own matrix as a uniform and apply it to the
texture coordinates yourself.
|
corrupted primitives out of geometry shader opengl 3.2 GLSL 150 |
This is a geometry shader driver bug. It took me quite a while to find that
out: I had the same problem on Mac OS X 10.8 with an AMD Radeon HD 6750M.
Switching to the internal Intel HD Graphics 3000 (using gfxCardStatus)
solves the problem, but is of course much slower and doesn't support
multi-monitor setups.
Finally I upgraded to Mac OSX 10.9 Developer Preview 4 and the bug seems to
be gone for good.
|
Vertex shader to create and animate geometry in QQuickItem |
In your subclass of QQuickItem, the overridden updatePaintNode() method
should create (and update when needed) an instance of QSGGeometryNode and set
it up with a QSGGeometry configured for the specific geometry type. That
gives you direct control over a vertex buffer object (just one, but with an
arbitrary layout of vertex attributes) and lets you use your custom shaders.
See "Custom Geometry" example in qt documentation. Full project is in
official repository.
Even more interesting example is "Texture in SGNode". It uses
QQuickWindow::beforeRendering() signal to be able to run completely
arbitrary OpenGL code. In this example custom rendering goes to Frame
Buffer Object. Later this FBO is used as texture in a QSGSimpleTextureNode
subclass.
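A minimal sketch of such an updatePaintNode() override (the class and member
names are illustrative, and the flat-color material stands in for the custom
material that would carry your own shaders):
#include <QQuickItem>
#include <QSGGeometryNode>
#include <QSGFlatColorMaterial>

QSGNode *MyItem::updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
{
    QSGGeometryNode *node = static_cast<QSGGeometryNode *>(oldNode);
    QSGGeometry *geometry = nullptr;

    if (!node) {
        node = new QSGGeometryNode;
        geometry = new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 4);
        geometry->setDrawingMode(GL_TRIANGLE_STRIP);
        node->setGeometry(geometry);
        node->setFlag(QSGNode::OwnsGeometry);
        node->setMaterial(new QSGFlatColorMaterial); // replace with your custom material
        node->setFlag(QSGNode::OwnsMaterial);
    } else {
        geometry = node->geometry();
    }

    // Rewrite the vertex data every frame; the animation goes here.
    QSGGeometry::Point2D *v = geometry->vertexDataAsPoint2D();
    v[0].set(0, 0);
    v[1].set(width(), 0);
    v[2].set(0, height());
    v[3].set(width(), height());
    node->markDirty(QSGNode::DirtyGeometry);
    return node;
}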
|
Directx 11 Front Buffer |
You need to use the IDXGISwapChain::GetBuffer API to retrieve a swap chain
surface (use the uuid of ID3D11Texture2D for the result type).
Now, the swap chain buffers are not mapable, so you need to copy it to a
staging resource.
Use ID3D11Texture2D::GetDesc to retrieve the surface description
Patch it with a D3D11_USAGE_STAGING usage and a cpu access flag of
D3D11_CPU_ACCESS_READ
Create a temporary surface ID3D11Device::CreateTexture2D
Copy to the staging surface ID3D11DeviceContext::CopyResource
You now have an ID3D11Texture2D with the content of your swap chain buffer,
which allows you to use the ID3D11DeviceContext::Map API to read it on the
CPU.
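A rough sketch of those steps in code (error handling omitted; device,
context and swapChain are assumed to already exist):
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                     reinterpret_cast<void**>(&backBuffer));

D3D11_TEXTURE2D_DESC desc;
backBuffer->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;            // patch the description for CPU readback
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.BindFlags = 0;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);
context->CopyResource(staging, backBuffer);

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData now points to the pixels, one row every mapped.RowPitch bytes.
context->Unmap(staging, 0);
staging->Release();
backBuffer->Release();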
|
DirectX - halfs instead of floats in vertex buffer |
Usual graphics card pipelines are optimized to push the high-bandwidth data
around as floats.
I don't think DirectX can handle halfs in vertex buffers at that low level.
I'm backing up my speculation with the documentation of
Device::CreateVertexBuffer and D3DFVF:
CreateVertexBuffer: the format is passed with the FVF parameter. Maybe when
setting this to zero (a non-FVF buffer) it's possible to render the buffer
with custom shaders (but I don't know if your framework would attempt that,
as it may be slower). If it's non-zero:
D3DFVF: all the available vertex formats are float only.
|
DirectX 9 Triangle List drawing through itself, how do I set up a depth buffer? |
The problem was I forgot to clear the zbuffer in the device.Clear call:
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.SlateGray, 1.0f,
0);
and now it works!
|
Three.JS buffer geometry stride length |
What law of nature is it that means you struggle for hours with an issue
then find the answer yourself as soon as you post it here...
Working version: http://jsfiddle.net/EVYJv/1/
The answer was to set up buffer_geometry.attributes with itemSize: 3,
array: new Float32Array(lines * 6) and numItems: lines * 6.
That doesn't entirely make sense - I thought an 'item' was a line with
start and end positions but maybe that's a vertex.
Edit: WestLangly pointed out that you no longer need to specify numItems -
working version with that change here: http://jsfiddle.net/EVYJv/3/
|
Why can't access the G-Buffer from my lighting shader? |
I found the error, and it was such a stupid one. The old rendering pipeline
bound the correct framebuffer before calling the draw function of each pass,
but the new one didn't, so each draw function had to do that itself. I
therefore wanted to update all the draw functions, but I missed the draw
function of the lighting pass.
As a result, the framebuffer of the G-buffer was still bound and the lighting
pass was writing to its targets.
Thanks to you guys; you had no chance of finding that error, since I hadn't
posted my complete pipeline system.
|
Window resizing and scaling images / Redeclaring back buffer size / C++ / DIRECTX 9.0 |
Basically, the issue was I was being a complete derp. I was putting the
window width into my rectangle and then readjusting that size based on
oldwidth / newwidth... well, the new width was already the screen size...
GRRRRRRR.
|
How the buffer byte array is continuously filling while streaming? |
If you're asking why you can read a ~500 MB file with a roughly 1 KB
buffer, it's because you overwrite the contents of the buffer each time you
go through the loop (approximately 500,000 times).
If you're asking how the read function is actually implemented, notice that
the underlying call includes the keyword native. That means that native
code is being called via JNI. The exact implementation is going to be JVM
and OS dependent.
|
Loop recording the directx output in C++/C# |
That's right. You use a circular buffer that is large enough to hold 30
seconds of frame data. In your capture thread you just copy directly into
the next frame in the buffer. And once you've filled the buffer, you just
loop around to the start and begin filling again. The next available frame
is always the oldest.
If you want, you can maintain a head/tail index. But it's easier to just
keep track of the next available index and a flag to say whether the buffer
is full (if the buffer is not full, then the next available index is also
the number of frames that you have available).
When you write the data out to disk, you either need to stop capture or
ensure that you can write fast enough. I/O optimizations are important here -
use unbuffered writes in blocks that are multiples of the disk's sector size.
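A minimal sketch of that ring buffer in C++ (the Frame type and the capacity
are placeholders):
#include <vector>
#include <cstddef>

struct Frame { /* pixel data for one captured frame */ };

struct FrameRing {
    std::vector<Frame> frames;   // sized for ~30 seconds of capture
    std::size_t next = 0;        // next slot to overwrite
    bool full = false;

    explicit FrameRing(std::size_t capacity) : frames(capacity) {}

    void push(const Frame& f) {
        frames[next] = f;
        next = (next + 1) % frames.size();
        if (next == 0) full = true;
    }
    std::size_t count() const { return full ? frames.size() : next; }
    // Once the buffer is full, the oldest frame is the one about to be overwritten.
    const Frame& oldest() const { return full ? frames[next] : frames[0]; }
};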
|
Distance using taxicab geometry: weird output |
Syntax mistake: the second for loop has no brackets. This is OK for
single-line statements, but without brackets, if, for, while, etc. only apply
to the first statement after them (up to the semicolon). Add brackets to
multi-line for loops:
for (i = 0; i < 3; i++)
{
    for (j = 0; j < 3; j++)
    {
        matrix[i][j] = abs(i-1) + abs(j-1) + 1; //taxicab algorithm
        printf("%d ", matrix[i][j]); //prints the matrix
    }
    printf("\n");
}
In your code this was causing the print statements to not be called as
often as you thought.
(I actually recommend always using brackets on all for loops, and most if
statements for this reason)
|
Which combination for uvec2 fragment shader output |
Solved! Thank you, glYoda! Here’s the solution:
GL_RG_INTEGER / GL_RG8UI / GL_UNSIGNED_BYTE
GL_UNSIGNED_INT would work too, but since the internal format is GL_RG8UI,
GL_UNSIGNED_BYTE is the better match.
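For reference, allocating a texture with that combination might look like
this (a hedged example; width and height are placeholders):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8UI, width, height, 0,
             GL_RG_INTEGER, GL_UNSIGNED_BYTE, nullptr);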
|
VS2012 D3D Debugging - Viewing all shader output |
VSGD (Visual Studio Graphics Diagnostics) is, alas, far behind the deprecated
PIX in terms of functionality and worse than PIX in terms of bugs. I
recommend you use NSIGHT 3.1 from NVIDIA or GPUPerfStudio from ATI (depending
on your GPU) if you need a tool that is useful for debugging a real 3D frame.
|
GLSL fragment shader output type |
For a decent explanation of the internal format and format, see:
http://opengl.org/wiki/Image_Format and
http://opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml.
You basically want GL_RED for format and likely want GL_R8 (unsigned
normalized 8-bit fixed-point) for the internal format.
A long time ago, luminance textures were the norm for single-channel, but
that is a deprecated format in modern GL and red is now the logical
"drawable" texture format for single-channels, just as red/green is the
most logical format for two-channel.
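For example, allocating such a single-channel texture might look like this
(a hedged sketch; width and height are placeholders):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, nullptr);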
As for your shader, there are rules for component expansion defined by the
core specification. If you have a texture with 1 channel as an input, but
sample it as a vec4, it will be equivalent to: vec4 (RED, 0.0, 0.0, 1.0).
Writing to the texture is a little b
|
geodjango admin : An error occurred when transforming the geometry to the SRID of the geometry |
Proj4 and the GEOS library were not properly installed! It's simpler when you
just do this:
sudo apt-get install binutils libproj-dev gdal-bin
As the official doc says just before this line: "On Debian/Ubuntu, you are
advised to install the following packages which will install, directly or by
dependency, the required geospatial libraries:"
Then you're sure everything is correctly installed.
Source: link
|
How does vertex shader pass color information to fragment shader? |
The gradient comes from the interpolation between vertex colors that happens
when the varying is passed into the fragment shader. If you don't want it to
interpolate, use the "flat" keyword at the beginning of the varying.
Your misunderstanding probably stems from not knowing how the vertex and
fragment stages work. They work differently: the vertex shader is invoked per
vertex, while the fragment shader runs per fragment. The interpolation
happens by default because the fragments generated during rasterization have
to be covered across the area defined by primitive assembly. As I said, you
can disable the interpolation with "flat"; in that case the color of the
provoking vertex defines the overall color of the shape.
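A minimal sketch of the "flat" idea in modern GLSL (written here as C++
string literals; the names are illustrative, not from the question):
const char* vsSrc = R"GLSL(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
flat out vec3 vColor;               // drop "flat" to get the smooth gradient back
void main() {
    vColor = color;
    gl_Position = vec4(position, 1.0);
}
)GLSL";

const char* fsSrc = R"GLSL(
#version 330 core
flat in vec3 vColor;                // the qualifier must match the vertex shader output
out vec4 fragColor;
void main() { fragColor = vec4(vColor, 1.0); }
)GLSL";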
|
Is it possible to buffer the output of $app->run from silex? |
Instead of splitting the page into header/main/footer and using global
variables, you should use a single template with template inheritance.
http://twig.sensiolabs.org/doc/templates.html#template-inheritance
|
Decoding the output buffer from MediaCodec |
The MediaCodec color formats are defined by the
MediaCodecInfo.CodecCapabilities class. 256 is used internally, and
generally doesn't mean that you have a buffer of JPEG data. The confusion
here is likely because you're looking at constants in the ImageFormat
class, but those only apply to camera output. (For example,
ImageFormat.NV16 is a YCbCr format, while COLOR_Format32bitARGB8888 is RGB,
but both have the numeric value 16.)
Some examples of MediaCodec usage, including links to CTS tests that
exercise MediaCodec, can be found here. On some devices you will not be
able to decode data from the ByteBuffer output, and must instead decode to
a Surface.
|
Node.js: Differing buffer output |
From the Mozilla Javascript Docs:
Every object has a toString() method that is automatically called when
the object is to be represented as a text value or when an object is
referred to in a manner in which a string is expected.
Node.js buffer docs: Buffer#toString.
|
Calculating z-buffer from glm::project output |
You applied the formula in that Wikipedia article to the wrong values. You
already applied the projection matrix with glm::project, which is what the
z' = ... formula does. So you basically apply the projection matrix twice
in your code.
The depth buffer values in OpenGL are in window coordinates, and they are
in the range [n,f], where n and f are set using glDepthRange(n, f)
(defaults are 0 and 1). You can read up on this in section 13.6.1 of the
spec. These
values have nothing to do with the zNear and zFar value used in the
projection matrix.
glm::project simply assumes these default values, and, since it outputs
window coordinates, that's the value that's written to the depth buffer. So
the correct code is simply:
float zBufferValue = screenCoords.z;
|
Octave output buffer completely messed up on OS X. How to fix? |
My guess is that you end up running the Emacs that comes bundled with Mac
OS X (an old version of Emacs that only works in text mode) and you want to
change that by installing a more recent version that can run in the GUI.
But that's just a wild guess.
|
increasing standard output buffer size |
You need to supply a buffer to setvbuf() for it to work.
static char buf[50000]; /* buf must survive until stdout is closed */
setvbuf ( stdout , buf , _IOFBF , sizeof(buf) );
From the man page:
int setvbuf(FILE *stream, char *buf, int mode , size_t size);
...
Except for unbuffered files, the buf argument should point to a buffer
at least size bytes long; this buffer will be used instead of the current
buffer. If the argument buf is NULL, only the mode is affected; a new
buffer will be allocated on the next read or write operation.
Here is a sample program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main (int argc, char *argv[]) {
    char msg[42000];
    static char buf[50000];   /* static so it survives until stdout is flushed/closed */
    setvbuf(stdout, buf, _IOFBF, sizeof(buf));
    memset(msg, 'a', sizeof(msg) - 1);
    msg[sizeof(msg) - 1] = '\0';
    puts(msg);                /* the output sits in buf until it fills or is flushed */
    return 0;
}
|
Clear the output buffer in telnetlib on Python 3.2 |
You should only be seeing things like that if you've enabled debug mode on
that instance of the Telnet class.
Debug mode is off by default, so unless you've changed telnetlib.py,
there's no way you can get that output with the code block you posted in
the question.
Either way, you can explicitly disable it with...
tn = telnetlib.Telnet()
tn.set_debuglevel(0)
tn.open(host)
tn.write(command.encode('ascii') + b"\n")
# etc.
|
Text output sent to file and used later in the program. How to use buffer instead? |
So, I think what you are looking for is popen (or _popen on Windows), which
will allow you to read the standard output of another process.
You'd do something like FILE *fout = popen(micromegas.c_str(), "r");
instead of the system and fopen lines.
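A small hedged sketch of reading that output line by line (the function name
is illustrative; micromegas is the std::string holding the command, as
above):
#include <cstdio>
#include <string>

void readMicromegasOutput(const std::string& micromegas) {
    FILE* fout = popen(micromegas.c_str(), "r");
    if (!fout)
        return;
    char line[4096];
    while (fgets(line, sizeof(line), fout)) {
        // parse each line of the program's output here instead of reading a file
    }
    pclose(fout);
}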
|
Making an output buffer return more readable/maintainable |
This is basically the same as above:
sub query_db {
my $cmd = "cat /tmp/sql.$$ | db.sh -d ~~~ $db";
my @out = qx/$cmd/;
chomp(@out);
return @out;
}
foreach my $row (query_db($sql, "database")) {
blah
}
Whoever wrote the original script didn't know that an array can be chomped
all at once, so they used map to chomp line by line.
The & in front of the subroutine call is unnecessary, and other than that I
wonder why parameters are passed to query_db when the subroutine doesn't use
them.
|
How can I re allocate my output buffer for zlib's inflate function? |
What zlib examples are you referring to? The ones I know of make no such
assumption. You tell inflate() how much space is in your output buffer,
avail_out, and it will only write that much decompressed data. The buffer
is not overrun. You then do whatever you need to do with that data, and
call inflate() again, reusing the same buffer by resetting next_in and
avail_in.
You should read this heavily annotated example of how to use zlib.
Do you want to read it all into a single buffer using, as it seems you are
implying, realloc()? In that case, you simply see that avail_out has gone
to zero, reallocate the buffer, update avail_out (and next_out since the
realloc() may have moved the buffer) and call inflate() again.
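A hedged sketch of that single-buffer variant (it assumes the whole
compressed input has already been placed in strm.next_in / strm.avail_in and
that strm was set up with inflateInit(); error checks are trimmed):
#include <cstdlib>
#include <zlib.h>

// Inflate everything into one growing buffer; returns the buffer, its size in *outSize.
unsigned char* inflateAll(z_stream& strm, size_t* outSize) {
    size_t cap = 16384, used = 0;
    unsigned char* out = static_cast<unsigned char*>(std::malloc(cap));
    int ret = Z_OK;
    while (ret == Z_OK) {
        if (cap - used == 0) {                            // previous pass filled the buffer
            cap *= 2;
            out = static_cast<unsigned char*>(std::realloc(out, cap));
        }
        strm.next_out  = out + used;                      // realloc may have moved the block,
        strm.avail_out = static_cast<uInt>(cap - used);   // so recompute next_out / avail_out
        ret  = inflate(&strm, Z_NO_FLUSH);
        used = cap - strm.avail_out;                      // total bytes decompressed so far
        if (ret == Z_OK && strm.avail_out != 0)           // input exhausted, no stream end
            break;
    }
    *outSize = used;                                      // ret == Z_STREAM_END on success
    return out;
}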
|
Getting access to current Output Sample Buffer in AVFoundation |
From the docs:
The imageBuffer parameter must be in one of the following formats:
kCVPixelFormatType_32ARGB
kCVPixelFormatType_422YpCbCr8
kCVPixelFormatType_32BGRA
You can try the route Image Buffer → IOSurface → CIImage instead.
Maybe the surface-based CIImage initializer does some implicit conversion:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
IOSurfaceRef surface = CVPixelBufferGetIOSurface(imageBuffer);
CIImage* ciImage = [[CIImage alloc] initWithIOSurface:surface];
If this doesn't work, you could reconfigure your output settings so that
the sample buffers are provided in one of the supported pixel formats.
|
how to select closest class and its id with closest method |
You can find it as the previous sibling:
$(this).prev(".container")
Then grab its id:
$(this).prev(".container").prop("id");
Edit: to get its width:
$(this).prev(".container").width();
|
.Call mdsplib - METAR - Buffer Overflow - what does this valgrind output mean? |
Right. I tracked down the bug with some printf statements.
Basically I had to replace the following code in parseCldData of dcdmetar.c
of the mdsplib library
from
strncpy(Mptr->cldTypHgt[next].other_cld_phenom,token+6,
(strlen(token)-6));
to
strncpy(Mptr->cldTypHgt[next].other_cld_phenom,token+6, min(4,
(strlen(token)-6)));
as Mptr->cldTypHgt[next].other_cld_phenom seemed to be of type char
other_cld_phenom[4];
and in my example METAR cases the cloud phenomena were larger than 6
characters wide.
I've just sent the bug report to the author of mdsplib.
Now, looking back at the valgrind output, it wasn't pointing at the cause of
the problem at all. I'm wondering if there are better ways to debug this than
using printf.
|
Android MediaCodec 3gpp encoder output buffer contains incorrect bytes |
To all the fellow sufferers out there, answering my own question.
The real issue was actually feeding the raw PCM data to the encoder input.
The Android docs are vague about how exactly to feed the data into the input
buffer (OK, it actually has more to do with ByteBuffer behaviour, to be
honest):
int inputBufferIndex = codec.dequeueInputBuffer(timeoutUs);
if (inputBufferIndex >= 0) {
// fill inputBuffers[inputBufferIndex] with valid data
...
codec.queueInputBuffer(inputBufferIndex, ...);
}
My interpretation was to add data as following:
inputBuffers[inputBufferIndex].clear();
inputBuffers[inputBufferIndex].put(audioPCMbuffer);
codec.queueInputBuffer(inputBufferIndex, ...);
The above code has one bit missing: flip the position of the ByteBuffer!
inputBuffers[inputBufferIndex].flip();
|
converting script shader to js shader |
The question is not very clear, but I believe you are a little bit confused
about the basic concepts.
Shaders are not supposed to be converted to JavaScript. They are written in
GLSL, a language the browser also understands and passes on to the display
driver.
Uniforms are the way you pass variables between JavaScript code and GLSL
shaders, so uniforms are the only part you need to care about on the
JavaScript side.
The rest of the code in the shader scripts is GLSL shader code; it can't be
shared with or converted to JavaScript, and if you want to change its
behaviour you need to modify the shader itself.
|
Logical error in filling QTableWidget and filling all of nodes |
Try commenting out the line for column in range(5): and setting column to 0.
There seems to be no point in running that loop, because you are manually
incrementing the column number in which you want the item to be added.
One more thing: there is no point in looping over query either, because what
appears in a particular row is whatever comes out of query last. Plus, you
are needlessly choking memory by creating (len(query) - 1) * 5 items which
could potentially never be used again. Better to comment out for result in
query: and replace it with result = list(query)[-1].
|
C# BinaryReader.ReadChar throws "System.ArgumentException: The output char buffer is too small" when reading NetworkStream |
I had this issue too. And here are some facts about it:
System.ArgumentException: The output char buffer is too small to contain
the decoded characters, encoding 'Unicode (UTF-8)' is known to be related to
a UTF-8 encoding problem (an invalid character code) rather than to a
buffering problem - details here.
NetworkStream (Read and other methods) is known to return only the amount of
bytes already present in the system network buffers, instead of blocking
until all requested data has been received - details here. So one needs to
call Read in a loop to get all the requested data.
BinaryReader is known to throw an exception when it gets less data from the
NetworkStream than it expected, instead of using a loop to retrieve the rest
(and YES, I am sure this means a bug!) - details here.
So, my solution
|