(3D object to JSON to WebGL) Will I be able to manipulate the skin of that 3D model from WebGL/JavaScript?
Yes, you will be able to manipulate the material. Take a look at: http://threejs.org/examples/#webgl_materials_cars As you can see, each car is a single object that also holds information about which triangles of the model correspond to which exported material. There are many different ways to control the appearance of the object, but the answer remains the same - it is possible; it's up to you to decide how you are going to do it. Hope this helps.

Categories : Javascript

What is the point of being able to attach / link shaders multiple times?
"you get the best performance by compiling the shaders just long enough to attach them to a program, and then it is recommended you delete the shader itself to free memory and just track the program until it's deleted"

This activity has no bearing on rendering performance; it may help reduce memory usage by the driver, but it won't help you get triangles on the screen faster. The process of compiling shaders is modeled after compiling programs, and your comment is equivalent to deleting object files after you've compiled the full executable.

"What's the purpose of being able to attach shaders and re-link the program after it's been created?"

This lets you skip recompiling all of the shaders required for a program and only relink the compiled shaders (which are different objects and can be attached to more than one program).
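To make the compile/attach/delete flow concrete, here is a minimal sketch in C (assuming vs_source and fs_source hold the GLSL strings; error checking omitted):

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vs_source, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fs_source, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

/* Safe to delete now: shader objects attached to a program are only
 * flagged for deletion and live until detached, so the program keeps
 * working. The same compiled fs could also be attached to a second
 * program and linked again - no recompilation needed. */
glDeleteShader(vs);
glDeleteShader(fs);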

Categories : Performance

Multiple subroutine types defined in the same fragment shader do not work correctly using GLSL shaders
glUniformSubroutines sets all of the subroutines for a shader stage, not just one of them. See, when OpenGL links your program, it takes all of the subroutine uniforms and builds an array out of them. Each uniform has an index into this array. If you want to figure out what the index for a particular subroutine uniform in the array is, you need to call glGetSubroutineIndex. Alternatively, assuming you have 4.3/ARB_explicit_uniform_locations (which admittedly AMD is rather slow on), you can just set this directly with the layout(location = #) qualifier. That way, you don't have to query it. Once you know what index each subroutine uniform refers to, you can then set all of the subroutine uniforms for a stage with a single call to glUniformSubroutines. You build up a short array, where each element holds the subroutine index chosen for the uniform at that location, and pass it in one call.
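A sketch of that array-building step in C (GL 4.0+; "uColorMode" and "Diffuse" are hypothetical names for a subroutine uniform and a subroutine function):

GLint loc  = glGetSubroutineUniformLocation(prog, GL_FRAGMENT_SHADER, "uColorMode");
GLuint idx = glGetSubroutineIndex(prog, GL_FRAGMENT_SHADER, "Diffuse");

GLint count;  /* count must cover EVERY subroutine uniform in the stage */
glGetProgramStageiv(prog, GL_FRAGMENT_SHADER,
                    GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, &count);

GLuint indices[16];           /* fill a slot for each subroutine uniform */
indices[loc] = idx;           /* slot 'loc' gets function 'idx'          */

/* Must be called while the program is in use, after glUseProgram(prog). */
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, count, indices);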

Categories : Opengl

How to send multiple parameters to the vertex shader on WebGL?
There are a number of things wrong with this shader if you intend to use it in WebGL. For starters, WebGL is based on OpenGL ES 2.0, which uses a dialect of GLSL derived from desktop GLSL 1.20. Your #version directive is invalid for WebGL; you cannot use in, out, or smooth for vertex attributes or varying variables; and there is no layout qualifier. This will get you part of the way to fixing your shader:

#version 100
attribute vec4 position;
attribute vec4 color;

varying vec4 theColor;

void main()
{
    gl_Position = position;
    theColor = color;
}

But you will also need to bind the attribute locations for position and color in your code, before linking your shaders - see glBindAttribLocation (...). If you are having difficulty finding tutorials for WebGL / ESSL, you can re-use many OpenGL ES 2.0 resources.
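A sketch of that binding step (desktop GL shown in C; WebGL's gl.bindAttribLocation(program, index, name) works the same way):

glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "color");
glLinkProgram(program);   /* bindings only take effect at link time */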

Categories : Javascript

In HTML5, where is the line between WebGL drawing and non-WebGL drawing?
It's WebGL if they're using a WebGLRenderingContext. See: https://www.khronos.org/registry/webgl/specs/1.0/ Example 1 from that document shows:

var canvas = document.getElementById('canvas1');
var gl = canvas.getContext('webgl');

Categories : HTML

What shaders are in the three.js ShaderLib?
As of r59, the three.js shaders available through ShaderLib are: basic, lambert, phong, particle_basic, dashed, depth, normal, normalmap, cube, and depthRGBA. The shaders in ShaderLib are listed and defined here: https://github.com/mrdoob/three.js/blob/r59/src/renderers/WebGLShaders.js#L1936 - this includes their linked uniforms and included shader chunks, as well as, in a few cases, fragment and vertex shader definitions. They are so far unmentioned in the documentation, and for some reason a search in the repo for ShaderLib turns up empty, apparently because of deficiencies in GitHub search.

Categories : Three Js

What are shaders in openGL and what do we need them for?
Shaders essentially compute the correct coloring of the object you want to render, based on various lighting equations. So if you have a sphere, a light, and a camera, the camera should see some shadows, some shiny highlights, and so on, even if the sphere has only one color. Shaders perform the lighting computations that give you these effects. The vertex shader transforms each vertex's 3D position in virtual space (your 3D model) to the 2D coordinate at which it appears on the screen. The fragment shader then computes the color of each pixel by evaluating the lighting equations.
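To make the two roles concrete, here is a minimal, illustrative shader pair, embedded as C string literals (a sketch only; names like uMVP and aPosition are made up, and uLightDir is assumed to be a normalized direction):

const char *vs_src =
    "uniform mat4 uMVP;"                    /* model-view-projection matrix */
    "attribute vec3 aPosition;"
    "attribute vec3 aNormal;"
    "varying vec3 vNormal;"
    "void main() {"
    "    gl_Position = uMVP * vec4(aPosition, 1.0);"   /* 3D -> screen */
    "    vNormal = aNormal;"
    "}";

const char *fs_src =
    "varying vec3 vNormal;"
    "uniform vec3 uLightDir;"
    "void main() {"                         /* simple diffuse light equation */
    "    float d = max(dot(normalize(vNormal), uLightDir), 0.0);"
    "    gl_FragColor = vec4(vec3(d), 1.0);"
    "}";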

Categories : Opengl

Using different shaders on the same model at runtime
You could call glUseProgram(program) (specification here) with the intended shader program before rendering your object. You probably want to use the _program variable that you already have. You can then change which variables (uniforms/arrays) you set based on which shader you're using. I'm not sure about "attaching and detaching shaders", but to answer your efficiency question: most people tend to group their "models" by shader, to minimize the calls to glUseProgram(). This also means you only have to set uniforms like bloomQualityUniform once per frame, instead of once per model that uses that shader. Edit: Here's an example (based on your example) which would allow you to choose the shader at runtime using an enum (the snippet breaks off here; a sketch of a possible completion follows):

enum MyShaderEnum { DEFAULT, CRAZY }
void RendererGL
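A possible completion of that cut-off example, as a C-style sketch (program and uniform names are guesses based on the fragment above, not the original code):

enum MyShaderEnum { DEFAULT, CRAZY };

void render(enum MyShaderEnum type)
{
    if (type == DEFAULT) {
        glUseProgram(_program);
        /* set uniforms that only the default shader needs */
    } else if (type == CRAZY) {
        glUseProgram(_crazyProgram);        /* hypothetical second program */
        glUniform1f(bloomQualityUniform, 0.5f);
    }
    /* ... bind buffers and draw as before ... */
}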

Categories : Opengl

Get a vignette effect by shaders
My first answer got deleted because I pointed to my solution via a link, so I'll answer your question right here. The trick is to use four linear gradients. Applying them will give you a result that comes pretty close to a true vignette effect. And it's freaking fast too ;) So here's part of my solution. First you must create a canvas:

Canvas canvas = new Canvas(bitmapOut);
canvas.drawBitmap(mVignette.getBitmapIn(), 0, 0, null);

Then you define how far the effect should reach into your image:

int tenthLeftRight = (int)(width/5);
int tenthTopBottom = (int)(height/5);

Now you create your four shaders:

// Gradient left - right
Shader linGradLR = new LinearGradient(0, height/2, tenthLeftRight/2, height/2, Color.BLACK, Color.TRANSPARENT, Shader.TileMode.CLAMP);
//

Categories : Java

Shaders : Best practice to store them
I'm not very good at WebGL, but I have done some things with it and asked myself the same question. There are three common ways that I know of. The first is the basic one you presented here. Yes, it is messy and complicated, but it has some advantages: the shader code is easy to maintain. The second way I found is to put the shader code into an array and then join the array immediately, so it becomes a string. After that, you can pass the string to the shader via gl.shaderSource. This technique is pretty common in the three.js JavaScript library, which contains plenty of shader code. It keeps the shader human-readable and not as messy as the first approach, but maintaining the shader code is a little bit harder, as you can probably see. The main point is that it allows you to keep everything in one single JavaScript file, which is desirable.

Categories : Javascript

Cocos2d blur with shaders
I have just started to play a little bit with shaders myself. There's a lot of material on the web to read and try out. I'll point you to some URLs I found useful for understanding how they work and what they do - that might get you started. A simple tutorial for achieving a greyscale effect with shaders (Cocos2D): http://www.shaderdev.com/2013/09/16/full-scene-shader-effects-how-to-create-a-grayscale-pause-screen-using-ccrendertexture/ A coding-experiments blog post with a great-looking shader effect - this is the source of the Cocos2D shader I share below: http://coding-experiments.blogspot.com/2010/06/frosted-glass.html With those you are surely on your way. Feel free to use the shaders below too if you find them useful; they were taken from the second URL.

Vertex shader:

attribute vec4 a_position;

Categories : IOS

Is there a state that needs to be reset in between shaders
The render state is controlled by four state objects:

GraphicsDevice.BlendState
GraphicsDevice.DepthStencilState
GraphicsDevice.RasterizerState
GraphicsDevice.SamplerStates[] // one for each sampler

Their introduction in XNA 4 is explained in this blog post. All state changes go through these variables OR can be set in a .fx file. IIRC, XNA's built-in Effect objects don't set state using either method - although SpriteBatch does. I can't say for sure what is setting the state in your case from the code you've provided. Normally I would guess that SpriteBatch is the culprit (see this blog post) - as this comes up a lot. But maybe it's something in your shaderEffect. In any case, it's perfectly reasonable to simply set the states you want before rendering. Here's the typical states f

Categories : C#

OpenGL Shaders Don't Seem To Be Working
You only call:

GL20.glEnableVertexAttribArray(0)

but you use:

layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec3 inColor;

which leads me to think you should also call:

GL20.glEnableVertexAttribArray(1)

unless I am seriously mistaken.
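In C terms (LWJGL's GL20 calls map one-to-one), a sketch of the pattern: enable one array per 'in' attribute before drawing, then disable afterwards.

glEnableVertexAttribArray(0);   /* inPosition */
glEnableVertexAttribArray(1);   /* inColor    */
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(0);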

Categories : Scala

Why doesn't Xcode recognize these shaders?
You've almost got it - Xcode needs to know these are resources that need to be copied over, not source files. No need to edit project files by hand; it's easy to fix in Xcode:

1. Select your project (from the column on the left)
2. Select your target
3. Select the Build Phases tab
4. If the shaders appear in "Compile Sources", remove them
5. Add your shaders to "Copy Bundle Resources"

That's it! Keep in mind, you'll have to do this for every new shader you add.

Categories : IOS

Fixing GLSL shaders for Nvidia and AMD
"Is it possible to check if a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it?"

No. If you are going to be serious about developing applications, testing on a variety of hardware is the only reliable way to go about it. And if you're not going to be serious, then who cares. Generally speaking, the easiest way to handle this for a small team is to avoid the problem altogether. Most driver incompatibilities come from attempting to do something unorthodox: passing arrays as output/input varying variables, passing matrices as attributes, using more recent driver features, etc. So... don't do that. Use only solid, safe features that have been around in GLSL for a long time and have almost certainly been used in real-world OpenGL applications.

Categories : Opengl

Compose two Shaders (Color Picker)
This has to do with Android 4+ having hardware acceleration (HA) enabled by default. Before 4.0 you could optionally enable HA in your AndroidManifest.xml. Hardware acceleration carries out all drawing operations performed on a View's canvas using the GPU, which is good because it is supposed to be faster. But when HA is enabled, some drawing methods are simply not supported, so with acceleration you can use only a subset of the drawing methods. In your case the problem is, as the documentation says (see Unsupported Drawing Operations), that ComposeShader can only contain shaders of different types (a BitmapShader and a LinearGradient, for instance, but not two instances of BitmapShader). Disabling hardware acceleration: you can disable HA for your whole application in your AndroidManifest.xml by setting android:hardwareAccelerated="false" on the <application> element, or for a single view by calling setLayerType(View.LAYER_TYPE_SOFTWARE, null) on it.

Categories : Java

Shaders not outputting anything when attribute location != 0
What you are seeing should not be possible. GLSL version 1.10 does not support the layout syntax at all, so your compiler should have rejected the shader. Therefore, either your compiler is not rejecting the shader and is therefore broken, or you are not loading the shader you think you are. If it still doesn't work when using GLSL version 3.30 or higher (the first core version to support the layout(location = #) syntax for attribute indices), then what you're seeing is the result of a different bug. Namely, the compatibility profile implicitly states that, to render with vertex arrays, you must use either attribute zero or gl_Vertex. The core profile has no such restriction. However, this restriction was in GL for a while, so some implementations still enforce it, even on the core profile where it no longer applies.

Categories : C++

Saving up variables to process later on shaders
"The problem is, I don't know what this 'somewhere' is. I have no idea how to save a variable in one vertex shader invocation and then use it in the next."

This is something you can't do - it takes something beyond the vertex shader stage to accomplish it. Simply consider that, conceptually, all vertices are processed by the GPU through the vertex shader at the same time. This is the model you should have in mind when working with the vertex stage, and with it in mind, it should be clear why what you want can't be done that way.

Categories : Android

Proper way to manage shaders in OpenGL
The problem is your engine architecture - or rather its lack. In many game engines, game objects are divided into several categories, such as a Mesh object, which takes care of geometry (vertex buffers etc.), and a Material object, which is responsible for the appearance of the mesh. In such a design the material is usually the entity that contains the info for the uniforms being passed into the shaders during rendering. This design allows a good amount of flexibility, as you can reuse and reassign materials between different renderable objects. So for a start, you can assign a specific shader program to a specific material type, so that each time a mesh is drawn, its material interacts with the shader program, passing in all the needed uniforms. I would suggest you take a look at an open source OpenGL engine to get an idea of how this is structured; a rough sketch of the idea follows below.
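A rough C sketch of that mesh/material split (all names are illustrative, not from any particular engine; caching the uniform location instead of querying it per draw would be the next refinement):

typedef struct {
    GLuint  program;         /* the shader this material drives    */
    GLfloat diffuse[4];      /* uniform data owned by the material */
    GLuint  texture;
} Material;

typedef struct {
    GLuint    vao;           /* geometry only - no appearance data     */
    GLsizei   indexCount;
    Material *material;      /* reusable / swappable between meshes    */
} Mesh;

void draw_mesh(const Mesh *m)
{
    glUseProgram(m->material->program);
    glUniform4fv(glGetUniformLocation(m->material->program, "uDiffuse"),
                 1, m->material->diffuse);
    glBindVertexArray(m->vao);
    glDrawElements(GL_TRIANGLES, m->indexCount, GL_UNSIGNED_INT, 0);
}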

Categories : Opengl

Copy texture sub rectangles using shaders and rtt
Solution: the 0.5/tex_width part in the definition of the texcoords was wrong. An easy way to work around it is to remove that part completely:

float texcoords[4] = { (srcRect[0] * (src_width - 1) + 0.5) / src_width,
                       (srcRect[1] * (src_height - 1) + 0.5) / src_height,
                       (srcRect[2] * (src_width - 1) + 0.5) / src_width,
                       (srcRect[3] * (src_height - 1) + 0.5) / src_height };

Instead, we draw a smaller quad, by offsetting the vertices by:

float dx = 1.0 / (dest_rect[2] - dest_rect[0]) - epsilon;
float dy = 1.0 / (dest_rect[3] - dest_rect[1]) - epsilon;

// assume glTexCoord for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);

Categories : C++

Which is the best way to handle OpenGLES shaders as part of an API for iOS?
One way to avoid having to bundle the shader files with your framework or static library is to embed them as string constants. I do this in this project using the following macros:

#define STRINGIZE(x) #x
#define STRINGIZE2(x) STRINGIZE(x)
#define SHADER_STRING(text) @ STRINGIZE2(text)

This lets me then do something like the following:

NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;

    void main()
    {
        gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
    }
);

and use that NSString constant to provide the shader for my programs where needed. Shader files are then not needed, which simplifies the distribution process. You don't get as specific syntax highlighting inside these string constants, though.

Categories : IOS

OpenGL Normal transformation in shaders
vec3 lightDir = vec3(1,1,0) is constant, while normal is transformed every time the camera changes. To prevent this, make sure lightDir and normal are in the same space: either compute the dot product before transforming normal, or transform lightDir by gl_NormalMatrix and then compute the dot product. In addition to the effect you want to achieve, you have an issue in the fragment shader: normal is not normalized, because a linear interpolation of unit vectors won't always produce a unit vector. What you should do is something like:

color * dot(normalize(normal), transform_to_same_space(lightDir)) * inten;

Some minor issues in the second part: you're declaring modelMatrix without a type, and you cannot transform a vec3 with a 4x4 matrix - you can pad gl_Normal with a 0 (since it's a direction vector) or cast the matrix down to mat3.

Categories : Opengl

Correct place to set up shaders for GLKView?
You are right that the earliest place you can easily set up your shaders is in the drawRect method, because there must be a valid GL context current. Per the GLKView documentation: "Before calling its drawRect: method, the view makes its EAGLContext object the current OpenGL ES context and binds its framebuffer object to the OpenGL ES context as the target for rendering commands." So, the easiest thing to do is hang onto some information, like the program handle, and only initialize if it is still zero:

if (program == 0)
    program = setupShaders(vsh, fsh);

If you don't like this approach, you can consider initializing your GLKView with a context that you provide, or overriding bindDrawable. Or you could skip GLKView and do things manually...

Categories : IOS

Using shaders for long computations without causing lag
It is correct that performing that 10-second operation would block your GPU. In my experience you can get crazy behaviour or outright crashes from your GPU driver when your calculations take that long, especially when you try to mix UI and DirectX content. I suggest that you continue down the path of splitting the calculation into batches with smaller thread counts, and look for other avenues to optimize your calculations, or even refactor your code.

Categories : Wpf

blending two fragment shaders with opengl/glsl
Use the blending operations - see here (yes, it's an old link, but it's still valid). Draw the underlying triangles with the first shader, then enable blending and draw the other triangles with the second shader; note also that the second shader must write the alpha value (e.g. 0.7). About your terminology: saying that a texture has a shader is plainly wrong. Shaders fetch textures, so what you are really describing is a shader that does operations on textures.
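A sketch of that two-pass blend in C (program names and draw helpers are hypothetical):

glUseProgram(opaqueProgram);
drawUnderlyingTriangles();

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(overlayProgram);         /* must write alpha, e.g. 0.7 */
drawOverlayTriangles();
glDisable(GL_BLEND);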

Categories : Javascript

Front to back rendering vs shaders swapping
There is no one answer to this question. Does changing OpenGL state "cause overhead"? Of course it does; nothing is free. The question is whether the overhead caused by the state changes will be worse than the less effective use of the depth test. That cannot be answered, because the answer depends on how much overdraw there is, how costly your fragment shaders are, how many state changes a particular sequence of draw calls will require, and numerous other intangibles that cannot be known beforehand. That's why profiling before optimization is important.

Categories : Opengl

OpenGL - GLSL Shaders, Alpha blending
I ended up doing this:

vec4 tex = texture(texture1, pass_TextureCoord);
out_Color = vec4(tex.r * pass_Color[0],
                 tex.g * pass_Color[1],
                 tex.b * pass_Color[2],
                 tex.a * pass_Color[3]);

Works just fine!

Categories : Java

OpenGL translation without glPush and glPop through shaders.
This is a common problem to solve when moving from the fixed-function pipeline to modern core OpenGL. In general you have to update the matrices yourself each time you want to 'move' geometry:

glUseProgram(shader_id);
bind_textures();
bind_vaos_and_buffers();

matrix = math_library::createTranslationMatrix(x, y, z);
glUniformMatrix4fv(matrix_location, 1, GL_FALSE, matrix);
glDraw*(...);

matrix = math_library::createTranslationMatrix(x1, y1, z1);
glUniformMatrix4fv(matrix_location, 1, GL_FALSE, matrix);
glDraw*(...);

In general: each time you want to draw something, create a proper matrix, send it to the active shader, and then call a draw command. Note that this way a single set of geometry data is used, and we simply draw it several times with several different transformations. math_library stands for whatever vector/matrix math library you use.

Categories : Opengl

OpenGL 2.0 Bug - Setup and shaders work, won't draw
It seems to me that your points are drawn outside the viewport. OpenGL uses normalized device coordinates in the range [-1..1] for both X and Y. You're taking the position of the touch in screen space, which is different - e.g. for an iPad it is [0..1024] and [0..768]. In order to draw those points you have to convert them from screen space to NDC space:

NDC_x = 2*(touch_x/screen_size_x) - 1;
NDC_y = 2*(touch_y/screen_size_y) - 1;

Categories : IOS

Trouble initializing shaders before resize on wxGLCanvas
In wxWidgets there is no specific method to initialize OpenGL that gets called before everything else but after the window is shown. You can roll your own with a member variable that indicates whether OpenGL has been initialized, and do your initialization in the Paint event handler if the variable is false. In my experience it is safest to issue all OpenGL commands only in the Paint event handler, so in your Size event handler you should just save the new viewport size, and update the viewport and projection matrix in your Paint handler (or query the viewport size there using wxGLCanvas' GetClientRect() method).

Categories : C++

OpenGL ES 2 on iOS: NULL Fragment and Vertex Shaders
As seen here - http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/DeterminingOpenGLESCapabilities/DeterminingOpenGLESCapabilities.html#//apple_ref/doc/uid/TP40008793-CH102-SW1 - check the following:

Check for extensions before using them.
Is the encoding of the shader files ASCII or UTF-8?
Call glGetError to test for errors.

Because your code seems perfectly normal, try calling glGetError(...) and logging the result with NSLog(...) to find out more about the errors (if there are any).
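A sketch of draining the GL error queue after a suspect call (C shown; swap printf for NSLog on iOS). Several errors can be pending, so loop until GL_NO_ERROR comes back:

/* #include <stdio.h> */
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    printf("GL error: 0x%04X\n", err);
}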

Categories : IOS

PyOpenGL - A tutorial about shaders that doesn't work
It is failing because you don't have GLUT installed. GLUT is the GL Utility Toolkit, a very widely used simple framework for creating OpenGL programs that runs on MS Windows, Macs, Linux, SGI, ... All its functions and constants have a glut or GLUT_ prefix, and glutInitDisplayMode is usually the first function that gets called. OK, you've done the install and it still doesn't work. That means that while you have GLUT installed, the Python program can't load the GLUT DLL. Dynamic linking, oh joy. Find where glut32.dll got installed. A quick and dirty solution for a single program is to copy the glut DLL into the same directory as the program itself. GLUT is 32-bit (AFAIK, unless you built it yourself) and this can be tricky if you have a 64-bit version

Categories : Python

Why is it necessary to detach and delete shaders after creating them in OpenGL?
From the OpenGL doc on glDeleteShader: If a shader object to be deleted is attached to a program object, it will be flagged for deletion, but it will not be deleted until it is no longer attached to any program object, for any rendering context (i.e., it must be detached from wherever it was attached before it will be deleted) So, it's more like decrementing a reference counter than an actual deletion.

Categories : C++

Combine two shaders' properties into one shader in OpenGL ES 2.0?
You are mixing up two concepts. Set the line color as a uniform vec4: your first shader reads the line color as a 4-element vector. This is perfectly valid, but you should use a better variable name, for example uniform vec4 lineColor;. In your Java code, you then set the uniform value with:

lineColorHandle = GLES20.glGetUniformLocation(sProgram, "lineColor");
GLES20.glUniform4fv(lineColorHandle, 1, LineColor, 0);

Set the texture unit as a uniform sampler2D: your second shader has uTex0 declared as uniform sampler2D uTex0;. In this case, uTex0 is not a 4-element vector. It is an integer index that tells your shader which texture unit uTex0 is bound to. You should set the uniform like this:

texHandle = GLES20.glGetUniformLocation(sProgram, "uTex0");
GLES20.glUniform1i(texHandle, 0); // 0 is the default texture unit (GL_TEXTURE0)

Categories : Android

GLSL Shaders - Fragment shader not compiling
I retrieved the log by using this:

GL20.glGetShaderInfoLog(shaderID, GL20.glGetShaderi(shaderID, GL20.GL_INFO_LOG_LENGTH));

and it said that I was trying to assign a value to a varying variable. Basically, I can't change variables that have an 'in' qualifier on them. In this case I was trying to add pass_Velocity to pass_TextureCoord, and therefore to change its value, which is not allowed.

Categories : Java

Which graphics card is used when working with OpenGL shaders over RDP?
That depends on your remote system's configuration. Your usual consumer GPU with standard drivers will not provide any HW acceleration for RDP whatsoever, and you'll drop into a SW emulation mode. However, there are special visualization-server GPUs and drivers which hook into the RDP server implementation and provide GPU acceleration for it; very likely you don't have that, though. Note that it's ultimately just a driver issue: with the right drivers, every GPU could do this. But HW vendors want to sell their special-purpose devices, so they lock that functionality to a specific product line. I'm hoping that the advances of the Linux open source graphics ecosystem will, as a sideshow, enable implementing such visualization servers with commodity hardware in the near future. As mu

Categories : Opengl

How do I declare the OpenGL version in shaders on Android?
The GLSL ES 3.0 spec lists "attribute" and "varying" as reserved keywords that will result in an error. In GLES3, you must qualify input variables with "in" and output variables with "out". So in the vertex shader:

attribute -> in
varying -> out

And in the fragment shader:

varying -> in

Section 4.3 of the spec (storage qualifiers) has all the details.
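A sketch of the resulting declarations, embedded here as C string literals (variable names are illustrative; "#version 300 es" must be the very first line of each shader):

const char *vertex_src =
    "#version 300 es\n"
    "in vec4 aPosition;\n"        /* was: attribute */
    "in vec2 aTexCoord;\n"
    "out vec2 vTexCoord;\n"       /* was: varying   */
    "void main() { vTexCoord = aTexCoord; gl_Position = aPosition; }\n";

const char *fragment_src =
    "#version 300 es\n"
    "precision mediump float;\n"
    "in vec2 vTexCoord;\n"        /* was: varying */
    "out vec4 fragColor;\n"       /* replaces gl_FragColor in ES3 */
    "uniform sampler2D uTex;\n"
    "void main() { fragColor = texture(uTex, vTexCoord); }\n";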

Categories : Android

Compiling GLSL shader breaks other shaders
I've found the answer. In my third shader I have a sampler2D named "texture". Changing this name in the shaders and in the call to glGetUniformLocation fixed the problem. I don't understand why, since "texture" is not a reserved word in GLSL and there are no other uses of the word in any other uniform (there is a "texcoord" attribute, but I doubt that caused the problem), but it worked. EDIT: I actually found the specific reason for this a while ago - I had been using a bit of Apple's GLKit sample project, which binds shader attributes to an enum. In the sample I used, that enum is placed outside of the @implementation of the view controller, meaning its scope extends beyond a specific instance of the class. The class that had this problem actually had two shaders, and when the

Categories : IOS

How to set a specific eye point using perspective view with shaders
Here we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC). Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin - at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on the point from which you look at the scene. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.) Summary: you are in camera space, hence your eye point is located at (0,0,0); you are loo

Categories : Opengl

OpenGL Shaders - Should the camera translation happen on the GPU or the CPU?
I am not entirely sure what you are currently doing, but the sane way of doing this is not to touch the VBO. Instead, pass one or more transformation matrices as uniforms to your vertex shader and perform the matrix multiplication on the GPU. Changing your VBO data on the CPU is insane: it means either keeping a copy of your vertex data on the CPU, iterating over it and re-uploading it, or mapping the buffer and iterating over it. Either way, it would be insanely slow. The whole point of having a VBO is that you can upload your vertex data once and work concurrently on the CPU while the GPU buggers off and does its thing with said vertex data. Instead, you just store your vertices once in the vertex buffer, preferably in object space (just for sanity's sake). Then you keep track of a transformation matrix per object and upload it as a uniform each frame.

Categories : Opengl


