Concluding GSoC: Final Documentation and Demos

Code Documentation and Description
This repository contains all the work done during the GSoC period. The work started with simple shapes in the Vertex and Fragment Shaders and ended with arbitrary shapes generated using Vertex, Fragment and Geometry Shaders. All the individual code snippets are explained here, and the same behaviour can be observed by running the code after installing the required libraries.

Requirements: Python 3.6+ along with the following libraries (`os` and `random` are part of the standard library):
1. os
2. dipy
3. vtk (8.0.1 or newer)
4. numpy
5. random

Description of the Code
All the individual snippets are explained here along with a demonstration of the same.

Frame Rate Calculation (_Frame_Rate.py_)
The motivation for this example was to understand how a timer callback function works. The frame rate is calculated by taking the difference between the current and previous callback times; the inverse of that difference, in seconds, gives the frame rate in Hz. This example additionally has 10 spheres which rotate on mouse-drag events. The demo can be seen here:
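The core of the calculation can be sketched in plain Python (the class and names here are illustrative, not the exact ones in _Frame_Rate.py_):

```python
import time

class FrameRateCounter:
    """Tracks frames per second from successive timer-callback timestamps."""

    def __init__(self):
        self.last_time = None
        self.fps = 0.0

    def tick(self, now=None):
        """Call once per timer callback; returns the current FPS estimate."""
        if now is None:
            now = time.perf_counter()
        if self.last_time is not None:
            delta = now - self.last_time  # seconds between callbacks
            if delta > 0:
                self.fps = 1.0 / delta    # inverse of the interval, in Hz
        self.last_time = now
        return self.fps
```

In the vtk version, `tick` would be called from the timer callback registered on the render window interactor.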

Sphere with user-defined color (_Sphere_Color_on_User_Input.py_)
This example was a good introduction to the `SetUniformf` method, which injects uniform values into the shaders. The RGB values were individually injected into the shader using `SetUniformf`. This introduced me to the concept of passing variables to the shader, which is used extensively later. The demo can be seen here:
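A minimal sketch of the idea, assuming the user supplies RGB channels as 0-255 integers that must be normalized to the 0.0-1.0 floats GLSL uniforms expect (the helper name is hypothetical; the actual script wires the values through vtk's `SetUniformf`):

```python
def normalize_rgb(r, g, b):
    """Clamp 0-255 channel values and scale them to the 0.0-1.0 range."""
    return tuple(max(0.0, min(255.0, c)) / 255.0 for c in (r, g, b))

# Inside the vtk shader callback the normalized channels would then be
# injected one by one, e.g.:
#   calldata.SetUniformf("red", red)
#   calldata.SetUniformf("green", green)
#   calldata.SetUniformf("blue", blue)
```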

Slider to change axes value of an ellipsoid (_ellipsoid_with_slider.py_)
This is similar to the previous example, but this time the values are passed dynamically. Previously, the sphere's RGB values could be defined only at the start of the program and then remained fixed. Here, as the slider moves, the ellipsoid is re-rendered and updated continuously. Proper callbacks have been used to achieve this. The demo can be seen here:

Low and High Resolution Spheres (_Low_Resolution_Sphere.py_ & _High_Resolution_Sphere.py_)
Any 3D shape is made up of planar triangles. To understand how a 3D structure is formed, I built a sphere from scratch using triangles. As the names suggest, the two spheres use different numbers of triangles, and the one with more triangles looks smoother than the one with fewer. Demo below:

High-Resolution Sphere:
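The construction can be sketched with a simple latitude/longitude triangulation (a minimal illustration of the triangles-only idea, not the exact code in the two scripts):

```python
import math

def make_sphere(n_lat, n_lon, radius=1.0):
    """Build sphere vertices and triangular faces from a lat/long grid.

    More subdivisions (n_lat, n_lon) give a higher-resolution sphere.
    """
    vertices = []
    for i in range(n_lat + 1):
        theta = math.pi * i / n_lat          # polar angle, 0..pi
        for j in range(n_lon):
            phi = 2.0 * math.pi * j / n_lon  # azimuth, 0..2*pi
            vertices.append((radius * math.sin(theta) * math.cos(phi),
                             radius * math.sin(theta) * math.sin(phi),
                             radius * math.cos(theta)))

    faces = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * n_lon + j                   # current ring
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j             # next ring
            d = (i + 1) * n_lon + (j + 1) % n_lon
            faces.append((a, b, c))             # two triangles per grid cell
            faces.append((b, d, c))
    return vertices, faces
```

`make_sphere(3, 4)` already gives a recognizable (if faceted) sphere from just 24 triangles, while larger grids approach the high-resolution version.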

Geometry Shader – 1D rendering of extended lines (_Geo-Shader-Working-Example.py_)

The Geometry Shader was given due importance in this project, and this was the first working example produced. It amplifies one vertex into five vertices and joins them to form a wireframe polygon. The Geometry Shader code is injected using the method `SetGeometryShaderCode`. The demo is shown below:

Geometry Shader – 2D rendering of triangle strips (_Geo-Shader-Polygon.py_)
This was initially achieved by simply changing `line_strip` to `triangle_strip`. Then, to achieve proper rotation of the polygon, with the help of my mentor David, the offset values were multiplied by the MCDC matrix to obtain the correct transformation. The demo is shown below:

Geometry Shader – 3D rendering of spheres (_Geo-Shader-Sphere.py_ & _Geo-Shader-Sphere-100-repulsion.py_ & _Geo-Shader-Sphere-100-repulsion-max-128.py_)
This is unarguably the most important part of the implementation and a big contribution to the final GSoC product, overcoming all the shortcomings of the previous attempts with the Geometry Shader. Geo-Shader-Sphere takes the previously developed Low-Resolution-Sphere and injects it into the Geometry Shader to produce several copies of the same shape. Geo-Shader-Sphere-100-repulsion and Geo-Shader-Sphere-100-repulsion-max-128 use the repulsion100 model of sphere vertices and faces to produce the spheres injected into the shader code; the latter works on systems where the amplification limit is 128. The demonstration can be seen here:

This is very interesting since we can use the same code, with slight modifications, to produce 30k or even more spheres without much lag. This is illustrated here:

Other Helping and Debugging Codes (_Geo-Shader-VTK-Issue.py_ & _Wide-Line-GS.py_)
These scripts were written to try out the WideLine Geometry Shader code provided by the vtk library. There are known issues with this implementation, and functions like `AddGeometryShaderReplacement` are broken; these issues were reported to vtk.

Last Achievement Unlocked: 3D Shapes in Geometry Shader

So, I was busy working on the last part of my GSoC project: being able to see 3D spheres rendered with a Geometry Shader. This part required more time than the 1-D and 2-D parts combined. While trying to extend from 2-D to 3-D, the following problems were observed:

  1. The plane always remained perpendicular to the camera vector: The following line was present in my 2-D implementation of the shader:

     > gl_Position = gl_in[0].gl_Position + offset;

    So, this was responsible for giving an offset to the original vertex. Since offset is a constant vec4, we would always see the point move relative to the camera normal.
    The approach towards solving it: it was clear that the issue was with this offset variable. We don't want to keep it constant; rather, it should be continuously transformed as we move the camera, so we need to multiply the offset by an appropriate transformation matrix. After lots of experimentation by me, David and Elef, David found that multiplying the offset variable by the MCDC matrix gives the desired behavior. Hence, the modified expression for the produced vertex should be:
    >  gl_Position = gl_in[0].gl_Position + (MCDCMatrix * offset);

    Hence, we now have a perfectly rotating 2-D polygon. Another important limitation is described below.

  2. A limited number of GS vertices per shader vertex: This was one major challenge which we worked on for the last couple of weeks. There is a strict limit on the number of vertices we can produce per shader vertex: 256 on my Windows machine and 128 for Elef and David. This is too low for rendering even a sphere. So I used the 'repulsion100' model available in dipy, but to complete the sphere I had to reuse its vertices, which pushed the required count up to 330, more than we can afford. After trying various possible solutions, I came up with the idea of using two or three different actors to construct a sphere: we split the list of vertices into three parts, each contributing a third of the sphere. Thus we get a perfectly rotating sphere.
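The splitting step can be sketched as follows (the helper name is hypothetical; each chunk would then be rendered by its own actor):

```python
def split_for_actors(vertices, max_per_actor=128):
    """Split a vertex list into chunks that respect the per-vertex
    amplification limit of the geometry shader."""
    return [vertices[i:i + max_per_actor]
            for i in range(0, len(vertices), max_per_actor)]

# The ~330 vertices needed to close the repulsion100 sphere exceed the
# 128-vertex limit, so they are spread over three actors.
chunks = split_for_actors(list(range(330)), max_per_actor=128)
# -> 3 chunks of sizes 128, 128 and 74
```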

Thus we finally have a bunch of perfectly rotating spheres in 3-D. It was pleasant to see the tiny, beautiful spheres on the screen; we can even render 30K spheres without much lag.



Success with Geometry Shaders

So finally, after playing around with Geometry Shaders and a lot of brainstorming, we got some success, which can be considered a good accomplishment in this GSoC project. With the help of `SetGeometryShaderCode`, I am now able to inject my custom shader and amplify points.

As a first step towards achieving custom visualization, I attempted to amplify a point into lines as given in the example here. I was able to achieve this but was unable to color the points as I wished. This is where David came to my rescue and helped me with coloring the points in a GS.

The trick here is to declare `line_strip` as the output primitive of the Geometry Shader; if we want triangle strips instead, we replace it with `triangle_strip`. On the first go, I obtained a visualization as shown below. The complete code is available here.

Next, after checking basic line rendering, it's time to try out 2-D shapes such as triangles and squares. Replacing `line_strip` with `triangle_strip` and making the necessary changes, we obtain a beautiful pentagon at each of four positions on the screen, as shown below. The complete code for this part of the project can be seen here.

The aim now is to display a sphere built from triangles, as done previously using vertex and fragment shaders. As of now, we are able to see shapes on the screen, but unfortunately we hit a vertex limit: we cannot emit too many vertices from a single geometry shader. On my system (Windows, anaconda environment), we can get only 256 amplified vertices per base vertex; for David, this number was less than 256. So we may not always be able to use the Geometry Shader to render complicated shapes, but it is surely interesting and can be of great use.


The progress with Geometry Shaders so far

As mentioned in the previous post, we were having issues implementing the manual Geometry Shader part. I posted on the vtkusers mailing list and finally got the attention of Ken from Kitware. He pointed out that we might be missing some uniform variables. He was correct, and we made progress by setting the uniform variables manually. However, it does not render as a tube, only as a line; our next step is to make it work like a tube. Having found the initial glitch, I think either David or I will come up with a correct implementation of the Geometry Shader.
Meanwhile, for the mailing list, I wrote a small snippet with three cases: first rendering a simple line, second using `SetRenderLineAsTubes()`, and third with the Geometry Shader manually injected into the code. Later, Elef pointed out that we were not specifying any value for the uniform variable `linewidthNVC`. I manually used `vec2(0.1, 0.1)` in its place to be able to see the lines on the screen, but even then it was still not rendered as a tube.
We are hopeful of more progress in coming days, as we now have a better understanding of how Geometry Shaders work.

The code can be seen here.


The uncertainty with injecting Geometry Shader in vtk

Hello all,
For the last two weeks, I have been focusing on how to set custom Geometry Shader code on a simple vtk mapper with actors. In the case of the Fragment Shader and Vertex Shader, this task of injecting code was relatively easy and has been well explored by me. Shifting our focus to Geometry Shaders, we encountered a few abnormal behaviours.

From the documentation and source code of vtk, it is very clear that the library supports a custom Geometry Shader along with a few other custom shaders. The default Geometry Shader for WideLines works very well. The existing example in the dipy docs, with and without geometry shaders, is attached; the difference can be clearly observed.

For the task of replicating the shader, I used the method `SetReplacementShaderCode` to set the same code given in WideLines. It is somehow not working well. The same was also observed by David, and we are currently approaching the vtk mailing list to get more insight into the problem.

The current working of the normal renderer and the one with WideLines enabled are shown below.

Line with WideLine GS disabled.

Line with WideLine GS enabled.

We are hoping to find the solution to this issue and hopefully help vtk with a few working examples of the same.


Geometry Shaders – An Introduction

It is well accepted that shaders play a very important role in fast and effective visualization. After the Vertex Shader and the Fragment Shader, we come to another very important shader: the Geometry Shader.

Fragment Shaders and Vertex Shaders are the building blocks of any rendering scene, but it would be very cumbersome to construct a complicated shape using ONLY vertex and fragment shaders, because it is very difficult to code the details of each and every vertex. Hence the Geometry Shader: it takes a whole primitive as input and produces geometry which can then be used as a block, rather than handling every vertex separately.

A Geometry Shader can be thought of as a vertex amplifier; that is, it helps render more vertices than are actually supplied by the Vertex Shader stage. Thus, as given in the example here, a single set of input points can render different shapes. This is helpful in complicated rendering scenes, like rain, fireworks, etc.
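A CPU-side analogue makes the idea concrete: one input vertex is expanded into the corners of a small polygon around it, which mirrors what the 1-D and 2-D examples earlier in this page do in GLSL on the GPU (the function below is an illustration, not shader code):

```python
import math

def amplify_point(center, radius=0.1, n=5):
    """Expand one input vertex into the n corners of a regular polygon,
    mimicking what a geometry shader does when it emits extra vertices."""
    cx, cy = center
    return [(cx + radius * math.cos(2.0 * math.pi * k / n),
             cy + radius * math.sin(2.0 * math.pi * k / n))
            for k in range(n)]
```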

But all of this comes with limitations. A Geometry Shader is slow compared to normal rendering, which is not a big surprise. Also, as described here, there is a limit on the number of vertices that can be emitted per input primitive; the limit given there is 1024. This may vary across systems, but the message is that we cannot use it for very large data.

Is this useful for us?

dipy requires rendering complicated data with a large number of vertices. I have not fully explored the possibilities of the Geometry Shader, but I am sure it will be very helpful to us: GS would make it easier to generate the vertices of the data. This does compromise speed, but the cost should be marginal. More will be known after working with actual datasets.

My approach now

Currently, I am trying to replicate the example given here using python-vtk. This is a little tricky as I have never seen examples of GS in python-vtk. I expect this to be completed after discussing with my mentors and reading the relevant documentation.

Shaders – Resources and Examples

Shaders are probably the best tool for state-of-the-art visualizations, and having shaders in dipy would be a cool addition. Apart from the implementation, I also spent time getting to know shaders better. Fragment Shaders and Vertex Shaders are not trivial, and it takes a bit of time to understand and digest them completely. Below are some points about shaders and some easy examples of how they work.

Vertex Shaders

As the name suggests, a Vertex Shader works on the attributes of a particular vertex of a shape. All the information about a vertex, including its color and texture coordinates, is available to it. As mentioned here, there is always a 1:1 correspondence between input and output vertices; the shader simply applies a transform to the attributes.

Fragment Shaders

A Fragment Shader gives the color and depth value of a fragment (a candidate pixel). The fragment shader is independent of the movement of the object. It is important to note that each fragment has its own position in "window space", which acts as the output of the stream.

It requires examples to properly understand the difference between the two and how, when combined, they can render interesting shapes.

Now, look at the sample code given below:

#ifdef GL_ES
precision mediump float;
#endif

void main() {
    gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}

Clearly, this is a fragment shader. Why so? Because we just want to set the color of the output; no vertex transformation is involved, so a Fragment Shader makes sense here. The output color mixes red and blue at full intensity, which yields magenta (a pinkish color).

There are a lot of interesting examples which use a combination of the two to get eye-catching visualizations.

Inbuilt Variables in Shaders

It is very important to know which predefined variables come with each shader, so that you don't have to keep looking up where a variable was first defined. Here is a resource which focuses on the predefined variables and what they do; it also covers the important data types required for visualization tasks.

*NOTE* This blog will be updated regularly as I read more about Shaders and try them out.

More into my experiments on vtk

It's time for another update on the work. After having spent a lot of time exploring shaders, I came up with beautiful illustrations in my last blog. This time around, I will briefly discuss the extension of that work and also talk about my recent Pull Request being merged.

vtk Ellipsoid with adjustable semi-major axis length

Last week I introduced a sphere which can change colors based on user input. The next step is to take this further and make a sphere which can dynamically adjust its parameters while rendering continuously. The following example uses a slider to set the value of the semi-major axis; in our case, we only scale the x coordinate to turn the sphere into an ellipsoid.

Initially, I had issues with dynamically re-rendering the scene. On discussion with Ranveer, we had two possible solutions in mind:

  • Destroy the renderer and render afresh every time. It is clear that this is not an elegant solution to the problem.
  • Update the renderer every time. This was an acceptable solution and quite logical.

This was discussed with Elef during our periodic conversation and he helped me sort out the problem of assigning the updated value back to the vtk ellipsoid.
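The update itself reduces to rescaling the x coordinates of the sphere's vertices by the slider value inside the slider callback; a minimal sketch (the names here are illustrative):

```python
def scale_semi_major_axis(vertices, a):
    """Stretch unit-sphere vertices along x by the slider value `a`,
    turning the sphere into an ellipsoid with semi-major axis a."""
    return [(a * x, y, z) for (x, y, z) in vertices]

# In the vtk slider callback the rescaled points are written back into
# the polydata and the render window is re-rendered.
```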

The code is available here.

My Pull Request to add the Sphere object to the sphere class has been merged.

My work on adding Sphere objects as a parameter to the sphere function got merged; the pull request can be seen here. I added vertices and faces parameters to the sphere function. Now, instead of always providing centers and radii, we can simply provide the vertices and faces obtained from the Sphere object of dipy.

Also, I am working on resolving my doubts about the function `MapDataArrayToVertexAttribute`. After today's meet, it is clear, and I will soon come up with the corrected version of my code.

It has been a good journey so far, and I hope to say the same at the end of the summer. Shaders are quite interesting and exciting. Looking forward to more fun.


Interesting world of Shaders and vtk

Hey folks,
The last two weeks were full of fun: sitting at my screen exploring various shaders, and, with the help of Elef and Ranveer, working through many different examples. All this helped me build some really cool visualizations (at least I was amazed 😛 ).
So let me walk you through a few things I achieved over the past two weeks.

  • Displaying Frame-Rate of vtk Renderer
    The example renders ten small unit-radius spheres and displays the number of times the screen is rendered per second. The frame rate is calculated by taking the inverse of the time the last render took (in seconds, of course), which we obtain from the method GetLastRenderTimeInSeconds. As the name suggests, this method returns how long the previous render took.
    A small demo of the work is available here:

    The documented code is available here.

  • Taking user inputs to Shaders using SetUniformf
    This task was aimed at understanding how to use user input to set shader properties. I came up with code which takes our choice of RGB values as input and returns a sphere of that color.
    SetUniformf is a very important tool which magically acts as an interface between the OpenGL world of shaders and our Python world. As we move on to more complicated tasks, this function will be increasingly useful for sure.
    A small demo of the code is available here:

    The documented code can be found here.

  • Constructing Sphere using fundamental blocks – Vertices and Faces
    It was very interesting to learn how any 3D shape is rendered using triangles as the fundamental building block. I tried to visualize this in a general way; the rendered shape can be changed to an ellipsoid very easily. I am currently working on a slider which changes the dimensions of the sphere as per the slider input. The sphere looks deformed (intentionally) to emphasize the use of triangles: I have used as few as 24 triangles to build it.
    The sphere looks like this:

    The documented code can be found here.

  • Extending the to-be-added sphere function
    Elef is working on a Pull Request to add a sphere function, which will help produce 3D glyphs. The purpose of my work is to allow additional parameters, vertices and faces, which act as an alternative to the traditional centers and radii. The Pull Request is under review and can be seen here.

It is really fun working with my mentors and the awesome community at dipy. I strive to learn new things and will try more such interesting stuff as we proceed. I’m loving GSoC 🙂 . Looking forward to more fun this summer.