joaodellagli's Blog

Week 8: The Birth of a Versatile API

joaodellagli
Published: 07/25/2023

Hello everyone, it's time for another weekly blogpost! Today, I am going to tell you all about how the KDE API development is going, and to show you the potential this holds for the future!

Last Week's Effort

Last week I told you how I managed to render some KDE calculations to the screen, both in 2D and 3D, as you can check in my last blogpost. As I had this example working, my new task was to start the API development. In a meeting with Bruno, one of my mentors, we debated how this could work and reached two options:

  1. Implement the KDE in a single, simple actor.
  2. Implement a KDE rendering manager, as a class.

The first one would have the advantage of being simple and pretty straightforward, as a user would only need to call the actor and have it working in their hands, with the tradeoff of leaving some important steps for a clean API hidden and static. These steps are related to how this rendering works: as I have previously shown you, it relies on post-processing effects, which need offscreen rendering, handled for example by the callback functions.

In short, these functions are instructions the user gives to the interactor to run inside the interaction loop. Inside FURY there are three types of callbacks passed to the window interactor:

  1. Timer Callbacks: Added to the window interactor, they are a set of instructions that will be called from time to time, at an interval defined by the user.
  2. Window Callbacks: Added directly to the window, they are a set of instructions called whenever a specific event is triggered.
  3. Interactor Callbacks: Added to the window interactor, they are a set of instructions called whenever a specific interaction, for example a mouse left-click, is triggered.

In this API, I will be using an Interactor Callback, set by the window.add_iren_callback() function, which will be called whenever a Render interaction is detected, and which needs to be passed to the onscreen manager first.
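To make that concrete, below is a minimal sketch of the idea (the callback body, the event name and the exact call site are illustrative, not the final API):

from fury import window

# Set up the onscreen manager (a FURY ShowManager)
onscreen_manager = window.ShowManager(size=(600, 600))

def kde_callback(caller, event):
    # Illustrative body: here is where the offscreen KDE render would be
    # captured and passed along as a texture on every Render interaction
    pass

# Register the callback to fire on Render interactions
onscreen_manager.add_iren_callback(kde_callback, "RenderEvent")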

These details are more complicated, and would require, for example, the user to pass the onscreen manager to the actor.kde() function. Also, in the case of a KDE actor no longer being used and being deleted, the callback passed with it would still exist inside the manager and be called even when the KDE actor is not on screen anymore, which is not ideal.

Knowing these problems, we thought of the second option, which would have the advantage of not leaving those details and steps behind. It has the tradeoff of maybe complicating things, as it would need to be called after calling the effects manager, but as I will show you below, it is not that complicated at all.

I also reviewed my fellow GSoC contributors' PRs, #810 and #803. Bruno told me to also take a look at Conventional Commits, a way to standardize commits with prefixes, so I did that as well.

So how did it go?

Well, the implemented manager class is named EffectManager() and to initialize it you only need to pass the onscreen manager:

effects = EffectManager(onscreen_manager)

After that, to render a KDE calculation of points to the screen, you need only to call its kde() function:

kde_actor = effects.kde(center, points, sigmas, scale=10.0, colormap="inferno")
# The scale and colormap arguments are optional

Pass it to the onscreen manager scene:

onscreen_manager.scene.add(kde_actor)

And to start it, as usual:

onscreen_manager.start()

As simple as that. These three lines of code output the same result as I showed you last week, this time with different sigmas for each point:

After having that working, I experimented beyond. See, as I previously said, we are dealing here with post-processing effects, with KDE being only one of the many that exist, as this Wikipedia page on post-processing shows. Knowing that, I tried one of the first filters I learned: the Laplacian one. This filter, as its name hints, applies the Discrete Laplace Operator to an image. It highlights sudden changes of value, which makes it a good way to detect borders. The process is the same as for the kde actor, requiring only the actor you want to apply the filter to. Below, the result I got from applying that to a box actor:
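For reference, here is a minimal CPU-side sketch of what the Discrete Laplace Operator does (purely illustrative; the actual filter runs as a post-processing shader):

import numpy as np
from scipy.ndimage import convolve

# The discrete Laplace operator as a 3x3 convolution kernel: it responds
# strongly where pixel values change suddenly, which highlights borders
laplacian_kernel = np.array([[0.0,  1.0, 0.0],
                             [1.0, -4.0, 1.0],
                             [0.0,  1.0, 0.0]])

image = np.random.rand(64, 64)  # placeholder image
edges = convolve(image, laplacian_kernel)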

Something I found important to leave as an option was filter compositing. What if a user wanted to, for example, apply one Laplacian filter after another? Well, the example below shows that this is possible as well:
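A rough sketch of what such compositing might look like in code (the laplacian() method name is illustrative, following the filter described above):

# Feed the output actor of one filter pass into another filter call,
# stacking post-processing effects ("effects" is the EffectManager from
# above; "box_actor" is the box from the previous example)
lapl_actor = effects.laplacian(box_actor)
double_lapl_actor = effects.laplacian(lapl_actor)
onscreen_manager.scene.add(double_lapl_actor)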

It still needs some tweaks and suffers from some bugs, but it works! This represents important progress, as it shows the versatility this API may offer. I have also already implemented grayscale and 3x3 Gaussian blur filters:

This Week's Goals

My plans for this week are to keep working and polishing the API, mainly the KDE part, so it can be ready for a first review. When that is ready, I plan to experiment with more filters and make this more dynamic, maybe implementing a way to apply custom kernel transformations, passed by the user, to the rendering process. This has been a really exciting journey and I am getting happy with the results!

Wish me luck!

Week 7: Experimentation Done

joaodellagli
Published: 07/18/2023

Hello everyone, welcome to another weekly blogpost! Let's talk about the current status of my project (spoiler: it is beautiful).

Last Week's Effort

Having accomplished a KDE rendering to a billboard last week, I was then tasked with trying a different approach to how the rendering was done. So, to recap, below was how I was doing it:

  1. Render one point's KDE offscreen to a single billboard, passing its position and sigma to the fragment shader as uniforms.
  2. Capture the last rendering's screen as a texture.
  3. Render the next point's KDE, and sum it up with the last rendering's texture.
  4. Repeat until all points have been rendered.
  5. Capture the final render screen as a texture.
  6. Apply post processing effects (colormapping).
  7. Render the result to the screen.

This approach was good, but it had some limitations and issues that would probably take more processing time and attention to detail (correct matrix transformations, etc.) than was ideal. The different idea is pretty similar, but with some differences:

  1. Activate additive blending in OpenGL.
  2. Render each point's KDE to its own billboard, with position defined by the point's position, all together in one pass.
  3. Capture the rendered screen as a texture.
  4. Pass this texture to a billboard.
  5. Apply post processing effects (colormapping).
  6. Render the result to the screen.

So that is basically what I needed to do.
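For context, step 1 amounts to the following at the raw OpenGL level (illustrative; VTK wraps this state, but the blending equation is the same):

from OpenGL import GL

# Additive blending: each incoming fragment's color is added to what is
# already in the framebuffer, so overlapping KDE billboards sum up
# (requires an active OpenGL context)
GL.glEnable(GL.GL_BLEND)
GL.glBlendFunc(GL.GL_ONE, GL.GL_ONE)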

Was it Hard?

Fortunately, it wasn't so hard to do in the end. Following those steps went pretty smoothly, and after some days, I had the below result:

This is a 2D KDE render of 1000 random points. For this I used the *"viridis"* colormap from `matplotlib`. Some details worth noting:

  • For this to work, I have implemented three texture helper functions: `window_to_texture()`, `texture_to_actor()` and `colormap_to_texture()`. The first one captures a window and passes it as a texture to an actor, the second one passes an imported texture to an actor, and the last one passes a colormap, previously converted into an array, as a texture to an actor.
  • The colormap is taken directly from `matplotlib`, available through its `colormaps` object.
  • This was only a 2D flattened plot. At first, I could not figure out how to make the connection between the offscreen interactor and the onscreen one, so rotating and moving around the render was not happening. After some pondering and talking to my mentors, they told me to use *callback* functions inside the interactor, and after doing that, I managed to make the 3D render work, which had the following result:

After those results, I refactored my PR #804 to better fit its current status, and it is now ready for review. Success!
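One detail worth spelling out from the notes above is the colormap conversion. Here is a minimal sketch of how a `matplotlib` colormap can be turned into the array form a helper like `colormap_to_texture()` would consume (the exact conversion FURY needs may differ):

import numpy as np
import matplotlib

# Sample the colormap at 256 evenly spaced values, yielding a (256, 4)
# RGBA array in [0, 1] that can be uploaded as a 1D texture
cmap = matplotlib.colormaps["viridis"]
colormap_array = np.array(cmap(np.linspace(0.0, 1.0, 256)))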

This Week's Goals

After finishing the first iteration of my experimental program, the next step is to work on an API for KDE rendering. I plan to meet with my mentors and talk about the details of this API, so expect an update next week. Also, I plan to take a better look at my fellow GSoC FURY contributors' work, so when their PRs are ready for review, I will be better prepared for it.

Let's get to work!

Week 6: Things are Starting to Build Up

joaodellagli
Published: 07/12/2023

Hello everyone, time for another weekly blogpost! Today, I will show you my current progress on my project and my latest activities.

What I did Last Week

Last week my goal was to implement KDE rendering to the screen (if you want to understand what this is, check my last blogpost). After some days diving into the code, I finally managed to do it:

This render may seem clean and working, but the code isn't exactly like that. For this to work, some tricks and workarounds needed to be done, as I will describe in the section below.

Also, I reviewed the shader part of Tania's PR #791, which implements ellipsoid actors inside FURY. It was my first review of a PR that isn't a blogpost, so it was an interesting experience and I hope I can get better at it.

It is important to point out as well that I had to dedicate myself to finishing the presentation of my graduation capstone project, which I will give this week, so I had limited time to polish my code, which I plan to do better this week.

Where the Problem Was

The KDE render basically works by rendering the KDE of a point to a texture and summing that texture with the next render. For this to work, the texture, rendered to a billboard, needs to be the same size as the screen; otherwise the captured texture will include the black background. The problem I faced is that the billboard scaling isn't exactly well defined, so I had to guess, for a fixed screen size (in this example, I worked with 600x600), what scaling value made the billboard fit exactly inside the screen (it's 3.4). That is far from ideal, as I will need to modularize this behavior inside a function that works for every case, so I will need to figure out a way to fix that for every screen size. For that, I have two options:

  1. Find the scaling factor function that makes the billboard fit into any screen size.

  2. Figure out how the scaling works inside the billboard actor to understand if it needs to be refactored.

The first seems OK to do, but it is kind of a workaround as well. The second one is a good general solution, but it is a more delicate one, as it deals with how the billboard works, and already existing applications of it may suffer problems if the scaling is changed. I will see which is better by talking with my mentors.
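For the first option, the scaling factor should follow from basic camera geometry. A sketch of the idea (my own assumption, not yet checked against the billboard implementation):

import numpy as np

# For a perspective camera, the world-space height visible at distance d
# is 2 * d * tan(fov / 2), so a billboard at that distance fills the
# viewport vertically when scaled to that height
def fullscreen_scale(camera_distance, vertical_fov_deg):
    return 2.0 * camera_distance * np.tan(np.radians(vertical_fov_deg) / 2.0)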

Another problem I faced (which is already fixed) involved shaders. I didn't fully understand how shaders work inside FURY, so I was using my own fragment shader implementation, replacing the already existing one completely. That was working, but I was having an issue with the texture coordinates of the rendering texture. As I completely replaced the fragment shader, I had to pass custom texture coordinates to it, resulting in distorted textures that ruined the calculations. Those issues motivated me to learn the shaders API, which allowed me to use the right texture coordinates and finally render the results you see above.
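A minimal sketch of what using the shaders API looks like (the shader body and variable names are illustrative): instead of replacing the whole fragment shader, only the implementation block is swapped in, so the default declarations, including the built-in texture coordinates, stay intact.

from fury.shaders import shader_to_actor

# Illustrative fragment code: sample the captured texture using the
# default texture coordinates instead of custom ones
frag_impl = """
    vec4 color = texture(screenTexture, tcoord);
    fragOutput0 = color;
"""

# billboard_actor is the billboard receiving the rendered texture
shader_to_actor(billboard_actor, "fragment", impl_code=frag_impl)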

This Week's Goals

For this week, I plan to try a different approach that Filipi, one of my mentors, told me to try. This approach was supposed to be the original one, but a communication failure led to the path I am currently on. This approach renders each KDE calculation into its own billboard, and those are rendered together with additive blending. After this first pass, the render is captured into a texture and then rendered to another big billboard.

Also, I plan to refactor my draft PR #804 to make it more understandable, as its description still dates back to when I was using the flawed Framebuffer implementation. My fellow GSoC contributors will eventually review it, and to do so, they will need to understand it.

Wish me luck!

Week 5: All Roads Lead to Rome

joaodellagli
Published: 07/04/2023

Hello everyone, time for another weekly blogpost! Today, we will talk about taking different paths to reach your objective.

Last Week's Effort

After having the FBO properly set up, the plan was to finally *render* something to it. Well, I wished for a less bumpy road in my last blogpost, but as things in this project apparently tend to go wrong, of course the same happened with this step.

Where the Problem Was

Days passed without anything being rendered to the FBO. The setup I was working on followed the simplest OpenGL pipeline of rendering to an FBO:

  1. Setup the FBO
  2. Attach a texture to its color attachment
  3. Setup the shader to be used in the FBO render and the shader to render the FBO's Color Attachment
  4. Render to the FBO
  5. Use the color attachment as a texture attached to a billboard to render what was on the screen

But it seems like this pipeline doesn't translate well into VTK. I paired again on Wednesday with my mentors, Bruno and Filipi, to try to figure out where the problem was, but after hours we could not find it. Wednesday passed, then Thursday came, and with Thursday, a solution: Bruno didn't give up on the idea and dug deep into VTK's documentation until he found a workaround to do what we wanted, which was retrieving a texture from what was rendered to the screen and passing it as a texture to render to the billboard. To do it, he figured out we needed to use a different class, vtkWindowToImageFilter, a class that has the specific job of doing exactly what I described above. Below, the steps to do it:

import vtk

# Capture what was rendered to the window as an image
windowToImageFilter = vtk.vtkWindowToImageFilter()
windowToImageFilter.SetInput(scene.GetRenderWindow())
windowToImageFilter.Update()

# Wrap the captured image in a texture
texture = vtk.vtkTexture()
texture.SetInputConnection(windowToImageFilter.GetOutputPort())

# Bind the framebuffer texture to the desired actor
actor.SetTexture(texture)

This is enough to bind to the desired actor a texture that corresponds to what was previously rendered to the screen.

This Week's Goals

Having a solution to that, now it's time to finally render some KDEs! This week's plans involve implementing the first version of a KDE calculation. For anyone interested in understanding what a Kernel Density Estimation is, here is a brief summary from this Wikipedia page:

"In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy."

This complicated sentence can be translated into the below image:

That is what a KDE plot of 100 random points looks like. The greener the area, the greater the density of points. The plan is to implement something like that with the tools we now have available.
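To make the idea concrete, below is a minimal CPU sketch of a 2D Gaussian KDE (my own illustration; the project will do this calculation on the GPU):

import numpy as np

# Each point contributes a Gaussian "bump"; the estimated density is the
# normalized sum of all bumps evaluated over a grid
def gaussian_kde_2d(points, sigma, grid_size=100):
    xs = np.linspace(0.0, 1.0, grid_size)
    xx, yy = np.meshgrid(xs, xs)
    density = np.zeros((grid_size, grid_size))
    for px, py in points:
        density += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2.0 * sigma ** 2))
    return density / (len(points) * 2.0 * np.pi * sigma ** 2)

density = gaussian_kde_2d(np.random.rand(100, 2), sigma=0.05)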

Let's get to work!

Week 4: Nothing is Ever Lost

joaodellagli
Published: 06/26/2023

Welcome again to another weekly blogpost! Today, let's talk about the importance of guidance throughout a project.

Last Week's Effort

So, last week my project was struggling with some issues that were supposedly simple in concept, yet intricate in execution. If you recall from my last blogpost, I could not manage to make the Framebuffer Object setup work, as its method, SetContext(), wasn't able to generate the FBO inside OpenGL. Well, after some (more) research on that, while I also dove into my plan B, which involved studying numba as a way to accelerate a data structure I implemented in my PR #783, one of my mentors and I decided we needed a pair programming session, which finally happened on Thursday. After that session, we could finally understand what was going on.

Where the Problem Was

Apparently, for the FBO generation to work, the context interactor first needs to be initialized:

import vtk

# "manager" is a FURY ShowManager instance
FBO = vtk.vtkOpenGLFramebufferObject()

manager.window.SetOffScreenRendering(True)  # so the window doesn't show up, but important for later as well
manager.initialize()  # the missing part that made everything work

FBO.SetContext(manager.window)  # sets the context for the FBO; finally, it works
FBO.PopulateFramebuffer(width, height, True, 1, vtk.VTK_UNSIGNED_CHAR, False, 24, 0)  # and now I could populate the FBO with textures


This simple missing line of code was responsible for ending weeks of suffering, as after that, I called:

print("FBO of index:", FBO.GetFBOIndex())
print("Number of color attachments:", FBO.GetNumberOfColorAttachments())

That outputted:

FBO of index: 4
Number of color attachments: 1

That means the FBO generation was successful! One explanation that seems reasonable to me on why that was happening is that, as the window was not initialized, a null context was being passed to the SetContext() method, which returned without any warning of what was happening.

Here, I would like to point out how essential my mentor was to reaching this solution: I had struggled for some time with that and could not find a way out, but a single session of synchronous pair programming, where I could clearly expose my problem and talk to someone way more experienced than me, someone designated for that, was my way out of this torment, so value your mentors! Thanks Bruno!

This Week's Goals

Now, with the FBO working, I plan to finally render something to it. For this week, I plan to come back to my original plan and experiment with simple shaders, just as a proof of concept that the FBO will be really useful for this project. I hope the road is less bumpy from now on and I don't step on any other complicated problem.

Wish me luck!