Adam2392's Blog

Aug 28 - Sep 3, 2023: Week 14

Adam2392
Published: 08/25/2023

This week, I am finishing my final PR to fix the Coiteration algorithm: https://github.com/hameerabbasi/xsparse/pull/31.

 

This will enable co-iteration over levels that are nested within other levels. As a result, we can define multiple Coiterate objects, one co-iterating over each dimension.

 

With this, the groundwork is laid to implement the MergeLattice coiterator, which is simply an abstraction on top of this idea of composing multiple coiterators.

The final PR includes a unit test that co-iterates over two stacks of nested levels, each of which defines a CSR matrix. The merge is conjunctive, so the unit test pins down exactly how the API must behave. I ran into a few errors that were hard to debug, but they turned out to be consequences of how I was using the Coiterate API, which is slightly unforgiving right now.
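To make the shape of that test concrete, here is a minimal standalone sketch of a conjunctive merge over two CSR matrices: the dense row level is iterated on the outside, and the compressed column levels are co-iterated with an "advance the smaller coordinate" loop. The pos/crd data and names are made up for illustration; this is not the xsparse API.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        // CSR structure for two 2x4 matrices: pos delimits each row's slice of crd.
        std::vector<std::size_t> A_pos{0, 2, 3}, A_crd{0, 2, 1};
        std::vector<std::size_t> B_pos{0, 1, 3}, B_crd{2, 1, 3};

        for (std::size_t i = 0; i + 1 < A_pos.size(); ++i)  // outer (dense row) level
        {
            std::size_t pA = A_pos[i], pB = B_pos[i];  // PKs into each crd array
            // Conjunctive merge: emit only column coordinates present in both levels.
            while (pA < A_pos[i + 1] && pB < B_pos[i + 1])
            {
                std::size_t jA = A_crd[pA], jB = B_crd[pB];
                if (jA == jB)
                {
                    std::cout << "both matrices have an entry at (" << i << ", " << jA << ")\n";
                    ++pA;
                    ++pB;
                }
                else if (jA < jB)
                    ++pA;
                else
                    ++pB;
            }
        }
    }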
 


Aug 21 - Aug 27, 2023: Week 13

Adam2392
Published: 08/21/2023

 

This week, I was sick, so I did not have a lot of time for work.

 

However, I did experiment with the new Coiterate API that we need to get working. Specifically, I realized that if we pass a tuple of PKs and IKs into Coiterate, then we need to adapt various other parts of the API that assumed the PK was a single element. This may present some challenges, which I will aim to resolve in the final sprint of my GSoC.
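As a concrete illustration of that adaptation, the sketch below lifts single-PK logic to a tuple of PKs with std::apply. The helper names (advance_pk, advance_pks) are hypothetical, not part of xsparse; it only shows the general shape of the change.

    #include <cstddef>
    #include <tuple>

    // Old assumption: one level contributes exactly one PK.
    std::size_t advance_pk(std::size_t pk) { return pk + 1; }

    // Adapted version: apply the single-PK logic element-wise across the
    // tuple of PKs that a stack of nested levels now contributes.
    template <typename... PKs>
    auto advance_pks(std::tuple<PKs...> const& pks)
    {
        return std::apply(
            [](auto... pk) { return std::make_tuple(advance_pk(pk)...); }, pks);
    }

    int main()
    {
        auto next = advance_pks(std::make_tuple(std::size_t{0}, std::size_t{3}));
        static_assert(std::tuple_size_v<decltype(next)> == 2);  // next is (1, 4)
    }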


 


Aug 14 - Aug 20, 2023: Week 12

Adam2392
Published: 08/14/2023

 

This week, I finished up the PR adding an implementation of a Tensor. I also identified a flaw in the previous Coiteration implementation, which I will work on fixing for the remainder of GSoC.

 

Specifically, the Coiterate iterator should take in a tuple of indices and a tuple of PKs when co-iterating over a set of levels that is itself defined by a stack of levels.
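In type terms, the change to the dereferenced value might look like the following sketch (the aliases are hypothetical, not the real xsparse signatures):

    #include <cstddef>
    #include <tuple>
    #include <utility>

    using IK = std::size_t;  // index along a dimension
    using PK = std::size_t;  // position into a level's storage

    // Before: co-iterating over single levels yields one index and one position.
    using single_level_value = std::pair<IK, PK>;

    // After: a stack of two nested levels (e.g. CSR) yields a tuple of
    // indices and a tuple of PKs, one entry per level.
    using nested_level_value = std::pair<std::tuple<IK, IK>, std::tuple<PK, PK>>;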


 


Aug 7 - Aug 13, 2023: Week 11

Adam2392
Published: 08/10/2023

 

This week, I finished up the PR adding static_assert checks to the container traits. I also simplified the Tensor implementation PR and submitted it for review/merge.

 

The bulk of my work now involves implementing a version of Coiterate over the Tensor classes, instead of over single levels.

 

The merge lattice now takes in a tuple of Tensors:

 

  1. Constructor: can we compile-time check that the tensor shapes align? I don't know how to do this yet, so it is probably a future PR.

  2. Algorithm:

    The algorithm must initialize a coiterator over the levels corresponding to each index handed to the merge lattice. It will advance, dereference, and compare elements until this coiterator is done; then it will advance the coiterator for the dimension above.

Ex:

 

Expression: (A_ij + B_ij) @ C_j = D_i

Tensors: (A, B, C)

Indices: [(0, 1), (0, 1), (1)]

 

How would the iterator work? Initialize iter1 = Coiterate(A_i*, B_i*, true). Initialize iter2 = Coiterate(A_*j, B_*j, C_j).

if iter2 != end:
    advance iter2;
else:
    advance iter1;
    reset iter2;  // What would this mean? iter2 must now take in the new IK and PK from iter1 to know where to start.
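A minimal C++ sketch of that control flow follows. The coiter struct and the loop bounds are stand-ins for illustration (not the xsparse API); the point is that the outer coiterator owns i, and every time it advances, the inner coiterator over j is rebuilt from the outer iterator's new IK and PK.

    #include <cstddef>

    struct coiter  // stand-in for a Coiterate begin/end pair
    {
        std::size_t pos, end;
        bool done() const { return pos == end; }
        void advance() { ++pos; }
    };

    int main()
    {
        coiter iter1{0, 3};  // over (A_i*, B_i*)
        while (!iter1.done())
        {
            // "Reset iter2": rebuild the inner coiterator from the slice of
            // the j-levels selected by iter1's current IK/PK (bounds here
            // are illustrative).
            coiter iter2{0, 4};  // over (A_*j, B_*j, C_j)
            while (!iter2.done())
            {
                // Dereference iter1 and iter2 here to evaluate (A_ij + B_ij) * C_j.
                iter2.advance();
            }
            iter1.advance();
        }
    }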

 

My intuition stems from: https://github.com/hameerabbasi/xsparse/blob/10b91002e246a16d2e14db8495faafa3774d383e/test/source/compressed_test.cpp#L63-L77

 

Questions:

  1. What sort of objects do I need to handle in the iterator of the MergeLattice class? Should I store a tuple of `Coiterate` instances, one for each tuple of levels from the Tensors?

  2. Am I thinking about this correctly?


July 31 - Aug 6, 2023: Week 10

Adam2392
Published: 07/29/2023

This week, I finished the compile-time check for the validity of co-iteration in the PR [ENH] Check validity of coiteration. It has a few hacks, but it works and raises a compiler error when co-iteration combines a disjunctive merge with an unordered level.
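The shape of that check is roughly the following (the names are illustrative and the actual traits in the PR differ): a fold over each level's ordered property feeds a static_assert, so the invalid combination refuses to compile.

    template <bool Ordered>
    struct level
    {
        static constexpr bool is_ordered = Ordered;
    };

    template <bool Disjunctive, typename... Levels>
    struct coiterate
    {
        static_assert(!Disjunctive || (Levels::is_ordered && ...),
                      "disjunctive co-iteration over an unordered level is invalid");
    };

    // coiterate<false, level<true>, level<false>> compiles (conjunctive merge);
    // coiterate<true, level<true>, level<false>> trips the static_assert.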

 

I am starting to understand the merge lattice class more. It is simply another abstraction over iteration that iterates over collections of tensors, instead of over collections of levels as co-iteration does.

 

As a few pre-steps towards implementing the merge lattice, I also added a PR with compile-time checks that the required methods are defined for any container traits that store indices in the sparse tensor, or the pointers to the next level (or data array): Adding required member functions of abstract container traits. This tightens up the API so that users can define their own custom containers, as long as they implement a specific interface.
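A compact way to express that kind of requirement is a C++20 concept checked with static_assert. The member list below is illustrative, not the exact set required in the PR:

    #include <concepts>
    #include <cstddef>
    #include <vector>

    template <typename C>
    concept index_container = requires(C c, std::size_t i, typename C::value_type v)
    {
        { c.size() } -> std::convertible_to<std::size_t>;
        { c[i] } -> std::convertible_to<typename C::value_type>;
        c.push_back(v);  // levels append indices/pointers as they are built
        c.resize(i);
    };

    // std::vector satisfies the interface; a custom container must as well.
    static_assert(index_container<std::vector<std::size_t>>);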

 

I also started working on the Tensor class implementation ([ENH] Adding Tensor implementation). The Tensor class connects a collection of levels to actual data and represents a sparse n-dimensional array. This will probably take up most of my next week, since I need to work out how to implement the iterator over the tensor: it requires initializing iterators over all the levels that define the tensor and then dereferencing them to get the right indices and data.

 

A few questions I’ll have to figure out this week are:

  1. How to initialize the iterators of each level?

  2. How to design the Tensor's iterator? Should I follow a design similar to `Coiterate`, since both are, at a high level, iterating over a collection of levels?

  3. What should dereference do? My thinking is to return a tuple of indices (i.e. IKs) and the data value, which the PK points to in the data vector (see the sketch after this list).

    E.g. *tensor -> ((1, 0, 5), 60.3) returns the value at index (1, 0, 5) in the 3D sparse array, which has the non-zero value 60.3 there.

  4. Does it matter if we iterate over an unordered level?
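To make question 3 concrete, here is a small sketch (with hypothetical types) of what dereferencing a 3-D tensor iterator could yield:

    #include <array>
    #include <cstddef>

    struct tensor3_iter_value
    {
        std::array<std::size_t, 3> ik;  // (i, j, k), one IK per level
        double value;                   // data[pk] at the innermost level's PK
    };

    // Dereferencing at the entry described above would yield
    // tensor3_iter_value{{1, 0, 5}, 60.3}.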
