This week, I started with the implementation of the Time Sink. Overall, the events of this week were complicated and confusing, yet interesting. Following are the updates of the week:

Update to the data flow structure of the module

The data flow structure of the module was discussed in the last blog post. There is a slight modification to that idea. The details are explained below:

Basically, the structure discussed last week suggested a Python-based GR block working as the sink, with a C++ class acting as an accelerator for the processing tasks. The scheduler would call the Python block to consume data, the Python block would ask the C++ class to process the data, and the C++ object would return the processed data whenever the Python block asked for it. The flow can be described as follows: Scheduler/Simulator (C++) –> Python (trivial GR tasks) –> C++ (processing) –> Python (plotting)
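
To make that idea a bit more concrete, here is a rough Python sketch of what such a sink would have looked like. The class name and the process() method of the C++ helper are my own placeholders for illustration, not the actual code of the module:

```python
import numpy
from gnuradio import gr

class py_time_sink(gr.sync_block):
    """Hypothetical sketch of the previous structure: a Python sink block
    that only forwards samples to a SWIG-wrapped C++ processor."""

    def __init__(self, cpp_processor):
        gr.sync_block.__init__(self,
                               name="py_time_sink",
                               in_sig=[numpy.float32],
                               out_sig=None)
        # cpp_processor stands in for the SWIG binding of the C++ accelerator;
        # its name and its process() method are assumptions for illustration.
        self.processor = cpp_processor

    def work(self, input_items, output_items):
        samples = input_items[0]
        # The scheduler still has to enter Python on every call before the
        # C++ side can touch the data -- this is the bottleneck noted below.
        self.processor.process(samples)
        return len(samples)
```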

We came to the conclusion that this structure still has a processing bottleneck: the scheduler/simulator calls the Python block to consume data, so irrespective of how much we accelerate the processing part, the speed of the module will be limited by the Python part doing the trivial GR tasks.

Hence, we concluded that C++ should also handle the trivial GR tasks. The updated data flow structure will be as follows:
Scheduler/Simulator (C++) –> C++ (GR tasks + processing) –> Python (plotting)
So the sink will now be a C++ block (instead of the Python "block" in the previous structure), and the Python plotting will be a utility to plot the data.

This structure has one more advantage: if a better plotting library comes along in the future, we can easily detach Bokeh and plug in the other library. This portability will significantly help in maintaining the module's code.
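
As a rough illustration of how detachable the plotting side becomes, here is a minimal Bokeh sketch of such a utility. The get_plot_data() accessor on the sink is an assumption I am using only for illustration; the real block may expose its buffered samples differently:

```python
from bokeh.plotting import figure, curdoc
from bokeh.models import ColumnDataSource

def attach_time_plot(doc, sink, update_ms=100):
    """Hypothetical plotting utility: poll the C++ time sink and redraw.

    sink.get_plot_data() is an assumed accessor returning (x, y) arrays;
    the actual interface of the block may differ.
    """
    source = ColumnDataSource(data=dict(x=[], y=[]))
    fig = figure(title="Time Sink", x_axis_label="Time", y_axis_label="Amplitude")
    fig.line(x='x', y='y', source=source)
    doc.add_root(fig)

    def update():
        x, y = sink.get_plot_data()
        source.data = dict(x=x, y=y)

    # Periodically pull data from the C++ block; Bokeh drives the refresh,
    # so swapping in another plotting library only touches this utility.
    doc.add_periodic_callback(update, update_ms)

# Usage (inside a Bokeh server application script):
# attach_time_plot(curdoc(), my_time_sink)
```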

Current Status

Implementation of Time Sink

Initially, I implemented the time sink following the previous data flow structure, and then updated the code to reflect the changes described above. As of now, the develop branch of the repository has the most recent version of the code. The time sink block is ready with basic functionality, and initial testing with Python tests was successful. I am yet to write fully structured tests for it.
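
The structured tests will most likely follow the usual gr_unittest pattern. Below is only an outline of what I have in mind; the bokehgui module name and the time_sink_f constructor arguments shown here are placeholders, not the finalized interface:

```python
from gnuradio import gr, gr_unittest, blocks

# `bokehgui` and the time_sink_f signature below are placeholders for the
# actual module and constructor; the real test will use the final interface.
import bokehgui

class qa_time_sink_f(gr_unittest.TestCase):

    def setUp(self):
        self.tb = gr.top_block()

    def tearDown(self):
        self.tb = None

    def test_001_basic_flow(self):
        data = [float(i) for i in range(100)]
        src = blocks.vector_source_f(data, False)
        snk = bokehgui.time_sink_f(1024, 32000, "QA time sink", 1)
        self.tb.connect(src, snk)
        self.tb.run()
        # Once the data accessor is finalized, assert on the buffered
        # samples here instead of just checking that the flowgraph runs.

if __name__ == '__main__':
    gr_unittest.run(qa_time_sink_f)
```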

Learning of the week

  • SWIG is easy but hard: Getting an error from SWIG will inspire paradoxes in your mind (like the heading of this point)! I spent three whole days trying to solve an error coming from SWIG, but after a break of two days, I accidentally solved the issue. Let me describe the SWIG debugging process:
    • You will think that your error is unique. You will google your issue and, as usual, find lots of answers (read: 10 answers) to solve it. Most of the answers will tell you to do the same thing. But since you have already developed a bias that your issue is different and unique, everything in the world will help you strengthen that bias and your issue will not be solved.
    • Then you will get a personal advisor on the issue (in my case, he was Michael Dickens). The personal advisor will suggest some changes to your approach and voila! Your issue is solved. (Maybe because you believed in your advisor?)
    • But when you observe what change actually solved your issue, you will find that you did the same thing those 10 answers told you to do. You will go to bed hoping that you won't wake up in this world, which is becoming more and more dependent on computers filled with randomness!
    • TL;DR: Believe in the solutions given by Google and Stack Overflow!

To-Do list for next week:

  1. Complete the Time Sink and send a testing request to the community
  2. Start evaluating the performance