Essays on programming languages, computer science, information technologies, and more.

Sunday, February 6, 2022

Blocking vs non-blocking assignment on sequential logic

I made a counter that holds high for 1 clock out of every 4 clocks, first with blocking assignments:


`timescale 1ns / 1ps
module ride_edge (
        input wire clk,resetn,
        output wire hold_p
    );
    
    reg hold;
    reg [1:0] count;
    
    assign hold_p = hold;
    
    always @(posedge clk or negedge resetn) begin
        if (!resetn) begin
            hold = 0;
            count = 0;        
        end
        else begin         
            if ( count == 0 ) hold = 1;    
            else if ( hold == 1 ) hold = 0;
            
            count = count + 1;
        end    
    end
    
endmodule



Then I probed it with an oscilloscope.


Then I made the same code with non-blocking assignments, like below.

`timescale 1ns / 1ps
module ride_edge_nonblocking (
        input wire clk,resetn,
        output wire hold_p 
    );
    
    reg hold;
    reg [1:0] count;
    
    assign hold_p = hold;
    
    always @(posedge clk or negedge resetn) begin
        if (!resetn) begin
            hold <= 0;
            count <= 0;
        end
        else begin         
            if ( count == 0 ) hold <= 1;    
            else if ( hold == 1 ) hold <= 0;
            
            count <= count + 1;
        end    
    end    
endmodule



Here is the Vivado block diagram. FCLK_CLK0 is 100 MHz, and xslice_1 picks Din[4] to make a 3.125 MHz clock ( = 100 MHz / 32 ).


Then I probed both hold_p signals.


I was puzzled. I expected the non-blocking version to be delayed by 1 clock, but it wasn't. No wonder: the schematic shows that exactly the same nets were generated.


But why is there no difference between blocking and non-blocking assignment here? In this design, no right-hand side reads a value assigned earlier in the same block - count is read before it is incremented - so both forms describe the same registers, and Vivado synthesizes identical nets. Had the order been different, say incrementing count before testing it, the two forms would have synthesized different logic.

Still, once a design infers a register, non-blocking assignment is what models how registers actually update at a clock edge. So better to use '<=' in sequential always blocks to avoid confusion, even in cases like this where '=' happens to produce the same result?

Sequential logic needs registers. It seems to me that '<=' is the only sensible choice left.
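Synthesis aside, in simulation the two assignment types can behave differently: a blocking update is visible to later statements in the same block, while non-blocking updates are all computed from the values sampled at the clock edge. Here is a toy Python model of this always block (my sketch, not full Verilog semantics) showing why this particular code happens to agree either way:

```python
def step_blocking(hold, count):
    # Blocking '=': each assignment is visible to the statements after it.
    if count == 0:
        hold = 1
    elif hold == 1:
        hold = 0
    count = (count + 1) % 4  # 2-bit counter wraps at 4
    return hold, count

def step_nonblocking(hold, count):
    # Non-blocking '<=': every right-hand side is evaluated from the values
    # at the start of the clock edge, then all updates land together.
    next_hold = hold
    if count == 0:
        next_hold = 1
    elif hold == 1:
        next_hold = 0
    next_count = (count + 1) % 4
    return next_hold, next_count

# Because 'count' is read before it is incremented, both styles produce the
# same waveform: hold is high for 1 clock out of every 4.
hb = cb = hn = cn = 0
wave_b, wave_n = [], []
for _ in range(8):
    hb, cb = step_blocking(hb, cb)
    hn, cn = step_nonblocking(hn, cn)
    wave_b.append(hb)
    wave_n.append(hn)
print(wave_b)  # [1, 0, 0, 0, 1, 0, 0, 0]
print(wave_n)  # same
```

If the increment of count were moved before the if-else, the two step functions would diverge, which is the general reason '<=' is preferred for sequential logic.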
 

Wednesday, July 7, 2021

Finding planar rigid transformation - linearized and weighted least squares

Rigid Transformation

A point P in the world can be expressed in body coordinates as well as in world coordinates, and the relationship between the two is a rigid transformation - a rotation and a translation - like below.



Refer to Wiki - Kinematics: Matrix representation
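The equation image from the original post is missing; the standard planar form (my reconstruction, with R a rotation by θ and t a translation) is:

```latex
P_W = R\,P_B + t
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta & t_x \\
\sin\theta & \phantom{-}\cos\theta & t_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_B \\ y_B \\ 1 \end{bmatrix}
```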

Image resolution and dimension

A point in an image can be expressed in world coordinates using the image resolution - i.e., mm/pixel - and the image dimensions. Below, the resolution is r, W is the width, and H is the height. The point at the center, (0,0), corresponds to pixel (320,240) of a 640x480 image.
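The figure is missing; one consistent reading (the sign of the y-axis is my assumption) maps a pixel (u, v) to world coordinates (x, y) as:

```latex
x = r\left(u - \frac{W}{2}\right), \qquad y = r\left(\frac{H}{2} - v\right)
```

so that for a 640x480 image the center pixel (320, 240) maps to (0, 0).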



Geometric Model

For example, there is a pixel point (P) seen by a camera (C) that is offset (CO) and assembled onto a machine (W).



And the machine has a stage (S) that is moved by a motor (M), and the stage has a point on it.



Then any point the camera sees can be matched to a point on the stage.
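The equation images for this chain are missing; one consistent reading (my reconstruction, writing T_X for each rigid transformation) equates the pixel point P mapped through the camera side with the stage point P_S mapped through the motor side:

```latex
T_{CO}\; T_{C}\; P \;=\; T_{M}\; T_{S}\; P_{S}
```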



Least square method on a linearized transformation matrix

If we know every transformation except the camera offset, then we can rearrange the above equation like below for the i-th point.




The unknown CO transformation matrix can be linearized under the assumption that the angle is small.
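For small θ, cos θ ≈ 1 and sin θ ≈ θ, so the rotation part can be linearized (my reconstruction of the missing equation):

```latex
R(\theta) =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\;\approx\;
\begin{bmatrix} 1 & -\theta \\ \theta & 1 \end{bmatrix},
\qquad \theta \ll 1
```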



Then the equation can be rearranged to put the unknown X on the right, and the unknown can be solved by matrix inversion. Refer to Wiki - Least squares: Linear least squares.
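In matrix form this is ordinary linear least squares: stacking one pair of rows per point into A X = b, with unknowns such as X = (θ, t_x, t_y)ᵀ (my notation, since the original images are missing), the solution is:

```latex
A X = b
\quad\Rightarrow\quad
X = \left(A^{\mathsf{T}} A\right)^{-1} A^{\mathsf{T}} b
```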




Weighted least square method

If there is uncertainty in each measurement of a pixel point, the uncertainty can be incorporated as a weight (W) on each pixel point.




Then the equation can be rearranged for the unknown X. Note that the elements of X changed sign for convenience.
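With a diagonal weight matrix W holding one weight per measurement, the weighted least squares solution (my reconstruction of the missing equation) becomes:

```latex
X = \left(A^{\mathsf{T}} W A\right)^{-1} A^{\mathsf{T}} W b
```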







Monday, June 28, 2021

Continuous Integration of .NET Application using Azure DevOps

Are you still building the exe on your local PC?

Or do you maintain a dedicated PC that power-cycles on a stormy day or dies when the HDD breaks down? Aren't you tired of maintaining those build PCs? Here is a story that can put those worries in the past - for now, only for .NET applications using Azure DevOps.

Cloud will build like local

Here, I want to make the build process runnable on the local PC as well as in the cloud - Azure DevOps - so that you can verify your change locally, push the changes, and know that the cloud will build your code in the same manner. Then, once you have fixed something on your local PC, you are just one push away from finishing. Now, if you're sold, be patient and try to follow along below.

Create Azure DevOps project

First, go to Azure DevOps and create a new project.
Then clone the repository to your local PC, or just clone the template project.

Directory structure

Now you will populate the workspace with the directories below, or just copy and paste the template project.

    /build         : contains build batch, project    
    /src   
       /bin        : generated dll, exe, pdb goes here
       /packages   : dependent component such as NUnit goes here
       /Foo        : various VS solutions
       /Bar
       ...

Local build

Locally, you can open a *.sln file in the IDE to code, debug, and build. Then you open a command prompt and run the build process, which goes through all the solutions and makes sure nothing is broken by your change.

To make the process as easy as double-clicking an icon, there is a build command-line icon in the build folder. When you double-click the icon, msbuild is executed to run the whole build. Try to follow the arrows below.


Your aim is to see 'Build succeeded.' as below.

Install packages

NuGet can put referenced DLLs in a folder local to the repository. This needs PackageReference instead of the legacy packages.config; by default, PackageReference restores packages to a global per-user cache, so the folder to hold them should be redirected in nuget.config at the root folder. If the packages are set up correctly, you will find a line like the one below in the csproj. During the build process, those packages are downloaded and installed - this is called restore. For a local build, it can be defined in build.proj like below.
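The screenshots from the original post are missing here, so below is a hypothetical sketch of the two pieces involved - a PackageReference line in a csproj and a nuget.config that redirects the package folder (package names, versions, and paths are my assumptions, not from the original):

```xml
<!-- In a .csproj: reference NUnit via PackageReference (version is illustrative) -->
<ItemGroup>
  <PackageReference Include="NUnit" Version="3.13.3" />
</ItemGroup>

<!-- In nuget.config at the repo root: restore packages into src/packages -->
<configuration>
  <config>
    <add key="globalPackagesFolder" value="src/packages" />
  </config>
</configuration>
```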

Build and test

An MSBuild task builds all the solution files listed, as below. And it can run all the unit tests, like below. If you want to debug the code - put a breakpoint - in a unit test, you will need to pass '/process=Single'.
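The build.proj from the post is not shown here, so this is a minimal hypothetical sketch of such a file (solution names, paths, and the test-runner path are my assumptions):

```xml
<!-- build.proj: builds every listed solution, then runs the unit tests -->
<Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Solutions Include="..\src\Foo\Foo.sln" />
    <Solutions Include="..\src\Bar\Bar.sln" />
  </ItemGroup>

  <Target Name="Restore">
    <MSBuild Projects="@(Solutions)" Targets="Restore" />
  </Target>

  <Target Name="Build" DependsOnTargets="Restore">
    <MSBuild Projects="@(Solutions)" Targets="Build" Properties="Configuration=Release" />
  </Target>

  <Target Name="Test" DependsOnTargets="Build">
    <!-- runner path is illustrative; it comes from the restored packages folder -->
    <Exec Command="..\src\packages\nunit.consolerunner\3.15.0\tools\nunit3-console.exe ..\src\bin\Release\Tests.dll" />
  </Target>
</Project>
```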

Zip up binaries

After a successful build and test, the DLLs, EXEs, and relevant files can be zipped up. Below, files in src\bin\{Debug|Release} are zipped to src\bin\install\SampleNetApp.{Debug|Release}.zip.

Do the same thing at cloud

Azure can do in the cloud what you did on the local PC. It just needs an azure-pipelines.yml at the root folder. Then whenever you push a change to git, the pipeline starts and builds, and you can look into the build log.
With everything in place, you can get the build as a zipped file in the artifacts.
Now, as promised, you get the build from the cloud in addition to your local PC. This is a simple continuous integration of a .NET application, but it can be a good starting point for your journey.
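The azure-pipelines.yml itself is not shown in this extract; a minimal hypothetical sketch (pool image, paths, and artifact name are my assumptions) could look like:

```yaml
# azure-pipelines.yml: run the same build.proj the local build uses
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  - task: MSBuild@1
    displayName: 'Build all solutions'
    inputs:
      solution: 'build/build.proj'

  - publish: src/bin/install
    artifact: drop
```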

Friday, July 5, 2019

Create instance of Type dynamically in C#

When applying RAII, the constructor tends to take arguments so the object is fully autonomous afterward. When combined with a factory method, this can produce a string of if-else branches with "new" and arguments, like the pseudo code below.

interface IWish
{
  void MakeTrue();
}

class EscapeCave : IWish
{
  public EscapeCave( string master, int nTh, string target ) {  ...  }
  public void MakeTrue() { ... }
}

class BecomePrince : IWish
{
  public BecomePrince( string master, int nTh, string target ) {  ...  }
  public void MakeTrue() { ... }
}

class BecomeHuman : IWish
{
  public BecomeHuman( string master, int nTh, string target ) {  ...  }
  public void MakeTrue() { ... }
}

class Genie
{
  static public IWish Create( string wish, string master, int nth, string target )
  {
    if( wish == "escape cave" )
    {
      return new EscapeCave( master, nth, target );   // code repetition - becomes tedious and smelly
    } 
    else if( wish == "become prince" )
    {
      return new BecomePrince( master, nth, target );
    } 
    else if( wish == "become human" )
    {
      return new BecomeHuman( master, nth, target );
    }  
    
    throw new ArgumentException( "Not supported wish" );
  }
}
To remove the code repetition - the "new" plus arguments - a Type can be used with System.ComponentModel.TypeDescriptor.CreateInstance(). Note that each Type should have the same constructor - the same number and types of arguments. That may be too restrictive, but sometimes it is desirable, as the only difference should be different behaviour with the same arguments.

class Genie
{
  static public IWish Create( string wish, string master, int nth, string target )
  {
    var typeDict = new Dictionary<string, Type>
    {
      { "enscape cave", typeof(EscapeCave) },
      { "become prince", typeof(BecomePrince) },
      { "become human", typeof(BecomeHuman) },   // genie can add its capable wish by simply adding line in here 
    };
    var typeWish = typeDict[ wish ];

    return (IWish)System.ComponentModel.TypeDescriptor.CreateInstance(
                provider: null, // null provider -> default reflection-based creation
                objectType: typeWish,
                argTypes: new Type[] { typeof(string), typeof(int), typeof(string) },
                args: new object[] { master, nth, target } );
  }
}

Wednesday, June 26, 2019

Integer factorization and combination in C#

 
     F
R = --- 
     D 
When transmitting data at a certain rate ( R ), the clock rate ( F ) has to be divided by some D. Sometimes it is desirable for the rate to be an integer. Usually, F is given by the transmitting device, and we want to know every possible R.

This asks R to be an integer, which means D must divide F without remainder - F has to be an integer multiple of D. So first, F is factored into prime numbers. Then every combination of those prime factors - that is, every divisor of F - is generated. Each such divisor can serve as D (and determines an R).

The prime numbers could be calculated on the fly, but that is not the point of this article. Assume the primes are already known: { 2, 3, 5, 7, 11, ... }. Then the factors of F can be enumerated by repeatedly dividing by each prime, as in the function below.

IEnumerable<int> Factors(int n, IEnumerable<int> primes)
{
  foreach (int p in primes) {
    if (p * p > n) break;

    while (n % p == 0) {
      yield return p;
      n /= p;
    }
  }

  if (n > 1) yield return n;
}


Then the factors can be combined and enumerated so that each divisor - each product of prime powers - can be calculated.

IEnumerable<int[]> Combination(Tuple<int, int>[] groups)
{
  // groups holds (prime, max exponent) pairs; yield every exponent combination
  var exponents = new int[groups.Length];
  foreach (var e in RecursivelyCombine(exponents, groups, 0)) {
    yield return e;
  }
}

IEnumerable<int[]> RecursivelyCombine(int[] exponents, Tuple<int, int>[] factors, int index)
{
  var factor = factors[index];
  for (int i = 0; i <= factor.Item2; ++i) {
    exponents[index] = i;
    if (index == factors.Length - 1) {
      yield return exponents;
    }
    else {
      foreach (var e in RecursivelyCombine(exponents, factors, index + 1)) {
        yield return e;
      }
    }
  }
}


Then here is a function that enumerates all possible rates for a given clock F.

IEnumerable<int> Rates(int F) // e.g. F = 500
{
  var factors = Factors( F, new int[] { 2, 3, 5, 7 } );
  // for F = 500: factors = { 2, 2, 5, 5, 5 }

  var groups = factors.GroupBy(f => f).Select(g => Tuple.Create(g.Key, g.Count())).ToArray();
  // groups = { {2,2}, {5,3} }  <- 2^2 x 5^3

  foreach (int[] pm in Combination(groups)) { // {0,0}, {0,1}, {0,2}, {0,3}, {1,0}, ..., {2,3}

    // pm can be { 1, 2 }, which means D = 2^1 x 5^2
    int D = 1;
    for (int i = 0; i < groups.Length; ++i) {
      D *= (int)Math.Pow(groups[i].Item1, pm[i]);
    }

    // every divisor D of F yields an integer rate R = F / D
    yield return F / D;
  }
}
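The same idea can be sketched in Python (my sketch: `factors` mirrors the C# Factors above, and `divisors` plays the role of Combination plus the product loop):

```python
from collections import Counter
from itertools import product

def factors(n, primes):
    """Yield the prime factorization of n, e.g. factors(500, ...) -> 2, 2, 5, 5, 5."""
    for p in primes:
        if p * p > n:
            break
        while n % p == 0:
            yield p
            n //= p
    if n > 1:
        yield n  # whatever remains is itself prime

def divisors(f, primes=(2, 3, 5, 7, 11, 13)):
    """Enumerate every divisor D of f by trying every exponent combination."""
    groups = Counter(factors(f, primes))  # e.g. {2: 2, 5: 3} for 500
    for exps in product(*(range(c + 1) for c in groups.values())):
        d = 1
        for p, e in zip(groups, exps):
            d *= p ** e
        yield d

print(sorted(divisors(500)))
# [1, 2, 4, 5, 10, 20, 25, 50, 100, 125, 250, 500]
```

Each divisor D gives an integer rate R = F / D, so the sorted list above is exactly the set of achievable integer rates for F = 500.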

Monday, June 17, 2019

Criticism vs Comparison

"Criticism is constructive, comparison is abuse" from The Rehearsal by Eleanor Catton.


When you have no other measure in the field, you just have to look into the object directly. But if there is one, you stop looking at the object and start looking only for differences.

I happened to work on a project that rewrites legacy C++ code into C# component-based code, and I got a lot of comparison feedback against the old software. It goes like this:

'Hey, this new one does not generate the same output as the old one.'
'But does the new output cause an issue?'
'No, but it is not what the user has seen.'
'What should be seen?'
'Don't ask me. Just make it the same as before.'


Legacy code has its own reasons for being written the way it is. And when it is unearthed, it is not easy to see why it was shaped that way.

So as not to burden those who follow, code should be clean and concise, and should show its intention and, if possible, the reasoning behind it.

Though no matter how crystal-clear the code, the better way is to leave a document. And when it is referred to after 10 years, the document should tell what the software should do, not what it does.

Monday, June 3, 2019

Bayesian Inference on program crash

A company I was consulting for had been in trouble with abnormal program crashes. The log said the program crashed just before accessing an I/O board, and some SW engineers believed the culprit was the I/O board and asked for a replacement.

But it turned out that there were other steps before the I/O access that don't write any log; they caused the crash but left no trace. When people didn't see an I/O log, they concluded that the I/O had crashed the program before it could log. It is a classic case of 'correlation is not causation'.

I thought about how to find the fault in a systematic way, read about Bayesian inference, and tried it here. Refer to Bayesian inference at Wiki.

P( Bad IO | Crash ) = P( Crash | Bad IO ) * P( Bad IO ) / P( Crash )
P( Bad IO | Crash ) : the probability of a bad IO given a crash
P( Crash | Bad IO ) : the probability of a crash given the IO goes bad
P( Bad IO ) : the probability of the IO going bad
P( Crash ) : the probability of a crash


Say the program runs 100 times and 1 crash happens. During those 100 runs, the IO is accessed about 1000 times and goes bad 1 time. If the IO goes bad, it will definitely crash. Then,
P( Bad IO | Crash ) = 1 * 0.001 / 0.01 = 0.1
This says that P( Bad IO ) is too small for the IO to be the likely culprit. One way to increase the probability is to change the hypothesis, e.g. to 'Bad IO Writing'. IO writing may happen 500 times during the 100 runs and can go bad. Then P( Bad IO Writing ) is 0.002 ( = 1 / 500 ), and the probability - or confidence - of 'Bad IO Writing' becomes 0.2.
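The arithmetic above is just Bayes' rule; a quick Python check using the numbers straight from the text:

```python
def posterior(p_effect_given_cause, p_cause, p_effect):
    # Bayes' rule: P(cause | effect) = P(effect | cause) * P(cause) / P(effect)
    return p_effect_given_cause * p_cause / p_effect

p_crash = 1 / 100         # 1 crash in 100 runs
p_bad_io = 1 / 1000       # IO goes bad ~1 time in 1000 accesses
p_bad_io_writing = 1 / 500

print(posterior(1.0, p_bad_io, p_crash))          # ~0.1
print(posterior(1.0, p_bad_io_writing, p_crash))  # ~0.2
```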

To increase confidence in a hypothesis, the hypothesis has to be very specific; otherwise the prior - P( Bad IO ) here - is too small to be meaningful. The process of refining the hypothesis is the troubleshooting, I guess: the more specific the hypothesis, the more chance of landing on the valid cause.

In the end, it took a keen eye to find the bug - non-reentrant code used unsafely in a multi-threaded environment. This Bayesian inference is not straightforward to quantify - it is hard to tell the probability of a given hypothesis.