Libcamera-Apps-CV Tutorial

In this tutorial you will find a complete guide on how to turn the default libcamera-apps into libcamera-apps that can detect anything you want! This tutorial is the technical solution to problems we ran into while developing low-latency IoT solutions for industry. You can read more about it here. Source code is available here.

1. Install OpenCV on Raspberry Pi

We highly recommend the OpenCV installation guide from QEngineering; they did a great job!
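Before moving on, it is worth confirming that OpenCV is actually visible to your compiler. A minimal sketch, assuming your build installed a pkg-config file named opencv4 (the file name check_opencv.cpp is just an example):


// check_opencv.cpp - prints the OpenCV version the compiler picks up
// build: g++ check_opencv.cpp -o check_opencv $(pkg-config --cflags --libs opencv4)
#include <opencv2/core/version.hpp>
#include <iostream>

int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}
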

2. Clone our repository from Github

If you already have Git installed, you can just paste:


git clone https://github.com/modelingevolution/libcamera-apps-cv.git
      

If not, install Git first:


sudo apt install git
      

3. Build libcamera-apps-cv

Go to the Source directory:


cd libcamera-apps-cv/Source/
      

Make install.sh executable:


sudo chmod +x install.sh
      

Run it:


./install.sh
      

4. Test stream

To test streaming and basic triangle recognition you need to install ffplay or any other player that can handle video streaming over the TCP protocol.

If you want to test your newly installed libcamera app (as a server), paste:


libcamera-vid --width 1920 --height 1080 -g 24 -t 0 --frame-counter 10 --inline --listen -o tcp://0.0.0.0:9001
      

As a client (ffplay on Windows), open a terminal in the directory containing ffplay.exe and paste:


./ffplay tcp://{Your Raspberry Pi IP address}:9001 -vf "setpts=N/24" -fflags nobuffer -flags low_delay -framedrop
      

Server console view: (screenshot)
Client ffplay.exe view: (screenshot)
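
If your client machine runs Linux rather than Windows, the equivalent command (assuming ffplay from FFmpeg is installed and on your PATH) is:


ffplay tcp://{Your Raspberry Pi IP address}:9001 -vf "setpts=N/24" -fflags nobuffer -flags low_delay -framedrop
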

5. Where can I implement my computer vision algorithm?

In the libcamera-vid.cpp file (libcamera-apps-cv/Source/apps/libcamera-vid.cpp) you will find a prepared method that gives you direct access to the current and the previous frame, both in grayscale.


void opencv_loop(LibcameraApp *app)
{
    addSignalHandling();

    cv::Mat frame;
    cv::Mat frame2;
    cv::Mat inRangeFrame;

    // The pointers are swapped every iteration, so *current always holds the
    // newest frame and *prv the previous one.
    cv::Mat *current = &frame;
    cv::Mat *prv = &frame2;

    long frameCounter = 0;

    while (true)
    {
        // Grab the next grayscale frame; skip this iteration if none is ready.
        if (!app->GetVideoFrame(*current))
        {
            continue;
        }

        if (frameCounter > 0)
        {
            // Basic triangle recognition example - START
            std::vector<std::vector<cv::Point>> contours;
            std::vector<cv::Point> approx;
            int trianglesCounter = 0;

            // Keep only dark pixels (values 0-100) as a binary mask.
            // Note: use *current, not frame, because of the pointer swap below.
            cv::inRange(*current,
                        cv::Scalar(0, 0, 0),
                        cv::Scalar(100, 100, 100),
                        inRangeFrame);

            cv::findContours(inRangeFrame, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

            // Approximate each contour with a polygon; three vertices means a triangle.
            for (size_t i = 0; i < contours.size(); i++)
            {
                cv::approxPolyDP(cv::Mat(contours[i]), approx,
                                 cv::arcLength(cv::Mat(contours[i]), true) * 0.02, true);
                if (approx.size() == 3)
                    trianglesCounter++;
            }

            std::cout << "Contours: " << contours.size() << " Triangles: " << trianglesCounter << std::endl;

            // Basic triangle recognition example - END
        }

        // Swap the buffers so the frame we just processed becomes the previous one.
        cv::Mat *tmp = prv;
        prv = current;
        current = tmp;

        frameCounter++;
    }
}
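
If you want to make use of both frames, simple frame differencing is a natural starting point. Below is only a sketch of that idea, not part of the repository: the hypothetical helper detectMotion could be called from inside the if (frameCounter > 0) block, e.g. detectMotion(*current, *prv); the threshold value 25 is an arbitrary example.


#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

// Hypothetical helper: count how many pixels changed between two grayscale frames.
static void detectMotion(const cv::Mat &current, const cv::Mat &previous)
{
    cv::Mat diff, mask;
    cv::absdiff(current, previous, diff);                   // per-pixel absolute difference
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);  // keep only noticeable changes
    std::cout << "Changed pixels: " << cv::countNonZero(mask) << std::endl;
}
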
   

Once your changes are in place, just rebuild the code by running ./install.sh again.

6. Additional flags

--frame-counter (=0) - Send every n-th frame to recognition.

--save-to-file (=path) - Save frames to a binary file in the given location. The file name will be $TIMESTAMP_frames.bin.
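
For illustration only, both flags could be combined with the streaming command from step 4; the path /home/pi/frames below is a made-up example, so substitute your own location:


libcamera-vid --width 1920 --height 1080 -g 24 -t 0 --frame-counter 10 --save-to-file /home/pi/frames --inline --listen -o tcp://0.0.0.0:9001
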
