TinyML machine vision development
Ahmad Hamza wrote 04/29/2023 at 09:46 • 0 points
I'm building a machine vision application using ESP32 and STM32 development boards with a built-in image sensor. The only way to transfer images to the PC for debugging is through serial ports, which is very slow. What are the best options for getting a view of the internal frames during debugging?
Discussions
It's great to hear about your machine vision development using ESP32 and STM32 boards! To get a better view of the internal frames during debugging, consider a faster data transfer method like WiFi or Ethernet. By setting up a local network connection, you can stream the image frames to your PC in real time, which should significantly improve the debugging process. Additionally, you could integrate a lightweight web server on the boards and access the stream of images via a web interface on your PC. This way, you can visualize the frames quickly and efficiently. Good luck with your project!
[this comment has been deleted]
Thanks chatGPT, but this is an irrelevant answer.
Could this be of help? https://hackaday.io/project/190923-open-source-esp32-camera-for-yolov456
It can be challenging to transfer images through serial ports, especially for debugging purposes. One solution is to use a wireless connection like Wi-Fi or Bluetooth to transfer the images in real time. Another option is to store the images on an SD card and then read them from the PC.
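For the SD card route, a minimal sketch could look like the following. It assumes an Arduino-style ESP32 with the esp32-camera driver configured for JPEG output and a board with an SD_MMC slot; the file naming is just an example and camera init is board-specific, so it's omitted:

#include "FS.h"
#include "SD_MMC.h"
#include "esp_camera.h"

// Save the current camera frame as a numbered JPEG on the SD card.
// Assumes esp_camera_init() was already called with the board's pin map.
static int frameNo = 0;

void saveFrame() {
  camera_fb_t *fb = esp_camera_fb_get();          // grab one frame
  if (!fb) return;
  char path[32];
  snprintf(path, sizeof(path), "/frame%04d.jpg", frameNo++);
  File f = SD_MMC.open(path, FILE_WRITE);
  if (f) {
    f.write(fb->buf, fb->len);                    // raw JPEG bytes
    f.close();
  }
  esp_camera_fb_return(fb);                       // release the buffer
}

void setup() {
  // ... esp_camera_init(&config) with your board's pin map goes here ...
  SD_MMC.begin();                                 // mount the card
}

void loop() {
  saveFrame();
  delay(1000);                                    // e.g. one frame per second
}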
By "during debugging" you mean open debugger session, e.g. GDB? If so, you could grab memory. In runtime however, given that You have ESP32, you can save the image to filesystem and then host a website that points to it, or create raw socket endpoint and write simple client for the development host.
Any idea how hosting a web server will affect the performance of the code by consuming CPU cycles to run the server stack?
Not really, you'd have to test it. The server itself should not consume much CPU time when it isn't being queried, only while actively sending a response to the browser. Saving an image to the filesystem, however, will certainly take a considerable amount of time.
Running the web server on a different core than your image processing pipeline (or running the pipeline on the other core) might be useful for lower latency. Remember that you will have to look out for data races: https://www.freertos.org/Inter-Task-Communication.html
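As a rough illustration of that split (task names, stack sizes, and priorities are arbitrary), the pipeline can hand finished frames to a server task on the other core through a FreeRTOS queue, so each buffer has exactly one owner at a time:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"
#include "esp_camera.h"

static QueueHandle_t frameQueue;

void pipelineTask(void *arg) {
  for (;;) {
    camera_fb_t *fb = esp_camera_fb_get();
    // ... run inference / processing on fb->buf here ...
    xQueueSend(frameQueue, &fb, portMAX_DELAY);  // pass ownership to server
  }
}

void serverTask(void *arg) {
  for (;;) {
    camera_fb_t *fb;
    if (xQueueReceive(frameQueue, &fb, portMAX_DELAY) == pdTRUE) {
      // ... send fb->buf / fb->len to the debugging client here ...
      esp_camera_fb_return(fb);  // only the receiver releases the buffer
    }
  }
}

void startTasks() {
  frameQueue = xQueueCreate(2, sizeof(camera_fb_t *));
  xTaskCreatePinnedToCore(pipelineTask, "pipeline", 8192, NULL, 5, NULL, 1);
  xTaskCreatePinnedToCore(serverTask, "server", 8192, NULL, 4, NULL, 0);
}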
If you just want to debug the image processing pipeline, i.e. to see whether it does what you want, I'd rather do that on the host PC entirely (keep your code abstract and portable enough that the parts you want to test can be built for x86_64). Then grab some frames from your device for testing purposes and put them through the pipeline. You'll have to put in some time up front, but you'll cut off a lot of time during tweaking: no waiting for flashing, processing on the ESP, etc. And your embedded code will be pretty much ready for TDD.
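A minimal sketch of the idea, with a made-up detect_bright() standing in for a real pipeline stage: the stage is plain C++ with no ESP headers, so the same file compiles both into the firmware and with g++ on the PC, fed with raw grayscale dumps from the device (the 96x96 frame size is just an assumption):

// pipeline.cpp -- deliberately free of ESP/Arduino headers so it builds
// on the target and on x86_64. detect_bright() is a stand-in stage.
#include <cstdint>
#include <cstddef>

bool detect_bright(const uint8_t *gray, size_t n) {
  long sum = 0;
  for (size_t i = 0; i < n; ++i) sum += gray[i];  // average brightness
  return sum / (long)n > 128;                     // "object" = bright frame
}

// host_test.cpp -- host-side harness: feed it a raw 8-bit grayscale dump
// grabbed from the device, e.g. ./host_test frame0.raw
#include <cstdint>
#include <cstddef>
#include <cstdio>
#include <vector>

bool detect_bright(const uint8_t *gray, size_t n);

int main(int argc, char **argv) {
  const size_t W = 96, H = 96;                    // assumed frame size
  std::vector<uint8_t> frame(W * H);
  if (argc < 2) return 1;
  FILE *f = std::fopen(argv[1], "rb");
  if (!f || std::fread(frame.data(), 1, frame.size(), f) != frame.size())
    return 1;
  std::fclose(f);
  std::printf("detected: %d\n", detect_bright(frame.data(), frame.size()));
  return 0;
}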
What frame rate and resolution are you targeting? What frameworks are you using, e.g. vanilla ESP-IDF or Arduino? Any ML/CV libraries such as TF Lite or SOD?
[this comment has been deleted]
Thanks for the detailed answer and the valuable options. Let me add some comments on these:
1 - For OpenCV and TensorFlow, do you mean running the code on the PC?
4 - I'd appreciate it if you could share how to achieve this technique, or links to such tools.
Or, more generally, what tools are usually used to build embedded tinyML vision applications? Maybe I'm missing some knowledge?
I don't want to be rude, but that looks like a GPT response, posted just for the sake of posting. Have you read it? It looks far from viable for @Ahmad Hamza's stack.