You can’t really master robotics without understanding how robots see the world, which means working with object detection, face recognition, and similar algorithms. To test these ideas, or to add them to an existing robot, you need a high-quality vision sensor. The HuskyLens 2 is an AI-powered camera with over 20 built-in algorithms, livestreaming, and LLM support through an MCP server. The good people at DFRobot were kind enough to send us one to test. Let’s see what was included:

Our HuskyLens 2 shipped in a tiny package containing the main unit, a microscope lens, and a WiFi module. A power board was also included, which comes in handy when you want to use this vision sensor with a Raspberry Pi or Arduino; we decided to power ours with a portable power bank instead. The package also included a metal accessory kit, which is useful when you need to integrate the camera into your DIY projects.

The HuskyLens 2 comes with a 1.6GHz dual-core processor and 6 TOPS of AI performance, which means you can use it as a standalone vision sensor. It features various algorithms, including face recognition, object tracking, color recognition, pose and hand recognition, and line tracking. It can also read barcodes and QR codes, and you can use Mind+ to train and import your own models.
The HuskyLens 2 is ready to go out of the box. You can teach it new poses, faces, and emotions in just a few steps. In hand gesture recognition mode, for instance, the camera tracks 21 key points on each hand, including the finger joints and the wrist. It supports multi-angle learning, and you can adjust the detection and recognition thresholds to control how the model behaves. Once a gesture is learned, the camera recognizes it, frames it on screen, and shows a probability for each recognized gesture.
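The 21-point layout mentioned above is the common hand-landmark convention (the wrist plus four points per finger). As a rough illustration of what can be done with such points, here is a minimal sketch with made-up coordinates that checks whether the index finger is extended. The landmark indices and the heuristic are assumptions for illustration, not the camera's actual algorithm:

```python
# Illustrative sketch of the common 21-point hand-landmark layout
# (wrist = point 0, then four points per finger). Coordinates and the
# "extended finger" heuristic below are hypothetical -- the HuskyLens 2
# computes its own gesture results internally.
import math

WRIST, INDEX_PIP, INDEX_TIP = 0, 6, 8  # indices in the 21-point model

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def index_finger_extended(landmarks):
    """A finger counts as extended when its tip sits farther from the
    wrist than the joint beneath it (a simple heuristic)."""
    wrist = landmarks[WRIST]
    return distance(landmarks[INDEX_TIP], wrist) > distance(landmarks[INDEX_PIP], wrist)

# Hypothetical landmarks: index finger pointing up
points = {WRIST: (0.5, 0.9), INDEX_PIP: (0.5, 0.5), INDEX_TIP: (0.5, 0.2)}
print(index_finger_extended(points))  # True for this made-up pose
```

The same tip-versus-joint comparison generalizes to the other fingers, which is roughly how simple gesture classifiers turn raw key points into named gestures.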

Other modes follow a similar setup and learning flow. In face recognition, you just point the camera at a new face and press the top button to learn it. You can assign a name to each face and adjust the NMS, recognition, and detection thresholds; the NMS (non-maximum suppression) threshold determines how overlapping duplicate boxes are handled. Multi-face acceleration, as the name suggests, comes in handy when you are dealing with multiple faces at once: it produces smoother results but can reduce recognition accuracy.
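Non-maximum suppression (the NMS behind that threshold) is easy to sketch: boxes that overlap a higher-scoring box beyond the threshold are treated as duplicates of the same face and dropped. The boxes below are made up for illustration; the HuskyLens 2 does all of this internally:

```python
# Minimal non-maximum suppression sketch. Boxes are (x1, y1, x2, y2, score);
# a box is dropped when its IoU with an already-kept, higher-scoring box
# exceeds the threshold. Detection values here are illustrative only.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, threshold=0.5):
    """Keep the highest-scoring box from each cluster of overlapping boxes."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box[:4], k[:4]) <= threshold for k in kept):
            kept.append(box)
    return kept

detections = [
    (10, 10, 110, 110, 0.9),   # strong detection
    (15, 12, 115, 112, 0.6),   # near-duplicate of the first
    (200, 50, 300, 150, 0.8),  # a separate face
]
print(nms(detections))  # the near-duplicate gets suppressed
```

Raising the threshold keeps more overlapping boxes (useful for crowded scenes); lowering it suppresses duplicates more aggressively, which matches how the on-camera slider behaves.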


The HuskyLens 2 is also very capable when it comes to learning colors and objects. Multi-angle learning lets you teach it an object from several viewpoints: just long-press the top button while adjusting the camera angle. Emotion recognition comes pre-programmed; we only had to point the camera at a few faces to read their expressions.

You don’t need to install the WiFi module or microscope lens to use the HuskyLens 2, but doing so unlocks a lot more functionality. Swapping the lens only takes removing two screws. The WiFi module has to be installed inside the unit: we removed four screws, carefully opened the camera, and pushed the module into the appropriate slot.
What sets this apart from other smart cameras we have tested is its MCP support, which requires the WiFi module and a firmware update to version 1.1.6. To set up the MCP service, you need Cherry Studio and an AI API key. We just had to choose a model provider (Gemini, GPT, …), add our API key, and create an MCP entry using the URL displayed on the camera. As long as your vision sensor and computer are on the same network, you can connect the camera to an AI and chat about what it sees. We used the HuskyLens 2 with Qwen and Gemini, but many other models are supported.
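For readers new to MCP: clients that support remote MCP servers typically take a small config entry pointing at the server's URL. The fragment below is a hypothetical sketch following the common `mcpServers` convention; the server name is made up, and the actual URL is whatever your camera displays on screen:

```json
{
  "mcpServers": {
    "huskylens2": {
      "url": "http://<camera-ip-from-screen>/mcp"
    }
  }
}
```

In Cherry Studio this is done through the UI rather than by hand, but the idea is the same: the client needs the camera's URL, which is why both devices must sit on the same network.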
The latest firmware also brings livestreaming support. For starters, you can use the gadget as a plain camera to record videos and capture photos for training; paired with a power bank, it works as a portable camera for outdoor situations. In livestreaming mode, you can open the camera's stream in VLC to see what it sees on a monitor. We just had to make sure the camera was recognized as a network adapter on Windows 11.
You also have the option to train your own models. You need Mind+ to add and tag your images: you can capture them with the camera or use ones you already have. Once your data is labeled, Mind+ handles the training. More advanced users can train their models in Python and use the ONNX-to-HuskyLens 2 conversion GUI tool, which supports YOLOv8n object detection.
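For the Python route, YOLO-style training data pairs each image with a text file of normalized boxes. As a hedged sketch of what that labeling step produces (Mind+ does this for you; the image size, box, and class below are made up):

```python
# Sketch of the YOLO text-label format used by YOLOv8-style training:
# one line per object, "class x_center y_center width height", all values
# normalized to the image dimensions. The example image size, box, and
# class ID are hypothetical.

def to_yolo_label(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO label line."""
    x_center = (x1 + x2) / 2 / img_w
    y_center = (y1 + y2) / 2 / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Hypothetical 640x480 image with one object of class 0
print(to_yolo_label(0, 100, 120, 300, 360, 640, 480))
```

A model trained on data in this format can then be exported to ONNX and pushed through DFRobot's conversion tool to run on the camera.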


Overall, we found the HuskyLens 2 very easy to use. Even if you are new to AI vision sensors and MCPs, DFRobot has plenty of educational material to get you familiar with the underlying AI concepts, and it walks you through model training and MCP setup. The camera works on its own but can be paired with a Raspberry Pi, micro:bit, UNIHIKER M10, or Arduino for more complex projects. If you are looking for a versatile AI tool to get your feet wet with AI applications, you should give this gadget a look.