
Commit 368e358

nn detector info
1 parent 0b1b8e3 commit 368e358

1 file changed

docs/section-7/limelight.md

Lines changed: 16 additions & 2 deletions
@@ -13,7 +13,7 @@ The two main pipelines ("modes") we use are: AprilTags and Neural Networks
 
 AprilTags are essentially QR codes. These are placed throughout the field, such as on scoring targets. Since they have set locations on the field, we can use them to figure out what we can do from where we are (for example, if we can see that we are in front of a goal, then we can shoot).
 
-On the other hand, we use the Neural Network pipeline to detect custom objects. Usually, we train a neural network to find these objects, and then we can get measurement values from it. For more details on training a model, check out [this link](https://docs.limelightvision.io/docs/docs-limelight/pipeline-neural/getting-started-with-neural-networks). There are also several pretrained models online you can steal >:)
+On the other hand, we use the Neural Network pipeline to detect custom objects. Usually, we train a neural network to find these objects, and then we can get measurement values from it. We'll go more in depth later.
 
 There are two important interfaces for the Limelight: http://limelight.local:5801 and http://limelight.local:5800 (you need to be connected to the Limelight via the radio to use these links). The first one is for configuring the Limelight pipeline and the second one is for displaying the camera feed.

@@ -58,6 +58,7 @@ public double getRotation() {
   double cameraLensHorizontalOffset = LimelightHelpers.getTX("limelight") / getDistance();
   double realHorizontalOffset = Math.atan(cameraLensHorizontalOffset / getDistance());
   double rotationError = Math.atan(realHorizontalOffset / getDistance());
+  return rotationError;
 }
 ```
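For intuition about the trig in `getRotation()`, here's a standalone sketch (the class and method names here are made up, and the geometry is simplified): a target sitting some lateral distance to the side and some distance straight ahead is at an angle of `atan(offset / distance)` from the camera's forward direction.

```java
public class RotationSketch {
    // Simplified geometry (hypothetical helper, not the team's actual code):
    // a target `lateralOffsetMeters` to the side and `distanceMeters` ahead
    // sits at an angle of atan(offset / distance) from straight ahead.
    // Returns the rotation error in radians.
    static double rotationError(double lateralOffsetMeters, double distanceMeters) {
        return Math.atan(lateralOffsetMeters / distanceMeters);
    }

    public static void main(String[] args) {
        // 1 m sideways at 1 m ahead is pi/4 radians (45 degrees) off-center
        System.out.println(Math.toDegrees(rotationError(1.0, 1.0)));
    }
}
```

Note that `Math.atan` returns radians, so convert with `Math.toDegrees` if the rest of your code works in degrees (see the units note below).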

@@ -90,8 +91,21 @@ If it's too long to read, basically what it does is:
 
 Now go back and read it again :)
 
+## What if I want to detect other things?
+
+Well now you're at the right section. Using Limelight, we can detect a bunch of different objects, like game pieces and people!
 
-Some notes to keep in mind when coding:
+There are two (main) types of vision models: classifiers and detectors.
+
+A classifier is used to categorize an entire image into a predefined label. For example, if you want to distinguish between a red ball and a blue ball, you would use a classifier.
+
+A detector is used to find specific objects within an image. Detectors are used more often. For example, if you want the bounding box and exact location of that ball, you would use a detector.
+
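A rough sketch of what you do with a detector's bounding box (all names and numbers here are hypothetical, including the 54-degree field of view, and it uses a simple linear degrees-per-pixel approximation): turn the box's pixel center into a horizontal angle you can aim at.

```java
public class DetectorMath {
    // Hypothetical helper: given a bounding box's horizontal pixel extent
    // from a detector, plus the camera's resolution and horizontal field of
    // view, estimate the angle (degrees) from the image center to the object.
    static double angleToTarget(double boxMinX, double boxMaxX,
                                double imageWidthPx, double horizontalFovDeg) {
        double centerX = (boxMinX + boxMaxX) / 2.0;           // box center in pixels
        double offsetPx = centerX - imageWidthPx / 2.0;       // pixels from image center
        double degPerPixel = horizontalFovDeg / imageWidthPx; // linear approximation
        return offsetPx * degPerPixel;
    }

    public static void main(String[] args) {
        // A box spanning x = 440..520 on a 640 px wide image, assuming ~54 deg FOV
        System.out.println(angleToTarget(440, 520, 640, 54.0));
    }
}
```

In practice the Limelight does this math for you and publishes the offsets directly, so treat this as intuition for where those numbers come from, not as code you need.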
+To train one of these, you need a labeled reference dataset (a set of example images for each thing you want to recognize).
+For more details on training a model, check [this](https://docs.limelightvision.io/docs/docs-limelight/pipeline-neural/getting-started-with-neural-networks) out. There are also several pretrained models online you can steal >:)
+
+Once you have your files, change the pipeline option to "Neural Network" and upload the files to the Limelight interface. Then watch the magic happen 😼😼
+
+Some general notes to keep in mind when coding:
 - Make sure you stay consistent with units
 - Remember to store the name of your limelight(s) in constants
 - In commands, don't make Limelight a requirement so that it can be used by multiple commands at the same time
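The first two notes above might look something like this in practice (a minimal sketch; the class name, constant names, and the 25-degree mount angle are all made up):

```java
public final class VisionConstants {
    // Keep limelight names in one place so every subsystem and command agrees.
    public static final String LIMELIGHT_NAME = "limelight";

    // Stay consistent with units: store angles in degrees here and convert
    // to radians only at the point of use.
    public static final double CAMERA_MOUNT_ANGLE_DEGREES = 25.0; // example value

    public static double cameraMountAngleRadians() {
        return Math.toRadians(CAMERA_MOUNT_ANGLE_DEGREES);
    }

    private VisionConstants() {} // no instances; constants only
}
```

Then code elsewhere calls `LimelightHelpers.getTX(VisionConstants.LIMELIGHT_NAME)` instead of repeating the string everywhere.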
