Previous revision: ip:ws2021:lets_plaiy:student-documentation:object-detection:start [2022/01/13 16:33] – [Motivation] bashar001
Current revision: ip:ws2021:lets_plaiy:student-documentation:object-detection:start [2022/02/27 23:47] – [3.2 Projects and Capabilities] shashank001
====== 3. Object Detection ======
  
  
  
----
  
=== 3.1 NVIDIA Jetson Object Detection ===
  
Jetson Nano is a small, powerful computer that lets you run multiple neural networks in
  
  
  
=== 3.2 Activity ===
  
  - Form groups of 10 students and provide each group with a Jetson Nano kit.
  - Ask them to start collecting objects around them, whichever they want to be detected. Additionally, they should make a note of what they have collected.
  - Students should bring the objects in front of the Jetson camera and see the output on the screen. This can be done by connecting the Jetson to a computer and executing the following commands in the terminal:\\ <code>cd jetson-inference
bash docker/run.sh
detectnet csi://0</code>
  - Finally, they should note down whether their object is detected correctly by the Jetson Nano.
  
After completing this activity, they should answer the following questions:
    - What were their thoughts about AI earlier and now?
    - Did they enjoy this activity?
=== 3.3 Brainstorming ===
       * Ready-to-use Jetson object detector
       * What does AI know?
  
  
|{{:ip:ws2021:lets_plaiy:student-documentation:object-detection:dsc00282.jpg?400|}}|
^ Figure 2: Banana being detected ^

|{{:ip:ws2021:lets_plaiy:student-documentation:object-detection:dsc00281.jpg?400}}|
^ Figure 3: Apple being detected ^
  
=== 3.4 Ready-to-use object detection ===
  
As the students assemble in groups and are given the NVIDIA Jetson, it is evident at first glance that they will be enthusiastic to see the objects they show being recognized on the screen. They are provided with, or asked to bring, some toys, fruits and other objects to test the Jetson. The interface of Snap! makes it easy for the students to operate: clicking one block captures an image on the Snap! stage, and clicking another block recognizes it. A text-to-speech generator is then activated that narrates the classified object aloud. This is done several times, and the teacher issues a challenge to try to trick the Jetson into guessing the wrong object.
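The capture–classify–narrate flow driven by the Snap! blocks can be sketched in a few lines of Python. On the real device the classification would come from the Jetson's detection network (e.g. via jetson-inference) and the narration from a text-to-speech engine; both are replaced here by hypothetical stand-ins (''classify'', ''narrate'') so only the control flow is shown.

```python
# Sketch of the capture -> classify -> narrate flow behind the Snap! blocks.
# On real hardware, classify() would run the image through the Jetson's
# COCO-trained detector and narrate() would drive a text-to-speech engine;
# both are simple stand-ins here, so this runs anywhere.

def classify(image):
    """Hypothetical stand-in: return (label, confidence) for the captured image."""
    return ("banana", 0.91)

def narrate(label, confidence, threshold=0.5):
    """Build the sentence the text-to-speech block would speak aloud."""
    if confidence < threshold:
        return "I am not sure what I see."
    return f"I see a {label}."

def on_blocks_clicked(image):
    # First block: capture the image; second block: recognize and narrate it.
    label, confidence = classify(image)
    return narrate(label, confidence)

print(on_blocks_clicked(image=None))  # -> I see a banana.
```

The confidence threshold mirrors how the detector only reports objects it is reasonably sure about, which is what makes the "trick the Jetson" challenge interesting.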
=== 3.5 What does AI know? (capabilities) ===
  
AI, or Artificial Intelligence, is a branch of computer science that integrates mathematics, logic, statistics and other disciplines. In fact, we are exposed to artificial intelligence very early in our lives, through devices such as mobile phones, sweeping robots and smart speakers. The extent to which humans have implemented artificial intelligence is incredibly vast, to the point where it exceeds human capabilities on tasks like speech recognition and complex decision making, and may transform human lives drastically. In the case of the NVIDIA Jetson, the only objects it can identify are the ones in the COCO dataset; here it is only able to identify 90 labels.
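The fixed vocabulary can be made concrete: an object is only ever detectable if its label is in the set the network was trained on. The list below is a small illustrative sample of real COCO categories, not the full set of 90.

```python
# A small sample of the COCO object categories the Jetson's detector was
# trained on (the full set has 90 labels; this subset is illustrative only).
COCO_SAMPLE = {
    "person", "bicycle", "car", "dog", "cat",
    "banana", "apple", "orange", "chair", "book",
}

def is_detectable(label: str) -> bool:
    """The detector can only ever report labels from its training vocabulary."""
    return label.lower() in COCO_SAMPLE

print(is_detectable("banana"))   # a COCO category -> True
print(is_detectable("stapler"))  # not in COCO -> can never be detected
```

This is why, no matter how clearly an object is shown to the camera, the Jetson will either ignore it or mislabel it as the closest category it does know.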
  
=== 3.6 Limitations and boundaries ===
  
Since the camera used is just a simple webcam integrated into the PC, the students are restricted to a confined region in front of it. As the Jetson is connected to the PC, the only objects they can have it identify are the ones within reach at that moment. Using the COCO dataset for object detection brings another limitation.
effort and lengthy approach but do make the dataset more capable.
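Making the dataset more capable means adding new classes: collecting and labeling images for each one, extending the label list, and retraining the detector. A minimal sketch of the label-list step is shown below; the class names and the ''merge_labels'' helper are hypothetical, and actual retraining additionally requires labeled images for every new class.

```python
# Sketch: extending the detector's label list with custom classes before
# retraining. The class names and merge_labels() are illustrative only;
# real retraining also needs labeled example images for each new class.

def merge_labels(base, custom):
    """Append custom class names to the base label list, skipping duplicates."""
    merged = list(base)
    for name in custom:
        if name not in merged:
            merged.append(name)
    return merged

coco_subset = ["person", "banana", "apple"]        # stock labels (sample)
new_classes = ["stapler", "banana", "whiteboard"]  # hypothetical additions

print(merge_labels(coco_subset, new_classes))
# ['person', 'banana', 'apple', 'stapler', 'whiteboard']
```

Duplicates are skipped so that classes already covered by the stock dataset are not trained twice under two labels.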
  
=== 3.7 Examples ===
  
Once the students have enjoyed playing around with the Jetsons and have had a few objects classified, and assuming it is the first time they have seen objects being classified instantly with their bounding boxes shown, they will be filled with enthusiasm. Building on this enthusiasm, AI is introduced to them through a YouTube video, and an explanation is given of how objects are identified using the COCO dataset. Without going too deep into the technical details, present-day examples are given that they can relate to. One such example is from the movie WALL-E, in which WALL-E (Waste Allocation Load Lifter: Earth) collects and assembles waste left by post-apocalyptic humans and shapes it into boxes. The robot avoids destroying the objects it can recognize, such as lighters, jewelry boxes and bobble-heads, and keeps them as souvenirs in his house.
Voice-activated AI like Siri and Alexa are also demonstrated, responding to commands that alter the environment of the room.
ip/ws2021/lets_plaiy/student-documentation/object-detection/start.1642088010.txt.gz · Last modified: 2022/01/13 16:33 by bashar001