The project implements a templated linked list data structure for evaluating arithmetic expressions. Every single keystroke is stored as one node of an instance of the class LinkedCalc.

Expression Validation: Before evaluating the expression, the program validates the input linked list to ensure that it represents a valid arithmetic expression. It checks for proper alternation of digits, decimal points, and operators, as well as the presence of valid operator characters.

Arithmetic Evaluation: The core functionality of the project is the evaluation of the arithmetic expression represented by the linked list. The program traverses the linked list, reconstructs the float values from individual digits and decimal points, and performs the specified operations (+, -, *, /) on the operands, returning a float, provided that the expression is valid.

Example

LinkedCalc calc1;
calc1.insert('1');
calc1.insert('+');
calc1.insert('2');
calc1.validateExpression(); // returns true
calc1.evaluateExpression(); // returns 3.0f (float)

Methods

bool validateExpression() - A member function of the LinkedCalc class. Its purpose is to validate the expression represented by the linked list, ensuring that it follows the correct format and adheres to the rules of arithmetic expressions.

float evaluateExpression() - Another member function of the LinkedCalc class. Its purpose is to evaluate the arithmetic expression represented by the linked list and calculate the final result.

Invalid expression conditions
1. Consecutive decimal points (with no digits between them).
2. Consecutive operators (with no digits between them).
3. No digits following an operator.

Example interaction
See tester.cpp for details.

Assumptions
● A valid input will contain a series of floats or integers joined by the operators (+, -, *, and /).
● The output (for a valid expression) will always be a positive float.
● You can assume that the divisor will never be 0.
● Order of operations:
  a. Division, from left to right
  b. Multiplication, from left to right
  c. Addition, from left to right
  d. Subtraction, from left to right

Included Files
1. linked_calc.cpp
2. linked_calc.hpp
3. tester.cpp

How to handle segfaults?
1. https://discover.cs.ucsb.edu/commonerrors/tutorial/gdbtutorial.html
2. http://marvin.cs.uidaho.edu/Teaching/CS445/debuggingSegFaults.html

Implementation instructions
1. You are not allowed to use std::string for this project.
2. You can use as many helper functions as you see fit. The isdigit() function is already implemented for you.
3. Your code must be runnable on the student cluster. There can be no exceptions to this.

Submission instructions
Compress all the files together and submit to Canvas (all of the team members must submit). Name the file: Project 1-S24.zip

Rubric
● 10 test cases × 9 points each = 90
● Proper documentation (comments explaining the logic/code) = 10
Total = 100
Objective

This lab will teach you about probabilistic robot localization. You will use: (1) the Global Reference Frame and Grid Cell Numbering, and (2) the World Representation.

Requirements

Programming: Python
Robot: Webots 2023b (FAIRIS)

Sensors Utilized

To successfully complete this assignment, you will need to utilize several sensors on the robot. Each sensor provides unique data essential for robust navigation and environmental awareness. Below is an overview of the sensors, their specific functionalities, and references to previous assignments for guidance on usage:

• Encoders: The encoders track the rotation of each wheel, enabling the calculation of distance traveled and rotational changes. By using encoder readings, you can estimate the robot's displacement and changes in orientation over time, aiding in precise position tracking. Refer to Lab 1 for an introduction to encoder usage.

• Compass: The compass sensor returns the robot's orientation relative to the world frame, typically measured in degrees from the east direction (the x-axis). This sensor is essential for maintaining and adjusting the robot's heading during navigation tasks. Refer to Lab 1 for initial setup and utilization of the compass.

• Lidar: The Lidar sensor captures a 360-degree range image of the environment by measuring distances to obstacles and landmarks at various angles. This information allows the robot to map surroundings, avoid obstacles, and make path-planning decisions based on spatial awareness. Refer to Lab 2 to review Lidar integration and data handling.

• Camera with Object Recognition: The onboard camera, integrated with object recognition capabilities, allows the robot to detect and differentiate various objects in its environment. Specific landmarks, such as colored cylinders, can be identified, providing visual cues for navigation and goal localization. Refer to Lab 3 for implementation details on using the camera and object recognition.

Each of these sensors provides critical information that will need to be integrated into your navigation algorithm to ensure accuracy and efficiency. Be sure to review the respective labs to understand how to utilize these sensors effectively as you design your solution.

Global Reference Frame and Grid Cell Numbering

The reference frame is set up as follows:

• The positive y-axis points North, while the negative y-axis points South.
• The positive x-axis points East, and the negative x-axis points West.
• The origin, (0,0), is located at the center of the arena.

The arena is divided into a 5×5 grid of 25 cells, numbered from 1 to 25. Each grid cell is a 1 x 1 meter square, creating a structured layout for navigation and localization. In each corner of the arena, there is a colored cylinder that serves as a landmark. These cylinders have a fixed size:

• Radius, r = 0.25 meters
• Height, h = 1.5 meters
• Each cylinder is centered at the corner of a grid cell.

The four cylinders are uniquely colored (yellow, red, blue, and green) to assist in visual identification and orientation within the arena.

Figure 1. Global reference frame and grid layout for Task 1 (maze7.xml). The arena is divided into a 5×5 grid with colored cylinders located in each corner. All units are in meters.

Refer to this layout as you plan and execute navigation strategies within the defined grid.
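The cell numbering shown in Figure 1 fixes how cell numbers map to world coordinates. As a minimal sketch, the helpers below assume row-major numbering from the northwest corner (cell 1 at the top-left, cell 25 at the bottom-right); check Figure 1 and adjust the mapping if the maze numbers cells differently.

# Minimal sketch: map grid cell numbers to world coordinates.
# ASSUMPTION: cells numbered 1..25 in row-major order from the northwest
# corner. Verify against Figure 1 before relying on this mapping.

def cell_center(cell_number):
    """Return the (x, y) world coordinates of a grid cell's center."""
    row = (cell_number - 1) // 5    # 0 = northernmost row
    col = (cell_number - 1) % 5     # 0 = westernmost column
    x = -2.0 + col                  # centers at x = -2, -1, 0, 1, 2
    y = 2.0 - row                   # centers at y = 2, 1, 0, -1, -2
    return x, y

def cell_from_position(x, y):
    """Inverse mapping: world (x, y) to grid cell number."""
    col = min(max(int(round(x + 2.0)), 0), 4)
    row = min(max(int(round(2.0 - y)), 0), 4)
    return row * 5 + col + 1

print(cell_center(13))            # (0.0, 0.0): the center cell
print(cell_from_position(-2, 2))  # 1 under the assumed numbering

The same helpers can be reused later when converting a highest-probability cell back into an estimated (x, y) position.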
Wall Configuration Data Structure

For this assignment, you will need to implement a data structure to represent the wall configuration of each grid cell in the arena. This structure will allow your robot to recognize and navigate around obstacles effectively. As a conceptual guide, consider representing the wall configurations using a 25×4 matrix format, as illustrated in Figure 2. In this example:

• The matrix corresponds to the 25 grid cells in the arena.
• Each cell in the matrix contains four entries representing the presence or absence of walls in the West-North-East-South (WNES) orientation.
• A "W" (wall) entry indicates the presence of a wall, while an "O" (open) entry denotes the absence of one.

While you are encouraged to design your own data structure, this example can serve as a reference to help you get started.

Figure 2. Example of a wall configuration representation. The left side illustrates the physical world, while the right side shows the corresponding 25×4 matrix representation using WNES orientation.

Think carefully about how your data structure will allow you to efficiently check for walls in each direction and update it as the robot moves through the arena.

Background: Localization with Landmarks and Normal Probabilities

In this section, we introduce the concept of using landmarks for robot localization within a grid. The robot uses distance measurements from known landmarks to estimate its location within a 5×5 grid by comparing these measurements with precomputed distances from each grid cell to the landmarks.

For each grid cell C_N, we compute the likelihood of the robot being in that cell based on the measured distances to landmarks. This probability is calculated by comparing:

• D_{R-L_i}: the distance from the robot's current position to each landmark L_i.
• D_{C_N-L_i}: the precomputed distance from the center of each cell C_N to each landmark L_i.

Using a normal (Gaussian) probability distribution function, we can calculate the likelihood of the robot's position given these distances.

Calculating Normal Probability in Python

To calculate the probability density for a given distance measurement, you can implement the normal distribution probability function using the following formula:

f(s) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(s - \mu)^2}{2\sigma^2}\right)

where:
• s is the precomputed distance from the grid cell to the landmark (the value at which the density is evaluated),
• µ is the observed distance (the measured distance to the landmark, which serves as the mean of the distribution),
• σ is the standard deviation, representing expected measurement noise.

Write a function in Python to calculate this probability. Here is the pseudocode to guide you:

from math import sqrt, pi, exp

# Pseudocode for calculating the normal probability density
def calculate_normdist(s, mu, sigma):
    """
    Calculate the normal probability density for a given state.

    Parameters:
    s (float): Precomputed distance from the grid cell to the landmark.
    mu (float): The observed distance (measured distance to the landmark).
    sigma (float): The standard deviation (expected noise level).

    Returns:
    float: The probability density value.
    """
    # Step 1: Calculate the coefficient (1 / sqrt(2 * pi * sigma^2))
    coefficient = 1 / sqrt(2 * pi * sigma**2)

    # Step 2: Calculate the exponent (-((s - mu)^2) / (2 * sigma^2))
    exponent = -((s - mu)**2) / (2 * sigma**2)

    # Step 3: Multiply the coefficient by the exponentiated result
    probability = coefficient * exp(exponent)

    return probability

In this pseudocode:
• s represents the precomputed distance from a specific grid cell center to the landmark.
• mu represents the observed distance from the robot to the landmark.
• sigma represents the standard deviation, which models the expected measurement noise or uncertainty.

Example Calculation

To illustrate how to use this function, let's walk through an example. Suppose the robot measures a distance of µ = 2.915 meters to a particular landmark, and the precomputed distance from a given cell center to the same landmark is s = 2.828 meters. Let's assume a standard deviation of σ = 1. Using the function calculate_normdist, we would calculate the probability as follows:

# Example values
mu = 2.915    # Measured distance to the landmark
s = 2.828     # Precomputed distance from cell center to landmark
sigma = 1     # Standard deviation

# Calculate the probability
probability = calculate_normdist(s, mu, sigma)
print("Probability:", probability)

In this example:
• The smaller the difference between s and mu, the higher the probability, indicating that the robot is likely close to the grid cell whose precomputed distance is s.
• This probability represents the likelihood of the robot being at that cell given the measured distance to the landmark.

You will use this function to compute the probability for each cell in the grid and determine the cell with the highest probability as the most likely position of the robot. Alternatively, you can use the norm function from the Python SciPy library to achieve the same result:

# Import package
from scipy.stats import norm

# Parameters
mu = 2.915    # the measured distance to the landmark
s = 2.828     # the precomputed distance from a grid cell to the landmark
sigma = 1     # standard deviation

# Calculate the normal probability
probability = norm.pdf(s, mu, sigma)
print(probability)

Background: Localization Using Wall Configuration and Sensor Readings

In this section, we introduce the concept of using a sensor model to estimate the robot's position within a grid maze based on wall configurations. The sensor model, illustrated in Figure 3, provides a way to calculate the probability of the robot being in a given cell based on its surroundings. This model relies on the robot's ability to detect walls in four directions (north, south, east, and west) using LiDAR or another distance-sensing device.

Sensor Model and Wall Configuration

The sensor model allows us to compare the observed wall configuration around the robot to the known wall configurations of each cell in the grid maze. Each cell has a specific layout of walls, which can be represented in a data structure where each cell's configuration is recorded. For example, a cell might have walls on the north and east sides but open paths to the south and west. These configurations are stored in advance, providing a reference for comparing with the robot's observations.
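Tying this back to the 25×4 WNES representation from the Wall Configuration Data Structure section, a minimal sketch of such a stored map is shown below. The wall entries are illustrative placeholders only, not the actual configuration of any maze file:

# Minimal sketch of a stored wall-configuration map in WNES order,
# with 'W' = wall and 'O' = open. The entries below are illustrative
# placeholders; fill in the real configuration from your maze file.

WALL_CONFIG = {
    1:  ['W', 'W', 'O', 'O'],    # e.g., a northwest corner cell
    2:  ['O', 'W', 'O', 'O'],
    # ... entries 3 through 24 go here ...
    25: ['O', 'O', 'W', 'W'],    # e.g., a southeast corner cell
}

DIRECTION_INDEX = {'west': 0, 'north': 1, 'east': 2, 'south': 3}

def has_wall(cell_number, direction):
    """Check the stored map for a wall on the given side of a cell."""
    return WALL_CONFIG[cell_number][DIRECTION_INDEX[direction]] == 'W'

print(has_wall(1, 'north'))      # True for the placeholder map above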
Collecting Sensor Readings

To use the sensor model for localization, the robot must gather distance readings in four directions:

• North: the LiDAR measures the distance to any object directly in front of the robot.
• South: the LiDAR measures the distance to any object directly behind the robot.
• East: the LiDAR measures the distance to the right of the robot.
• West: the LiDAR measures the distance to the left of the robot.

These readings indicate the presence or absence of walls in each direction (you will need to decide whether there is a wall based on a minimum and maximum distance for each cell), forming an observed configuration of the surrounding area.

Calculating Probabilities Using the Sensor Model

Once the robot has collected LiDAR readings in all four directions, it compares this observed wall configuration with the pre-stored configurations of each cell in the grid maze. The sensor model assigns a probability to each cell based on how closely the observed configuration matches the expected configuration of that cell. For each cell C_N, the probability P(z = 0 \mid s = C_N) is calculated based on the match between:

• the actual wall configuration stored for C_N, which specifies where walls are located around the cell, and
• the observed wall configuration derived from the robot's LiDAR readings.

If the observed wall configuration matches the stored configuration for a given cell, that cell will have a high probability P(z = 0 \mid s = C_N). If the configurations do not match, the probability P(z = 0 \mid s = C_N) for that cell will be lower. This probability calculation is based on each directional match.

Weighted Probability Calculation

To determine the likelihood of being in a specific cell, calculate the weighted probability by multiplying the individual probabilities for each direction (north, south, east, and west) based on the match between observed and expected wall configurations. The weighted probability for each cell is given by:

P(z = 0 \mid s = C_N) = P(z_{north} = 0 \mid s = C_N) \times P(z_{south} = 0 \mid s = C_N) \times P(z_{east} = 0 \mid s = C_N) \times P(z_{west} = 0 \mid s = C_N)

where each term P(z_{direction} = 0 \mid s = C_N) represents the probability that the observed reading in a specific direction (e.g., north) matches the expected wall configuration in that direction for cell C_N.

By calculating the combined probabilities P(z = 0 \mid s = C_N) for all cells in the maze, the robot can determine the most likely cell based on the current wall configuration around it. This approach provides a probabilistic localization method that leverages the robot's immediate surroundings rather than relying solely on distance measurements to landmarks. A sketch of this calculation follows the figure below.

Figure 3. Sensor model wall measurement probabilities for state probability calculations.
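As a rough sketch of the weighted probability above: assuming a per-direction match probability of 0.9 and a mismatch probability of 0.1 (stand-in values; substitute the actual probabilities given in Figure 3), the per-cell score can be computed like this:

# Sketch of the sensor-model probability P(z | s = C_N) as a product of
# per-direction terms. ASSUMPTION: the 0.9 / 0.1 match / mismatch
# probabilities stand in for the values in Figure 3.

P_MATCH, P_MISMATCH = 0.9, 0.1

def sensor_model_probability(observed, expected):
    """observed, expected: 4-entry lists in WNES order ('W' or 'O')."""
    p = 1.0
    for obs, exp in zip(observed, expected):
        p *= P_MATCH if obs == exp else P_MISMATCH
    return p

def localize_by_walls(observed, wall_config):
    """Return {cell: probability} for every cell in the stored map."""
    return {cell: sensor_model_probability(observed, walls)
            for cell, walls in wall_config.items()}

# Example with a two-cell map; the robot sees walls to the west and north.
wall_config = {1: ['W', 'W', 'O', 'O'], 2: ['O', 'W', 'O', 'O']}
probs = localize_by_walls(['W', 'W', 'O', 'O'], wall_config)
print(probs, '-> best guess:', max(probs, key=probs.get))

Because several cells in a symmetric maze can share the same wall configuration, expect ties; Task 2 asks you to report all tied cells as predictions.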
Task 1: Localization with Landmarks

In this task, you will estimate the robot's position (pose) using landmarks and sensors such as cameras, IMU, or LiDAR. The positions of the landmarks are known in advance, and the robot will navigate through an open-world environment (see Figure 4) starting from an unknown initial pose. You are not allowed to use the GPS to determine the starting location. The goal is to estimate the robot's pose based on distance measurements to landmarks and navigate the entire grid, printing and comparing the estimated positions as the robot moves.

• Pose Estimation Using Normal Distribution:
  – For each grid cell C_N, use a normal distribution function to estimate the robot's pose by comparing the robot's measured distances to each landmark with the precomputed distances from the cell centers to the landmarks.
  – Measure the distances from the robot's current position to each landmark using available sensors (e.g., camera, LiDAR, IMU).
  – Calculate the probability that the robot is in each cell by computing the weighted probability as the product of normal probabilities based on the distance comparison (see the sketch following this task description):

    W_{C_N} = N_{C_N - L_1} \times N_{C_N - L_2} \times \cdots \times N_{C_N - L_k}

    where N_{C_N - L_i} is the normal probability for cell C_N based on landmark L_i.

• Identify the Current Cell Based on Highest Probability:
  – Calculate the weighted probabilities for each cell each time the robot enters a new cell.
  – Identify the cell with the highest probability W_{C_best} and assume this cell represents the robot's current position.
  – Print the estimated pose (the coordinates of C_best) each time the robot moves into a new cell. This output should be displayed in the console and recorded in the video.

• Complete Coverage of All Grid Cells:
  – Program the robot to navigate through the entire grid, ensuring that it visits each cell at least once.
  – Use a systematic approach, such as a lawnmower pattern, to ensure complete coverage without unnecessary revisits.
  – Keep track of the cells visited by assuming the highest-probability cell is the robot's current location.
  – The task ends when the robot has successfully navigated all cells in the grid.

• Data Collection and Analysis:
  – For each estimated position, obtain the actual (x, y) position from the GPS sensor for comparison (this can be used only for evaluation, not for navigation).
  – Record both the estimated and actual positions for each grid cell entry and log this data.
  – Use the collected data to create a graph comparing estimated positions to actual positions.
  – Include a detailed analysis in the lab report, discussing any discrepancies between estimated and actual positions and evaluating the accuracy of the localization method.

• Recording Requirements:
  – Record a video showing the robot starting from at least two different initial positions and navigating the grid to complete the task.
  – Ensure that the console output of the calculated probabilities and the estimated pose for each cell is visible in the video.
  – The video should clearly demonstrate the robot's movement, pose estimation calculations, and grid coverage.

Figure 4. Open world layout for Task 1 with known landmarks for localization (maze file maze7.xml).

Be sure to include a thorough comparison of the predicted pose versus the GPS readings in your report, as this will be a key part of your evaluation.
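Below is a minimal sketch of the weighted probability W_{C_N} defined in the first requirement, using scipy.stats.norm exactly as in the background section. The landmark coordinates, cell centers, and measured distances are illustrative assumptions; substitute the actual values for maze7.xml:

# Sketch: product of normal probabilities over all landmarks per cell.
# ASSUMPTIONS: landmarks at the arena corners (+/- 2.5 m), two example
# cell centers, and sigma = 1 as in the background example.

from math import dist
from scipy.stats import norm

def cell_weights(measured, landmarks, cell_centers, sigma=1.0):
    """measured: {landmark: observed distance}. Returns {cell: W_CN}."""
    weights = {}
    for cell, center in cell_centers.items():
        w = 1.0
        for name, d_measured in measured.items():
            d_precomputed = dist(center, landmarks[name])    # D_{CN-Li}
            w *= norm.pdf(d_precomputed, d_measured, sigma)  # N_{CN-Li}
        weights[cell] = w
    return weights

landmarks = {'yellow': (-2.5, 2.5), 'red': (2.5, 2.5),
             'blue': (-2.5, -2.5), 'green': (2.5, -2.5)}
cell_centers = {1: (-2.0, 2.0), 13: (0.0, 0.0)}
measured = {'yellow': 3.54, 'red': 3.54, 'blue': 3.54, 'green': 3.54}

weights = cell_weights(measured, landmarks, cell_centers)
print('Most likely cell:', max(weights, key=weights.get))  # cell 13

In your solution, cell_centers would contain all 25 cells, and the argmax over the resulting weights is the cell C_best whose coordinates you print.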
Task 2: Localization with Walls and Landmarks

In this task, you will implement an algorithm that uses the sensor model to estimate the robot's position as it navigates through a 5×5 grid maze. The objective is to use LiDAR sensor readings to calculate probabilities for each cell based on wall configurations, as described in the background section. This task will demonstrate how the sensor model can aid in predicting the robot's location, even though it may not lead to full localization due to the symmetry of the maze. The testing environment is shown in Figure 5, and the complete map configuration is known in advance to the robot. However, the robot will start from an unknown position in the maze, and students are not permitted to use GPS or any other starting-position attribute to determine the robot's initial location.

• Navigating Through the Maze:
  – Program the robot to navigate from cell to cell within the 5×5 grid maze. Each time the robot enters a new cell, it should stop and take sensor readings in four directions: north, south, east, and west, aligned with the cardinal directions.
  – The robot should continue moving until it has entered at least 15 different cells. If the robot encounters a dead end, it is allowed to backtrack to a previous cell and continue navigating.

• Map Data Structure and Sensor Model:
  – Students must implement a data structure that stores the wall configurations for each cell in the maze, aligning the wall measurements with the cardinal directions (north, south, east, and west). This data structure will represent the known map of the environment.
  – Each time the robot enters a new cell, it should use the sensor model to calculate the probability P(z = z' \mid s = C_N) for each of the 25 cells based on the observed wall configuration, comparing it to the stored wall configuration of each cell.
  – Print the calculated probabilities for all 25 cells to the console each time the robot enters a new cell. This output should be visible in the recorded video.

• Listing Cells with Highest Probability as Predictions:
  – After calculating the probabilities for all cells, identify the cell(s) with the highest probability. If there are multiple cells with the highest probability (due to symmetry in the maze), list all of them as the robot's predicted position.
  – Print the predicted cell(s) to the console as the robot's best estimate of its current position. This prediction should be visible in the video as well.

• Video Requirements:
  – Record a video of the robot navigating the maze and performing the probability calculations. The video should show the robot starting from multiple initial locations and moving through at least 15 cells for each starting position.
  – Ensure that both the probability calculations and the list of predicted cells are printed to the console and clearly visible in the video. This will demonstrate the robot's process of estimating its position based on wall configurations.

• Note on Expected Outcomes:
  – The purpose of this task is to illustrate how the sensor model can be used to predict the robot's location based on surrounding wall configurations. However, due to the symmetry of the maze, the robot will not be able to fully localize itself using only the sensor model, as multiple cells may have identical wall configurations.

Figure 5. Maze environment for Task 2, with known wall configurations for Bayesian inference-based localization (maze file maze8.xml).

This iterative application of motion and measurement updates enables precise localization and navigation within the maze using Bayesian inference.
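The closing note above refers to a discrete Bayes filter over the 25 cells: a motion update shifts the belief when the robot moves, and a measurement update reweights it with the sensor model. A minimal sketch, under the simplifying assumption of a deterministic motion model (a fuller solution would spread some probability mass to account for motion noise):

# Sketch of a discrete Bayes filter over grid cells. ASSUMPTION: commanded
# moves succeed deterministically; real motion noise would spread the mass.

def motion_update(belief, move, neighbors):
    """belief: {cell: p}; neighbors: {cell: {direction: adjacent cell}}."""
    new_belief = {cell: 0.0 for cell in belief}
    for cell, p in belief.items():
        dest = neighbors.get(cell, {}).get(move, cell)  # blocked moves stay
        new_belief[dest] += p
    return new_belief

def measurement_update(belief, likelihood):
    """likelihood: {cell: P(z | s = cell)} from the sensor model above."""
    posterior = {cell: belief[cell] * likelihood[cell] for cell in belief}
    total = sum(posterior.values()) or 1.0
    return {cell: p / total for cell in posterior}      # normalize

# Tiny three-cell corridor (1 - 2 - 3, open to the east).
belief = {1: 1/3, 2: 1/3, 3: 1/3}
neighbors = {1: {'east': 2}, 2: {'east': 3}, 3: {}}
belief = motion_update(belief, 'east', neighbors)
belief = measurement_update(belief, {1: 0.1, 2: 0.1, 3: 0.9})
print(belief)  # probability mass concentrates on cell 3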
Extra Credit Opportunity (10 points)

For students who wish to attempt more accurate localization, an additional 10 points of extra credit will be awarded to the student who provides the correct grid localization for all grid cells in the shortest time. To be considered, students must include a plot comparing predicted positions with GPS data and record the fastest completion time in their report. Additionally, students need to explain exactly how they implemented accurate predictions without ever using the GPS or starting location.

Code Evaluation Task 1 (45%)

• Probability Calculation Using Landmarks (10 points):
  – Full Marks (10): The algorithm accurately calculates the probability for each of the 25 cells based on measured distances to landmarks every time the robot enters a new cell, using the normal distribution formula precisely.
  – Acceptable (7-9): Probabilities are calculated for each cell, with occasional minor errors or small deviations from the normal distribution formula.
  – Poor (1-6): Probability calculations are implemented, but frequent errors or incorrect application of the normal distribution formula significantly impact accuracy.
  – No Marks (0): Probability calculations are either missing or incorrectly implemented for most cells, with no functional results.

• Printing and Displaying Most Likely Cell (5 points):
  – Full Marks (5): The algorithm correctly identifies and prints the cell(s) with the highest probability as the robot's predicted position each time it enters a new cell. This output is clearly visible in the video.
  – Acceptable (3-4): The algorithm identifies and prints the most likely cell, with minor errors or inconsistencies in visibility in the video.
  – Poor (1-2): The most likely cell is occasionally identified, but with significant errors or missing visibility in the video.
  – No Marks (0): The most likely cell is not identified or displayed in the video.

• Using Highest Probability to Track Visited Cells (10 points):
  – Full Marks (10): The algorithm accurately uses the highest probability to determine the current cell and maintains an accurate record of visited cells.
  – Acceptable (7-9): The algorithm tracks visited cells with occasional errors in determining the highest-probability cell or maintaining an accurate record of visited cells.
  – Poor (1-6): The algorithm tracks visited cells inconsistently, with frequent errors in determining the highest-probability cell or maintaining a reliable record.
  – No Marks (0): The algorithm does not track visited cells accurately, with the record incomplete or unreliable.

• Complete Grid Coverage (10 points):
  – Full Marks (10): The robot successfully covers all cells in the 5×5 grid at least once, demonstrating complete and systematic grid coverage.
  – Acceptable (7-9): The robot covers most cells, with occasional missed cells or unnecessary revisits.
  – Poor (1-6): The robot fails to cover a substantial portion of the grid or exhibits frequent unnecessary revisits, impacting systematic coverage.
  – No Marks (0): The robot fails to cover most of the grid, with random or inconsistent movement patterns.

• Plotting and Analysis (10 points):
  – Full Marks (10): A clear and accurate plot is provided, comparing the robot's predicted positions (based on the highest-probability cell) with its actual GPS-based positions. A thorough analysis of discrepancies is included in the lab report.
  – Acceptable (7-9): A plot is provided with a reasonably accurate comparison of predicted and actual positions, with minor errors or a limited analysis in the report.
  – Poor (1-6): The plot lacks clarity or has significant errors, with limited or unclear analysis in the lab report.
  – No Marks (0): The plot is missing or incorrect, and the lab report lacks any meaningful analysis.
Deductions

• Print Statement Visibility (-20 points maximum): Up to 20 points may be deducted if required print statements are not clearly visible in the submitted video. The severity of the deduction will depend on the clarity and frequency of missing print statements.
• Not Showing Multiple Starting Locations (-10 points): A 10-point deduction will be applied if the video for Task 1 does not show multiple starting locations.
• Robot Collisions with Wall (-5 points): A 5-point deduction will be applied each time the robot collides with a wall, up to a maximum of 5 points.

Code Evaluation Task 2 (45%)

• Probability Calculation Using Sensor Model (15 points):
  – Full Marks (15): The algorithm accurately calculates probabilities for each of the 25 cells based on wall configurations every time the robot enters a new cell, correctly implementing the sensor model formula.
  – Acceptable (11-14): The algorithm calculates probabilities with occasional minor errors, showing a generally correct but partially inconsistent use of the sensor model formula.
  – Poor (1-10): Probability calculations are implemented, but frequent errors or major inaccuracies in the sensor model formula significantly impact the result.
  – No Marks (0): Probability calculations are missing or incorrectly implemented for most cells, providing no meaningful results.

• Printing and Displaying Predictions Based on Probabilities (10 points):
  – Full Marks (10): The algorithm correctly identifies and prints the cell(s) with the highest probability as the robot's possible positions each time it enters a new cell, with clear and consistent output visible in the video.
  – Acceptable (7-9): The algorithm identifies and prints the likely cells, with minor errors or occasional inconsistencies in the output visibility in the video.
  – Poor (1-6): The probable cells are sometimes identified, but with major errors or missing output in the video.
  – No Marks (0): The probable cells are not identified, or output is consistently missing from the video.

• Robot Movement Through Non-Repeated Cells (15 points):
  – Full Marks (15): The robot moves through at least 15 unique cells, revisiting cells only when encountering dead ends, and adheres to a systematic movement pattern.
  – Acceptable (11-14): The robot moves through 15 cells with minor, unnecessary revisits or slight deviations from the systematic movement requirement.
  – Poor (1-10): The robot does not move through at least 15 unique cells or frequently revisits cells, failing to show a consistent or systematic approach.
  – No Marks (0): The robot fails to reach 15 unique cells or shows random, repeated movement without any clear pattern.

Deductions

• Print Statement Visibility (-20 points maximum): Up to 20 points may be deducted if required print statements are not clearly visible in the submitted video. The severity of the deduction will depend on the clarity and frequency of missing print statements.
• Not Showing Multiple Starting Locations (-10 points): A 10-point deduction will be applied if the video for Task 2 does not show multiple starting locations.
• Robot Collisions with Wall (-5 points): A 5-point deduction will be applied each time the robot collides with a wall, up to a maximum of 5 points.

Report & Video Evaluation (10%)

A project submitted without a report, or without including the corresponding task video, will not be graded and will receive a "0" for the complete lab.
Report Sections: The report should include the following (points will be deducted if anything is missing):

• Mathematical Computations: Show how you calculated the speeds of the left and right servos given the input parameters for each task.
• Conclusions: Analyze any issues encountered when running the tasks and discuss how these could be improved. Discuss your results, including any navigation errors. Conclusions need to reflect an insight into what was learned during the project. Statements like "everything worked as expected" or "I enjoyed the project" will not count as valid conclusions.
• Video: Upload the video to Canvas showing the robot executing the task (details below).
• Authorship Statement: Include and sign the following statement at the end of the report:
  "I developed all the code and written report with the following exceptions: [Student name: signature, date] [Code sections (specify lines): references and additional comments] If any external tools were used to generate code, you need to include the prompt and output."

Task 1: Landmark-Based Localization

• Show the robot moving from cell to cell within the 5×5 grid.
• Each time the robot enters a new cell, display the calculated probabilities for all 25 cells based on distance measurements to landmarks. This information should be printed to the console and visible in the video.
• Show the robot's predicted position by printing the cell(s) with the highest probability. This prediction should be visible in the video each time the robot moves to a new cell.
• Demonstrate complete grid coverage by having the robot traverse all cells in the 5×5 grid at least once. The video should capture this full navigation path.
• Repeat the task from multiple starting locations.

Task 2: Sensor Model-Based Localization Using Wall Configurations

• Show the robot navigating through the 5×5 grid and entering at least 15 unique cells, only revisiting cells in cases of dead ends.
• Each time the robot enters a new cell, display the calculated probabilities for each of the 25 cells based on wall configurations observed in four directions (north, south, east, west). This information should be printed to the console and clearly visible in the video.
• Show the robot's possible position(s) by printing the cell(s) with the highest probability. This prediction should be visible in the video each time the robot moves to a new cell.
• Repeat the task from multiple starting locations.

Note: Ensure that all probability calculations, cell predictions, and multiple starting locations are clearly visible in the video for both tasks. This will demonstrate the functionality and robustness of your localization algorithms across varied initial conditions.

Lab Submission

For Lab 4 there are two separate Canvas Assignments: Lab 4 Code Submission and Lab 4 Report Submission. Please follow the submission policy outlined below. Note: Failure to adhere to the submission policy will result in a reduction of 20 points.

Code Submission Policy

Students will need to upload a Python file (i.e., a Webots controller) for each task. The Python file should follow the naming convention USF_ID_LabX_TaskY.py, where X is the lab number and Y is the task number (for example, rockybull5_Lab1_Task1.py). Additionally, if a student has created any functions that are not present in the controller but rather in MyRobot.py, the student will also need to upload that Python file as well, using the naming convention USF_ID_MyRobot.py.
If custom functions are used in a student's submission and they are not in the controller or the MyRobot Python files, an automatic zero will be assigned for the code submission. Each file should be uploaded to the same submission. DO NOT upload a zip file; Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off.

Report Submission Policy

Students will need to upload their videos to their USF OneDrive or USF SharePoint and generate a shareable link. Please ensure that you make it viewable to everyone, or at the very least to the TA and the Professor (use their emails to add them to the access list). Students will need to upload a PDF file of their lab report. Be sure that the lab report contains the link to the videos and any metrics requested by the lab; these metrics will be used to determine which student receives the extra credit. Use the naming convention USF_ID_LabX_Report.pdf. Only PDFs will be accepted.

If the student used ChatGPT to generate code, they will also need to provide the conversation used to generate the code. This can consist of either screenshots of the conversation or a shareable link that the system provides. This should be a separate document and not part of the report. Each file should be uploaded to the same submission. DO NOT upload a zip file; Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off.

The TA may ask you additional questions to gauge your understanding via Canvas Message or MS Teams. Failure to reply may result in a point deduction.
Objective

The purpose of this lab is to introduce the fundamental concepts of mapping and path planning in mobile robotics, focusing on how a robot can represent and navigate a structured environment. Students will learn to create a world model, determine the robot's position within that model, and calculate the most efficient path to a specified goal location. In real-world robotics, mapping and path planning are essential for tasks such as autonomous navigation, search-and-rescue operations, and logistics, where robots need to operate in dynamic or predefined environments. By completing this lab, students will develop skills that are foundational for enabling robots to understand, plan, and execute movements through spatial environments.

Requirements

Programming: Python
Robot: Webots 2023b (FAIRIS)

Sensors Utilized

To successfully complete this assignment, you will need to utilize several sensors on the robot. Each sensor provides unique data essential for robust navigation and environmental awareness. Below is an overview of the sensors, their specific functionalities, and references to previous assignments for guidance on usage:

• Encoders: The encoders track the rotation of each wheel, enabling the calculation of distance traveled and rotational changes. By using encoder readings, you can estimate the robot's displacement and changes in orientation over time, aiding in precise position tracking. Refer to Lab 1 for an introduction to encoder usage.

• Compass: The compass sensor returns the robot's orientation relative to the world frame, typically measured in degrees from the east direction (the x-axis). This sensor is essential for maintaining and adjusting the robot's heading during navigation tasks. Refer to Lab 1 for initial setup and utilization of the compass.

• Lidar: The Lidar sensor captures a 360-degree range image of the environment by measuring distances to obstacles and landmarks at various angles. This information allows the robot to map surroundings, avoid obstacles, and make path-planning decisions based on spatial awareness. Refer to Lab 2 to review Lidar integration and data handling.

• Camera with Object Recognition: The onboard camera, integrated with object recognition capabilities, allows the robot to detect and differentiate various objects in its environment. Specific landmarks, such as colored cylinders, can be identified, providing visual cues for navigation and goal localization. Refer to Lab 3 for implementation details on using the camera and object recognition.

Each of these sensors provides critical information that will need to be integrated into your navigation algorithm to ensure accuracy and efficiency. Be sure to review the respective labs to understand how to utilize these sensors effectively as you design your solution.

Global Reference Frame and Grid Cell Numbering

The reference frame is set up as follows:

• The positive y-axis points North, while the negative y-axis points South.
• The positive x-axis points East, and the negative x-axis points West.
• The origin, (0,0), is located at the center of the arena.

The arena is divided into a 5×5 grid of 25 cells, numbered from 1 to 25. Each grid cell is a 1 x 1 meter square, creating a structured layout for navigation and localization. Refer to this layout as you plan and execute navigation strategies within the defined grid.

Wall Configuration Data Structure

For this assignment, you will need to implement a data structure to represent the wall configuration of each grid cell in the arena.
This structure will allow your robot to recognize and navigate around obstacles effectively. As a conceptual guide, consider representing the wall configurations using a 25×4 matrix format, as illustrated in Figure 1. In this example:

• The matrix corresponds to the 25 grid cells in the arena.
• Each cell in the matrix contains four entries representing the presence or absence of walls in the West-North-East-South (WNES) orientation.
• A "W" (wall) entry indicates the presence of a wall, while an "O" (open) entry denotes the absence of one.

While you are encouraged to design your own data structure, this example can serve as a reference to help you get started.

Figure 1. Example of a wall configuration representation. The left side illustrates the physical world, while the right side shows the corresponding 25×4 matrix representation using WNES orientation.

Think carefully about how your data structure will allow you to efficiently check for walls in each direction and update it as the robot moves through the arena.

Background: Path Planning with the Wavefront Planner

The Wavefront Planner is a grid-based path-planning algorithm commonly used in robotics for navigating a robot from a start position to a goal position. This algorithm operates on a grid map by propagating values from the goal cell to other cells in a wave-like manner, making it suitable for environments with known obstacles. The primary goal of the Wavefront Planner is to determine the shortest path to the target by expanding outwards from the goal cell until the start cell is reached.

How the Algorithm Works.

• Initialize the Goal Cell: The algorithm begins by assigning the goal cell a starting marker value, indicating the destination.
• Wave Expansion: From the goal cell, values incrementally increase for each neighboring cell, representing the number of steps required to reach the goal. Only valid, traversable cells (non-obstacle cells) are considered in this propagation.
• Marking the Path from Start to Goal: Once all cells are filled, the robot begins at the start cell and moves to adjacent cells with decreasing values, ultimately reaching the goal cell along the shortest path.
• Obstacle Handling: Cells representing obstacles are left unassigned or marked with a distinct marker value, ensuring the robot does not travel through them.

This approach, which supports four-point (or optionally eight-point) connectivity for movement, enables the robot to navigate efficiently in grid environments with obstacles. Although simple, the wavefront planner is effective for pathfinding in static maps where obstacles and the grid structure are known beforehand.

High-Level Pseudo Code.

WavefrontPlanner(grid, start, goal):
1. Initialize all free cells in the grid to 0, indicating unvisited.
2. Set the wall cells to 1.
3. Mark the goal cell with its starting value (e.g., 2, so that wave values never collide with the wall marker).
4. Create a queue and enqueue the goal cell.
5. While the queue is not empty:
   a. Dequeue the front cell (current cell) from the queue.
   b. For each neighbor of the current cell (4-point connectivity):
      i. If the neighbor is unvisited and not an obstacle:
         - Set the neighbor's value to the current cell's value + 1.
         - Enqueue the neighbor cell.
6. Backtrack from the start cell to the goal:
   a. Initialize the path list with the start cell.
   b. While the current cell is not the goal cell:
      i. Select the neighboring cell with the lowest value.
      ii. Move to the selected cell and add it to the path list.
7. Return the path list as the planned path.
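A runnable sketch of this planner, operating on cell numbers 1 through 25, is shown below. It is a minor variant of the pseudocode: it tracks step counts directly (goal = 0) and encodes walls implicitly through a helper open_neighbors(cell), which you would build from your wall-configuration data structure. The demo neighbors function at the bottom describes a wall-free grid and is illustrative only:

# Runnable sketch of a wavefront planner on cells 1..25. ASSUMPTION:
# open_neighbors(cell) returns the adjacent cells reachable without
# crossing a wall (derive it from your wall-configuration structure).

from collections import deque

def wavefront(open_neighbors, start, goal, num_cells=25):
    # Wave expansion: value = number of steps from the goal.
    values = {cell: None for cell in range(1, num_cells + 1)}
    values[goal] = 0
    queue = deque([goal])
    while queue:
        current = queue.popleft()
        for nbr in open_neighbors(current):
            if values[nbr] is None:               # unvisited
                values[nbr] = values[current] + 1
                queue.append(nbr)

    # Backtrack from the start, always stepping to a lower-valued neighbor.
    if values[start] is None:
        return None                               # goal unreachable
    path, current = [start], start
    while current != goal:
        current = min(open_neighbors(current),
                      key=lambda c: values[c] if values[c] is not None
                      else float('inf'))
        path.append(current)
    return path

# Demo on a wall-free 5x5 grid (4-point connectivity), illustrative only.
def open_neighbors(cell):
    row, col = divmod(cell - 1, 5)
    nbrs = []
    if row > 0: nbrs.append(cell - 5)   # north
    if row < 4: nbrs.append(cell + 5)   # south
    if col > 0: nbrs.append(cell - 1)   # west
    if col < 4: nbrs.append(cell + 1)   # east
    return nbrs

print(wavefront(open_neighbors, start=25, goal=7))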
This algorithm's complexity is manageable for grid environments, and it is widely used due to its straightforward implementation and reliable pathfinding capabilities. The Wavefront Planner's output path allows robots to navigate effectively by avoiding obstacles and reaching target destinations efficiently, which is essential for autonomous mobile robots operating in structured environments.

Background: Alternative Approach: Graph-Based Path Planning with Dijkstra's Algorithm

In this approach, students will represent the environment as a graph rather than using a wavefront expansion. This method is advantageous for environments that can be represented as nodes (cells) connected by edges (paths), making it flexible and adaptable to changes in the map structure. After creating a graph representation of the environment, students will implement Dijkstra's algorithm to find the shortest path between two cells.

Creating the Graph Data Structure.

• Define Nodes: Each cell in the grid environment is treated as a node in the graph. The position of the cell (such as cell numbers 1-25) can serve as the unique identifier for each node.
• Define Edges: An edge represents a direct connection between two adjacent cells that are free of obstacles. If two cells share an open boundary (no wall between them), they are connected by an edge. Each edge has a weight, which, in this case, can be set to a uniform value (e.g., 1), since all movements between adjacent cells are treated as equal.
• Adjacency List Representation: To represent the graph, use an adjacency list where each node (cell) has a list of neighboring nodes (cells) to which it connects. For example, if cell 1 connects to cells 2 and 6, the adjacency list entry for cell 1 would be {2: 1, 6: 1}, with each neighbor's entry containing the edge weight.

Example Adjacency List for a 3×3 Grid:

Cell 1: {2: 1, 4: 1}
Cell 2: {1: 1, 3: 1, 5: 1}
Cell 3: {2: 1, 6: 1}
...

Implementing Dijkstra's Algorithm on the Graph.

Once the graph structure is complete, Dijkstra's algorithm can be implemented to determine the shortest path between a start cell and a goal cell. Dijkstra's algorithm works by exploring paths from the start node to the goal node with the shortest cumulative cost.

• Initialization: Set the distance to the start node as 0 and all other nodes as infinity. Use a priority queue to keep track of nodes to explore based on their distance from the start node. Keep a dictionary to record the predecessor of each node, which helps reconstruct the path once the goal is reached.
• Algorithm Execution: While the priority queue is not empty:
  – Dequeue the node with the smallest distance (this is the current node).
  – For each neighbor of the current node:
    · Calculate the tentative distance by adding the current node's distance to the edge weight connecting to the neighbor.
    · If the tentative distance is less than the recorded distance for the neighbor, update the distance and set the current node as the neighbor's predecessor.
    · Enqueue the neighbor with the updated distance.
• Path Reconstruction: Once the goal node is reached, backtrack from the goal to the start node using the predecessor dictionary to reconstruct the shortest path.

High-Level Pseudo Code.

Dijkstra(graph, start, goal):
1. Initialize the distance for all nodes as infinity, except the start node (set to 0).
2. Initialize a priority queue and enqueue the start node with distance 0.
3. Initialize an empty dictionary for predecessors.
4. While the priority queue is not empty:
   a. Dequeue the node with the smallest distance (current node).
   b. If the current node is the goal, break the loop.
   c. For each neighbor of the current node:
      i. Calculate tentative distance = current distance + edge weight.
      ii. If tentative distance < recorded distance for the neighbor:
         - Update the neighbor's distance.
         - Set the current node as the predecessor of the neighbor.
         - Enqueue the neighbor with the updated distance.
5. Reconstruct the path from goal to start using the predecessors dictionary.
6. Return the reconstructed path as the shortest path.
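For reference, a runnable sketch of the pseudocode above using Python's heapq module is shown below; the three-node graph at the bottom is illustrative only:

# Runnable sketch of Dijkstra's algorithm on an adjacency-list graph.

import heapq

def dijkstra(graph, start, goal):
    """graph: {node: {neighbor: edge_weight}}. Returns the shortest path."""
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    predecessor = {}
    pq = [(0, start)]                       # priority queue of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue                        # stale queue entry
        for nbr, weight in graph[node].items():
            tentative = d + weight
            if tentative < dist[nbr]:
                dist[nbr] = tentative
                predecessor[nbr] = node
                heapq.heappush(pq, (tentative, nbr))

    # Reconstruct the path by walking predecessors back from the goal.
    if goal != start and goal not in predecessor:
        return None                         # goal unreachable
    path, node = [goal], goal
    while node != start:
        node = predecessor[node]
        path.append(node)
    return list(reversed(path))

# Illustrative adjacency list for a small open corridor: 1 - 2 - 5.
graph = {1: {2: 1}, 2: {1: 1, 5: 1}, 5: {2: 1}}
print(dijkstra(graph, start=1, goal=5))     # [1, 2, 5]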
This approach emphasizes creating a graph structure and implementing Dijkstra's algorithm from scratch, which is valuable for understanding data structures and algorithms in robotics pathfinding. By using this method, students will gain a deeper comprehension of graph theory and optimization, which are foundational concepts in robotic navigation and autonomous systems.

Pose Estimation and Print Statements

In all tasks, you are required to print the following information once per cell visited. Be sure to include these print statements clearly in your videos, ensuring that they are readable on screen.

(1) State Pose: The robot's pose should be printed in the form s = (x, y, n, θ), where:
• (x, y) represents the robot's position in global coordinates,
• n is the grid cell number, and
• θ is the robot's orientation relative to the global frame.

Note: You are not allowed to use GPS for this assignment. At the start of your code, use the following steps to access the robot's initial position and orientation attributes:
(a) Access the starting position attribute of the robot object.
(b) Extract the x and y coordinates, representing the robot's initial horizontal and vertical positions, respectively.
(c) Extract the θ value, which gives the initial orientation angle of the robot.

Below is an example code snippet to obtain these initial values:

# Move robot to a random starting position listed in the maze file
robot.move_to_start()
start_pos = robot.starting_position
x = start_pos.x
y = start_pos.y
theta = start_pos.theta
print(x, y, theta)

Once the initial values are acquired, you must continuously update the robot's current position (x, y) and orientation (θ) as it moves. Use the encoder and compass sensors to calculate and adjust the pose over time. This logic should be embedded within the main loop, updating pose data at regular intervals. (A minimal sketch of such an update appears after this section.)

(2) Shortest Path: Regardless of how you calculate the shortest path from the start cell to the goal cell, your program should clearly print the path between the cells. For example, we should see something similar to what is shown below printed at some point in your video:

C_s → C_{t_1} → C_{t_2} → · · · → C_G

Where:
• C_s is the start cell, where the path begins,
• C_{t_1}, C_{t_2}, . . . are intermediate cells along the shortest path, ordered sequentially. Each C_{t_i} represents a cell that is part of the optimal route from C_s to C_G,
• C_G is the goal cell, which is the final destination of the path.

This notation shows the path as a progression from the start cell through intermediate cells to reach the goal cell, ensuring minimal distance or cost from C_s to C_G.
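As mentioned in item (1), a minimal sketch of the per-step pose update follows. The accessors read_wheel_distances() and read_heading() are hypothetical placeholders for whatever encoder and compass helpers your MyRobot class exposes (see Lab 1):

# Sketch of a dead-reckoning pose update. ASSUMPTIONS: read_wheel_distances()
# returns the distance each wheel traveled since the last call (from encoder
# ticks and wheel radius, as in Lab 1); read_heading() returns the compass
# heading theta in radians. Both names are hypothetical placeholders.

from math import cos, sin

def update_pose(x, y, read_wheel_distances, read_heading):
    """Advance (x, y) by the robot's forward travel; take theta from compass."""
    d_left, d_right = read_wheel_distances()
    d_center = (d_left + d_right) / 2.0   # travel of the robot's center
    theta = read_heading()                # absolute heading from compass
    x += d_center * cos(theta)
    y += d_center * sin(theta)
    return x, y, theta

# Stub demo: both wheels advanced 0.1 m while the robot faced east (theta = 0).
x, y, theta = update_pose(0.0, 0.0, lambda: (0.1, 0.1), lambda: 0.0)
print(x, y, theta)  # (0.1, 0.0, 0.0)

Call this once per control step inside the main loop; the cell number n in the printed pose then follows from the grid's cell-numbering convention.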
Task 1: Path Planning

In this task, you will implement a solution for navigating a robot along the shortest path from a starting cell to a specified goal cell within a maze. This involves constructing a data structure for representing maze configurations, implementing a pathfinding algorithm, and ensuring that the robot accurately follows the calculated path. The requirements for Task 1 are as follows:

(1) Maze Representation: Implement a data structure to represent the maze, capturing details about cell connectivity and obstacles. Each cell should be able to store information about walls or open paths to neighboring cells.
(2) Shortest Path Calculation: Develop an algorithm to compute the shortest path between a given starting cell and a goal cell. This pathfinding algorithm should be efficient and should return the sequence of cells that the robot must traverse to reach the goal.
(3) Path Display: Once the shortest path is calculated, print the complete path sequence, showing each cell that the robot will pass through from the start to the goal. This should be displayed in a clear, ordered format.
(4) Path Navigation: Program the robot to navigate along the calculated path. The robot should move sequentially from the starting cell through each cell in the path until it reaches the goal.
(5) Pose Estimation Printing: Each time the robot enters a new cell, print the robot's current pose estimate, which includes its position and orientation in the maze. This estimate should update as the robot progresses along the path.
(6) Task Completion: The robot should stop once it reaches the goal cell. At this point, the robot should print a final message indicating that it has reached its destination.
(7) Video Requirements: For the video submission, test your algorithm by setting ten unique starting locations, with the goal cell set to cell 7 in each test. Use the maze configuration provided in maze8.xml for this task. The video should demonstrate the algorithm calculating the shortest path from each of the ten starting cells to the goal and then show the robot navigating along the calculated path. Ensure that each example starts from a different cell and that the robot correctly follows the path to reach the goal. For each starting location demonstrated in the video, include the calculated path in your report to provide a complete record of the algorithm's output and path execution.

Figure 2. Maze layout for Task 1 (maze file maze8.xml).

Extra Credit Opportunity (10 points)

The student who achieves the shortest navigation time from cell 25 to cell 7 will be awarded 10 points of extra credit. To qualify, the student must include a video showing the robot navigating the shortest path from cell 25 to cell 7. Additionally, the report must display the calculated path and state the total simulation time taken to reach the goal.

Code Evaluation Task 1 (90%)

• Maze Representation (15 Points)
  – Full Marks (15 points): The maze data structure is complete, accurate, and includes all required details about cell connectivity and obstacles.
  – Acceptable (10 points): The maze data structure is mostly accurate, with minor errors or omissions that do not significantly impact functionality.
  – Poor (5 points): The maze data structure is incomplete or has major errors, impacting the algorithm's ability to calculate paths correctly.
  – No Marks (0 points): No data structure implemented for maze representation.

• Shortest Path Calculation (20 Points)
  – Full Marks (20 points): The algorithm calculates the shortest path from any starting cell to the goal cell accurately and efficiently.
  – Acceptable (15 points): The algorithm calculates the shortest path with minor inefficiencies or small errors that don't prevent completion of the task.
  – Poor (10 points): The algorithm calculates a path, but it is not consistently the shortest, or it contains significant errors.
  – No Marks (0 points): No algorithm implemented to calculate the shortest path.

• Path Display (10 Points)
  – Full Marks (10 points): The calculated path is clearly printed in sequence, showing each cell from the start to the goal.
  – Acceptable (7 points): The path is printed, but with minor formatting or sequence errors.
  – Poor (3 points): The path is printed, but it is unclear or significantly incorrect.
  – No Marks (0 points): No path display provided.

• Path Navigation (25 Points)
  – Full Marks (25 points): The robot follows the calculated shortest path accurately from start to goal without deviations.
  – Acceptable (18 points): The robot follows the path with minor deviations or inconsistencies.
  – Poor (10 points): The robot attempts to follow the path but frequently deviates or does not complete it.
  – No Marks (0 points): No path navigation implemented, or the path is not followed at all.

• Pose Estimation Printing (10 Points)
  – Full Marks (10 points): The pose (position and orientation) is printed accurately each time the robot enters a new cell.
  – Acceptable (7 points): The pose is printed but with minor inaccuracies or occasional omissions.
  – Poor (3 points): The pose is printed inconsistently or inaccurately.
  – No Marks (0 points): No pose estimation printed.

• Task Completion (10 Points)
  – Full Marks (10 points): The robot stops correctly upon reaching the goal cell and prints a final message indicating completion.
  – Acceptable (7 points): The robot stops at the goal cell, but the final message is unclear or missing.
  – Poor (3 points): The robot stops at the goal cell inconsistently or with significant delays.
  – No Marks (0 points): The robot does not stop at the goal cell, or there is no indication of task completion.

Deductions

• Print Statement Visibility (-20 points maximum): Up to 20 points may be deducted if required print statements are not clearly visible in the submitted video. The severity of the deduction will depend on the clarity and frequency of missing print statements.
• Not Showing Multiple Starting Locations (-10 points): A 10-point deduction will be applied if the video for Task 1 does not show multiple starting locations.
• Robot Collisions with Wall (-5 points): A 5-point deduction will be applied each time the robot collides with a wall, up to a maximum of 5 points.

Report & Video Evaluation (10%)

A project submitted without a report, or without including the corresponding task video, will not be graded and will receive a "0" for the complete lab.

Report Sections: The report should include the following (points will be deducted if anything is missing):

• Mathematical Computations: Show how you calculated the speeds of the left and right servos given the input parameters for each task.
• Conclusions: Analyze any issues encountered when running the tasks and discuss how these could be improved. Discuss your results, including any navigation errors. Conclusions need to reflect an insight into what was learned during the project. Statements like "everything worked as expected" or "I enjoyed the project" will not count as valid conclusions.
• Video: Upload the video to Canvas showing the robot executing the task (details below).
• Authorship Statement: Include and sign the following statement at the end of the report:
  "I developed all the code and written report with the following exceptions: [Student name: signature, date] [Code sections (specify lines): references and additional comments] If any external tools were used to generate code, you need to include the prompt and output."

Video Requirement for Task 1

(1) Demonstrations from Multiple Starting Locations
• Show at least ten unique starting positions in the maze.
• For each demonstration:
  – Begin by displaying the initial starting cell and the goal cell (cell 7).
  – Show the calculated shortest path from the starting cell to the goal cell.

(2) Robot Navigation for Each Starting Position
• For each starting cell, allow the video to show the robot navigating along the calculated shortest path.
• Highlight:
  – Each time the robot enters a new cell, with an emphasis on the pose estimation (position and orientation) printed for each cell transition.
  – The robot's ability to accurately follow the path without deviating.
• Conclude each demonstration when the robot reaches the goal cell and prints the completion message.

Lab Submission

For Lab 5 there are two separate Canvas Assignments: Lab 5 Code Submission and Lab 5 Report Submission. Please follow the submission policy outlined below. Note: Failure to adhere to the submission policy will result in a reduction of 20 points.

Code Submission Policy

Students will need to upload a Python file (i.e., a Webots controller) for each task. The Python file should follow the naming convention USF_ID_LabX_TaskY.py, where X is the lab number and Y is the task number (for example, rockybull5_Lab1_Task1.py). Additionally, if a student has created any functions that are not present in the controller but rather in MyRobot.py, the student will also need to upload that Python file as well, using the naming convention USF_ID_MyRobot.py. If custom functions are used in a student's submission and they are not in the controller or the MyRobot Python files, an automatic zero will be assigned for the code submission. Each file should be uploaded to the same submission. DO NOT upload a zip file; Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off.

Report Submission Policy

Students will need to upload their videos to their USF OneDrive or USF SharePoint and generate a shareable link. Please ensure that you make it viewable to everyone, or at the very least to the TA and the Professor (use their emails to add them to the access list). Students will need to upload a PDF file of their lab report. Be sure that the lab report contains the link to the videos and any metrics requested by the lab; these metrics will be used to determine which student receives the extra credit. Use the naming convention USF_ID_LabX_Report.pdf. Only PDFs will be accepted. If the student used ChatGPT to generate code, they will also need to provide the conversation used to generate the code. This can consist of either screenshots of the conversation or a shareable link that the system provides. This should be a separate document and not part of the report. Each file should be uploaded to the same submission. DO NOT upload a zip file; Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off.

The TA may ask you additional questions to gauge your understanding via Canvas Message or MS Teams.
Failure to reply may result in point deduction.
Objective This lab will teach you how to plan motion for a robot to reach a goal while avoiding obstacles: you will utilize a camera with object detection and the Bug Zero algorithm to navigate through an obstacle-rich environment to reach a goal location. Requirements Programming: Python Robot: Webots 2023b (FAIRIS) RGBD Camera with Object Detection In FAIRIS Lite, the robot is equipped with an RGBD camera, which can be accessed via the rgb_camera attribute of the MyRobot class. This sensor allows for both image capture and object recognition, providing detailed information about objects detected within the robot’s field of view. The function robot.rgb_camera.getRecognitionObjects() returns a list of recognized objects, each containing attributes such as position, size, and color. This is crucial for detecting landmarks, identifying obstacles, and enhancing navigation capabilities. Key Functions and Their Usage: • robot.rgb_camera.getRecognitionObjects(): This function returns a list of recognized objects. Each recognized object is a Webots RecognitionObject with various attributes such as: – landmark.getId(): Returns a unique identifier for the detected object. – landmark.getPosition(): Returns the 3D position of the object relative to the camera as an array [X, Y, Z]. – landmark.getSize(): Returns the relative size of the object as an array [Y, Z]. – landmark.getPositionOnImage(): Returns the 2D position of the object on the image as an array [X, Y], where (0,0) represents the top-left corner. – landmark.getSizeOnImage(): Returns the 2D size of the object on the image as an array [X, Y], representing the object’s bounding box. – landmark.getColors(): Returns the RGB color of the object as an array [R, G, B]. Example Usage:

rec_objects = robot.rgb_camera.getRecognitionObjects()
# if camera has detected an object
if len(rec_objects) > 0:
    # extract detected object
    landmark = rec_objects[0]
    # object ID
    object_id = landmark.getId()
    # object position relative to the camera [X, Y, Z]
    object_position = landmark.getPosition()
    # object relative size [Y, Z]
    object_size = landmark.getSize()
    # object position on image [X, Y]
    object_position_on_image = landmark.getPositionOnImage()
    # object size on image [X, Y]
    object_size_on_image = landmark.getSizeOnImage()
    # object color [R, G, B]
    object_color = landmark.getColors()

This code checks if any objects are detected by the RGBD camera. If an object is detected, it extracts the first object and stores details such as the object’s ID, position, size, and color. Figure 1. RGBD Camera View in Webots with Object Detection For more information on the available functions and object attributes, please refer to the Webots Documentation and the FAIRIS Lite Documentation. Motion to Goal One of the core objectives in mobile robotics is navigating towards a goal, referred to as “motion to goal.” In this assignment, your robot will navigate toward a goal represented by a yellow cylinder. The task involves rotating the robot to face the goal, moving toward it, and stopping at a specific distance in front of it. Steps for Motion to Goal: (1) Detect the Goal: The robot must first identify the goal object (the yellow cylinder) using its RGBD camera. Use the object recognition function robot.rgb_camera.getRecognitionObjects() to detect the goal. (2) Center the Goal in the Camera Frame: Once the goal is detected, the robot should rotate in place to center the object within the camera’s frame.
This is done using the goal’s position on the image, combined with a PID controller to rotate the robot smoothly. (3) Move Towards the Goal: After the goal is centered in the camera frame, the robot should move forward while ensuring it stops a set distance (e.g., 0.5 meters) in front of the goal. Helpful Hints: • The position of the object in the image can be obtained from landmark.getPositionOnImage(). The robot should rotate until the x-coordinate of the object is centered in the image. • A simple PID controller can be used to adjust the robot’s rotational speed based on the error between the object’s x-coordinate and the center of the image. • Once the object is centered, move forward and stop the robot when the object is a set distance away using a forward motion PID controller (similar to the one you created in Lab 2). • Think about how you will handle the case when the robot cannot see the goal. Bug Zero Algorithm The Bug Zero algorithm is a simple, reactive path-planning algorithm used by robots to navigate around obstacles while moving toward a goal. The robot moves directly towards the goal unless it encounters an obstacle, in which case it follows the edge of the obstacle until it can resume a direct path to the goal. Key Concepts: • Straight Line to the Goal: The robot starts by moving in a straight line towards the goal. This is the primary behavior when no obstacles are detected. • Obstacle Avoidance: If the robot detects an obstacle in its path (using sensors such as the LIDAR or RGBD camera), it must navigate around the obstacle. In the Bug Zero approach, the robot follows the boundary of the obstacle until it can safely move directly toward the goal again. • Returning to the Goal Path: As the robot follows the obstacle’s edge, it continuously checks whether it can resume moving directly toward the goal. Once the path is clear, the robot returns to the direct line toward the goal. Steps to Implement Bug Zero: (1) Move Towards the Goal: Initially, the robot moves straight toward the goal (e.g., a yellow cylinder). It uses the goal’s position detected by its sensors to guide its movement. (2) Detect Obstacles: If an obstacle is detected (e.g., using LIDAR or the camera), the robot should switch to obstacle avoidance mode, where it moves around the object. (3) Follow the Obstacle’s Boundary: The robot should trace the perimeter of the obstacle while continuing to monitor whether it can resume its direct path to the goal. You can apply the wall-following behavior developed in Lab 2 to assist in this process. (4) Return to the Direct Path: Once the robot finds an open path toward the goal, it should stop following the obstacle and return to the goal path, continuing until it either reaches the goal or encounters another obstacle. Helpful Hints: • Use LIDAR or the RGBD camera to detect obstacles in the robot’s path. The LIDAR can give you precise distance measurements, while the camera can help recognize obstacles and landmarks. • When following the obstacle’s edge, make sure the robot stays close enough to avoid losing track of the obstacle but far enough to not collide with it. • Reusing Wall-Following Behavior: You can reuse the wall-following algorithm you developed in Lab 2 to help the robot trace the obstacle’s boundary efficiently. This will save you time and help with the implementation of Bug Zero. • Continuously check whether the path to the goal is clear while tracing the obstacle. Once a clear path is found, the robot should stop tracing the obstacle and resume direct motion toward the goal. A minimal sketch of this two-mode switching logic is shown below.
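To make the mode switching concrete, below is a minimal sketch of the Bug Zero decision logic written as a pure function, so it can be tested without the simulator. The flag names (front_clear, goal_path_clear) are hypothetical placeholders for conditions you would compute from your own LIDAR and camera processing; the motion commands themselves are left to your controller.

MOTION_TO_GOAL = "motion_to_goal"
WALL_FOLLOWING = "wall_following"

def bug_zero_next_state(state, front_clear, goal_path_clear):
    """Return the behavior mode for the next control step."""
    if state == MOTION_TO_GOAL:
        # Head straight for the goal until an obstacle blocks the way.
        return MOTION_TO_GOAL if front_clear else WALL_FOLLOWING
    # While tracing the obstacle, keep checking for a clear line to the goal.
    return MOTION_TO_GOAL if goal_path_clear else WALL_FOLLOWING

# The front LIDAR ray drops below the safety threshold: start wall following.
assert bug_zero_next_state(MOTION_TO_GOAL, front_clear=False, goal_path_clear=False) == WALL_FOLLOWING
# The line to the goal opens up again: resume motion to goal.
assert bug_zero_next_state(WALL_FOLLOWING, front_clear=True, goal_path_clear=True) == MOTION_TO_GOAL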
The Bug Zero algorithm provides a robust yet simple solution for navigating in environments with obstacles. Although it does not guarantee the shortest path, it ensures the robot can avoid obstacles and eventually reach the goal. Task 1: Motion to Goal The goal of this task is for the robot to reach a yellow cylinder and stop when it is 0.5 meters away from it. There are no obstacles between the robot and the goal, as illustrated in Figure 2. The robot should be tested by starting from different locations and orientations. Task Details: • Initial Condition: The robot may begin facing away from the goal, meaning it will need to turn until the goal is detected. • Goal Detection: The robot should use its RGBD camera to recognize the yellow cylinder (goal) and determine both the distance and orientation to it. • PID Controller: You may implement a PID controller to help guide the robot smoothly toward the goal. The input to the controller could be the distance and orientation of the yellow cylinder relative to the robot, while the output would be the linear velocities of the left and right wheels. • Navigation and Stopping: Once the goal is detected, the robot should move toward it and stop once it is exactly 0.5 meters away. The camera can be used to identify the goal’s position and orientation, and the range sensors (e.g., LIDAR) can be used to compute the distance to the goal. Testing and Timing: • Test your solution in the maze5.xml world file. • During the task, the robot should turn, locate the goal, move toward it, and stop at the correct distance. • Test the robot from multiple starting positions and orientations to ensure robustness. • For the video recordings, be sure to capture the robot performing motion to goal from different locations and orientations. This can be done by pausing the simulation and clicking and dragging the robot or the yellow cylinder. Figure 2. Yellow Cylinder Goal for Task 1: Motion to Goal (maze5.xml) Task 2: Bug Zero Algorithm In this task, the robot should implement the Bug Zero algorithm to navigate toward the goal while avoiding obstacles, as shown in Figure 3. The goal “G” is represented by a yellow cylinder, similar to the one used in Task 1 (Figure 2). The robot starts at position “S” and must reach the goal while avoiding any obstacles in its path. Task Details: • Initial Condition: The robot begins at position “S” and must move toward the goal “G.” • Bug Zero Algorithm: The robot should implement the Bug Zero algorithm to handle obstacles. If an obstacle is detected, the robot should navigate around it, staying no further than 0.5 meters away from the obstacle. • Obstacle Avoidance: The robot should be capable of making either left or right turns when tracing the obstacle’s boundary, depending on the environment. • Sensors: Use the RGBD camera and/or range sensors (e.g., LIDAR) to compute the distance and orientation to the goal as well as the obstacles. • Completion Time: The task should be completed within 3 minutes, meaning the robot must reach the goal within that time while successfully navigating around obstacles. Testing and Timing: • Test your solution in the maze6.xml world file. • Test the robot from position “S” in multiple trials to ensure it can reach the goal consistently without collisions.
• The robot should efficiently avoid obstacles, staying within 0.5 meters of the obstacle boundary, and resume its path to the goal once the obstacle is cleared. • Ensure the robot can perform both left and right turns around obstacles. Figure 3. World setup for Task 2: Bug Zero Algorithm (maze6.xml)
Code Evaluation Task 1 (30%) Robot reaches the cylinder from any starting position and orientation (15 points) • Full Marks (15-13 points): The robot reliably reaches the yellow cylinder from any starting position and orientation. The robot smoothly navigates toward the cylinder without deviating significantly. • Partial Marks (12-8 points): The robot reaches the cylinder but may have slight deviations or take longer from some starting positions or orientations. • Minimal Marks (7-4 points): The robot struggles to reach the cylinder from some starting positions or orientations, exhibiting significant deviations or slow performance. • No Marks (3-0 points): The robot fails to reach the cylinder from most starting positions or orientations. Robot finds the cylinder when the cylinder is not seen in the camera (10 points) • Full Marks (10-9 points): The robot successfully turns and locates the yellow cylinder when it starts with the cylinder out of view. The robot quickly identifies the goal and begins navigating toward it. • Partial Marks (8-6 points): The robot finds the cylinder, but the process is slower or less efficient, possibly taking too long to turn or locate the cylinder. • Minimal Marks (5-3 points): The robot occasionally fails to find the cylinder, or it takes significant time to locate it when out of view. • No Marks (2-0 points): The robot fails to locate the cylinder when it starts with the goal out of view. Robot stops 0.5 meters or less from the cylinder (5 points) • Full Marks (5 points): The robot stops exactly 0.5 meters from the yellow cylinder with high accuracy and minimal oscillation. • Partial Marks (4-3 points): The robot stops close to 0.5 meters but may overshoot slightly or stop just before reaching the desired distance. • Minimal Marks (2-1 points): The robot stops significantly outside the 0.5-meter threshold, either too early or too late. • No Marks (0 points): The robot fails to stop within 0.5 meters of the cylinder. Robot hits the cylinder (-5 points) Deduction: If the robot hits the cylinder during the task, 5 points will be deducted from the overall task score. Code Evaluation Task 2 (60%) Robot completes correct Bug Zero algorithm and stops within 0.5 meters of the cylinder (20 points) • Full Marks (20-18 points): The robot successfully completes the Bug Zero algorithm, reaching the goal and stopping exactly 0.5 meters from the yellow cylinder. • Partial Marks (17-13 points): The robot completes the Bug Zero algorithm but may stop slightly outside the 0.5-meter threshold. • Minimal Marks (12-7 points): The robot completes the Bug Zero algorithm but often stops too far from the cylinder or misses key elements of the algorithm. • No Marks (6-0 points): The robot fails to complete the Bug Zero algorithm or does not stop near the cylinder. Robot moves correctly around walls (15 points) • Full Marks (15-13 points): The robot smoothly navigates around walls, maintaining the required distance of 0.5 meters and demonstrating consistent obstacle avoidance. • Partial Marks (12-8 points): The robot navigates around walls but may struggle to maintain the 0.5-meter distance, occasionally coming too close or too far.
• Minimal Marks (7-4 points): The robot struggles to navigate around walls, often coming too close or deviating significantly from the desired path. • No Marks (3-0 points): The robot fails to navigate around walls correctly. Robot switches correctly from motion to goal to wall following (10 points) • Full Marks (10-9 points): The robot seamlessly switches from moving towards the goal to following the wall when an obstacle is detected. • Partial Marks (8-6 points): The robot switches from motion to goal to wall following, but the transition may be delayed or inconsistent. • Minimal Marks (5-3 points): The robot struggles with the transition, occasionally failing to recognize obstacles and switch to wall following. • No Marks (2-0 points): The robot does not switch to wall following when encountering an obstacle. Robot switches correctly from wall following to motion to goal (10 points) • Full Marks (10-9 points): The robot correctly transitions from following the wall back to moving towards the goal when the path becomes clear. • Partial Marks (8-6 points): The robot switches back to motion to goal, but the transition may be slow or inefficient. • Minimal Marks (5-3 points): The robot occasionally fails to switch back to motion to goal, continuing to follow the wall unnecessarily. • No Marks (2-0 points): The robot does not switch back to motion to goal after following the wall. Robot performs correct left and right turn algorithm (5 points) • Full Marks (5 points): The robot performs both left and right turns correctly without switching between them inappropriately. • Partial Marks (4-3 points): The robot performs the turn algorithm but may occasionally exhibit slight errors or switch between left and right turns unnecessarily. • Minimal Marks (2-1 points): The robot struggles with performing correct turns, often switching incorrectly between left and right turns. • No Marks (0 points): The robot does not correctly perform left and right turns as expected. Robot hits the cylinder (-5 points) Deduction: If the robot hits the cylinder during the task, 5 points will be deducted from the overall task score. Robot travels a hardcoded path defined in advance (-10 points) Deduction: If the robot travels along a hardcoded path rather than implementing the Bug Zero algorithm dynamically, 10 points will be deducted from the overall task score.
Report & Video Evaluation (10%) A project submitted without a report, or without including the corresponding task video, will not be graded and will receive a “0” for the complete lab. Report Sections: The report should include the following (points will be deducted if anything is missing): • Mathematical Computations: Show how you calculated the speeds of the left and right servos given the input parameters for each task. • Conclusions: Analyze any issues encountered when running the tasks and discuss how these could be improved. Discuss your results, including any navigation errors. Conclusions need to reflect an insight into what was learned during the project. Statements like “everything worked as expected” or “I enjoyed the project” will not count as valid conclusions. • Video: Upload the video to Canvas showing the robot executing the task (details below).
• Authorship Statement: Include and sign the following statement at the end of the report: “I developed all the code and written report with the following exceptions: [Student name: signature, date] [Code sections (specify lines): references and additional comments] If any external tools were used to generate code, you need to include the prompt and output.” Task 1 Video: Record a video of the robot executing the “Motion to Goal” task. The video should demonstrate: • The robot starting from various positions and orientations, successfully detecting and moving towards the yellow cylinder. • The robot finding the yellow cylinder when it starts outside of the camera’s initial field of view and turning to locate it. • The robot stopping exactly 0.5 meters from the yellow cylinder, with minimal or no overshoot. • Any issues such as the robot hitting the cylinder or stopping too early should be highlighted. If applicable, mention how these issues were resolved or could be improved. Task 2 Video: Record a video of the robot executing the “Bug Zero” algorithm. The video should demonstrate: • The robot starting from position “S,” moving towards the goal (yellow cylinder), and correctly switching from motion-to-goal to wall following upon encountering obstacles. • The robot following the boundary of obstacles, ensuring it stays within 0.5 meters of the wall, and switching back to motion-to-goal when the path is clear. • The robot successfully performing both left and right turns without switching between them incorrectly. • The robot reaching the goal (yellow cylinder) and stopping within 0.5 meters of it. • Any issues such as the robot hitting the cylinder or traveling along a hardcoded path should be explained. If applicable, mention how these issues were resolved or could be improved.
Lab Submission For Lab 3 there are two separate Canvas Assignments: Lab 3 Code Submission and Lab 3 Report Submission. Please follow the Submission policy outlined below. Note: Failure to adhere to the submission policy will result in a reduction of 20 points. Code Submission Policy Students will need to upload a Python file (i.e. Webots controller) for each Task. The Python file should follow this naming convention USF ID LabX TaskY.py, where X is the lab number and Y is the task number. Example rockybull5 Lab1 Task1.py. Additionally, if a student has created any functions that are not present in the controller but rather the MyRobot.py, the student will also need to upload that python file as well using the following naming convention USF ID MyRobot.py. If custom functions are used in a student’s submission and they are not in the controller or the MyRobot Python files, an automatic Zero will be assigned for the code submission. Each file should be uploaded to the same submission. DO NOT upload a zip file, Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off. Report Submission Policy Students will need to upload their videos to their USF OneDrive or USF Sharepoint and generate a shareable link. Please ensure that you make it viewable to everyone or at the very least the TA and the Professor (use their emails to add them to the access list). Students will need to upload a PDF file of their lab report. Be sure that the lab report contains the link to the videos and any Metrics requested by the Lab. These Metrics will be used to determine which student receives the extra credit. Use the naming convention USF ID LabX Report.pdf.
Only PDFs will be accepted. If the student used ChatGPT to generate code they will also need to provide the conversation used to generate the code. This can either consist of screenshots of the conversation or a shareable link that the system provides. This should be a separate document and not part of the report. Each file should be uploaded to the same submission. DO NOT upload a zip file, Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off. TA may ask you additional questions to gauge your understanding via Canvas Message or MS Teams. Failure to reply may result in point deduction.
Objective: To implement an RTL Design of the Greatest Common Divisor (GCD) of numbers. Description: Design GCD architecture control/data paths for two 4-bit numbers (in your lecture notes, we have already done this). It will output the binary value of the greatest common divisor of those two 4-bit numbers. 1. Datapath The datapath top-level module can be constructed from the instantiated individual components. You can create these individual behavioral components (Register (DFF), not-equal, less-than, subtractor, MUX, etc.) for each element in the datapath. Some examples are given below; a minimal sketch of such components is also provided at the end of this handout. Registers are the only clocked components in the datapath (Example 1 below). Registers in the datapath should have enable signals coming from the FSM module (to load X, Y, etc. into the DFFs in your datapath), and separate reset signals. All non-register components must be purely combinational (Example 2 on the next page shows some, but not all, such components). Example 1, one way to implement your registers: Example 2, two of the components in the datapath, but you need more: 2. Control path with FSM The FSM top-level module, according to the current state, outputs signals that the datapath top-level module uses to perform arithmetic and to load or output from registers. The datapath top-level module sends signals back to the FSM to use. The controller is run by the FSM, and it could run on an opposite clock of the datapath (or you could run the clock for the datapath at a different speed, or use any trick to make sure the “load” inputs are picked up by the datapath registers correctly). Develop Moore Verilog code for the FSM. FSM templates were given in Lab 4; use them here too. In your FSM module, you can use parameters for the states; below is just one example you might want to use in your Verilog (the states and their names below match those of the GCD example): Points: (50 pts.) Write structural Verilog for the datapath and behavioral Verilog for the controller (FSM). Use the design we discussed in the class. The FSM Verilog code should strictly follow the FSM template. Your top-level entity (let us call it GCD_top) would instantiate the datapath and controlpath modules. Connect the inputs/outputs of these two modules correctly. The inputs/outputs of the FSM are mostly the outputs/inputs of the datapath, and vice versa. Deliverables (Only one .zip file per group) A zipped file (.zip) which includes these two items: (a) Your group design files (Verilog Models and test benches). (b) A concise PDF group report (that includes your Verilog code and simulation results) needs to be included in the submitted .zip file. Who submits? Only the group lead. Do not submit the same .zip file for each student. PDF Report Organization to be included in your ZIP submission (A template is provided on Canvas): □ Cover sheet □ Verilog Code, Test Bench, and Simulation Results (Waveforms) □ Feedback: Hours spent, Exercise difficulty (Easy, Medium, Hard) □ You need to submit your group report on Canvas in ZIP format (only one .zip file).
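Since the original example figures do not reproduce here, the following is a minimal sketch of the kinds of components Examples 1 and 2 describe. The module names, port names, and widths are illustrative assumptions, not the required interface; match yours to the datapath discussed in lecture.

// Example-1 style register: the only clocked element in the datapath,
// with a load enable driven by the FSM and a separate reset.
module dff_reg #(parameter W = 4) (
    input              clk,
    input              reset,
    input              load,     // enable signal from the FSM
    input      [W-1:0] d,
    output reg [W-1:0] q
);
    always @(posedge clk) begin
        if (reset)     q <= {W{1'b0}};
        else if (load) q <= d;
    end
endmodule

// Example-2 style components: purely combinational, no clock.
module notequal #(parameter W = 4) (
    input  [W-1:0] a, b,
    output         ne
);
    assign ne = (a != b);
endmodule

module subtractor #(parameter W = 4) (
    input  [W-1:0] a, b,
    output [W-1:0] diff
);
    assign diff = a - b;
endmodule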
Objective This lab will teach you how to apply a PID controller to navigate parallel to a wall and stop at a desired distance from an end wall. The lab will also teach you about Lidars to measure distances to walls. Requirements Programming: Python Robot: Webots 2023b (FAIRIS) Robot Sensors Utilized LIDAR FAIRIS Lite gives access to the LIDAR sensor mounted on the robot, which is an attribute of the MyRobot class and can be accessed via lidar_sensor. This sensor scans the surroundings of the robot, providing a full 360-degree range of measurements. Each LIDAR reading represents the distance from the robot to the nearest object at a specific angle, which helps detect obstacles and assist in navigation. To retrieve the current LIDAR readings, use the function robot.get_lidar_range_image(). This function returns a list of 800 distance measurements. The first measurement corresponds to the distance directly behind the robot, the 200th measurement is from the left side, the 400th measurement is from the front, and the 600th measurement is from the right. Refer to Figure 1 for further details. Figure 1. RosBot Lidar ranges in Webots The LIDAR sensor object is a Python object created and maintained by Webots. For more details, see the Webots Docs. The LIDAR can return an array of distances for each angle, which can be processed to map the environment, avoid obstacles, or assist in path planning. More detailed documentation on how to use the LIDAR within FAIRIS Lite can be found in the FAIRIS Lite Docs.
The PID controller will use the LIDAR sensor to control robot navigation. You will use the full PID controller to control motor velocities proportional to measured distances from walls, as shown in Figure 2 and Equations 1 to 4. Figure 2. PID control of the robot velocity proportional to its desired distance to the wall. The following equations summarize the PID controller:

(1) e(t) = r(t) − y(t)
(2) u(t) = Kp · e(t) + Ki · Σ(0 to t) e(t) · dt + Kd · (e(t) − e(t−1)) / dt
(3) ur(t) = fSat(u(t))
(4) fSat(u(t)) = cmax, if u(t) > cmax; u(t), if cmin ≤ u(t) ≤ cmax; cmin, if u(t) < cmin

Where: • r(t) = desired distance to the wall • y(t) = current distance to the wall • e(t) = distance error • Kp, Ki, Kd = gain constants • u(t) = motor control velocity • fSat(u(t)) = motor control saturated velocity function • ur(t) = motor control saturated velocity
Hint (5) u(t) = Kp · e(t) + Ki · Σ(0 to t) e(t) · dt + Kd · (e(t) − e(t−1)) / dt To convert this into an iterative (or recursive) form, we break it down as follows: • The proportional term is straightforward: P(t) = Kp · e(t) • The integral term accumulates the error over time. In iterative form, we can approximate the integral using a running sum: I(t) = I(t−1) + e(t) · dt So the integral part becomes: Ki · I(t) • The derivative term estimates the rate of change of the error. In an iterative approach, we can approximate the derivative using the difference between the current error and the previous error: D(t) = (e(t) − e(t−1)) / dt So the derivative part becomes: Kd · D(t) Therefore, the iterative or recursive update for u(t) is given by: (6) u(t) = Kp · e(t) + Ki · I(t) + Kd · D(t) Where: I(t) = I(t−1) + e(t) · dt and D(t) = (e(t) − e(t−1)) / dt. A short Python sketch of this update appears below.
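Below is a minimal sketch of that iterative update applied to the forward wall stop. The gains and saturation limits are placeholder assumptions to tune yourself, and distance_to_wall would come from the front LIDAR ray (index 400 of robot.get_lidar_range_image()).

KP, KI, KD = 1.0, 0.01, 0.1    # placeholder gains, not the "best" ones
DT = 0.032                     # Webots timestep duration (seconds)
TARGET = 1.0                   # desired distance to the end wall (meters)
C_MIN, C_MAX = -10.0, 10.0     # assumed saturation limits c_min, c_max

integral = 0.0
prev_error = 0.0

def pid_step(distance_to_wall):
    """One control step: returns the saturated forward velocity u_r(t)."""
    global integral, prev_error
    # Sign convention: the error is positive when the robot is too far
    # away, so positive u drives it forward; when it is closer than the
    # target, the error (and u) go negative and the robot backs up.
    error = distance_to_wall - TARGET
    integral += error * DT                    # I(t) = I(t-1) + e(t)*dt
    derivative = (error - prev_error) / DT    # D(t) = (e(t) - e(t-1)) / dt
    prev_error = error
    u = KP * error + KI * integral + KD * derivative
    return max(C_MIN, min(C_MAX, u))          # f_sat: clamp to [c_min, c_max]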
Task 1: PID forward wall stop The robot should use Kp, Ki, Kd PID “forward” gain constants for motion control, applying them only to the error values related to the forward motion. The control should be applied exclusively towards the robot’s front motion, stopping 1m away from the end wall, and should not influence side motions. The robot must avoid hitting walls. Note that you will need to use the LIDAR for this task. The robot should start 6.5m away from the end wall, as shown in Figure 3. For this task, you must load the maze in the maze2.xml file. This can be achieved by using the following line in your controller: maze_file = '../../worlds/Fall24/maze2.xml'. Additionally, assume that dt = 0.032, representing the duration of a single timestep in Webots. Test your solution by experimenting with the following six values for the gain constants: 0.001, 0.01, 0.5, 1.0, 5.0, 10.0. Apply these values separately to Kp, Ki, and Kd. Your goal is to find the optimal combination of gain constants (Kp, Ki, and Kd) that provides the best performance. These may differ from the previously suggested values. The robot should approach the wall in front, gradually slow down, and come to rest 1 meter away from the wall. Test the task starting at different distances from the end wall by clicking and dragging the robot. Ensure that if the robot is placed closer than 1 meter from the wall, it backs up. During the test, continuously display the distance measurements from the end wall using sensor readings. For the video submission, select the best gain constants and record the robot moving forward until it stops 1 meter from the wall. Additionally, demonstrate what happens if the robot is manually dragged to a position less than 1 meter from the wall. Figure 3. Task 1 world setup. This maze can be loaded in Webots by using the maze2.xml file. Note that in Webots, each colored square measures 1 × 1 m². Task 2: Wall following Perform robot wall following on the two mazes: maze3.xml and maze4.xml. The robot will need to follow either the left or right wall, depending on the specified mode. The task is as follows: • The robot should perform wall following by either sticking to the left or right wall. For example, in left wall following, the robot must maintain contact with the left wall, navigating by treating the left side as if it is always “touching” the wall. • When the robot encounters a wall directly in front of it, it should rotate 90 degrees and continue following the same wall (left or right) without switching sides. • If the wall the robot is following ends abruptly (such as at a corner), the robot should wrap around the sharp edge and continue following the same wall. • The robot should not switch between walls and should maintain consistent left or right wall following throughout the task. • The robot needs to complete the wall following for both mazes: maze3.xml and maze4.xml while adhering to the wall following strategy. • Some combinations of maze and wall-following modes may result in the robot being unable to reach the goal. For example, there may be cases where following the left wall prevents the robot from ever reaching the goal. If this happens, identify which maze and wall-following combination is impossible in your lab report and explain why the goal is unreachable. • Compute and have the robot navigate both mazes using both left and right wall following. The robot should predict its current pose (x, y, θ) using the encoders and IMU throughout the navigation for valid combinations. The robot must determine if it has reached the goal position G = (1.0, 1.0) by checking if its predicted position is within ±0.5m of the goal and come to a stop when this condition is met.
• Report the minimal travel time T from the start position S = (2.0, 2.0, π) to the goal in maze3.xml for each valid combination of wall following. (10 points will be awarded to the robot navigating at minimum travel time in the class. Requires corresponding analysis in the report.) Hint: Although not required, consider using an additional PID controller that utilizes the side distances to modify the robot’s angular velocity. This will help it maintain a consistent distance from the wall being followed. If done correctly, this will also allow the robot to curve around sharp turns smoothly without the need for hardcoded methods. (I) maze3.xml (II) maze4.xml Figure 4. These are the two mazes in which you will test your wall-following solutions.
Code Evaluation Task 1 (30%) Task 1: PID Forward Wall Stop (30 points total) Robot stabilizes at specified distance from end wall (15 points) • Full Marks (15-13 points): The robot reliably stabilizes exactly 1 meter from the end wall, coming to a steady stop. There is minimal to no oscillation, and the robot does not overshoot. • Partial Marks (12-8 points): The robot stabilizes near 1 meter from the end wall, but there may be slight overshooting or oscillation. It eventually corrects and stops within an acceptable range. • Minimal Marks (7-4 points): The robot struggles to maintain the correct distance from the end wall, frequently overshooting or stopping too early. Stabilization is inconsistent. • No Marks (3-0 points): The robot does not stabilize at the specified distance or fails to stop altogether. Robot utilizes PID to reduce speeds proportionally to the front wall (10 points) • Full Marks (10-9 points): The robot reduces its speed as it approaches the wall, using PID control to ensure a smooth deceleration. The motion is proportional to the remaining distance. • Partial Marks (8-6 points): The robot reduces its speed, but the deceleration is either too fast or too slow, with noticeable inconsistencies in PID control. • Minimal Marks (5-3 points): The robot reduces its speed, but there is little to no evidence of smooth PID control. The robot slows down too late or too quickly. • No Marks (2-0 points): The robot does not effectively utilize PID control to adjust its speed, or it does not decelerate proportionally. Robot restabilizes when placed further or closer to the wall (5 points) • Full Marks (5 points): The robot successfully restabilizes when manually moved closer or farther from the wall, adjusting smoothly back to 1 meter. • Partial Marks (4-3 points): The robot restabilizes when moved, but it may take too long or overshoot before returning to the correct distance. • Minimal Marks (2-1 points): The robot struggles to restabilize and only sometimes returns to the correct distance. • No Marks (0 points): The robot fails to restabilize after being moved. Robot hits walls (-5 points) Deduction: If the robot hits any walls during the task, 5 points will be deducted from the overall task score.
Code Evaluation Task 2 (60%) Task 2: Wall Following (60 points total) Wall Following on Maze 3 (30 points) • Full Marks (30-25 points): The robot follows the correct wall (left or right) in the maze with minimal deviation and successfully navigates through to the goal without hitting walls or switching sides. • Partial Marks (24-18 points): The robot follows the correct wall but deviates at certain points, requiring occasional corrections to stay on course. The robot completes the maze but may brush against walls.
• Minimal Marks (17-10 points): The robot follows the wall inconsistently, switching sides or struggling to maintain the wall-following behavior. The robot does not always reach the goal successfully. • No Marks (9-0 points): The robot fails to follow the wall or does not complete the maze. Wall Following on Maze 4 (30 points) • Full Marks (30-25 points): The robot follows the correct wall in the maze with minimal deviation and successfully navigates through to the goal without hitting walls or switching sides. • Partial Marks (24-18 points): The robot follows the correct wall but deviates at certain points, requiring occasional corrections to stay on course. The robot completes the maze but may brush against walls. • Minimal Marks (17-10 points): The robot follows the wall inconsistently, switching sides or struggling to maintain the wall-following behavior. The robot does not always reach the goal successfully. • No Marks (9-0 points): The robot fails to follow the wall or does not complete the maze. Additional Deductions for Task 2: • Robot average travel speed less than 0.1 meter per second except for corners (-5 points) • Robot constantly strays more than 0.4 meters closer to or further from any wall (-5 points) • Robot hits walls (-5 points)
Report & Video Evaluation (10%) A project submitted without a report, or without including the corresponding task video, will not be graded and will receive a “0” for the complete lab. Report Sections: The report should include the following (points will be deducted if anything is missing): • Mathematical Computations: Show how you calculated the speeds of the left and right servos given the input parameters for each task. • Conclusions: Analyze any issues encountered when running the tasks and discuss how these could be improved. Discuss your results, including any navigation errors. Conclusions need to reflect an insight into what was learned during the project. Statements like “everything worked as expected” or “I enjoyed the project” will not count as valid conclusions. • Video: Upload the video to Canvas showing the robot executing the task (details below). • Authorship Statement: Include and sign the following statement at the end of the report: “I developed all the code and written report with the following exceptions: [Student name: signature, date] [Code sections (specify lines): references and additional comments] If any external tools were used to generate code, you need to include the prompt and output.” Task 1 Video: Record a single video of the robot performing PID forward wall stop on the maze2.xml file. The video should show the best choice of gain constants and the forward distance being printed. Be sure to show the robot coming to a steady state once 1 meter away from the front wall and what would happen if placed too close to the wall. Task 2 Video: Record a single video showcasing the robot performing both left and right wall following on both maze3.xml and maze4.xml. The video should demonstrate how the robot navigates each maze, following the selected wall consistently, and the printouts of the robot pose estimation during navigation. Ensure that the video highlights successful completion of both wall-following strategies in both mazes, including any adjustments the robot makes when encountering walls or corners. For each combination, show the robot reaching the goal and stopping within the required ±0.5m accuracy.
If any combination fails to reach the goal, explain why in the video.
Lab Submission For Lab 2 there are two separate Canvas Assignments: Lab 2 Code Submission and Lab 2 Report Submission. Please follow the Submission policy outlined below. Note: Failure to adhere to the submission policy will result in a reduction of 20 points. Code Submission Policy Students will need to upload a Python file (i.e. Webots controller) for each Task. The Python file should follow this naming convention USF ID LabX TaskY.py, where X is the lab number and Y is the task number. Example rockybull5 Lab1 Task1.py. Additionally, if a student has created any functions that are not present in the controller but rather the MyRobot.py, the student will also need to upload that python file as well using the following naming convention USF ID MyRobot.py. If custom functions are used in a student’s submission and they are not in the controller or the MyRobot Python files, an automatic Zero will be assigned for the code submission. Each file should be uploaded to the same submission. DO NOT upload a zip file, Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off. Report Submission Policy Students will need to upload their videos to their USF OneDrive or USF Sharepoint and generate a shareable link. Please ensure that you make it viewable to everyone or at the very least the TA and the Professor (use their emails to add them to the access list). Students will need to upload a PDF file of their lab report. Be sure that the lab report contains the link to the videos and any Metrics requested by the Lab. These Metrics will be used to determine which student receives the extra credit. Use the naming convention USF ID LabX Report.pdf. Only PDFs will be accepted. If the student used ChatGPT to generate code they will also need to provide the conversation used to generate the code. This can either consist of screenshots of the conversation or a shareable link that the system provides. This should be a separate document and not part of the report. Each file should be uploaded to the same submission. DO NOT upload a zip file, Canvas allows multiple uploads per submission. Failure to meet these submission policies will result in 20 points off. TA may ask you additional questions to gauge your understanding via Canvas Message or MS Teams. Failure to reply may result in point deduction.
Objective: To learn and practice synthesizable Finite State Machine construction. Note: This lab does not need your FPGA boards. It is a simulation-only lab with ISE. No demonstration is needed to the TAs. Description: Design an FSM for use as a controller for a vending machine. The system has five (5) inputs: quarter, nickel, dime, soda, and diet. The quarter input will go high, then go low when a 25¢ coin is added to the machine. The dime and nickel inputs work in a similar manner for the 10¢ and 5¢ coins. The sodas (regular and diet) cost 45¢ each. The user presses the soda button to select a regular soda, and the diet button to select a diet soda. The GiveSoda output will pulse high, then low when a regular soda is released. Similarly, the GiveDiet output will pulse high, then low when a diet soda is released. There will be no change back. For example, if the user inserts two 25-cent coins (50 cents), five 10-cent coins (50 cents), or four 10-cent coins followed by a 25-cent coin (65 cents), a soda is released but no change is returned. When the user has inserted enough money to buy a soda, the user cannot insert more money. At that point, look at the soda/diet input, release the corresponding can, and go back to the start state. Use Moore or Mealy as appropriate. Your design must be a pure FSM based on the templates provided in the file FSM Templates.docx. This means that there should be no complex operations (addition, subtraction) in your design. It must function solely on states and transitions; a minimal illustrative skeleton is given at the end of this handout. (50 pts.) Design the vending machine FSM and verify its functionality using simulation. Deliverables (Only one .zip file per group) A zipped file (.zip) which includes these two items: (a) Your group design files (Verilog Models and test benches). (b) A concise PDF group report (that includes your Verilog code and simulation results) needs to be included in the submitted .zip file. Who submits? Only the team leader identified on Canvas needs to submit. TAs will mark all group members based on that equally. Do not submit the same .zip file for each student. PDF Report Organization to be included in your ZIP submission (A template is provided on Canvas): □ Cover sheet and the state diagram and a brief description □ Verilog Code, Test Bench, and Simulation Results (Waveforms) □ Feedback: Hours spent, Exercise difficulty (Easy, Medium, Hard) □ You need to submit your group report on Canvas in ZIP format (only one .zip file).
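For orientation only, here is a minimal Moore-style skeleton of such a machine. It is a sketch under stated assumptions (one state per accumulated amount, with PAID covering 45¢ or more since no change is given), not the required template from FSM Templates.docx; adapt the state register, next-state, and output blocks to that template.

// Pure FSM: every coin selects a transition; no adders or subtractors.
module vending_fsm (
    input  clk, reset,
    input  quarter, dime, nickel, soda, diet,
    output GiveSoda, GiveDiet
);
    parameter S0=0, S5=1, S10=2, S15=3, S20=4, S25=5,
              S30=6, S35=7, S40=8, PAID=9, VSODA=10, VDIET=11;
    reg [3:0] state, next;

    // State register.
    always @(posedge clk)
        if (reset) state <= S0;
        else       state <= next;

    // Next-state logic: the transitions encode the coin amounts.
    always @(*) begin
        next = state;
        case (state)
            S0  : if (quarter) next = S25;  else if (dime) next = S10;  else if (nickel) next = S5;
            S5  : if (quarter) next = S30;  else if (dime) next = S15;  else if (nickel) next = S10;
            S10 : if (quarter) next = S35;  else if (dime) next = S20;  else if (nickel) next = S15;
            S15 : if (quarter) next = S40;  else if (dime) next = S25;  else if (nickel) next = S20;
            S20 : if (quarter) next = PAID; else if (dime) next = S30;  else if (nickel) next = S25;
            S25 : if (quarter) next = PAID; else if (dime) next = S35;  else if (nickel) next = S30;
            S30 : if (quarter) next = PAID; else if (dime) next = S40;  else if (nickel) next = S35;
            S35 : if (quarter) next = PAID; else if (dime) next = PAID; else if (nickel) next = S40;
            S40 : if (quarter | dime | nickel) next = PAID;
            PAID: if (soda) next = VSODA; else if (diet) next = VDIET;
            VSODA, VDIET: next = S0;   // one-cycle release pulse, then restart
        endcase
    end

    // Moore outputs: a function of the current state only.
    assign GiveSoda = (state == VSODA);
    assign GiveDiet = (state == VDIET);
endmodule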
Objective: To learn and practice advanced sequential hardware design in Verilog and synthesizable Verilog based circuit implementation on an FPGA board. Note: Before you read this, you need to perform the Tutorial for Lab 3 (one PDF description which works on one included Verilog and one user constraints file (UCF)). Figure 1: Counter Top-Level Block Diagram Description: Design a programmable counter that outputs BCD and has a range of 0-99. The counter must count from zero to a user-input value. The valid input range is 0-99. The output from your design should be in binary-coded decimal. You will need to design a binary counter that counts at least to 99 (decimal), and a module which converts the binary output to two-digit BCD. A single-bit input, called run, determines the operation of the counter. When run is ‘0’, the counter output is set to zero and does not count. The user can set the maximum value up to 99 when run is ‘0’, using an input called max_count. When run is set to ‘1’, the counter starts counting from zero, and stops when it reaches the maximum. Any changes to max_count are ignored when run is set to ‘1’. Your design will have two 4-bit outputs; digit 1 and digit 2. Each is used to output one BCD digit. Digit 2 is the tens digit, digit 1 is the ones digit.

Table 1: Counter port interface
Port       Width  I/O  Function
max_count  7      I    Set the maximum counter value
run        1      I    Set run mode; 0 = set, 1 = run
digit_2    4      O    Tens digit BCD output
digit_1    4      O    Ones digit BCD output

Table 2: Board mapping
Port       Width  I/O  Component Mapping
Run        1      I    Sliding Switch SW7
Max_Count  7      I    Sliding Switches SW6 (MSB) – SW0 (LSB)
CLK        1      I    USER_CLK
Digit_1    4      O    LEDs LED3 (MSB) – LED0 (LSB)
Digit_2    4      O    LEDs LED7 (MSB) – LED4 (LSB)

The Anvyl board includes a single 100MHz Crystal oscillator connected to pin D11. In this lab, we use 8 out of 14 LEDs (LED[0]-LED[7]) and all eight slide switches (SW[0]-SW[7]). Use SW[7] (pin P8) for run and the other seven switches for max_count. The master UCF file and that of the tutorial make writing your UCF easy. This is a picture for the I/O you use in this lab. Included Verilog files: You can modify these as you wish but some of them are already complete if you connect/use them as intended. 1. Binary_bcd.v (complete) This is for binary to BCD conversion of your count_out, e.g., binary 0010011 (19 decimal) to its two-digit BCD form 0001 1001. It uses an algorithm for conversion. No change needed as this file is already written for you. 2. Counter.v (complete) Source template for a simple 7-bit binary counter with enable and synchronous reset. No change needed as this file is already written for you. 3. Prog_counter.v (you need to complete) Wrapper to add programmability to a 7-bit counter. This module contains the logic to stop a counter when it reaches a designated value. The maximum value is 99 (decimal). A minimal sketch of such a wrapper is given below. 4. Final_bcd_counter.v (you need to complete) This is the top module for the programmable BCD counter. It instantiates your programmable 7-bit counter and your binary-to-bcd converter that can output two digits. 5. Lab_board.v (complete) Connections to test the design. Example waveform and testbench are also added in a folder for assisting you.
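As a reference for item 3, the sketch below shows one way such a wrapper can stop the counter at the programmed value. The port names of the instantiated counter (clk, reset, enable, count) are assumptions; match them to the provided Counter.v.

// Illustrative Prog_counter sketch (assumed interfaces).
module prog_counter (
    input        clk,
    input        run,          // 0 = set max_count, 1 = count
    input  [6:0] max_count,
    output [6:0] count_out
);
    reg  [6:0] max_reg;
    wire       at_max = (count_out == max_reg);

    // Latch the maximum only while run is low; changes are ignored
    // while the counter is running.
    always @(posedge clk)
        if (!run) max_reg <= max_count;

    // Provided 7-bit counter: held in reset while run=0, and its
    // enable is deasserted once the maximum value is reached.
    Counter u_counter (
        .clk    (clk),
        .reset  (!run),
        .enable (run && !at_max),
        .count  (count_out)
    );
endmodule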
1. (30 pts.) Create a Verilog design (See Figure 1) which will provide the required functionality. 2. (20 pts.) Exhaustively test the design to ensure that it works under all conditions. 3. (20 pts., recorded video demonstration mark) Synthesize and map the design on the ANVYL board using Table 2. Record a video of your board counter results, post it somewhere (YouTube, etc.), and add its link to your report so that the TAs can view and mark it. Deliverables (Only one .zip file per group) A zipped file (.zip) which includes these two items: (a) Your group design files (Verilog Models and test benches). (b) A concise PDF group report (that includes your results as well as the link to the video demonstration) needs to be included in the submitted .zip file. Who submits? Only the team leader identified on Canvas needs to submit. TAs will mark all group members based on that equally. Do not submit the same .zip file for each student. PDF Report Organization to be included in your ZIP submission (A template is provided on Canvas): □ Cover sheet □ Video link for group demonstration of results (Anvyl board results) □ Behavioral Verilog Code, Test Bench, and Simulation Results (Waveforms) □ Pin Mapping file (UCF file that you used) □ Feedback: Hours spent, Exercise difficulty (Easy, Medium, Hard) Important: □ Do not discard your designs. They may be used as a starting point for subsequent Labs. □ You need to submit your group report on Canvas in ZIP format (only one .zip file which includes all the above submitted by the team lead).
Objectives: To learn and practice Verilog based circuit modeling and validation by simulation. Problem: Implement an ALU (Arithmetic Logic Unit) satisfying the following functional requirements. You should complete tutorial 2 (Tutorial_Lab2.pdf) before starting this lab.

Figure 1: ALU Port Interface and Function Table. The ALU takes two 4-bit inputs A and B and a 2-bit select S, and produces a 4-bit output Y:

Function  S1 S0  Y
Invert    0  0   ~A
Add       0  1   A + B
Subtract  1  0   A – B
Double    1  1   2*A

1. (10 pts.) Problem 1, Behavioral-Only Verilog: Design an ALU in behavioral Verilog with port interface and functionality as shown in Figure 1. Test the ALU functionality by simulation. Include at least two vectors per function. Hint for Problem 1: Construct a module with an always block with an appropriate sensitivity list. Depending on your select, you would do these cases “~A”, “A+B”, “A-B”, or “2*A” for 4-bit inputs. 2. (20 pts.) Problem 2, Behavioral Components and Structural Top Level ALU: Design the ALU in structural Verilog. Only use the following components: 4-bit NOT, 4-bit Full Adder, and 4-bit input/output 2-to-1 MUX. First you need to develop a behavioral Verilog module for each of the above three components (3 behavioral modules). Then, you can instantiate these three components to build the ALU. Test the ALU functionality by simulation. Include at least two vectors per function. You may re-use the test bench you have created in Lab 1. Hint for Problem 2: Construct three behavioral modules with always blocks with appropriate sensitivity lists. Extend the NOT and 2-to-1 MUX behavioral codes which have already been discussed in the tutorial to 4-bit blocks. In your final ALU code, you need to instantiate your four-bit 2-to-1 MUX three times for your four-bit 4-to-1 MUX (make sure your selects are correct). You can also use the below example of an always block for a 4-bit adder as part of your design (it has to be designed so that you instantiate it in your structural ALU for addition and subtraction depending on your inputs/carry_in):

always @ (a or b or c_in) begin
    {c_out, add_out} = a + b + c_in;
end
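For Problem 1, a minimal behavioral sketch is shown below. The module and port names are illustrative assumptions; adapt them to your own naming and re-use your Lab 1 test vectors against it.

// Behavioral ALU sketch (assumed names, 4-bit datapath, Figure 1 encoding).
module alu_behavioral (
    input      [3:0] A, B,
    input      [1:0] S,
    output reg [3:0] Y
);
    always @(*) begin
        case (S)
            2'b00: Y = ~A;       // Invert
            2'b01: Y = A + B;    // Add
            2'b10: Y = A - B;    // Subtract (two's complement)
            2'b11: Y = A << 1;   // Double (2*A), end carry discarded
        endcase
    end
endmodule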
Deliverables: A concise PDF report that includes your Verilog codes, testbenches, and simulation results. Report Organization (A template is provided on Canvas): □ Cover sheet □ Problem 1: Behavioral Verilog Code, Test Bench, and Simulation Results (Waveforms) □ Problem 2: Behavioral Verilog for three Components, ALU Structural Code, Test Bench, and Simulation Results (Waveforms) Important: □ Do not discard your designs. They may be used as a starting point for subsequent labs. □ You need to submit your report on Canvas in PDF format. No hardcopy is needed.
Welcome to CDA 4205L Lab #7! The goal of this lab is to help you understand the inner workings of the Arithmetic Logic Unit (ALU) and Control Unit (CU) modules with a basic implementation in Verilog. Prelab Arithmetic Logic Unit (ALU) As the name implies, the ALU is responsible for the majority of the arithmetic and logic operations in a processor. This can include basic binary operations, such as: addition, subtraction, shifting, AND/OR/XOR, comparators, etc. In RISC-V, the ALU normally operates on 32-bit or 64-bit data. For example: a 64-bit ALU takes in two 64-bit inputs and produces a 64-bit output. Depending on the microarchitecture, other arithmetic operations (such as multiply and divide, which can take multiple cycles) can also be part of the ALU, or can be in a separate functional unit. This lab will use a similar version of the RV32IM standard (RISC-V 32-bit integer ISA with multiply and divide). For the basic ALU, you can think of it as a case statement (similar to a switch in C/C++), or a large if/else. The output of the ALU will depend on the operation control signals. Recall that the operation control signals are all in some way derived from the instruction itself. In RISC-V this comes from the opcode, funct3, and funct7 fields. Control Unit (CU) The CU is responsible for decoding the opcode field and setting the respective CPU control lines to 0 or 1. These control lines, as the name implies, control the behavior of the CPU. For this lab, there is a total of 10 bits of control lines: – ALUSrc (1-bit): it indicates where the 2nd ALU operand comes from: (0) Reg2 (rs2) or (1) the sign-extended immediate. – MemToReg (1-bit): it indicates which data to use to write (if enabled) the destination register (rd) with: (0) the ALU result or (1) the data memory – RegWrite (1-bit): it enables the write operation for the destination (rd) – MemRead (1-bit): it enables the read operation for the data memory for the respective address (= ALU Result) – MemWrite (1-bit): it enables the write operation for the data memory for the respective address (= ALU Result) with the value from Reg2 (rs2). – Branch (1-bit): if the branch condition is met, then set PC=PC+Imm – Jump (1-bit): if this instruction is “jal” or “jalr”, then write PC+4 to destination register – Jalr (1-bit): if this instruction is “jalr”, thus sets PC=Reg1+Imm – ALUOp (2-bit): used by the ALU Control module to assist in determining the correct ALU operation control signal: o “00” is reserved to indicate load, store, jalr, and jal instructions. o “01” is reserved to indicate branches and comparison instructions. o “10” is reserved to indicate arithmetic, shift, and bitwise boolean logic instructions. o “11” is unused for the “RV32IM” ISA standard. Lab Setup 1. Go to https://edaplayground.com/. Since you’re just doing ALU and CU designs, and not performing synthesis/implementation (to gather power-performance-area or PPA results), we will not be using Vivado for this lab. 2. From the left panel, select “Icarus Verilog 0.9.7” as the simulator from the “Tools & Simulators” menu. 3. On the right-hand side, you will create the Verilog module representing the ALU (part 1) or CU (part 2) design. On the left, you will create the testbench for each part. Note: you will have to create 2 different tabs: one for your Part 1 design and another for Part 2. Do NOT combine the design and testbenches. For each tab you will have to follow these same setup steps (1-3). Part 1: ALU Design 4. For part 1 you will be designing an ALU.
First, create a tab following the setup steps (1-3). Make sure to write the name of your project as “alu”, set to “Private”, and then save. If it asks you to register or sign-in, use your USF email but with a different password. 5. Refer to the ALU operations list below (obtained from your assembler labs) for part 1:

Category               Operation  Name  Meaning
Arithmetic             0010       add   A + B
                       0110       sub   A – B
                       0111       mul   A * B
                       1111       div   A / B
Shift                  0100       sll   A << B
                       1110       sra   A >>> B
Bitwise Boolean Logic  0000       and   A & B
                       0001       or    A | B
                       0011       xor   A ^ B
Comparison             1000       eq    A == B
                       1001       neq   A != B
                       1010       lt    A < B
                       1011       geq   A >= B

6. Your module definition should take a 4-bit operation input, two 32-bit operand inputs (A and B), and one 32-bit ALUResult output register (reg). Set all these values as signed. Note: Your module should also have a `timescale directive to allow for proper simulation. Add `timescale 1ns/1ns to the top of your module. (This means that the simulator runs in 1ns increments, and the smallest step is 1ns). 7. You should use the Verilog case statement to structure your ALU logic. The case statement should be inside an always block. 8. Fill in the case statement with all possible ALU operations. For example, the syntax for the first operation (addition) would be: 4’b0010: Also, the default case should set ALUResult = 0. See example below for “add” and “default” cases: Once you have designed the ALU, you will need to create a testbench. In EDA Playground, the testbench is written on the left-hand window. 9. Your testbench should be called module alu_testbench(); (a testbench does not have any inputs or outputs). For every input to your ALU, declare a corresponding reg. For every output in your ALU, declare a corresponding wire. See example below: 10. Instantiate the circuit you are testing (unit under test). The syntax is: module_name instance_name(port_list); Note: the “instance name” can be anything but not the same as the module’s name. Also, the port list order matters; it must match the respective module’s ports. 11. Exercise the inputs to your circuit using an initial block. The block should include a $monitor directive at the start – this will automatically display the specified values whenever they change. 12. Next, set your initial values for A and B – this can be anything you like. Simply assign a value to A and B. Below is an example initialization (select any other random number) 13. Change the operation by setting operation equal to different codes. Be sure to include a #delay at the start of the statement. For example, #5 operation = 4’b0010; will delay by 5 time units (as defined by the `timescale directive) and then set operation equal to 4’b0010. 14. Repeat step 13 to exercise all possible ALU operations (see reference table in step 5). 15. Save and press Run. 16. Next, change your A and B to test signed arithmetic. Once again, test these values for all possible operations. 17. Once you confirm it is working, select “Download files after run” under “Tools & Simulation”. Then, rerun your code. It should automatically generate a zip file with both the ALU and testbench. Rename this zipfile to “alu.zip”. T1: Take a screenshot of your output when A and B are both positive values. Confirm that the results are correct. T2: Take a screenshot of your output when A is positive and B is negative. Confirm that the results are correct. T3: Remove the signed modifier from A in the ALU module and rerun using the same A and B values as in T2. Take a screenshot of your output and explain what happens.
T1: Take a screenshot of your output when A and B are both positive values. Confirm that the results are correct.
T2: Take a screenshot of your output when A is positive and B is negative. Confirm that the results are correct.
T3: Remove the signed modifier from A in the ALU module and rerun using the same A and B values as in T2. Take a screenshot of your output and explain what happens.
T4: Remove the signed modifier from B in the ALU module and rerun using the same A and B values as in T2. Take a screenshot of your output and explain what happens.

Part 2: CU Design
18. For Part 2 you will be designing a CU, following similar steps as Part 1. First, create a new tab following the setup steps (1-3). Make sure to name your project “cu”, set it to “Private”, and then save. If it asks you to register or sign in, use your USF email but with a different password.
19. Refer to the CU control lines list below (obtained from your assembler labs) for Part 2:

Format  Opcode   ALUSrc  MemtoReg  RegWrite  MemRead  MemWrite  Branch  Jump  Jalr  ALUOp
R-type  0110011  0       0         1         0        0         0       0     0     10
Load    0000011  1       1         1         1        0         0       0     0     00
Jalr    1100111  1       0         1         0        0         0       1     1     00
I-type  0010011  1       0         1         0        0         0       0     0     10
S-type  0100011  1       0         0         0        1         0       0     0     00
B-type  1100011  0       0         0         0        0         1       0     0     01
J-type  1101111  0       0         1         0        0         0       1     0     00

20. Your “cu” module definition should take a 7-bit opcode input, eight 1-bit output regs (ALUSrc, MemtoReg, RegWrite, MemRead, MemWrite, Branch, Jump, Jalr), and one 2-bit ALUOp output reg. All of these are unsigned values. Note: there is no “unsigned” keyword in Verilog; by not writing signed, the values are unsigned by default.
21. Take similar steps (6-8) as Part 1 to make your CU design. Name this module “cu” (all lowercase) and set the correct dimensions/data types of your inputs/outputs, as before. Note: begin and end behave similarly to curly braces {} in C/C++. Since the cases in your CU design have multiple lines, you will need to wrap each case in a begin/end block (see the sketch at the end of this lab).
22. Then, take similar steps (9-15) to create a testbench for your “cu” module. Name this testbench “cu_testbench”. Make sure to test all possible “opcode” values (see the table in step 19).
23. Once you confirm it is working, select “Download files after run” under “Tools & Simulation”. Then, rerun your code. It should automatically generate a zip file with both the CU and testbench. Rename this zip file to “cu.zip”.

Questions
24. Answer the following questions:
T5: Take a screenshot of your output with all “opcodes”. Confirm that the results are correct.
T6: The #delay specified in step 13 was 5 “time units”. What is the real value of these 5 “time units” in the simulation?
T7: What is the purpose of a testbench? Why is one needed for each module (ALU and CU)?
T8: Can we obtain power, performance, and area (PPA) results with the testbench? Why?

In-class work – Complete T1, T2, T3, and T4, show them to the TA for review, and submit them on Canvas before the lab session ends. Submit your in-class work in Word document (.docx) format.
Final Report – Submit your report PDF with answers to all questions and screenshots. Also, submit the final compressed “alu.zip” file (with the ALU module and respective testbench files) and “cu.zip” file (with the CU module and respective testbench files). Thus, there is a total of 3 files in your .zip file (one PDF and two zip files inside). One submission per group.
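As referenced in step 21, here is a minimal sketch of the CU decoder showing one opcode case wrapped in begin/end. The R-type values come from the table in step 19; the remaining opcodes follow the same pattern, and the default branch is our own defensive choice, not part of the step 19 table.

    // CU skeleton (steps 20-21); add one begin/end case per opcode row.
    module cu(
      input [6:0] opcode,
      output reg ALUSrc, MemtoReg, RegWrite, MemRead,
      output reg MemWrite, Branch, Jump, Jalr,
      output reg [1:0] ALUOp
    );
      always @(*) begin
        case (opcode)
          7'b0110011: begin                     // R-type (step 19 table)
            ALUSrc = 0; MemtoReg = 0; RegWrite = 1; MemRead = 0;
            MemWrite = 0; Branch = 0; Jump = 0; Jalr = 0;
            ALUOp = 2'b10;
          end
          // ... one begin/end case per remaining opcode in the step 19 table ...
          default: begin                        // assumed fall-through: all lines off
            ALUSrc = 0; MemtoReg = 0; RegWrite = 0; MemRead = 0;
            MemWrite = 0; Branch = 0; Jump = 0; Jalr = 0;
            ALUOp = 2'b00;
          end
        endcase
      end
    endmodule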
Objectives: To learn and practice schematic entry and simulation.

Problem: Implement an ALU (Arithmetic Logic Unit) satisfying the following functional requirements. You should complete the schematic capture tutorial (Tutorial_Lab1.pdf) before you start on this lab.

Figure 1: ALU Port Interface and Function Table. The ALU block has two 4-bit inputs (A and B), a 2-bit function select (S), and a 4-bit output (Y).

Function          S1  S0  Y
Invert all bits   0   0   A' (bitwise NOT of A)
Add               0   1   A + B
Subtract          1   0   A - B
Double            1   1   2*A

Implement your design in the two’s complement signed system and discard the end-around carry. Do not use the cases that lead to overflow, i.e., adding two positive or two negative numbers (sometimes), or subtracting a positive from a negative number and vice versa (sometimes).
– Invert example: 1101 will be 0010
– Add example: 0110 (which is +6) + 0001 (which is +1) = 0111 (+7)
– Add example: 0111 (which is +7) + 0001 (which is +1) gives 1000 (-8), overflow
– Add example: 1111 (-1) + 0111 (+7) = 0110 (+6)
– Subtract example: 1111 (-1) – 1110 (-2) = 0001 (+1)
– Subtract example: 0001 (+1) – 1000 (-8) gives 1001, overflow

1. (10 pts) Design and create a schematic of the ALU using the ISE Schematic Entry tool.
2. (10 pts) Create a test bench and use it to test the ALU functionality. Include at least two test vectors per function for the 4 functions in the table above.

Deliverables: A concise report that includes your design and simulation results.

Report Organization (a template is provided on Canvas – Lab 1 Report Template.docx):
□ Cover sheet
□ ALU schematic: you need to show the top level and also the inside of all your blocks, including the full adder, MUX, etc. Show screenshots of all the blocks and the details inside.
□ Brief description of your design
□ Simulation waveforms
□ Feedback: hours spent, exercise difficulty (Easy, Medium, Hard)

Important:
□ Do not discard your design. It will be used as the starting point for subsequent lab exercise(s).
□ You need to submit your report on Canvas in PDF format. No hardcopy is needed.
Welcome to CDA 4205L Lab #10! This is the third portion of the ISA lab that injects in the middle of Lab #4. The goal of this lab is for you to understand how to recognize and remove data hazards from the assembly code created in part 1. This step will rearrange the sequencing of the assembly code such that the computation and output remain the same, but data hazards are removed.

Prelab

Pipeline Hazards
In a pipelined datapath, a “hazard” is a condition that prevents the normal execution of the code. Hazards include structural, data, and control hazards. Structural hazards refer to conditions where the same hardware unit is needed by more than one instruction at the same time. For example, if there is one ALU, and two instructions in the pipeline both need that ALU at the same time, there would be a structural hazard, and one instruction would have to wait. Data hazards, also called “read before write” hazards, refer to situations where one instruction attempts to use a value in a source register that has not yet been written back. The value may have already been computed or loaded, but registers are not updated until the final stage of the pipeline. Finally, control hazards are conditions where there may be a change in control flow in the code, e.g., a branch (if/else). Branches are not evaluated until the second stage of the pipeline, and hence, the CPU may load the wrong next instruction.

The RISC-V pipeline discussed in class does not have structural hazards. However, data and control hazards are common and must be addressed. In hardware, the CPU will detect hazards and try to mitigate them (e.g., by using forwarding or branch prediction). In some cases, forwarding can “solve” a hazard by allowing values computed in the EX stage to “bypass” the register (at least, temporarily) and be passed along directly to the instruction that requires them. Sometimes, however, a stall – waiting for data to be ready – is inevitable. A good compiler can also detect potential hazards and try to reduce the number of stalls needed for the correct execution of the code. This can be achieved by reordering some of the instructions. For example, if a dependency exists between instruction A and instruction B that would otherwise require a stall cycle, a completely unrelated instruction could be inserted between them so no time is wasted.

Stall Cycles
This lab will focus on three scenarios for data hazards that, even with forwarding, would require stall cycles. These situations are as follows:
1. A hazard in which a load (e.g., lw, ld) instruction immediately precedes an arithmetic instruction that uses the loaded value. Loads complete in stage 4 of the pipeline (MEM), and arithmetic instructions require their inputs to the ALU at the beginning of stage 3 (EX). This “dependency backwards in time” means that even if we forward the output of the MEM stage to the beginning of the EX stage, the arithmetic instruction still must wait for one cycle.
2. A hazard in which an arithmetic instruction is immediately followed by a branch instruction. Note that the branch instruction itself represents a control hazard, but if the comparison (e.g., beq, bne, etc.) source operand is read from the destination of the previous arithmetic instruction, it requires one stall cycle.
3. A hazard in which a load instruction is immediately followed by a branch instruction. The load occurs in stage 4, but the branch decision is made in stage 2. Hence, this requires two stall cycles.
If possible, other unrelated instructions can be inserted in the required stall cycles such that no time is wasted.
Note that this should not impact the functionality of the code.

Lab
1. Open the following link to access the Python notebook in Google Colab.
2. Download ‘example.asm’ from Canvas (Files > Labs > Lab 10) and upload it in the Files pane.
3. Complete Tasks 1-6, as described in the Python notebook. Note: the last page contains a flow chart displaying a high-level overview of the algorithm of the tasks to be implemented in this lab.

In-class work – Complete T1 and T2, show them to the TA for review, and submit them on Canvas before the lab session ends. Submit your in-class work in Word document (.docx) format.
Final Report – Submit your report .pdf with answers to all questions and screenshots. Also, submit the final ‘.ipynb’ file. Submit just one .zip file per group.

T1-6: Take screenshots of outputs from Tasks 1-6 and include those in your report.
T7: What are pipeline hazards? Give one example and provide a sample of assembly code that would cause such a hazard.
T8: How can CPU hardware deal with hazards? How can a compiler help?

[Flow chart summary: splitAssemblyIntoSubsets() splits the instructions into subsets delimited by (but including) branch, jump, and label instructions (T1). Each subset is scanned bottom-up for data dependencies between an instruction and its preceding instruction (T2). When a dependency is found, find_above_instruction_without_dependencies() searches bottom-up, and then find_below_instruction_without_dependencies() searches top-down, for an instruction without data dependencies to swap in between the dependent pair (T3). If a swappable instruction is found, move_instruction_above_index() moves it (T4). reorder_instructions(subset) applies this to each subset (T5), and once all subsets are processed, the reordered instructions are returned (T6).]
Welcome to CDA 4205L Lab #13! The goal of this lab is to give you an example of how understanding topics in computer architecture – in this case, ISAs and memory organization – can expose software-level security issues/vulnerabilities. In particular, we will focus on a buffer overflow attack in RISC-V.

Prelab

Stack, Buffers, and Buffer Overflow
Earlier this semester, we discussed how a program uses different regions of memory for storing program data. In particular, we covered the use of the stack and the heap. When writing low-level programs, e.g., using C or directly using assembly, management of memory is generally left to the programmer. You’ve already seen how, when writing a procedure in RISC-V assembly, you are responsible for moving the stack pointer to allocate space, preserving saved registers (s0, s1, etc.), and popping the stack before leaving the procedure code. You’ve also seen how space can be allocated for static data (in the .data segment of the code).

Now consider this scenario. In the .data segment, your program allocates 1) space for a 4-word array, followed by 2) a word representing a constant (integer) in your code. This is the structure in memory:

Variable Name:  A                                             B
Value:          0x00000000 0x00000000 0x00000000 0x00000000   0x00000000

Another word for (1) is a buffer: a contiguous block of computer memory that holds multiple instances of the same data type. When writing to a buffer, it is important to know the size of the buffer. If we write the string “ComputerArchLab” to the start address of A and 4205₁₀ to B, we get the following (note that the bytes are stored little-endian, so “Comp” appears as 0x706D6F43):

Variable Name:  A                                             B
Value:          0x706D6F43 0x72657475 0x68637241 0x0062614C   0x0000106D

In this example, if there are no protections in place, writing the five words of “ComputerArchitecture” beginning at the start address of A would result in overwriting the adjacent memory location (B), resulting in the following:

Variable Name:  A                                             B
Value:          0x706D6F43 0x72657475 0x68637241 0x63657469   0x65727574

While the string itself was written correctly, you have overwritten variable B (which was supposed to be an integer). Your code will still treat B as an integer. Instead of 4205₁₀, it is now 1701999988₁₀ (0x65727574), which is certainly not the course number.

Buffer Overflow Attack
This was a fairly benign example, but the possibility of overwriting data on the stack is a very serious security issue. Consider the following, less benign example.

Variable Name:  InputBuffer              Password
Value:          0x00000000 0x00000000    0x34616463 0x00353032

Here, there is an 8-byte input buffer, followed by an 8-byte password buffer. The program will compare the value of InputBuffer with Password and, if they match, will allow access to the “secure” system. If, however, you manage to overwrite the two words beginning at Password with your own data, you can ensure that any text you input will match; e.g., for an input of “aaaabbbbaaaabbbb”, the memory contents would look like this:

Variable Name:  InputBuffer              Password
Value:          0x61616161 0x62626262    0x61616161 0x62626262

Therefore, whether your code compares byte-by-byte or word-by-word, the InputBuffer would match the Password, bypassing the security measure. Modern compilers will use different techniques to prevent this, and modern programming languages will also use safer routines for taking user input (specifically, ones that truncate the input to match the specified buffer size). Still, it is important to be aware of the potential vulnerability and ensure you use safe coding practices if working with low-level languages.

Lab
1.
Download the buffer_overflow.asm file from Canvas. This RISC-V code simulates a scenario where a buffer overflow is possible, as described in the prelab.
2. Compile and run the code. A dialog box will open asking you to enter a password. The correct password is “cda4205L”. Entering this should grant you access to the system.
3. Re-run and enter an incorrect password, e.g., “cda3201”. This should deny access, as the password is incorrect.
4. Observe the data memory view. You can check the ASCII box to show the actual characters stored. You should be able to see the text you entered as well as the correct password.
5. Run the code again, but this time, enter some repeating characters, e.g., “aaaaaa…..a”. Observe the data memory view to see where the entered characters were stored.
6. Re-run/repeat this process until you have successfully gained access to the system.

Final Report – Zip your report PDF with answers to all questions and screenshots. Your group’s .zip file contains only the PDF report file.

T1: How many characters do you need to enter to reach the region of memory where the password is stored? How many total characters do you need to enter to overwrite the password?
T2: Take a screenshot of the console output showing how you gained access to the system with an incorrect password.
T3: How can the code be modified to prevent your attack? Describe what changes you would make to the code to prevent unauthorized access.
Welcome to CDA 4205L Lab #5! The goal of this lab is to help you understand the trade-offs between different implementations of adders and how hardware description languages like Verilog can be used to design complex circuits.

Prelab
In Labs 3 and 4, you implemented an assembler, which took instructions defined by an ISA and converted them to a machine code representation that works for a given microarchitecture. In future labs, we will look at the microarchitecture level. In this lab, however, we will turn our attention to the Register-Transfer Level (RTL). Recall that RTL defines how certain components, or functional units, within a microarchitecture are implemented. For example, if an ISA defines an addition instruction, the microarchitecture must support addition in some way. There are many ways to implement addition, and in this lab, we will look at two of them, while exploring Hardware Description Languages (HDLs).

Verilog and VHDL are two examples of HDLs. These languages are similar to software, but instead of writing instructions for a processor, you are defining how a circuit behaves (RTL), or how gates within a circuit are connected (one level of abstraction below RTL). A synthesis tool can take a high-level description of a circuit (RTL) and generate a netlist, a set of gates and wires that implements the design you described in RTL. The synthesis tool will do this in whatever way it determines to be most efficient, within some constraints. If you want greater control over exactly how the circuit is defined, you need to write a lower-level description and define the structure of the circuit or the gates themselves.

In this lab, you will explore two possible implementations of adders: Ripple-Carry Adders and Carry Look-Ahead Adders. These both have pros and cons. You will need to implement both, according to the schematics shown in the provided empty modules, and use the synthesis tool to report the power, performance, and area for each.

Lab Setup
1. Use the Vivado software pre-installed on the lab computers.
2. Download the ‘Lab5_arithmetic_design.zip’ file from Canvas (Labs > Lab5) and open it in Vivado.
3. An example Vivado tutorial: https://digilent.com/reference/programmable-logic/guides/getting-started-with-vivado

Part 1: Ripple-Carry Adder
4. In “fulladder.v”, design a 1-bit full adder module as in the provided schematic. Note: you are NOT allowed to use Verilog’s built-in addition operator (+) for your adder modules.
5. In “ripple_carry_adder.v”, design a 4-bit ripple-carry adder module as in the provided schematic. Instantiate your “fulladder” module from step 4 to implement your ripple-carry adder (a sketch of both modules follows this step).
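Since the + operator is off-limits, the full adder must be built from gates and the 4-bit adder from chained full adders. Below is a minimal sketch of that structure; the port names are our assumptions, and the provided empty modules and schematics define the actual interfaces.

    // 1-bit full adder built from gates (step 4).
    module fulladder(input a, b, cin, output sum, cout);
      assign sum  = a ^ b ^ cin;                     // XOR of the three inputs
      assign cout = (a & b) | (a & cin) | (b & cin); // carry = majority function
    endmodule

    // 4-bit ripple-carry adder: chain four full adders (step 5).
    module ripple_carry_adder(
      input  [3:0] a, b,
      input        cin,
      output [3:0] sum,
      output       cout
    );
      wire [2:0] c;  // internal carries between stages
      fulladder fa0(a[0], b[0], cin,  sum[0], c[0]);
      fulladder fa1(a[1], b[1], c[0], sum[1], c[1]);
      fulladder fa2(a[2], b[2], c[1], sum[2], c[2]);
      fulladder fa3(a[3], b[3], c[2], sum[3], cout);
    endmodule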
6. Inside “Simulation Sources” in the “Sources” pane, ensure that testbench.v is assigned as top (bolded). If not, right-click it and select “Set as top”.
7. In “testbench.v”, make sure the line beginning with “ripple_carry_adder” is uncommented and the line beginning with “carry_look_ahead_adder” is commented out.
8. Click “Run Simulation” to simulate your “ripple_carry_adder.v” design. If the simulation value of “score” is 10 at ~1,000 ns, then your adder is functioning as expected.
9. Set “top.v” as top (should be in bold). Make sure to uncomment the “ripple_carry_adder” instance and comment out the “carry_look_ahead_adder” instance in “top.v”. Click “Run Synthesis” for “ripple_carry_adder.v” and then “Run Implementation” when the synthesis is complete.

T1: Take a screenshot of your ripple-carry adder simulation waveform.

10. Open the implementation report and identify:
• the number of FPGA resources used (LUTs, FFs, BRAMs, DSPs): IMPLEMENTATION > Open Implemented Design > tab: Design Runs > row: “Impl_1”
• the dynamic power consumption: IMPLEMENTATION > Open Implemented Design > tab: Power Summary (#.## W, Margin: N/A): On-Chip Power > Dynamic
• the latency of your design: IMPLEMENTATION > Open Implemented Design > tab: Timing Unconstrained Paths > NONE to NONE > Setup(#) > row: path w/ worst Total Delay

T2: Collect and summarize in a table each of your results metrics: resource utilization (number of LUTs, FFs, BRAMs, and DSPs), dynamic power consumption, and latency.

Part 2: Carry Look-Ahead Adder
11. In “carry_generate.v”, design a 4-bit carry generate (G) module as in the provided schematic. Note: you are NOT allowed to use Verilog’s built-in addition operator (+) for your adder modules.
12. In “carry_propagate.v”, design a 4-bit carry propagate (P) module as in the provided schematic.
13. In “propagated_sum.v”, design a 4-bit propagated sum (S) module as in the provided schematic.
14. In “look_ahead_carry.v”, design a 4-bit look-ahead carry (C) module as in the provided schematic. Note: you are NOT allowed to use Verilog’s built-in addition operator (+) for your adder modules.
15. In “carry_look_ahead_adder.v”, design a 4-bit carry look-ahead adder module as in the provided schematic. Instantiate your “carry_generate”, “carry_propagate”, “propagated_sum”, and “look_ahead_carry” modules from steps 11-14 (respectively) to implement your carry look-ahead adder (a sketch appears at the end of this lab).
16. Uncomment the “carry_look_ahead_adder” instance and comment out the “ripple_carry_adder” instance in “testbench.v”. Click “Run Simulation” with “testbench.v” set as top to simulate your “carry_look_ahead_adder.v” design. Verify that its behavior functions as expected.
17. Set “top.v” as top (should be in bold). Make sure to uncomment the “carry_look_ahead_adder” instance and comment out the “ripple_carry_adder” instance in “top.v”. Then, run “Synthesis” and “Implementation” for “carry_look_ahead_adder.v”.
18. Open the implementation report and identify the number of FPGA resources used (LUTs, FFs, BRAMs, DSPs), the dynamic power consumption, and the latency of your design, as in step 10.

Questions
19. Answer the following questions:
T3: Take a screenshot of your carry look-ahead adder simulation waveform.
T4: Collect and summarize in a table each of your results metrics: resource utilization (number of LUTs, FFs, BRAMs, and DSPs), dynamic power consumption, and latency.
T5: Using your results from T2 and T4, compute the percent overhead of your Carry Look-Ahead Adder compared to your Ripple-Carry Adder for each metric: resource utilization (number of LUTs, FFs, BRAMs, DSPs), dynamic power consumption, and latency: ((T4 - T2) / T2) × 100%.
T6: Based on the percent overheads from T5, indicate and explain which adder (Ripple-Carry vs. Carry Look-Ahead) is better based on each metric: resource utilization (number of LUTs, FFs, BRAMs, DSPs), dynamic power consumption, and latency. E.g., if the overhead > 0, the Ripple-Carry adder (T2) is better.
T7: Based on your trade-off analysis from T6, identify and explain which adder you would recommend for the following devices:
(a) servers (e.g., Google’s servers)
(b) a modern high-performance PC (e.g., a gaming computer)
(c) a modern common PC (e.g., your laptop or home desktop)
(d) a modern smartphone (e.g., Apple iPhone 14 Pro or Samsung Galaxy S23 Ultra)
(e) a modern small embedded system (e.g., a handheld digital thermometer, a single-use insulin pump)

In-class work – Complete T1 and T2, show them to the TA for review, and submit them on Canvas before the lab session ends. Submit your in-class work in Word document (.docx) format.
Final Report – Submit your report PDF with answers to all questions and screenshots. Also, submit the final compressed Vivado/Verilog files. Submit one .zip file per group.
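As referenced in step 15, here is one way the carry look-ahead pieces might fit together, assuming the usual decomposition G = A & B, P = A ^ B, and S = P ^ C; the provided schematics and the port names in the empty modules take precedence over this sketch.

    module carry_generate(input [3:0] a, b, output [3:0] g);
      assign g = a & b;               // generate: this bit creates a carry
    endmodule

    module carry_propagate(input [3:0] a, b, output [3:0] p);
      assign p = a ^ b;               // propagate: this bit passes a carry along
    endmodule

    module look_ahead_carry(input [3:0] g, p, input cin, output [4:0] c);
      // Each carry is computed directly from g, p, and cin (no rippling).
      assign c[0] = cin;
      assign c[1] = g[0] | (p[0] & c[0]);
      assign c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0]);
      assign c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
                  | (p[2] & p[1] & p[0] & c[0]);
      assign c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
                  | (p[3] & p[2] & p[1] & g[0])
                  | (p[3] & p[2] & p[1] & p[0] & c[0]);
    endmodule

    module propagated_sum(input [3:0] p, c, output [3:0] s);
      assign s = p ^ c;               // sum bit = propagate XOR incoming carry
    endmodule

    module carry_look_ahead_adder(
      input  [3:0] a, b,
      input        cin,
      output [3:0] sum,
      output       cout
    );
      wire [3:0] g, p;
      wire [4:0] c;
      carry_generate   cg(a, b, g);
      carry_propagate  cp(a, b, p);
      look_ahead_carry lac(g, p, cin, c);
      propagated_sum   ps(p, c[3:0], sum);
      assign cout = c[4];
    endmodule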
Welcome to CDA 4205L Lab #8! The goal of this lab is to demonstrate an example implementation of a single-cycle CPU in Verilog HDL.

Prelab
This lab builds on the ALU and Control Unit designs from Lab #7. You will be provided a near-complete single-cycle RISC-V CPU implemented in Verilog HDL. This implementation is almost capable of running code generated by your assembler from Labs #3 and #4, but one critical component is missing: the instruction memory.

Creating an instruction memory module in Verilog is done by first declaring a generic memory variable and then loading the assembler output (machine code) into it. The Verilog command $readmemb(<file_name>, <memory_variable>) will read the data from <file_name> and load it into <memory_variable>. This is for simulation purposes only, so you can simply place the command in an initial block. If your instructions are written in hex, the command $readmemh(<file_name>, <memory_variable>) will work the same way.

Lab Setup
1. Download the Vivado project files from Canvas.
2. Open the file instr_mem.v. This file contains a template for a memory module.
3. Declaring a memory is very similar to declaring a register (reg) variable. The main difference is that you must specify the depth as well as the width. In this case, there are already parameters as part of the module definition (DATA_WIDTH, ADDRESS_WIDTH, NUM_MEM_CELLS) that are pre-initialized. Note that, while these values are defaults, when you instantiate a new InstMem module in the top level, you can override them to be whatever size you need. This lets you reuse the module in different designs without extra work. Declare a memory using the syntax: reg [DATA_WIDTH-1:0] inst_mem [0:NUM_MEM_CELLS-1];
4. Next, initialize the memory in an initial block. Use the $readmemb() directive. This command takes two parameters: the memory file name and the memory variable you created in step 3. Note that referencing a macro definition requires prepending it with `.
5. Finally, because this memory is part of a single-cycle implementation, data is not read synchronously. Hence, we do not use a clock; instead, the read is a continuous assignment statement of the form assign data = inst_mem[<address>]; (a complete sketch follows).
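Putting steps 3-5 together, the finished module might look like the following minimal sketch. The port names, the MEM_FILE macro, and the memory file name are placeholders; the actual template in instr_mem.v defines the real ones.

    `define MEM_FILE "program.mem"   // placeholder; machine code from your assembler

    module InstMem #(
      parameter DATA_WIDTH    = 32,  // pre-initialized defaults (step 3);
      parameter ADDRESS_WIDTH = 32,  // override them when instantiating
      parameter NUM_MEM_CELLS = 256  // in the top level if needed
    )(
      input  [ADDRESS_WIDTH-1:0] address,
      output [DATA_WIDTH-1:0]    data
    );
      // Step 3: memory declaration (width and depth)
      reg [DATA_WIDTH-1:0] inst_mem [0:NUM_MEM_CELLS-1];

      // Step 4: load the machine code for simulation (note the ` on the macro)
      initial begin
        $readmemb(`MEM_FILE, inst_mem);
      end

      // Step 5: asynchronous (combinational) read; no clock
      assign data = inst_mem[address];
    endmodule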
Welcome to CDA 4205L Lab #4! This is the second portion of the ISA lab. The goal of this lab is for you to understand how to use the pre-processed assembly code created in part 1 and convert it to machine code. This step will finalize the conversion of the initial .asm file into machine code.

Prelab
An assembler is a program that converts low-level assembly code into machine code which can be understood by the target processor. It is one step in a larger toolchain that begins with a compiler and ends with a linker and object file generation. Now that the assembly code has been preprocessed, the next step is to output the machine code. The formatted assembly code will be provided to you as a .txt file. You will need to create a function that correctly converts this code into machine code based on each instruction type (use ‘rv32im_isa.csv’ as a reference for better understanding).

Lab
1. Use the online version of JupyterLab: https://jupyter.org/try-jupyter/lab/
2. Download the ‘Lab4_assembler_design.ipynb’ file from Canvas (Files > Labs > Lab 4) and open it in JupyterLab.
3. Download the ‘example1_out1.txt’, ‘example2_out1.txt’, and ‘rv32im_isa.csv’ files from Canvas (Files > Labs > Lab 4) and upload them to the same directory as your ‘Lab4_assembler_design.ipynb’ in JupyterLab.
4. Complete Tasks 1 and 2 as described in the ‘Lab4_assembler_design.ipynb’ file. For your reference, here is example output from the two tasks using ‘example1_out1.txt’:

Assembly Instructions:
00000000000001001000001110000011
00000000101000000000010100010011
00000001010000000000010110010011
00000000101101010101001101100011
00000000000001011000011000110011
00000000000000000110000011101111
00000000000001010000011000110011
00000000000000000010000011101111
00000000101001001010000000100011
Saved machine code to: example1_out1_bin.bin

T1-2: Take screenshots of outputs from Tasks 1 and 2 using ‘example2_out1.txt’ and include those in your report.

5. Answer the following questions:
T3: In your own words, explain how the ‘rv32im_isa.csv’ file is used to determine how each instruction is converted to binary.
T4: In your own words, explain the purpose of the ‘get_2c_binary’ function.
T5: What was the purpose of preprocessing the assembly code in the last lab?
T6: How many different instruction formats did you need to account for in your code? What did the different formats have in common? How did this impact your code?

In-class work – Complete T1, show it to the TA for review, and submit it on Canvas before the lab session ends. Submit your in-class work in Word document (.docx) format.
Final Report – Submit your report PDF with answers to all questions and screenshots. Also, submit the final ‘.ipynb’ file and the output ‘.bin’ file for example2 only. Submit one .zip file per group.
Objective: This project is designed to assess your ability to analyze complex computing problems; apply computer science theory, principles of computing, and related disciplines to identify effective solutions; demonstrate proficiency in formulating and solving engineering problems using principles of engineering, science, and mathematics; and apply software development fundamentals to create computing-based solutions. Through this project, you will strengthen essential skills in problem analysis, complexity evaluation, and algorithm optimization.

Instructions:
• Prepare a Jupyter Notebook using the provided template and complete all sections as outlined below. Ensure that each section is thorough and meets the specified requirements.
• You may use Google Colab to develop your notebook and then download it as a Jupyter Notebook (.ipynb) file for submission.
• Review the attached example template for guidance. Please note that the examples may not cover all required sections for this project; make sure to update and adapt the template as needed.
• Ensure clarity and completeness in each section, including detailed explanations and well-commented code or pseudocode.
• Parts of this project may be moved to GradeScope. If this happens, we will make an announcement to inform you.
• Start early! Attempting to complete the project on the due date will be extremely challenging.
• Review the example pages provided for the Greedy and Divide & Conquer approaches below. They include code you will need, such as performance testing and graph plotting.
  o Greedy Approach: https://colab.research.google.com/drive/1z728K8h80ur7RwMAL5V9qqJ2aGuoAwXM?usp=sharing
  o Divide & Conquer: https://colab.research.google.com/drive/1jZb6y5FiEQUmVOcFIhOzAiiuIOurbAaN?usp=sharing
• What & How to Submit
  o Ensure that all code cells run without errors in your Colab-hosted Jupyter Notebook. After running the whole Jupyter Notebook page, download the page (.ipynb extension).
  o Name your file “LastName-FirstName-ID.ipynb” and then submit it.
  o Make sure you submit the correct file.
Important: Failure to follow these submission instructions will result in a deduction of a minimum of 10 points.

Drone-based Pollution Cleanup Optimization

Problem Description: An environmental protection agency has deployed one AI-powered drone to clean pollution hotspots scattered across a region. Each hotspot requires a specific amount of energy to clean and has a varying importance level based on environmental impact.

Hotspot Characteristics:
• Priority Score: Indicates the environmental importance of cleaning the hotspot (higher means more critical).
• Cleaning Effort: Total energy required to fully clean the hotspot (in energy units).
• Distance from Dock: Round-trip energy cost to travel from the drone’s docking station to the hotspot and back (in energy units).

Drone Specifications (Per Mission):
• Battery Capacity: 1000 energy units per mission.
• Cleaning Efficiency: 1 energy unit cleans 1 unit of pollution.
• Travel Cost: Round-trip travel consumes the specified number of energy units.

Operational Rules:
• The drone starts and ends each mission at the docking station.
• It can visit multiple hotspots in a single mission.
• Total energy usage (travel + cleaning) must not exceed 1000 units.
• After cleaning a hotspot, the drone returns to the docking station to unload dirt. Every trip to another hotspot begins from the docking station.
• The drone can partially clean a hotspot, earning a priority score proportional to the fraction cleaned.
(Example: Cleaning 50% of a hotspot with priority 100 earns 50 points.)
• Objective: Maximize the total collected priority score over one mission, which may include visits to multiple hotspots.
• Use the hotspot dataset provided at the end of this document. It is also provided in the attached Jupyter Notebook page template.

Project Tasks (100 Points Total):

a) Solution Description – Brute Force Approach (10 Points)
• Explain the brute force method to solve this problem.
• Provide clear pseudocode following class standards for structure and formatting.
• Note: You must present brute force only here; submitting optimized solutions for this part will result in point deductions.

b) Complexity Analysis – Brute Force (5 Points)
• Analyze the asymptotic time and space complexity of the brute force solution based on your pseudocode. Plot a graph demonstrating how time complexity increases as the number of hotspots grows, based on the theoretical asymptotic complexity you found.
• Graphs must be labeled and explained briefly (e.g., what they show, observed trends).

c) Proof of Correctness (10 Points)
• Select a key function or loop from your brute force pseudocode.
• Prove its correctness using loop invariants, induction, or other formal methods covered in class.

d) Implementation – Brute Force (10 Points)
• Implement the brute force algorithm in Python inside a Jupyter Notebook.
• Comment code clearly and thoroughly, explaining each step and decision.
• The function should return the best total priority score and the optimal subset of hotspots that maximizes this priority. For example: priority, subset = brute_max_priority(current_hotspots, battery_capacity)

e) Performance Testing – Brute Force (15 Points)
• Test the brute force solution on varying input sizes, starting from small sets (e.g., 2, 3, 4, 5 hotspots).
• Collect timing data for around 5 minutes and plot execution time vs. input size (10 points for performance data, 5 points for graph). Ensure the graph plotted with the collected data is in your submission; we will not run it for 5 minutes.
• Graphs must be labeled and explained briefly (e.g., what they show, observed trends).

f) Optimal Algorithm – Greedy or Divide & Conquer (15 Points)
• Select and explain an optimized approach (clearly indicate whether it’s Greedy or Divide & Conquer).
• Provide detailed pseudocode and explain the steps:
  o If you use Greedy: selection procedure, feasibility check, solution check.
  o If you use Divide & Conquer: divide, conquer, combine.

g) Complexity Analysis – Optimized Algorithm (10 Points)
• Analyze the asymptotic time and space complexity of your optimized solution (5 points).
• Plot a time complexity graph as input size increases (5 points).
• Graphs must be labeled and explained briefly.

h) Implementation – Optimized Algorithm (15 Points)
• Implement the optimized algorithm in Python inside the same Jupyter Notebook.
• The function should return the best total priority score and the optimal subset of hotspots that maximizes this priority. For example: priority, subset = greedy_max_priority(current_hotspots, battery_capacity)
• Comment code clearly and thoroughly, explaining each part of the process.
• Note: This part focuses only on the optimal solution implementation, not brute force.

i) Performance Testing and Comparison (10 Points)
• Run both algorithms on the same set of input sizes where feasible.
• Increase the input size for the optimized solution if needed to demonstrate its efficiency.
• Collect execution time data and plot a comparison graph of both approaches (5 points for performance data, 5 points for graph).
• Graphs must be labeled and explained briefly (e.g., what they show, observed trends).

Important: Ensure that all code cells execute without errors in your Colab-hosted Jupyter Notebook. After running the entire notebook, download the .ipynb file along with its generated data and graphs, then submit it. We will not run your notebook to collect data or generate graphs.

Hotspot Dataset:

Hotspot ID  Priority Score  Cleaning Effort (Energy Units)  Distance from Dock (Round-Trip Energy Units)
1    90   40  20
2    70   30  10
3    120  80  25
4    60   20  30
5    100  50  15
6    80   60  20
7    150  70  30
8    50   25  10
9    110  55  18
10   95   45  22
11   85   35  12
12   130  90  28
13   75   40  16
14   105  60  24
15   65   30  14
16   115  70  26
17   55   20  8
18   140  85  32
19   100  50  20
20   125  75  30
21   80   50  14
22   68   38  1
23   114  74  18
24   67   27  22
25   94   50  15
26   78   70  11
27   156  77  38
28   52   19  0
29   116  64  17
30   104  52  28
31   91   35  3
32   138  98  36
33   78   30  18
34   108  50  29
35   58   39  9
36   113  78  30
37   63   21  0
38   148  83  42
39   108  43  15
40   117  70  24
41   90   48  13
42   71   39  17
43   112  90  21
44   65   28  35
45   101  50  5
46   71   57  17
47   148  78  21
48   50   24  3
49   109  52  27
50   91   49  16
51   91   35  17
52   135  87  31
53   72   34  23
54   114  50  27
55   62   38  6
56   111  69  19
57   61   24  2
58   131  92  23
59   92   56  24
60   117  85  22
61   80   36  23
62   79   36  3
63   129  71  26
64   56   16  28
65   109  59  12
66   88   53  21
67   140  72  35
68   54   24  2
69   110  50  16
70   101  41  24
71   86   35  2
72   120  86  29
73   67   45  13
74   111  70  25
75   60   27  15
76   106  74  25
77   58   23  1
78   131  78  42
79   93   50  14
80   127  72  21
81   97   40  13
82   75   20  3
83   120  81  27
84   66   25  32
85   106  45  14
86   79   70  29
87   153  77  32
88   46   31  11
89   109  54  10
90   93   49  29
91   77   32  3
92   122  85  25
93   76   49  8
94   115  52  20
95   75   36  23
96   108  70  19
97   60   16  15
98   134  92  38
99   108  51  14
100  123  65  33

hotspots = [(1, 90, 40, 20), (2, 70, 30, 10), (3, 120, 80, 25), (4, 60, 20, 30), (5, 100, 50, 15),
(6, 80, 60, 20), (7, 150, 70, 30), (8, 50, 25, 10), (9, 110, 55, 18), (10, 95, 45, 22),
(11, 85, 35, 12), (12, 130, 90, 28), (13, 75, 40, 16), (14, 105, 60, 24), (15, 65, 30, 14),
(16, 115, 70, 26), (17, 55, 20, 8), (18, 140, 85, 32), (19, 100, 50, 20), (20, 125, 75, 30),
(21, 80, 50, 14), (22, 68, 38, 1), (23, 114, 74, 18), (24, 67, 27, 22), (25, 94, 50, 15),
(26, 78, 70, 11), (27, 156, 77, 38), (28, 52, 19, 0), (29, 116, 64, 17), (30, 104, 52, 28),
(31, 91, 35, 3), (32, 138, 98, 36), (33, 78, 30, 18), (34, 108, 50, 29), (35, 58, 39, 9),
(36, 113, 78, 30), (37, 63, 21, 0), (38, 148, 83, 42), (39, 108, 43, 15), (40, 117, 70, 24),
(41, 90, 48, 13), (42, 71, 39, 17), (43, 112, 90, 21), (44, 65, 28, 35), (45, 101, 50, 5),
(46, 71, 57, 17), (47, 148, 78, 21), (48, 50, 24, 3), (49, 109, 52, 27), (50, 91, 49, 16),
(51, 91, 35, 17), (52, 135, 87, 31), (53, 72, 34, 23), (54, 114, 50, 27), (55, 62, 38, 6),
(56, 111, 69, 19), (57, 61, 24, 2), (58, 131, 92, 23), (59, 92, 56, 24), (60, 117, 85, 22),
(61, 80, 36, 23), (62, 79, 36, 3), (63, 129, 71, 26), (64, 56, 16, 28), (65, 109, 59, 12),
(66, 88, 53, 21), (67, 140, 72, 35), (68, 54, 24, 2), (69, 110, 50, 16), (70, 101, 41, 24),
(71, 86, 35, 2), (72, 120, 86, 29), (73, 67, 45, 13), (74, 111, 70, 25), (75, 60, 27, 15),
(76, 106, 74, 25), (77, 58, 23, 1), (78, 131, 78, 42), (79, 93, 50, 14), (80, 127, 72, 21),
(81, 97, 40, 13), (82, 75, 20, 3), (83, 120, 81, 27), (84, 66, 25, 32), (85, 106, 45, 14),
(86, 79, 70, 29), (87, 153, 77, 32), (88, 46, 31, 11), (89, 109, 54, 10), (90, 93, 49, 29),
(91, 77, 32, 3), (92, 122, 85, 25), (93, 76, 49, 8),
(94, 115, 52, 20), (95, 75, 36, 23), (96, 108, 70, 19), (97, 60, 16, 15), (98, 134, 92, 38),
(99, 108, 51, 14), (100, 123, 65, 33)]

Updates:
March 19th:
• Part e) Performance Testing – Brute Force text was updated for clarity.
• The hotspot dataset size increased to 100 hotspots.
• Battery Capacity is now 1000.
• “Operational Rules” was updated for clarity.
• The following was added for clarification: “The function should return the best total priority score and the optimal subset of hotspots that maximizes this priority.” For example: priority, subset = greedy_max_priority(current_hotspots, battery_capacity)
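One way to read the Operational Rules as a single optimization target is the fractional formulation below. The symbols p_i (priority), e_i (cleaning effort), d_i (round-trip travel cost), and f_i (fraction of hotspot i cleaned) are our notation, not the assignment's:

\[
\max \sum_{i \in V} p_i f_i
\quad \text{s.t.} \quad
\sum_{i \in V} \left( d_i + f_i\, e_i \right) \le 1000,
\qquad 0 < f_i \le 1,
\]

where \(V\) is the set of hotspots visited during the mission. Since every trip begins at the docking station, the travel cost \(d_i\) is paid in full for any visited hotspot, even one that is only partially cleaned.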
Welcome to CDA 4205L Lab #1! The purpose of this lab is to introduce you to (or refresh your memory of) RISC-V assembly and a RISC-V Assembler and Runtime Simulator (RARS).

Prelab
RISC-V is an open-source instruction set architecture (ISA). It was initially developed at UC Berkeley, but in the past decade, many other groups have contributed to the project, and several companies have started deploying physical processors that use the standard. In the lecture portion of this course, we will go over the principles of ISA design and important trade-offs. In the lab portion of this course, we will go over practical implementation details.

Two very important tools for any ISA are the assembler and the runtime simulator. The assembler takes human-readable code and translates it into machine code, represented in binary or hexadecimal. The runtime simulator is a platform for executing machine instructions and provides insight into the state of the processor at any given time. A simulator can also provide an environment for doing things like file reads/writes, printing text to the console, or inputting values to your program. These are called “system calls” or “environment calls” depending on the platform, and are shortened to “syscalls” or “ecalls”; the two terms are used interchangeably. In this lab, you will become familiar with one particular RISC-V assembler and runtime simulator called RARS, and experiment with some important ecalls provided by the platform.

Lab
1. Download and install RARS, which can be found here: https://github.com/TheThirdOne/rars. Note that you will need the Java runtime installed as well.
2. Download the HelloRISCV.asm code from Canvas (in the Files > Labs > Lab 01 directory), and open the file in RARS.
3. Click the “Assemble” button (marked with the green arrow). From here, you will be able to run the code to completion (the “play” button next to the “assemble” button) or, alternatively, step through the code line by line and see values change in the memory window (the smaller “play” button with the number 1).
4. Modify the code so that it prints your name instead of “Hello World!”. Re-assemble and re-run the code.
5. Observe that the provided code uses two ecalls: 4 and 10. The documentation for the ecalls is available here: https://github.com/TheThirdOne/rars/wiki/Environment-Calls.
6. Modify the code so that, in addition to your name, it prints “This is CDA4205L Lab 1”.
a. Declare a new string in the .data section of the code. This string should be “ This is CDA4205L Lab “ (without the lab number for now). You can label this string “msg2”.
b. Print this string using the appropriate ecall.
c. Refer to the ecall documentation. Find the ecall number for “PrintInt”. Use this to print the lab number.
d. The console output should show your name followed by “This is CDA4205L Lab 1”.
7. Instead of hard-coding the lab number into the program, modify the code so that the user can input this number at runtime.
a. First, enable the RARS setting “Popup dialog for input syscalls (5, 6, 7, 8, 12)”. If your program uses one of these syscalls, you should get a popup at runtime where you can enter the desired value.
b. Refer again to the ecall documentation. Find the ecall number for “ReadInt”. Use this to read the lab number from the user, then use PrintInt to print this value to the console.

T1: Take a screenshot of the RARS console output. Include this in your report.
T2: What do ecalls 4 and 10 do?
What is the purpose of registers a7 and a0?
T3: Take a screenshot of the RARS console output. Include this in your report.
T4: Take a screenshot of the RARS console output. Include this in your report.

In-class work – Complete at least Task 1, show it to the TA for review, and submit it on Canvas before the lab session ends. Submit your in-class work in Word document (.docx) format.
Final Report – Submit your report PDF with answers to all questions and screenshots. Also, submit the final assembly file(s). Combine these in a .zip file. It is due in 1 week, before the next lab.