Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] CSE 590-54/59 homework assignment 1: (American) football simulation

For this programming assignment, you will produce a heavily boiled-down (American) football simulator and play visualization using Python. The process should give you experience with built-in and custom functions in the language, process simulation, and very simple depiction of data. There are four interrelated problems with a cumulative value of 100%, and the optional components for problems 1-3 can be completed for 2-3 extra credit points each (with the maximum grade not exceeding 106%).

For full credit, you must at least include brief documentation for your code: a very simple README (which should indicate, for example, which optional components you completed and any deviations from the norm in your code) and a short (line or two) comment for each function you implement, indicating what it does.

1) (20 pts) In football terminology, a down is the period in which a single play (successful or failed) is executed. You are to write the function down(successprct, yardrange), where successprct is a number between 0 and 100 indicating the probability of offensive success (passes can be incomplete, or plays otherwise unsuccessful), and yardrange is a tuple with two values representing the minimum and maximum number of yards gained. Your function should return a number of yards according to the following rules: i. a random number between 1 and 100 is generated; if it exceeds successprct, the play "fails" and 0 is returned; otherwise ii. the number of yards returned is a random number between the min and max given by yardrange.

Optional: if you like, you can include "sacks" or "penalties" in your code, presumably each with an attached percentage chance of happening, and incorporate them into the down function.

2) (30 pts) A drive represents a series of downs that results in either a touchdown or a turnover (assume no punting or field goal is possible). You are to write the function drive(yards_to_TD, successprct, yardrange), where yards_to_TD is the number of yards a team must move the ball to achieve a touchdown, and successprct and yardrange are identical in form to their down counterparts. The drive function must execute up to four downs in sequence using the down function above with successprct and yardrange as arguments, with the following details: i. the number of yards generated by the down function is subtracted from yards_to_TD on each call; ii. if yards_to_TD ever reaches zero or below, the team scores (see below); iii. if four sequential plays take place and the team doesn't score, the ball is turned over to the other team with zero points scored (see below).

The output of drive will be a tuple with two elements representing i. points scored and ii. field position for the other team, respectively. If the team scores (yards_to_TD reaches zero or below), the first element will be 7 (90% chance of the extra-point attempt succeeding) or 6 (10% chance of the attempt failing); otherwise the first element will be 0. If the team scores, the second element will be 80 (reflecting a kick-off/touchback position for the other team); otherwise the second element will be calculated as 100 - yards_to_TD (reflecting the other team moving the ball in the opposite direction).

Optional: if you like, you can include the notion of "first downs" in your code: in this case, the down counter "resets" to down 1 (first down) whenever ten or more positive yards are gained cumulatively within the four sequential downs.

3) (30 pts) Here you are to create a simple visual depiction of a drive. The function drive_depicted(yards_to_TD, successprct, yardrange) is identical in form and return value to the drive function, except that it must also provide visual details of every down within the drive, including the progress of the side on offense (i.e., how many yards from their own end zone) and the yards remaining to touchdown (i.e., how many yards from the opponent's end zone). The nature of the visualization is flexible, but the simplest approach is an ASCII-style depiction as in the examples below:

Ex 1: Successful drive
O----|--->|----|----|----|----|----|----|----|----X   1st Down, 80 Yds to Go
O----|----|----|----|--->|----|----|----|----|----X   2nd Down, 51 Yds to Go
O----|----|----|----|----|----|----|>---|----|----X   3rd Down, 28 Yds to Go
O----|----|----|----|----|----|----|----|----|----T   TD Scored! Xtra Pt Made

Ex 2: Failed drive
O----|--->|----|----|----|----|----|----|----|----X   1st Down, 80 Yds to Go
O----|----|----|--->|----|----|----|----|----|----X   2nd Down, 61 Yds to Go
O----|----|----|--->|----|----|----|----|----|----X   3rd Down, 61 Yds to Go
O----|----|----|----|----|----|>---|----|----|----X   4th Down, 38 Yds to Go
O----|----|----|----|----|----|----|---Q|----|----X   Turnover, 21 Yds to Go

Note that in the above examples yards-to-go is rounded to the closest even yard for depiction purposes; this is not essential, but may make the visualization a bit less clunky in practice.

Optional: if you like, you can use any of the Python figure/image/animation libraries to make a more sophisticated visualization of drives. But be aware that this can become a very complicated and difficult effort if you don't have previous experience!

4) (20 pts) Your last step is to create a football game simulation. While normal football is based on quarters and time limits, you can assume a game has a fixed number of alternating drives between the two teams, and each of the teams has a different success rate/yard gain as input to the drive function. More specifically, you are to write the function simulategame(num_drives, prctT1, yrangeT1, prctT2, yrangeT2), where num_drives is the total number of drives played in the game by each team, prctT1 and yrangeT1 correspond to the successprct and yardrange for the first team (T1), and prctT2 and yrangeT2 correspond to the successprct and yardrange for the second team (T2). The function will do the following: i. initialize scores for both teams at 0 and yards_to_TD at 80; ii. call the drive function with yards_to_TD and team 1's successprct/yardrange values; iii. increment T1's score according to drive's first return value and adjust yards_to_TD according to the second (see problem 2 above); iv. call the drive function for team 2 using its parameters; v. adjust T2's score if relevant; and vi. repeat steps ii-v num_drives times in total. Your return should be a 2-element tuple, with the first element being T1's final score and the second being T2's final score.

It is highly advised that you test that your functions work correctly by calling them from an outside script.

Your submission to Blackboard should be a single .zip file with the name <lastname>_<firstname>_1.zip, where <lastname> and <firstname> are your last name and first name respectively. As outlined above, the file should include your Jupyter notebook and/or Python file(s), plus a README file for documentation and to guide execution.
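The core logic of problems 1 and 2 can be sketched as follows (a minimal outline of the required behavior, without the optional sacks/penalties/first-down extensions):

```python
import random

def down(successprct, yardrange):
    """One play: a 1-100 roll above successprct means the play fails
    (0 yards); otherwise gain a random number of yards within yardrange."""
    if random.randint(1, 100) > successprct:
        return 0
    return random.randint(yardrange[0], yardrange[1])

def drive(yards_to_TD, successprct, yardrange):
    """Up to four downs; returns (points, other team's field position)."""
    for _ in range(4):
        yards_to_TD -= down(successprct, yardrange)
        if yards_to_TD <= 0:
            # Touchdown: the extra-point attempt succeeds 90% of the time.
            points = 7 if random.randint(1, 100) <= 90 else 6
            return (points, 80)
    return (0, 100 - yards_to_TD)  # turnover on downs
```

simulategame (problem 4) then just alternates calls to drive, feeding each returned field position to the opposing team's next drive.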

$25.00

[SOLVED] CSE 590-3/53 final programming portion

Data Description: The four attached JSON files (savedtweets_americalatina.json, savedtweets_machinelearning.json, savedtweets_superleague.json, savedtweets_weibo.json) represent four separate classes of 100 tweets, each collected using a search query with the appropriate suffix. For example, savedtweets_americalatina.json has 100 tweets collected with the query "América Latina."

Each tweet has up to seven characteristics (stored as key-value pairs): screen_name, text, location, lang, retweet_count, latitude*, and longitude*.

* Many tweets are missing these characteristics: see instructions below.

Instructions (six parts in total):

Part 1. Load each JSON file into Python (obtaining a list of dictionaries for each) and perform the following: a. discard any tweets that lack latitude (those without latitude will also lack longitude, and vice versa); b. use tweet-preprocessor to clean the text of each tweet using all available (default) options. For each collection, save the modified list of tweets back into a new JSON file with the name prep_tweets_class#.json, where # matches the order of the JSON files cited above (0 = …americalatina, 1 = …machinelearning, 2 = …superleague, 3 = …weibo). You should have the files prep_tweets_class0.json, prep_tweets_class1.json, prep_tweets_class2.json, and prep_tweets_class3.json at the end of the process.

Part 2. For each modified collection of tweets (i.e., after the transformation from Part 1), calculate the number of tweets with positive, negative, and neutral sentiment and depict these on a simple bar plot. You should have 3 bars per plot (one each for positive, negative, and neutral) and 4 plots total (one per tweet query class).

Part 3. Pool all modified tweets into a single list, but maintain a combined secondary list of equal size that dictates the class (0, 1, 2, or 3) to which each tweet belongs. Ex: if there are 44 América Latina tweets at the beginning of the pooled list of tweets, the first 44 elements of the secondary list should be 0.

Part 4. Assume your combined lists each have a length of n. Your next goal is to construct an n x 5 NumPy feature array suited for machine learning, where each row matches the corresponding index in your lists and the 5 columns represent the features of the tweet at that position, as follows: Feature 1: the length of the tweet's text. Feature 2: the tweet's retweet count. Feature 3: the tweet's latitude. Feature 4: the tweet's longitude. Feature 5: one of two values, 0 if the tweet is in English or 100 otherwise. For example, the first row in your feature array may look like: [80., 1., 46.2380576, 6.15323095, 100.]

Part 5. Convert your secondary list of classes into an array, and then perform 10-fold cross-validation using three distinct classification estimators (either the ones we used in class, or those of your own choosing) to determine the accuracy available in using the features from Part 4 to predict the class of tweets.

Part 6. Use the t-SNE estimator to compress the features into 2 dimensions, and visualize the tweets on a scatter plot with 4 different colors for the 4 different classes. Briefly comment (inline code comments are fine) on where you see distinct clusters of classes on the plot, and where you do not see any distinction.
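Part 1's filtering step can be sketched as below (a minimal illustration; the `clean` argument stands in for tweet-preprocessor's p.clean, which you would pass in after `pip install tweet-preprocessor`, and `save_class` is a hypothetical helper name):

```python
import json

def prep_tweets(tweets, clean=lambda s: s):
    """Drop tweets lacking geo-coordinates, then clean each tweet's text.

    `clean` defaults to the identity; replace it with tweet-preprocessor's
    p.clean once the library is installed."""
    kept = [t for t in tweets if "latitude" in t]  # lat and long go together
    for t in kept:
        t["text"] = clean(t["text"])
    return kept

def save_class(tweets, class_no):
    """Write one cleaned collection to prep_tweets_class<#>.json."""
    with open(f"prep_tweets_class{class_no}.json", "w", encoding="utf-8") as f:
        json.dump(tweets, f, ensure_ascii=False)
```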

$25.00

[SOLVED] CSE 590-04 Introduction to Machine Learning homework no. 5

1. Apply K-Means and agglomerative clustering algorithms to real data. 2. Analyze and optimize the parameters of each clustering method. 3. Analyze and compare the clustering results.

K-Means:
a) Use the K-Means algorithm to cluster the provided data. Vary the number of clusters from 2 to 20 and select the optimal number. Justify your choice based on the SSE vs. number-of-clusters plot.
b) Using the number of clusters selected in (a), generate the silhouette plot.
c) Using the silhouette coefficients, identify 5 samples that are at the core of each cluster and 2 samples that are at the boundary of any two clusters (if they exist). Display the original images associated with these samples and comment on the results.

Agglomerative clustering:
a) Use the hierarchical agglomerative algorithm, with Ward's method to compute the distance between two clusters, to cluster the provided data. Generate the dendrogram and use it to identify the optimal number of clusters. Justify your choice.
b) Using the number of clusters selected in (a), generate the silhouette plot.
c) Repeat (a) and (b) using single-link and complete-link. Compare the silhouette plots of the 3 methods and identify the best distance for this data. Justify your choice.
d) Using the silhouette coefficients of the best method identified in (c), identify 5 samples that are at the core of each cluster and 2 samples that are at the boundary of any two clusters (if they exist). Display the original images associated with these samples and comment on the results.

(15 points) For each clustering method (K-Means, agglomerative), compute the adjusted Rand index by comparing the generated clusters to the provided ground truth (this should be the only time you use the ground truth). Using these ARIs and the visualizations generated for each problem, identify the best clustering method for this application. Justify your choice.

What to submit?
• A report that
o describes your experiments, the parameters considered for each method, etc.
o summarizes, explains (using concepts covered in lectures), and compares the results (using plots, tables, figures)
• Do not submit your source code
• Your report needs to be a single file (MS Word or PDF)
• Your report cannot exceed 10 pages using a 12-point font
• Assign numbers to all your figures/tables/plots and use these numbers to reference them in your discussion
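The SSE ("elbow") sweep in part (a) of the K-Means problem can be sketched as follows (an illustrative helper, not part of the required deliverables; the function name sse_curve is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def sse_curve(X, k_min=2, k_max=20, random_state=0):
    """Fit K-Means for each k and collect the SSE (KMeans.inertia_),
    the quantity plotted against k to pick the 'elbow'."""
    ks = list(range(k_min, k_max + 1))
    sses = [KMeans(n_clusters=k, n_init=10, random_state=random_state)
            .fit(X).inertia_ for k in ks]
    return ks, sses
```

Plotting sses against ks (e.g. with matplotlib) and looking for the bend in the curve gives the justification the assignment asks for.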

$25.00

[SOLVED] CSE 590 Introduction to Machine Learning homework no. 4

1. Apply kernel SVM and MLP classification algorithms to the Fashion-MNIST dataset. 2. Use k-fold cross-validation to identify the best way to rescale and preprocess the data. 3. Use k-fold cross-validation to identify the parameters that optimize performance (generalization) for each method. 4. Compare the accuracy and identify correlation between the outputs of the two methods.

For this homework, you will apply the following classification methods to the Fashion-MNIST classification data: 1. Kernel Support Vector Machines 2. Multilayer Perceptrons

• Apply 4-fold cross-validation to the provided training data subset to train your classifiers and identify their optimal parameters. In addition to the classifiers' parameters (e.g., regularization, kernel, number of layers/nodes, learning rate, etc.), you should also consider the following 4 ways to preprocess and rescale the data: a) no preprocessing; b) StandardScaler; c) RobustScaler; d) MinMaxScaler.
• After fixing the classifiers' parameters, apply each method to the provided testing data subset to predict and analyze your results. Compare the accuracy obtained during training (average of the cross-validation folds) to that of the test data and comment on the results (overfitting, underfitting, etc.).
• Analyze the correlation between the outputs of the 2 classifiers by displaying the predict_proba of the SVM vs. the predict_proba of the MLP (using test data). Using these scatter plots (one per class), identify (if present) the following 3 groups:
• G-1: samples that are easy to classify correctly by the SVM, but hard to classify by the MLP
• G-2: samples that are easy to classify correctly by the MLP, but hard to classify by the SVM
• G-3: samples that are hard to classify correctly by both methods
For each group, display a few samples (as images) and identify any common features among them.

What to submit?
• A report that
o describes your experiments, the parameters considered for each method, etc.
o summarizes, explains (using concepts covered in lectures), and compares the results (using plots, tables, figures)
• Do not submit your source code
• Your report needs to be a single file (MS Word or PDF)
• Your report cannot exceed 10 pages using a 12-point font
• Assign numbers to all your figures/tables/plots and use these numbers to reference them in your discussion
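The combined scaler-plus-parameter search can be sketched with a scikit-learn Pipeline inside GridSearchCV (a minimal sketch with a deliberately tiny illustrative grid; the function name best_svm and the specific C/kernel values are assumptions, and the same pattern applies to MLPClassifier by swapping the final step):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def best_svm(X, y):
    """4-fold CV over the scaler choice and a small SVC grid.
    'passthrough' covers the 'no preprocessing' option."""
    pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
    grid = {
        "scale": ["passthrough", StandardScaler(), RobustScaler(), MinMaxScaler()],
        "clf__C": [0.1, 1, 10],
        "clf__kernel": ["rbf", "linear"],
    }
    return GridSearchCV(pipe, grid, cv=4).fit(X, y)
```

search.best_params_ then reports both the winning scaler and the winning classifier parameters in one place.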

$25.00

[SOLVED] CSE 590 Introduction to Machine Learning homework no. 3

1. Apply various classification algorithms to the movie reviews dataset. 2. Use k-fold cross-validation to identify the parameters that optimize performance (generalization) for each method. 3. Compare the accuracy and explainability of each method.

For this homework, you will apply the following classification methods to the movie reviews classification data (available on Blackboard): 1. Multinomial Naïve Bayes 2. Random Forest 3. Gradient Boosted Regression Trees

• Apply 4-fold cross-validation to the provided training data subset to train your classifiers and identify their optimal parameters.
• After fixing the classifiers' parameters, apply each method to the provided testing data subset to predict and analyze your results. Compare the accuracy obtained during training (average of the cross-validation folds) to that of the test data and comment on the results (overfitting, underfitting, etc.).
• Analyze the results of each method by inspecting the feature importance (if applicable) and a few misclassified samples.
• Select the best algorithm and justify your choice based on accuracy, explainability, time required to train/test, etc.

What to submit?
• A report that
o describes your experiments,
o summarizes, explains (using concepts covered in lectures), and compares the results (using plots, tables, figures),
o identifies the best method for each dataset.
• Do not submit your source code
• Do not submit raw output generated by your code!
• Your report needs to be a single file (MS Word or PDF)
• Your report cannot exceed 10 pages using a 12-point font
• Assign numbers to all your figures/tables/plots and use these numbers to reference them in your discussion
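The cross-validation pattern for the text classifiers can be sketched as below (a minimal sketch; nb_cv_accuracy is an assumed helper name, and the other two methods follow by swapping in RandomForestClassifier or GradientBoostingClassifier as the final pipeline step):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def nb_cv_accuracy(texts, labels, folds=4):
    """Bag-of-words features + Multinomial Naive Bayes, scored with
    k-fold cross-validation; returns one accuracy per fold."""
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    return cross_val_score(model, texts, labels, cv=folds)
```

The mean of the returned scores is the training-time accuracy the assignment asks you to compare against the held-out test accuracy.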

$25.00

[SOLVED] CSE 590 Introduction to Machine Learning mid-term exam 2

Problem #1 (15 points) FOR SECTIONS 590-12 and 590-53 (undergraduate) ONLY

Problem #2 (15 points)
Create a 2-dimensional data set with 20 samples that has the following properties:
Explain why K-Means cannot generate the correct clusters.
What kind of linkage is needed for the agglomerative algorithm to cluster the data correctly?

Problem #3 (15 points)
Create a 2-dimensional data set with 22 samples that has the following properties:
Explain why the K-Means and agglomerative algorithms cannot generate the correct clusters.
Explain why DBSCAN is the appropriate algorithm for this dataset.

Problem #4 (15 points)
List three different reasons for trying to reduce the number of features prior to applying a machine learning algorithm. Justify and explain each reason.

Problem #5 (15 points)
Identify two cases where accuracy may be an inadequate measure to evaluate the performance of a classification algorithm. Explain the reasons. For each case, provide an alternative scoring measure and explain why it is more reliable than accuracy.

Problem #6 (10 points)
Given the following pseudo-code that is supposed to train and test an SVM classifier on the Iris data: is the algorithm logically correct? If not, identify the problem and correct it.

Problem #7 (15 points)
Suppose that we have 3 classification algorithms. Each algorithm has two parameters: P1 and P2. After performing a grid search for each algorithm (using {0.01, 0.1, 1, 10} for each parameter), we obtain the following accuracy results. For each of the three algorithms:
Did we use the correct range of values for each parameter? Justify your answer.
If the answer is no, then what other values for P1 and P2 do you recommend exploring?
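To build intuition for Problems #2 and #3, here is a small illustrative experiment (not a substitute for the manually constructed datasets the exam asks for): on two concentric rings, K-Means slices the plane around centroids and mixes the rings, while single-linkage agglomerative clustering follows each ring's chain of near neighbors and recovers them exactly.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def two_rings(n=20, r_inner=1.0, r_outer=5.0):
    """Two concentric rings of n points each; labels 0 (inner), 1 (outer)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    inner = np.c_[r_inner * np.cos(t), r_inner * np.sin(t)]
    outer = np.c_[r_outer * np.cos(t), r_outer * np.sin(t)]
    return np.vstack([inner, outer]), np.array([0] * n + [1] * n)

X, y = two_rings()
# K-Means: both ring centers coincide at the origin, so centroid-based
# partitions cut across the rings instead of separating them.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Single linkage: within-ring neighbor gaps are much smaller than the
# gap between rings, so the two rings emerge as the final two clusters.
single = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)
```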

$25.00

[SOLVED] CSE 590 Introduction to Machine Learning homework no. 2

1. Build and analyze simple classification algorithms based on KNN and linear models. 2. Use k-fold cross-validation (k=5) to identify the parameters that optimize performance (generalization) for each method. 3. Identify cases of underfitting and overfitting. 4. Select parameters that optimize performance (generalization). 5. Compare the accuracy and explainability of each method.

For this homework, you will apply the following classification methods to the SPAM e-mail data (available on Blackboard): a) KNN binary classifier: vary the parameter K. b) Logistic Regression classifier: vary the regularization parameter C. c) Linear Support Vector Machines classifier: vary the regularization parameter C.

• Apply 5-fold cross-validation to the provided training data to train your classifiers and identify their optimal parameters.
• After fixing the classifiers' parameters, apply each method to the provided testing data to predict and analyze your results. Compare the accuracy obtained during training (average of the cross-validation folds) to that of the test data and comment on the results (overfitting, underfitting, etc.).
• Analyze the results of each method by inspecting the feature importance (if applicable) and a few misclassified samples.
• Select the best algorithm and justify your choice based on accuracy, explainability, time required to train/test, etc.

What to submit?
• A report that
o describes your experiments,
o summarizes, explains (using concepts covered in lectures), and compares the results (using plots, tables, figures),
o identifies the best method for each dataset.
• Do not submit your source code
• Do not submit raw output generated by your code!
• Your report needs to be a single file (MS Word or PDF)
• Your report cannot exceed 10 pages using a 12-point font
• Assign numbers to all your figures/tables/plots and use these numbers to reference them in your discussion
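The "vary the parameter K" step can be sketched as a small validation sweep (an illustrative helper; the name knn_validation_curve and the candidate K values are assumptions, and the same loop works for C in LogisticRegression or LinearSVC):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def knn_validation_curve(X, y, ks=(1, 3, 5, 7, 9), folds=5):
    """Mean k-fold CV accuracy for each candidate K. Very small K tends
    to overfit; very large K tends to underfit."""
    return {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                               X, y, cv=folds).mean()
            for k in ks}
```

Picking the K with the highest mean CV accuracy (and comparing it against the test-set accuracy afterwards) is exactly the over/underfitting analysis the homework asks for.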

$25.00

[SOLVED] CSE 590-04 Introduction to Machine Learning mid-term exam 1

Create a 2-dimensional data set with 30 samples that has the following properties:
a) Samples should belong to 2 classes (15 samples per class).
b) Using a Logistic Regression classifier, all samples from both classes can be correctly classified.
c) Using a K-NN classifier with K=3, two samples from each class will always be misclassified; the remaining 26 can be classified correctly.
Generate a scatter plot of your data. Use a different color/symbol for each class. Indicate the 4 samples that cannot be classified correctly using the KNN and explain the reasons. Note: this data should be generated manually, and you do not need to run any code on it.

Create a 2-dimensional data set with 30 samples that has the following properties:
a) Samples should belong to 2 classes (15 samples per class).
b) All samples can be correctly classified using a decision tree classifier with only 2 levels.
c) The data cannot be perfectly classified using a linear classifier.
If it is not possible to generate such data, explain why. Otherwise, generate a scatter plot of your data using a different color/symbol for each class, and indicate the samples that cannot be classified correctly using a linear classifier. Display your 2-level decision tree, indicating the feature/threshold used at each non-leaf node and the number of samples at each leaf node. Note: this data should be generated manually, and you do not need to run any code on it.

For this problem, you need to use the built-in sklearn California Housing dataset. You can load this data using:
from sklearn.datasets import fetch_california_housing
cal_housing = fetch_california_housing()
Divide the data into training and test sets using train_test_split and random_state=38. The goal is to experiment with a few regression algorithms and compare their performance on this data.
a) Build and train a LASSO regression model. Vary the constraint parameter α and analyze the results by identifying cases of overfitting and underfitting. Select the optimal value of α and justify your choice.
b) Build and train a decision tree regression model. Vary the pruning parameter and analyze the results by identifying cases of overfitting and underfitting. Select the optimal pruning and justify your choice.
c) Compare the accuracy of the 2 methods and the relevant features identified by each method, and comment on the results.

For this problem, you need to use the built-in sklearn digits dataset. You can load this data using sklearn.datasets.load_digits(n_class=10, return_X_y=False, as_frame=False). Divide the data into training and test sets using train_test_split and random_state=0. The goal is to train a Random Forest classifier and optimize its performance on this data.
a) Identify the most important parameters that affect the performance of the Random Forest classifier and outline your experimental design (using 4-fold cross-validation) to learn the optimal values for these parameters.
b) Analyze the results of the classifier using its optimal parameters and comment on its generalization capability.
c) Visualize and explain the relevant features identified by the Random Forest classifier. Create a white 8x8 image that represents the original 64 features. Map each identified relevant feature to this 2D image and display it using a grey scale that reflects its importance (e.g., 0 for the most relevant feature and 255 for the least relevant feature).
d) Identify one misclassified sample from each class (if they exist). Visualize each misclassified sample as an 8x8 image, and use its nearest neighbors and the learned important features to explain why it was misclassified.
Hint: for examples of how to read and visualize this data, check https://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
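The α sweep in part (a) of the housing problem can be sketched as follows (an illustrative helper; lasso_sweep and the candidate α values are assumptions, and the same X, y would come from fetch_california_housing in the actual problem):

```python
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

def lasso_sweep(X, y, alphas=(0.001, 0.01, 0.1, 1.0), random_state=38):
    """Train/test R^2 for each alpha. A large train-test gap signals
    overfitting; low scores on both sides signal underfitting."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=random_state)
    results = {}
    for a in alphas:
        model = Lasso(alpha=a, max_iter=10000).fit(X_tr, y_tr)
        results[a] = (model.score(X_tr, y_tr), model.score(X_te, y_te))
    return results
```

Inspecting model.coef_ at the chosen α also gives the "relevant features" comparison part (c) asks for, since LASSO drives uninformative coefficients to zero.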

$25.00

[SOLVED] CSE 473/573 – Computer Vision and Image Processing project 3

This project has two parts. Part A is to have you utilize any face detection algorithm available in opencv library (version = 4.5.4. You MUST use this version for Project #3!) and perform face detection. Part B is to CROP the detected faces and cluster them using k-means algorithm. The goal of part B is to cluster faces with the same identity so they end up in the same cluster.Given a face detection dataset composed of hundreds of images, the goal is to detect faces contained in the images. The detector should be able to locate the faces in any testing image. Figure 2 shows an example of performing face detection. We will use a subset (will be provided) of FDDB [1] as the dataset for this project. You can use any face detection modules available in OpenCV.2.1 Libraries permitted and prohibited • Any API provided by OpenCV. You may NOT use any internet examples that directly focuses on face detection using the OpenCV APIs. • You may NOT use any APIs that can perform face detection or machine learning from other libraries outside of OpenCV2.2 Data and Evaluation You will be given “Project3 data.zip” which contains 100 images along with ground-truth annotations (validation folder). You can evaluate each of your models performances on this validation set or use it to further improve your detection. During testing, you need to report results on another 100 images (testFigure 1: An example of performing face detection. The detected faces are annotated using gray bounding boxes.folder) without ground truth annotations. NOTE : Please read all the readme files present in“Project3 data.zip”. Please refer to the script in readme.md in root directory for running and validating your code. YOUR implementation should be in the function detect faces(), in the file UBFaceDetector.py. 
The function should detect faces in all the images in input path and return the bounding boxes of the detected faces in a list.The result list should have the following format: [{“iname”: “img.jpg”, “bbox”: [x, y, width, height]}, …] “img.jpg” is an example of the name of an image; x and y are the top-left corner of the bounding box; width and height are the width and height of the bounding box, respectively. x, y, width and height should be integers. Consider origin (0,0) to be the top-left corner of the image and x increases to the right and y increases to the bottom.We will provide ComputeFBeta.py, a python code that computes fβ using detection result and groundtruth. You can refer to the sample json provided to you for more information. There is also a piece of example code regarding how to create json file.2.3 Evaluation Rubric Rubric: 40 points – 30 points (F1 score of the detector), 10 points (report) For F1 score computed using ComputeFBeta.py. (30 points for the F1 score.) • F1 > 0.80: 30 points • F1 > 0.75: 25 points Page 2 of 6 CSE 473/573 Project #3 • F1 > 0.70: 20 points • F1 > 0.60: 10 points • F1 > 0.01: 5 points • F1 < 0.01: 0 pointsReport (10 points): • Concise description of what algorithms tried and ended up using (5-10 bullet points) (5 points) • Discussion of the results and implementation challenges (5-10 bullet points) (5 points)You will be building on top of the above face detection code to do face clustering. You will be using images present in faceCluster K folder (in Project3 data.zip) for part B (i.e., face clustering). Each image will only contain one face in it. K in faceCluster K folder name provides you the unique number of face clusters present.3.1 Steps Involved. • Step 1: Use Part A and OpenCV functions to crop the detected faces from images present in faceCluster K folder in Project3 data.zip. • Step 2: pip (or pip3) install face-recognition to install a library. 
Use the function face recognition.face encodings(img, boxes) to get 128 dimensional vector for each cropped face.img is the image (after cv2.imread(‘img’) or any variants). boxes is a list of found face-locations from Step 1. Each face-location is a tuple (top, right, bottom, left). top = y, left = x, bottom = y + height, right = x + width. . So, boxes would be something like [(top, right, bottom, left)]. face recognition.face encodings(img, boxes) would return a list of 128 dimension numpy.ndarrafor each face present in the image ‘img’.• Step 3: Using these computed face vectors, you need to code a k-means or other relevant clustering algorithm. If you use K-Means, or another algorithm requiring a pre-defined number of clusters, that number will be K from faceCluster K. You may not use OpenCV APIs that have ‘face’, ‘kmeans’, ‘knn’ or ‘cluster’ in their name. • Rubric : 50 points (Accuracy), 10 points (Report)3.2 Evaluation Rubric – 60 Points • Accuracy will be based on the number of faces with the same identity present in a cluster. • Rubric : 50 points (Accuracy), 10 points (code)Report (10 points): • Concise description of what algorithms tried and ended up using (5-10 bullet points) (5 points) • You need to display each of faces in a cluster for all the clusters as follows. To achieve this, you may not use any external libraries other than OpenCV or numpy or matplotlib (5 points) Cluster 0 Cluster 1 Cluster K-1 Figure 2: Face clusters.3.3 Code and Report Please refer to the script in readme.md in root directory for running your code. YOUR implementation should be in the function cluster faces() in the file UBFaceDetector.py. The function should cluster faces present in all of the images in input path into K clusters. The function should return all the clusters and corresponding image names in a list. 
The result list should have the following format: [{“cluster no”: 0, “elements”: [“img1.jpg”, “img2.jpg”, …]}, {“cluster no”: 1, “elements”: [“img5.jpg”, “img6.jpg”, …]}, … , {“cluster no”: K-1, “elements”: [“img12.jpg”, “img13.jpg”, …]}] Note that cluster no for say K clusters starts from 0 and ends at K-1.4 Code and Report Submission package requirements for file UBID Project3.zip. You should submit this to ”Project 3 code” on UBlearns system. • UBFaceDetector.py You must not change the file name.• results.json You must not change this name. results.json is the resultant files generated by your FaceDetection.pycode on the test folder images present in “Project3 data.zip”.• clusters.json You must not change this name. clusters.json is the resultant files generated by your FaceCluster.py code on the faceClusters K folder present in “Project3 data.zip”. You should submit your report to ”Project 3 report” on UBlearns system.You must use this name Report.pdf. The report should contain Your name and your UBID at the top. The result of the report should contain a description of what algorithms tried and ended up using, a discussion of the results, face cluster images and anything you learned. You do not need to upload any of the test images during submission.5 Submission Folder structure To ”Project 3 code” on UBlearns system • UBID project3.zip UBFaceDetector.py results.json clusters.json To ”Project 3 report” on UBlearns system • Report.pdfWe will be running an automated script. Any variations to the above submission folder structure will result in a ZERO for the project. Also, there is no need to use any hard-coded local paths in your project. Usage of such paths, would break when we run your code and will result in a ZERO.6 Submission Guidelines • Unlimited number of submissions is allowed and only the latest submission will be used for grading. • Identical code will be treated as plagiarism. Please work it out independently. 
• For code raising a RuntimeError, the grade will be ZERO for the project if it cannot be corrected.
• Late-submission guidelines apply for this project.
• You will be permitted to submit a preliminary version of your code early, by May 4th 11:59 PM on UBlearns, for a dry run. We will provide feedback on whether your code has RUNTIME issues or not. No feedback will be provided on F1 scores or accuracy.
• The final submission will be the one that is graded, on the May 11th deadline.

References
[1] V. Jain and E. Learned-Miller, "FDDB: A benchmark for face detection in unconstrained settings," 2010.


[SOLVED] CSE 473/573 – Computer Vision and Image Processing Project 1

The goal of this task is to implement an optical character recognition (OCR) system. You will experiment with connected component and matching algorithms; your goal is both detection and recognition of various characters.

The first input will be a directory with an arbitrary number of target characters as individual image files. Each will represent a character to recognize, in the form of a "template". Code will be provided to read these images into an array of matrices. You will need to enroll them in your system by extracting appropriate features.

The second input will be a grayscale test image containing characters to recognize. The input image will have characters that are darker than the background. The background is defined by the color that is touching the boundary of the image. All characters will be separated from each other by at least one background pixel, but may have different gray levels.

The OCR system will contain three parts: Enrollment, Detection, and Recognition.

1. Enrollment
• Code will be provided to read in a set of test images from a provided directory.
• You process an enrollment set of target characters from the provided directory and generate features for each, suitable for classification/recognition in Part 3 (Recognition).
• You may store these features in any way you want in an intermediate file that is read in by the recognizer. The file you store should NOT be the same as an image file. The reason for the intermediate file is so you do not have to run enrollment every time you want to run detection and recognition on a new image.
• Rubric: 20 points – 10 points (code) and 10 points (features).

2. Detection
• Code will be provided to read in a test image.
• Once you have read in the test image, you will need to use connected component labeling (that you implement) to detect the various candidate characters in the image.
• You should identify ALL possible candidates in the test image, even if they do not appear in the list of enrolled characters.
• The characters can be between 1/2x and 2x the size of the enrolled images.
• Once you have detected the character positions, you are free to generate any features, or resize those areas of the image in any way you want, in preparation for recognition.
• The detection results should be stored for output with the recognition results in Part 3, and should be in the original image coordinates.
• Rubric: 30 points – 10 points (code) and 20 points (evaluation).

3. Recognition
• Taking each of the candidate characters detected in the previous part, and your features for each of the enrolled characters, you are required to implement a recognition or matching function that determines the identity of the character if it is in the enrollment set, or UNKNOWN if it is not.
• Rubric: 50 points – 10 points (code), 40 points (evaluation).

1.3 Output
For the output, you are expected to generate an output file 'results.json'. It will be a list with each entry of the form {"bbox": [x (integer), y (integer), w (integer), h (integer)], "name": (string)}, where:
• x, y are the top-left coordinates of the detected character. Consider the origin (0, 0) to be the top-left corner of the test image, with x increasing to the right and y increasing downward.
• h is the height, and w is the width, of the detection (from Part 2).
• name is the matching enrollment character identity from Part 3 (if any). Use "UNKNOWN" for characters that are not in the enrollment set.

The order of detected characters should follow the English text reading pattern, i.e., the list should start from the top left, then move from left to right. After finishing the first line, go to the next line and continue.

Note that:
• the final code should read target enrollment images from the directory "characters", read the file "test_img" from the main directory, and write the output file "results.json" to the main directory.
• the size of the test characters MAY differ from the size of the enrollment images.

You also need to submit a report, "report.pdf" (approximately 1 page), explaining how you computed features, performed detection, and performed recognition.

1.4 Evaluation
• For Part 1 (20 points), computing appropriate features for the task will fetch you points.
• For Part 2 (30 points), we will proportionately assign a score based on the number of characters detected. A character is considered detected if more than 50% of the character is covered by the bounding box.
• For Part 3 (50 points), the F1 measure will be used as the metric. The F1 measure is the harmonic mean of precision and recall, where precision is the number of true positives divided by the number you say are positive, and recall is the number of true positives divided by the number of positives that exist. You can assume that if you get an F1 measure > 0.6 you will get full credit. Note that "UNKNOWN" detections will not be counted towards the F1 score.
• Irrespective of the F1 score, using template matching without generating features will result in a maximum of 90 points for the project.
• We have provided "groundtruth.json" for the data in the "data" folder. You may use it to compute the F1 score with 'evaluate.py'. However, for the final evaluation we will use a similar but different test image and character group (i.e., English alphabets and numbers).

2 Project Guidelines and Submission
• Do not modify the code provided.
• All work should be your own. You are not permitted to copy code from the internet.
• You are free to use any opencv (cv2, version 3.4.5 ONLY) and numpy (np) functions for generating features. For other parts of your code (especially connected components and template matching), you cannot use any API provided by opencv (cv2) or numpy (np), except "np.sqrt()", "np.zeros()", "np.ones()", "np.multiply()", "np.divide()", "cv2.imread()", "cv2.imshow()", "cv2.imwrite()", "cv2.resize()", and any basic "cv2" and "np" APIs.
• You may use opencv version 3.4.5 ONLY, no other versions.
• Do not import any additional libraries (functions, modules, etc.) except native Python packages, e.g., pdb, os, sys.
• Compress the python files, i.e., "task1.py", the "data" folder, the features folder (from Part 1), "results.json", and "report.pdf", into a zip file; name it "UBID.zip" (replace "UBID" with your eight-digit UBID, e.g., 50305566) and upload it to UBLearns before the due date.
• Late-submission guidelines apply for this project.
• Not following the project guidelines may result in a penalty.
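The from-scratch pieces this project asks for can be sketched in plain Python. The following is an illustrative outline, not the official solution: the function names, the 4-connectivity choice, and the line_tol line-grouping tolerance are all my own assumptions.

```python
from collections import deque

def connected_components(binary, h, w):
    """4-connected component labeling via BFS flood fill.
    binary[y][x] is truthy for foreground (character) pixels.
    Returns one bounding box (x, y, width, height) per component."""
    labels = [[0] * w for _ in range(h)]
    boxes = []
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                q = deque([(y, x)])
                min_x = max_x = x
                min_y = max_y = y
                while q:
                    cy, cx = q.popleft()
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                boxes.append((min_x, min_y, max_x - min_x + 1, max_y - min_y + 1))
    return boxes

def reading_order(detections, line_tol=10):
    """Sort detections ({"bbox": [x, y, w, h], ...}) in English reading
    order. line_tol is a hypothetical tolerance: boxes whose top y values
    differ by at most line_tol are grouped into one text line."""
    dets = sorted(detections, key=lambda d: d["bbox"][1])
    lines, current = [], []
    for d in dets:
        if current and d["bbox"][1] - current[-1]["bbox"][1] > line_tol:
            lines.append(current)
            current = []
        current.append(d)
    if current:
        lines.append(current)
    ordered = []
    for line in lines:
        ordered.extend(sorted(line, key=lambda d: d["bbox"][0]))
    return ordered

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision tp/(tp+fp) and recall tp/(tp+fn)."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Binarizing the input (characters darker than the boundary-defined background) and the feature/matching design are left open, since the handout grades those choices.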


[SOLVED] CPS721: Assignment 5

CPS721: Assignment 5 PROLOG

Instructions: When you write your rules in PROLOG, you are NOT allowed to use ";" (disjunction), "!" (cut), "->" (if-then), or the different variants of equality and inequality ("\=", "==", "=\=", "=:=", etc.). You are only allowed to use ";" to get additional responses when interacting with PROLOG from the command line. Note that this is equivalent to using the "More" button in the ECLiPSe GUI. You are also not allowed to use built-in predicates that we did not cover in class, such as "findall", "setof", "bagof", etc. You may use "member" and "append".

We use ECLiPSe Prolog release 6 to mark the assignments. It is your responsibility to check that your code runs in ECLiPSe Prolog release 6. If you write your code on a Windows machine, make sure the files are saved as plain text and are readable on Linux machines. Ensure your PROLOG code does not contain any extra binary symbols. You can test this by ssh-ing onto the department servers (i.e., moon) and running ECLiPSe remotely from the command line.

Submission Instructions: You should submit ONE zip file called assignment5.zip. It should contain 4 files: robocup.pl, dishwashing.pl, robocup_report.txt, dishwashing_report.txt. All these files have been given to you, and you should fill them out using the format described. Ensure your names appear in all files, and that your answers appear in the file sections as requested. Do NOT edit or delete any lines that begin with "%%%%% SECTION:". These are used by the auto-grader to locate the relevant code. You will lose marks if you edit such lines. Your submission should not include any files other than those above. Do not submit .rar, .tar, .7zip, or any compression format aside from .zip, and do not submit multiple .zip files. If you do so, you will lose marks. All submissions should be made on D2L. Submissions by email will not be accepted.
As long as you submit the file with the name assignment5.zip, you may submit multiple times, as each submission overwrites the earlier one. You do not have to inform anyone if you do. The time stamp of the last submission will be used to determine the submission time.

READ THIS INFORMATION BEFORE BEGINNING THE ASSIGNMENT

In this assignment, you will be creating programs to solve several planning problems. For each, there are multiple provided files:
• A file containing the generic planner (this is shared between all problems). You should NOT edit this file.
• File(s) with initial state information. You should NOT edit these files, but we encourage you to create your own and use them for testing, as detailed below.
• The main submission file, in which you will define the axioms and declarative heuristics.
• A report file, in which you will document your results.

Below, we provide more information on each of these files. We also provide some information about how we will test your programs.

Main File
The main files for questions 1 and 2 are robocup.pl and dishwashing.pl, respectively. To run your programs you should load these files. They have been set up so that they will also load the additional files as needed. You will notice these files have several Prolog features that you haven't seen before. First, in the setup section, you will see rules of the form :- dynamic robotLoc/2. This line tells Prolog to allow the predicate robotLoc to be defined on different, non-consecutive lines in your program. This is necessary to allow the initial state of the planning problem to be stored in a different file than your axioms. You should NOT edit these sections of the files. The setup section also contains the rule :- [planner]. This line tells Prolog to load the file planner.pl, which contains the generic planner used for all problems. You should NOT edit the planner section.
You will also notice that a similar rule is used to load the initial state from a file in the init section. We describe this file in further detail below.

Next, you will see the goal_states section. Here, multiple different goals have been defined for you. Notice that the predicate is defined as goal_state(ID, S), where S is the situation and ID is the number of the corresponding goal. This is slightly different from the goal_state predicate introduced in class. This newer version allows you to easily jump between goals when testing your program, by calling solve_problem with a different goal ID. We describe this in detail below. You may find it useful to add additional goals for testing, especially when starting out. You can do so in this section. Please include this section in the submission, but its contents will be ignored.

The remaining sections then provide space to define your action precondition axioms, your successor-state axioms, and your declarative heuristics.

Planner File
The planner can be found in the file planner.pl. You should NOT edit this file, but you should look at it to understand how to run the planner. In it, you will see mostly familiar predicates, though with slight changes in some cases. The main predicate you will call to run the planner is solve_problem(Mode, GoalID, Bound, Plan). Just as in class, the Bound variable is a maximum on the length of the plan to find, and Plan is the found plan. The other two arguments are input arguments intended to simplify the process of testing your program. GoalID defines which goal to use. This allows you to easily test with the different goal conditions defined in the main files. The Mode argument allows you to specify which "mode" to run the planner in. If it is set to heuristic, the planner uses declarative heuristics, in the form of the useless predicate, to cut off search and avoid unnecessary work.
If it is set to regular, the declarative heuristics are ignored, and the standard reachable definition is used (i.e., without pruning). Notice that the reachable predicate has been modified accordingly to allow for these different usages. For example, if you call solve_problem(regular, 2, 5, Plan), you are asking the planner to find a plan of no more than 5 actions such that the goal with ID 2 is satisfied, and it should do so without using declarative heuristics for pruning. You should NOT submit the planner file as part of your submission, as we will automatically include it when testing. We will ignore your submitted file if you do.

Initial State File
For each problem, we have provided multiple initial state files, which identify the fluents and auxiliary predicates that hold in different initial situations. These are loaded in the init sections of the main files using a line like :- [dishwashingInit1]. To change between initial states, simply comment out the unused initial states and uncomment the one you want to use. Please include this section in the submission, but its contents will be ignored. Note that it is a good idea to restart ECLiPSe when switching initial states, to avoid having fluents for multiple initial states loaded at the same time.

Changing the initial state is a good way to test your program in a variety of situations. You can add your own by creating new files that use this format. You then merely need to modify the init section in the main files to load your desired initial state. This is an especially good idea when testing your preconditions and axioms or debugging your program. However, you should complete your final tests with the given initial states. You should NOT submit any initial state files as part of your submission.

Self-Testing Your Axioms
Debugging axioms can be challenging. Don't forget you can always directly check whether an action is possible by using poss as a query.
For example, to check that an action act is applicable in some situation given by [c, b, a], you can query poss(act, [c, b, a]). Similarly, if you want to check whether a fluent f(5, S) is true in the particular situation [c, b, a], you can query f(5, [c, b, a]). Doing so is a much more effective approach for debugging than just testing your axioms by calling solve_problem, as the latter gives very little feedback about why a failure is occurring. It is also one of the primary ways we will test your submissions.

The initial states and goals given are of varying difficulty. Some are not feasible to solve without declarative heuristics. Thus, it is useful to first confirm your program is working on the easier ones before even attempting the harder ones. To that end, you may find it useful to create your own initial states and goal states that will help you better understand whether your program is working.

Marking Your Program
We will be testing your programs in two ways. First, we will try your axioms on different initial states to ensure that they correctly compute preconditions and effects. Second, we will also run your complete planner to ensure the whole system works together. Importantly, we will run your program on DIFFERENT initial and goal states than those given. Thus, your declarative heuristics should not be specific to those provided. In other words, make sure your declarative heuristics are general enough to be applicable to solving any instance of the planning problem, with arbitrary constant names and different combinations of initial/goal states. We have tried to give different combinations of initial and goal states so you better understand the set of possible planning problems you will be tested on. More details are given on this topic in each of the questions. When grading your programs using solve_problem, we will not require you to get all solutions by calling "More" or ";".
You should try to get additional solutions when testing your own program, but we will not consider this for marking. Instead, we will take the plans output by your system and verify each using our solution, to ensure it is a valid plan and is the shortest of all possible plans. For your declarative heuristics, you don't need to get the absolute fastest performance possible. Full marks will be awarded if a reasonable speedup is seen and the returned plans are still valid (and optimal). Different parts of the assignment may also be marked manually.

The Robocup Problem [45 marks]
We have already looked at variants of the robocup problem on previous assignments, largely to determine who can score or who can pass to whom. In this assignment, we will consider using planning to identify how to satisfy given objectives that a team may have while playing a game. Here, we are assuming you control all the robots, so you are planning for all of them collectively. We are also making several simplifying assumptions, but they are different from those considered on previous assignments. In particular, we represent the field as a grid, such that there can be only one robot at any location at one time. There will be opponent robots at certain locations in the grid, but to keep things simple, they will not move. Robots can move, pass, and shoot, but only one robot can perform one of these actions at any time (i.e., robots do not move concurrently).

Figure 1: An example robocup scenario.

An example initial state is shown in the figure above. In this example, the field consists of 5 rows and 5 columns; there are five robots (r1, r2, r3, r4, and r5); there are five opponents (shown as x's); and robot r1 currently has the ball. Note, we have given this file to you as robocupInit2.pl. Let us begin by defining the predicates that will be used to describe states of this environment.
We begin with the fluents:
• robotLoc(Robot, Row, Col, S): Robot is at row Row and column Col in situation S.
• hasBall(Robot, S): Robot has the ball in situation S.
• scored(S): the ball is in the net in situation S.

We will also have the following auxiliary predicates, which do not have a situation argument:
• numCols(X) defines that the number of columns of the field is X.
• numRows(X) defines that the number of rows of the field is X.
• goalCol(X) defines that the net is in column X.
• opponentAt(Row, Col) defines that there is an opponent at the location Row, Col.
• robot(R) defines that R is a robot.

Facts based on these auxiliary predicates are included in the initial state file. See robocupInit1.pl and robocupInit2.pl as examples. We can now define our actions as follows:
• move(Robot, Row1, Col1, Row2, Col2): Robot moves from the location Row1, Col1 to Row2, Col2. Robots can only move one row or column at a time, and cannot move diagonally. They also cannot move into a location where there is another robot or an opponent. A robot does not need to have the ball to move. For example, in Figure 1, r5 can move to row 3, column 1 in a single step, but it cannot move to row 3, column 0 or row 4, column 1 in a single step. Note that a robot cannot move off the field. A robot also cannot move to the location it is already at (i.e., move(R, 1, 2, 1, 2) is not a valid action).
• pass(Robot1, Robot2): Robot1 passes the ball to Robot2. In order to do so, Robot1 must have the ball. They can pass the ball any number of rows or columns in the vertical or horizontal directions, but they cannot pass diagonally. A robot may not pass the ball through a location where there is an opponent robot, but they MAY pass the ball through a grid location where there is a teammate robot (i.e., the teammate just lets it pass by). After a pass, Robot1 no longer has the ball, and Robot2 has the ball.
For example, in Figure 1, r1 can pass the ball to r2 or r3, but not to r4 (no diagonal passes) or r5 (an opponent is in the way).
• shoot(Robot): Robot shoots the ball at the net (and scores). The robot must have the ball to shoot it. A robot can only shoot the ball if it is in the same column as the net (i.e., no diagonal shots). It can shoot over any number of locations as long as there are no opponents in the way. As with passes, teammates do not get in the way of a shot. The robot no longer has the ball after shooting it. For example, in Figure 1, a robot could shoot (and score) from row 2, column 2, as well as row 3, column 2, and row 4, column 2. There are no other locations in the figure from which a robot can shoot. Whenever a robot shoots, it scores.

For this problem, you will write the precondition and successor-state axioms, as well as declarative heuristics. Notice that we have given you 6 goals for the first initial state and two for the second. For all of these, you should manually solve the problem yourself so you know what the optimal solution is. In addition, the following will hold for a working solution:
• Goals 11-14 should only take a few seconds to solve.
• Goal 15 will take roughly 10-30 seconds in regular mode.
• Goal 16 is only solvable in a reasonable time with declarative heuristics.
• Goal 21 (for initial state 2) may take 1-3 minutes to solve in regular mode.
• Goal 22 is only solvable in a reasonable time with declarative heuristics.

You should now complete the following tasks:

(a) [18 marks] Write precondition axioms for all actions. You should put your precondition axioms and any helpers for them in the precondition_axioms_robocup section. Recall that, to avoid potential problems with negation in Prolog, you should not start the bodies of your rules with negated predicates.
In addition, make sure that all variables in a predicate are instantiated by constants before you apply negation to the predicate that mentions these variables. Finally, remember that your precondition axioms should only capture whether you can apply an action, not whether you should apply it. Just let the planner decide whether it should apply the action, or add such information as declarative heuristics, as required for part (c). HINT: You may want to introduce helper predicates to more easily check some of the action preconditions.

(b) [12 marks] Write successor-state axioms that characterize how the truth value of each fluent changes from the current situation S to the next situation [A | S]. These should be added to the section successor_state_axioms_robocup. Recall that you will need two types of rules for each fluent:
• rules that characterize when a fluent becomes true in the next situation as a result of the last action;
• rules that characterize when a fluent remains true in the next situation, unless the most recent action changes it to false.

Note that when you write successor-state axioms, you may sometimes start the bodies of rules with the negation of a predicate, e.g., with negation of equality. This can help your program work more efficiently.

(c) [10 marks] You will now write declarative heuristics that the planner can use for pruning when run in "heuristic" mode. Recall that the predicate useless(A, ListOfActions) is true if an action A is useless given the list of previously performed actions. This predicate provides (domain-dependent) declarative heuristic information about the planning problems that your program solves. The more inventive you are when you implement this predicate, the less search will be required to solve the planning problems. However, any implementation of the rules that define this predicate should not use any information related to the specific initial or goal situations.
Your rules should be general enough to work with any of the initial and goal states, as well as similar ones involving specifying where the robots or the ball should end up. When you write the rules that define this predicate, use common-sense properties of the application domain. Write your rules for the predicate useless in the file robocup.pl, in the section called declarative_heuristics_robocup. You should include at least 5 declarative heuristics. Put a comment beside each declarative heuristic explaining what it is doing.

We note that many of the declarative heuristics considered in class worked by avoiding sequences of actions that go back and forth between two states. For example, this was done when avoiding doing climbOnBox immediately after climbOffBox in the bananas problem from the lectures. There are some such heuristics possible for this domain (and you should implement them), but not many. Instead, you should consider other types of useless actions. This can include pruning actions that shouldn't be done more than once, or eliminating some of the "symmetries" that arise in this problem from the fact that certain sets of actions can be done in a different order and still end up in the same state. For example, if we do pass(r1, r2) and then move(r3, 1, 1, 2, 1) in situation S, the resulting state is the same as first doing move(r3, 1, 1, 2, 1) and then doing pass(r1, r2). This is because these actions are independent, in the sense that they have no effect on each other's preconditions or effects. Using declarative heuristics to prune such cases may not allow the planner to find all possible plans when calling "More" or ";", but it can dramatically speed up the planning process when searching for a single solution. You are encouraged to use such declarative heuristics for this problem.
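To make the expected style concrete, here is a purely illustrative fragment, NOT from the handout and not a solution: one possible precondition axiom, a pair of successor-state rules, and one useless rule, written in the situation-calculus style used in class. clear_column/3 is a hypothetical helper, and all signatures must match the predicates in your own files.

```prolog
% Illustrative sketch only. clear_column(Row, Col, S) (no opponents
% between the robot and the net) is a hypothetical helper predicate.
poss(shoot(R), S) :-
    robot(R), hasBall(R, S),
    robotLoc(R, Row, Col, S),
    goalCol(Col),
    clear_column(Row, Col, S).

% hasBall becomes true for the receiver of a pass ...
hasBall(R, [pass(From, R) | _S]) :- robot(From).
% ... and persists unless the robot passes or shoots the ball away.
hasBall(R, [A | S]) :-
    hasBall(R, S),
    not A = pass(R, _),
    not A = shoot(R).

% Declarative heuristic: a move that exactly undoes the immediately
% preceding move is useless (the climbOnBox/climbOffBox pattern).
useless(move(R, Row2, Col2, Row1, Col1),
        [move(R, Row1, Col1, Row2, Col2) | _]).
```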
However, you should ensure your heuristics do not make it impossible to find solutions (or prune optimal solutions) for the types of initial state/goal state pairs we have given you.

TIPS: It is generally best to ensure your axioms work correctly in regular mode before testing your declarative heuristics. This will help you debug issues by isolating which part of your program is causing them. When working on your declarative heuristics, it is often best to first focus on problems of "medium difficulty" (i.e., those that take 10-30 seconds) in regular mode. Such tasks take long enough that you can see appreciable gains from the declarative heuristics, while not taking so long that it is hard to test out different heuristics. Once you make gains there, you can test your heuristics (and possibly add more complex heuristics) on harder problems.

(d) [5 marks] Document the results of testing your planner (i.e., calling solve_problem) on this problem, and put them in the file robocup_tests.txt. Fill out the sections with the following details:
• cpu_details: include information about the processor (mainly speed), the amount of RAM, and the operating system you ran your tests on.
• summary: summarize your results in 5-10 sentences. In particular, describe which states you tested on, your timing results, and how much speedup you saw when using declarative heuristics. Report any other interesting behaviour you saw.
• log: show the log of your tests (i.e., copy the interaction), including the runtime and the output plan, when using both the regular and heuristic modes. You do not have to find more than one plan per problem (though you may want to do that yourself when testing).

Your tests should be performed on at least goals 11, 12, 13, 14, 15, and 21. You may wish to include tests on other goals as well, but this is not required.

Dishwashing [55 marks]
In the future, people may have household robots to help them with chores.
For this question, you will build a planning system for dishwashing that could be used by such a robot. We consider a simple version of this problem, described as follows. A robot with two arms is standing by the sink. There are plates and glasses on the counter. The robot should pick up the dirty dishes, wash them, and put them in the dish rack to dry. The robot has two utensils (called scrubbers) it can use for cleaning the dishes: a sponge and a brush. Plates can only be cleaned with a sponge, and glasses can only be cleaned with a brush. To clean a dirty dish, it should be scrubbed with a soapy scrubber and then rinsed to get rid of the soap and dirt. Note that we are assuming a scrubber can be used to scrub an arbitrary number of dishes without needing more soap or becoming dirty.

To describe this planning problem, we will use the following auxiliary predicates:
• place(X) states that X is a place that items can be located at. X will be either counter or dish_rack.
• scrubber(X) states that X is a utensil for cleaning dishes. X will be either a sponge or a brush.
• glassware(X) states that X is glassware (i.e., a glass cup or glass bottle).
• plate(X) states that X is a plate.
• dish(X) states that X is a plate or glassware.
• item(X) states that X is an item the robot can hold. This includes glassware, plates, and scrubbers.

The definitions of place, scrubber, and item are given in the section aux_dishwashing in dishwashing.pl. Do not change this section. Notice that glassware and plate are used to assign types to the objects in the initial state files, while dish and item are useful predicates that can be used to refer to different groups of objects.

The fluents in this problem are as follows:
• holding(X, S) holds if the robot is holding item X in S.
• numHolding(C, S) holds if the robot is holding C different items in S. Since the robot has 2 hands, this is at most 2 items.
• faucetOn(S) holds if the faucet is on in S.
• loc(X, P, S) holds if the location of item X in situation S is P. Here, P is a place. If the robot is holding X in S, then no such loc fluent will hold for X in S.
• wet(X, S) holds if the item X is wet in S.
• dirty(X, S) holds if the item X is dirty in S. Recall that scrubbers never get dirty.
• soapy(X, S) holds if the scrubber X is soapy in S.

We can now define our actions as follows:
• pickUp(X, P): picks up item X from place P. After applying this action, X will no longer be at P and it will be held by the robot. Note that the robot must not already be holding two items to pick up another.
• putDown(X, P): puts down item X at place P. Only applicable if X is being held by the robot. After applying this action, X is no longer held by the robot.
• turnOnFaucet: turns on the faucet. This action is only applicable if the robot has a free hand (i.e., it is holding at most one item) to turn on the faucet.
• turnOffFaucet: turns off the faucet. This action is only applicable if the robot has a free hand to turn off the faucet.
• addSoap(X): adds soap to X. This action is only applicable if X is a scrubber that is held by the robot. The robot must have a free hand to add soap to the scrubber. The scrubber will be soapy after adding soap.
• scrub(X, Y): scrubs dish X using scrubber Y. Both X and Y must be held by the robot to apply this action. If X is glassware, it can only be scrubbed with a brush. If X is a plate, it can only be scrubbed with a sponge. If Y is soapy, then X will be soapy after it is scrubbed. If X is a dirty dish, it will remain dirty after it is scrubbed (i.e., the food stays on it until the dirt and soap are rinsed off). Note that scrubbing a dish with a scrubber that has no soap on it has no effect on the dish.
• rinse(X): rinses item X. The faucet must be on, and X must currently be held by the robot, in order to rinse it. If X is a scrubber that is soapy, it will no longer be soapy after rinsing. If X is a dish that is both soapy and dirty (i.e.,
because it was scrubbed using a soapy scrubber), then it will be clean and not soapy after being rinsed. In all cases, an item will be wet after being rinsed. As in the first question, you will write the precondition and successor-state axioms for this problem, as well as declarative heuristics. We have provided 3 initial states and a variety of goal states for testing:
● Goal states 11, 12, and 13 for initial state 1 have optimal solution lengths of 2, 4, and 6 steps, respectively. These should all be solved in a matter of seconds.
● Goal state 14 for initial state 1 has a solution length of 8. It will take 10-30 seconds in regular mode.
● Goal state 15 for initial state 1 has a solution length of 10. It will take 30-120 seconds in heuristic mode, and quite a while in regular mode.
● Goal state 21 for initial state 2 has a solution length of 6, and will take 10-40 seconds in regular mode to solve.
● Goal state 22 for initial state 2 takes 11 steps, and will take 5-15 minutes when using declarative heuristics.
● Goal state 31 for initial state 3 takes 10 steps and will take 2-10 minutes with declarative heuristics.
It is a good idea to start by convincing yourself that these are the actual plan lengths (at least for the easy problems) before starting to write your axioms. This will help ensure you fully understand the tasks given. You should now complete the following tasks:
[15 marks] Write precondition axioms for all actions in your domain in the precondition_axioms_dishwashing section of dishwashing.pl. See the description of part (a) of Question 1 for useful suggestions when defining your preconditions.
[20 marks] Write successor-state axioms in the successor_state_axioms_dishwashing section of dishwashing.pl. See the description of part (b) of Question 1 for useful suggestions when defining your successor-state axioms.
[15 marks] You will now write declarative heuristics for the planner to use when solving this problem in "heuristic" mode. You should write at least 10 rules.
When creating your heuristics, you can assume that the initial and goal states they will be used on will be "reasonable". That is, they should allow your planner to handle problems that involve getting all (or some subset) of a given set of dishes cleaned and into the dish rack. The initial and goal states will never include states that would not be helpful to achieving this objective. For example, you can assume that the dishes on the counter always start dirty, you will never need to put dirty or soapy dishes in the dish rack, clean and wet dishes should always end up in the dish rack, and soapy and wet dishes should never be put on the counter. However, your declarative heuristics should never result in the planner finding suboptimal solutions. You may wish to consider heuristics of different types, such as those discussed in part (c) of Question 1. There are also other possible types of heuristics; try to be creative about what you can do with the useless predicate. You are not obligated to cover all the different kinds of heuristics, only to include at least 10 rules. It is also worth testing them individually (by removing and adding them) to see if they are actually helping at all. Please see part (c) of Question 1 for further tips on generating declarative heuristics. Remember to put a comment beside each declarative heuristic indicating what it is doing. [5 marks] Document the results of testing your planner (i.e. calling solve_problem) on this problem and put them in the file dishwashing_tests.txt. Please see part (d) of Question 1 for full details on the expected documentation. Your tests should be performed on at least goals 11, 12, 13, 14, and 21. You may wish to include tests on other goals as well, but this is not required.
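The required deliverables here are Prolog axioms in dishwashing.pl, but the precondition/effect pattern the actions follow can be illustrated in plain Python. The sketch below is my own analogue of the pickUp(X, P) action only; the state representation and function names are assumptions for illustration, not part of the assignment.

```python
# Illustrative Python analogue of the pickUp(X, P) action from the
# dishwashing domain. The assignment itself requires Prolog axioms in
# dishwashing.pl; this sketch only mirrors the described precondition
# (robot holds fewer than 2 items, X is at place P) and effects
# (X becomes held and no loc fluent holds for X any more).

def can_pick_up(state, x, p):
    """Precondition: fewer than two items held, and item x is at place p."""
    return len(state["holding"]) < 2 and state["loc"].get(x) == p

def pick_up(state, x, p):
    """Effects: x is held; x no longer has a location."""
    assert can_pick_up(state, x, p)
    state["holding"].add(x)
    del state["loc"][x]
    return state

state = {"holding": set(), "loc": {"plate1": "counter", "sponge": "counter"}}
pick_up(state, "plate1", "counter")
pick_up(state, "sponge", "counter")
# Both hands are now full, so a third pickUp is not applicable:
assert not can_pick_up(state, "glass1", "counter")
```

The same applicability-check-plus-state-update shape carries over to each of the other actions when you write the Prolog precondition and successor-state axioms.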


[SOLVED] MATHS 7107 Data Taming Java

MATHS 7107 Data Taming Primary Examination Written Questions

1. A committee has been established to organise 200-year anniversary celebrations in each Australian capital city. A large statue has been built and needs to be moved to each city for the celebration. The following tibble contains some of the information that the committee has gathered. The variables in the table are:
● Name: the name of the state or territory
● population: the number of people residing in the state
● urban pop: the percentage of the state's population who live in the state's capital city
● Capital city: the name of the state's capital city
● Year: the year the city became the state's capital city, which is used for planning where the statue needs to be.
(a) For each of the variables in the dataset, identify the type of variable, i.e. is it quantitative continuous, quantitative discrete, categorical nominal, or categorical ordinal. Make sure you write a short description justifying your choice.
(b) For each variable, state whether the corresponding column in the tibble is the correct data type. If it is incorrect, say what the correct data type should be.

[Figure 1: Log-Log plot of data for Question 2 — log(y) against log(x).]

2. (a) Assume that a data set consists of continuous variables x and y, which are related by the formula y = αx^k. Using the logarithm laws, show that the Log-Log plot of this data will be a straight line.
(b) Using the straight line in part (a), write down: i) the gradient of the line; ii) the value at which the line cuts the vertical axis.
(c) Using the Log-Log plot in Figure 1, determine the explicit relationship between x and y.

x1  x2  x3  x4  x5
 1   0   2  -1   3

Table 1: Dataset for Questions 3 and 4.

3. For this question, use the dataset in Table 1.
(a) Transform the dataset in Table 1 to x* by applying Min-Max scaling. Calculate your answers to 2 decimal places.
(b) What are the minimum and maximum values of the transformed dataset x*? Give exact values for your answers.

4. For this question, use the dataset in Table 1.
(a) Find the mean x̄ and (sample) standard deviation σx of the dataset. Calculate your answers to 2 decimal places.
(b) Standardise the dataset by calculating the z-score xj* for each xj. Calculate your answers to 2 decimal places.
(c) What are the mean x̄* and standard deviation σx* of the transformed dataset? Give exact values for your answers.

5. Show that when x > 0 the Box-Cox transformation is continuous at λ = 0, by showing that lim_{λ→0} (x^λ − 1)/λ = ln(x).

6. We are looking to measure the amount of pollen in the air. We have built 31 cubic boxes, each one with a pollen counter in it. The side length of each box is recorded in the list (s1, ..., s31), where the side lengths are measured in metres. We collect pollen counts from the air samples inside each box, and record the pollen count in box j as pj. We expect to find a good linear fit with the model

pj = β0 + β1·zj + ϵj,    ϵj ~ N(0, σ)    (1)

where zj = sj^3. We use R to fit our linear model to the data and we obtain the following output:

> summary(pollen_lm)

Call:
lm(formula = p ~ z, data = pollen)

Residuals:
     Min       1Q   Median       3Q      Max
 -22.195   -6.719   -0.049    6.826   20.877

Coefficients:
            Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  6.23517     3.59069    1.736    0.0931 .
z            1.31177     0.05785   22.675
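As a quick numeric sanity check for Questions 3 and 4 (the exam itself expects worked written answers, not code), the Table 1 dataset can be transformed with a few lines of Python:

```python
# Numeric check for Questions 3 and 4 on the Table 1 dataset
# x = (1, 0, 2, -1, 3): Min-Max scaling, sample sd, and z-scores.
import statistics

x = [1, 0, 2, -1, 3]

# Q3: Min-Max scaling x* = (x - min) / (max - min)
lo, hi = min(x), max(x)
scaled = [round((v - lo) / (hi - lo), 2) for v in x]
# scaled == [0.5, 0.25, 0.75, 0.0, 1.0]; min is 0 and max is 1.

# Q4: mean, sample standard deviation, and z-scores
mean = statistics.mean(x)        # equals 1
sd = statistics.stdev(x)         # sample sd, about 1.58 to 2 d.p.
z = [round((v - mean) / sd, 2) for v in x]
# z == [0.0, -0.63, 0.63, -1.26, 1.26]; the z-scores have mean 0 and sd 1.
```

Note that `statistics.stdev` is the sample standard deviation (n − 1 denominator), which is what part 4(a) asks for; `statistics.pstdev` would give the population version.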


[SOLVED] Enterprise data analysis management Matlab

The Excel File
To submit your Excel assignment file, please upload an "Excel Macro-Enabled Workbook" to Avenue. You can select "Save As" and from the file type list choose Excel Macro-Enabled Workbook (*.xlsm).

The PDF File
This memo document should include your findings, in addition to any screenshots of charts and tables, as instructed in the assignment tasks.

NOTE: Save ALL the pivot tables independently so we can match them to the answers in the memo.

Background
You are hired by an online office store in the US to analyze sales data by reviewing and evaluating the order and shipment details. You are supposed to use the advanced Excel skills you noted on your CV to help them with these analyses. You must analyze and report on data for all the regions (i.e., Central, East, South, West) that received orders from different customer segments, including consumer, corporate, and home office. These filtered data have been provided to you in the worksheet titled "Sales Data". Management is interested in the regions, customer segments, product categories, and subcategories that are most problematic in terms of profit, shipping, and preparation. Management would also like to know which product subcategories have the best and worst gross profit margins. You know that using pivot tables and macros can help you manage the large volume of data you have been given, and upon which you are asked to perform the following specific tasks. For parts 1 to 3, create the appropriate pivot tables and pivot charts and include your report in the memo file to the management.

Part 1
Task 1.1: The online store is interested in having categorized information about the total sales of each region from 2013 to 2016 (i.e., sales, region, orderdate_year, and category). Therefore, you are asked to provide the management with a PivotTable and a PivotChart (line chart) that include total sales of product categories. Present your work on a separate worksheet labeled "Q1".
Task 1.2: Include a synopsis of your findings and trends in your memo to management.

Part 2
Task 2.1: Insert five PivotTables to calculate the sum of sales, quantity, and profit of the products, along with the average discount and preparation time for all the regions. Next, insert three PivotCharts of type "Pie Chart" for the totals of sales, quantity, and profit, and two PivotCharts of type "Clustered Bar" for the average discount and preparation time. Present your work on separate worksheets labeled "Q2.1-Q2.5". Then, in a different worksheet entitled "Dashboard", put all the PivotCharts and apply visual filtering for subcategories to create an interactive dashboard with which you can compare the values in the charts in terms of product subcategories.
Task 2.2: Try to find important or interesting information for each item (sales, quantity, profit, discount, and preparation time) in terms of different subcategories in the four regions. Then include a synopsis of your findings in your memo to the management.

Part 3
Task 3.1: The company wants to evaluate the average gross profit margin for each month and region from 2013 to 2016. You realize that you can provide management with a pivot table showing the gross profit margin attributable to the sales. Using this PivotTable, determine which month in each region for each year has the highest gross profit margin and highlight them with the help of conditional formatting.
Task 3.2: Provide your interpretation of the results in the memo to management. You may select either region or year to interpret your findings.

Note: To perform the tasks of this part, you need to create a new field (i.e., gross profit margin). Make sure that you create it in your PivotTable and not in the original dataset (i.e., the Sales Data worksheet). Be sure to label your worksheet "Q3".
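These tasks are meant to be done with Excel PivotTables, but the Part 3 calculated field is just a ratio of aggregates: gross profit margin = sum(profit) / sum(sales) per (region, month), computed on the totals rather than row by row. The sketch below illustrates that arithmetic; the sample rows and column layout are invented for illustration and are not from the assignment's dataset.

```python
# Illustration of the Part 3 calculated field: gross profit margin is
# the ratio of *aggregated* profit to *aggregated* sales per group,
# which is why it must be a PivotTable calculated field rather than a
# per-row column that gets averaged. Sample rows are invented.
from collections import defaultdict

rows = [
    # (region, year-month, sales, profit)
    ("Central", "2013-01", 1000.0, 150.0),
    ("Central", "2013-01",  400.0,  90.0),
    ("East",    "2013-01",  800.0, -40.0),
]

sales = defaultdict(float)
profit = defaultdict(float)
for region, month, s, p in rows:
    sales[(region, month)] += s
    profit[(region, month)] += p

margin = {k: profit[k] / sales[k] for k in sales}
# Central 2013-01: (150 + 90) / (1000 + 400) ≈ 0.171 (17.1%)
# East    2013-01: -40 / 800 = -0.05 (a loss)
```

Averaging per-row margins would weight a $4 order the same as a $4,000 one; the ratio-of-sums form above matches what an Excel calculated field `= Profit / Sales` computes on the pivot's grouped totals.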


[SOLVED] COMP3161/COMP9161 Concepts of Programming Languages Session 2 2014 R

COMP3161/COMP9161 Concepts of Programming Languages Sample Exam Solutions Session 2 2014

Question 1 [25 Marks]
Consider the following inductive definition of evaluation rules for a restricted form of boolean expressions. Boolean expressions: Evaluation rules:
A) [2 marks] Give the derivation of the evaluation for the following expression:
• (And (Not False) (And True (Not True)))
B) [3 marks] Are the rules unambiguous? If so, briefly explain why. If not, give an example expression for which the set of rules allows more than a single derivation.
C) [4 marks] The rules listed above give a small step semantics. List the inference rules which specify an equivalent big step semantics.
D) [16 marks] Give a single step semantics of this language with an explicit control stack, adapting the C-machine discussed in the lecture. Start by
i) (3 marks) defining a term representation for a control stack frame.
ii) (3 marks) defining a term representation for a control stack.
iii) (2 marks) describing what the initial and final states of the machine look like.
iv) (8 marks) listing the evaluation rules. Remember, each of the evaluation rules has to be an axiom.

Question 2 [25 Marks]
A) [10 marks] In the lecture, we discussed the E-machine as an example of an abstract machine which handles value bindings explicitly by maintaining a value environment. Among the possible return values of the E-machine are function closures.
i) What is a function closure?
ii) Give an example of an expression whose evaluation in the E-machine requires the creation of a closure.
B) [15 marks] We discussed two distinct methods to handle exceptions: the first method required that, when an exception is thrown, the evaluation unrolls the stack until the matching catch-expression is found. The second method made it possible to directly jump to the matching catch-expression. Describe the second method:
i) What are the components of the state of the abstract machine?
ii) How does the state of the machine change when a catch-expression is evaluated?
iii) How does the state of the machine change when a raise-expression is evaluated?
For (ii) and (iii), you do not have to give the exact transition rule – it is sufficient to describe how the state is affected.

Question 3 [25 Marks]
A) [6 marks] For each of the following three pairs of type expressions, determine whether the pair has a most general unifier. If so, please provide it.
i) (a, b) → (b, a) and (Int, c) → (c, c)
ii) a → (a, a) and (b, b) → b
iii) Int → Int and Float → Int
B) [9 marks] Give the principal type of the following (polymorphic) MinML expressions:
i) (Inr (Inl True))
ii) letfun f x = fst (snd x) end
iii) letfun g x = case x of
       Inl a -> a
       Inr b -> b
     end end
C) [10 marks] What is the difference between the function type ∀a.(a, a) → a and the function type ∃a.(a, a) → a? Assume g : ∀a.(a, a) → a and f : ∃a.(a, a) → a. Give an example each (if it exists) of a concrete value v such that g(v) is type correct, and a value w such that f(w) is type correct.

Question 4 [25 Marks]
A) [5 marks] Progress and preservation are central concepts for strongly typed languages.
i) Give the definition of progress and of preservation in the context of a strongly typed language.
ii) The presence of partial functions can be problematic with respect to progress. Describe how they can be handled in a strongly typed language such that both progress and preservation still hold.
B) [5 marks] Briefly describe the difference between parametric and ad-hoc polymorphism, and give an example function for each.
C) [5 marks] Give an example each of a type constructor which is covariant and a type constructor which is contravariant in at least one of its argument positions.
D) [5 marks] Why is it important what the variance of a constructor is? Give an example of what can go wrong if a language designer/implementor gets it wrong.
E) [5 marks] In the lecture, we discussed the Software Transactional Memory (STM) approach to controlling concurrent access to shared data.
i) In contrast to semaphores, STM is said to be an optimistic programming model for controlling concurrent access to shared data. Why?
ii) How does the type system in Haskell ensure that STM actions are not applied outside of an atomic block?
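This is a pen-and-paper exam, but part A) of Question 3 can be sanity-checked mechanically. The sketch below is a generic first-order unification routine with an occurs check; the term representation (all-lowercase strings as type variables, tuples as type constructors, other strings as base types) is my own convention for illustration, not from the course.

```python
# First-order unification sketch (with occurs check) to sanity-check
# Question 3 A). Representation: all-lowercase strings are type
# variables; tuples like ('fun', t1, t2) / ('pair', t1, t2) are type
# constructors; capitalised strings ('Int', 'Float') are base types.

def walk(t, s):
    """Follow the substitution s while t is a bound variable."""
    while isinstance(t, str) and t.islower() and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s):
    """Return an extended substitution, or None if no unifier exists."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str) and t1.islower():
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str) and t2.islower():
        return unify(t2, t1, s)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# i)  (a, b) -> (b, a)  vs  (Int, c) -> (c, c): unifies with a = b = c = Int
s1 = unify(('fun', ('pair', 'a', 'b'), ('pair', 'b', 'a')),
           ('fun', ('pair', 'Int', 'c'), ('pair', 'c', 'c')), {})
assert s1 is not None and walk('a', s1) == walk('b', s1) == 'Int'

# ii) a -> (a, a)  vs  (b, b) -> b: no unifier (fails the occurs check)
assert unify(('fun', 'a', ('pair', 'a', 'a')),
             ('fun', ('pair', 'b', 'b'), 'b'), {}) is None

# iii) Int -> Int  vs  Float -> Int: no unifier (Int and Float clash)
assert unify(('fun', 'Int', 'Int'), ('fun', 'Float', 'Int'), {}) is None
```

Without the occurs check, pair (ii) would "unify" to an infinite type, which is exactly the failure mode the exam question is probing.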


[SOLVED] EEE225 Semiconductors for Electronics and Devices Problem Sheet 2 Web

EEE225 Semiconductors for Electronics and Devices Problem Sheet 2

1. Germanium has a band gap of 0.7 eV and, for pure material, the Fermi level lies near the middle of the gap. What is the probability that a state in the upper band is occupied at 0 K, 300 K, and 500 K? Explain the physical significance of these results. What other property of the band structure, other than the probability of occupancy of states, influences the electrical properties of the material?

2. A thermistor made from intrinsic silicon is to be used to control the current surge in a projector when it is switched on. The thermistor has a resistance of 100 Ω at room temperature (17 °C). When it is connected in series with the projector lamp, at what temperature will its resistance fall to 1.0 Ω? Assume that the energy gap for silicon is 1.08 eV and that the carrier mobilities do not vary appreciably over the operating range of temperatures. Comment on the result.

3. Intrinsic germanium has a gap energy of 0.72 eV and its conductivity is 2.13 S m^-1 at 300 K. What is its conductivity at 400 K? Comment on the result. Would the conductivities at each of these temperatures be changed if the semiconductor received radiation of wavelength (a) 1 μm and (b) 2 μm?

4. A particular semiconductor, which is initially intrinsic with an energy gap, Eg, of 1.1 eV, is doped very slightly n-type, in such a manner that the Fermi level is displaced by 10% of the gap energy from its intrinsic position. Compare the conductivities at 20 °C before and after doping, assuming carrier mobilities of μh = 0.05 m^2 V^-1 s^-1 and μe = 0.13 m^2 V^-1 s^-1, commenting on the result.

6. In a certain semiconductor, the ratio of electron mobility, μe, to hole mobility, μh, is equal to 10, the number density of free holes is p = 10^20 m^-3, and the number density of free electrons is n = 10^19 m^-3. The measured conductivity is 0.455 S m^-1. Calculate the mobilities.
7. A sample of germanium is doped with 10^20 donor atoms per cubic metre and 7 × 10^19 acceptor atoms per cubic metre. At the same temperature as the sample, the resistivity of intrinsic germanium is 0.6 Ω m. Find the total current density when an electric field of 200 V m^-1 is applied. The electron and hole mobilities in germanium may be assumed to be 0.38 m^2 V^-1 s^-1 and 0.18 m^2 V^-1 s^-1 respectively.

9. A 10 mm × 10 mm × 10 mm cube of silicon at room temperature has 10^19 m^-3 of gallium (p-type) impurities and 1.5 × 10^19 m^-3 of arsenic (n-type) impurities in the material. Determine the resistance of the cube between any two faces, assuming: ni = 1.5 × 10^16 m^-3; μe = 0.12 m^2 V^-1 s^-1; and μh = 0.05 m^2 V^-1 s^-1.
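For Problem 1, the occupancy probability follows the Fermi-Dirac distribution f(E) = 1/(exp((E − EF)/kT) + 1). A quick numeric check, assuming (as the problem states) a mid-gap Fermi level so the upper-band state sits 0.35 eV above EF:

```python
# Fermi-Dirac occupancy f(E) = 1 / (exp((E - EF)/kT) + 1) for a state
# 0.35 eV above the Fermi level (half the 0.7 eV germanium gap).
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fermi(delta_e_ev, temp_k):
    if temp_k == 0:
        # At 0 K the distribution is a step: states above EF are empty,
        # states below are full.
        return 0.0 if delta_e_ev > 0 else 1.0
    return 1.0 / (math.exp(delta_e_ev / (K_B * temp_k)) + 1.0)

for t in (0, 300, 500):
    print(t, "K:", fermi(0.35, t))
# Roughly: 0 at 0 K, ~1.3e-6 at 300 K, ~3.0e-4 at 500 K — the occupancy
# (and hence intrinsic conductivity) rises very steeply with temperature.
```

The exponential sensitivity shown here is the physical point of the question, and it also underlies the thermistor behaviour in Problem 2.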


[SOLVED] COMP0015 2024-2025 Term 1 Coursework Python

Vampire Hunting
COMP0015 2024-2025 Term 1 Coursework – 60% of the module

This document explains the arrangements for the coursework. You will create an application that analyses a dataset to determine how a vampire infiltration spreads, based on testing and contact tracing data. This document is fairly lengthy; do not be deterred by this. The coursework has been carefully designed so that you can complete it part by part and know that you have the correct functionality at each point. Each part is described in its own section. Strong suggestion: start your coursework as early as possible to give yourself time to resolve issues you encounter.

Deadline
December 13, 2024 at 16:00 (4pm – UK time).

How to submit your work
Submit your contact.py file at the assignment link on Moodle. Do not submit any other files. Do not upload a folder containing your files, because this can cause compatibility issues for the marking team. You must ensure that your program works properly on your own computer before you submit the code. Important: make sure your student number (not your name) is included in the comments at the top of your program.

Testing
You are responsible for testing your program carefully. Make sure that you have thought about all the things that can go wrong, and test your program to ensure that you know it works correctly in all circumstances. However, as an aid, we have developed a web-based testing service. We strongly encourage you to take advantage of this service. More details can be found in Appendix 1; the following points are key:
1. The tool may be extremely helpful—in previous terms, students in aggregate used the service 5,000+ times.
2. Nevertheless, the tool does not provide any guarantee of a final grade.
a. Your final submission will be tested on additional datasets, which can change scores.
b. Your final grade will include marks for comments/style, which can change scores.
c.
We reserve the right to modify your grade after manual inspection. For example, attempts to "trick" the autograder will result in zero marks.
d. If your code does not work with the autograder, we may attempt to modify it to award a non-zero grade. If successful, we will deduct some marks (typically 10-20%, although we make no guarantees). Accordingly, please make sure your code works properly with the autograder before submitting.
3. We do not guarantee that the testing service will always be available (e.g., the server may crash).
4. The testing service tests each part individually. If you get stuck on one section but can get something working for a later section, then you may still be able to get marks for that later section, even if running it on your own machine (which will use the pre-defined main() function from the template) doesn't work.
5. The testing service can sometimes get confused by non-ASCII characters. Accordingly, and with apologies, avoid using Amharic/Arabic/Chinese/Thai/etc. characters in your code, even within comments.

Assessment
You are expected to show that you can code competently using the programming concepts covered in the course, including (but not limited to): the use of files, strings, lists, dictionaries, sets, conditions, loops, and functions. Marking criteria will include:
●   Correctness – your code must perform as specified.
●   You must apply Python concepts appropriately.
●   Programming style – see the section 'Appendix 2 Style Guide' for more detail.
●   Your assignment will be marked using the rubric in Appendix 3. This is the standard rubric used in the Department of Computer Science. Marks for your project work will be awarded for the capabilities (i.e. functional requirements) your system achieves, and the quality of the code. Categories 5 and 6 of the rubric will be used for coding assignments.

Before starting your assignment
You are provided with some starter code in the file template.py.
Your first action should be to make a copy of this code into a new file, contact.py, where you will work afterwards. You have also been given a helper file format_list.py; you shouldn't modify this, but you will need to have it in the same directory. You have also been given some text input files containing infiltration data (DataSet0.txt … DataSet5.txt) and associated text output files (DataSet0-out.txt … DataSet5-out.txt). You should put these in the same directory also.

Before you start writing code, read through this entire document so that you have a sense of what's coming and can start to think about how different parts are going to work together. The overall task is more complex than anything you've done so far, but it's also been broken into—forgive me—bite-sized pieces. In section 22, there is a summary of all of the different functions you have to write, the number of lines in the official solution (just to give yourself a sense of relative difficulty—your own solution might be quite a bit longer, and that's OK), and an informal sense of which parts are a bit trickier than others (sometimes short functions can be trickier than long ones).

Keep in mind that the testing service you have access to (see Appendix 1) and the way we will do the grading will be function-by-function. So, even if you can't get some critical function working correctly, work on other functions. Good luck and enjoy!

Running the contact tracing program
There are two ways to run the program from the terminal, depending on whether you want to provide the data file name on the command line or whether you want the user to be prompted for the file name. The code in main() contains code to handle this. Important: you should not edit the code in main().
Entering a file name on the command line (suggested)
On Windows, run the program in the terminal, specifying the data file name:

or on macOS type:

The meaning of the terms on this line is:

python or python3   The Python interpreter. On macOS this will be python3, and on Windows this will be py or python.
contact.py          The name of the Python program.
DataSet0.txt        The name of the data file.

Table 1: Running the program, specifying the file name on the command line

Prompting the user for a file name (possible, but perhaps a bit painful to test)
To prompt the user for a file name, simply run the program in your editor (IDE) as you would normally.

Section 1: Orientation
You have been given several files containing testing and contact tracing data for vampire infiltrations. This data has a "preamble" (initial part), followed by a list of testing and contact tracing data for a series of days (the "body"). The format will be explained in detail below. In order to work with the contact tracing data, you will need to load a data file storing contact tracing data and create appropriate data structures, such as a dictionary or a list. These data structures can be used to identify the relationships between the individuals in the data. Many of the sections require you to print out data after you have calculated it, often in separately-specified functions. Sometimes functions will require you to check for some kinds of data errors, print specified error messages, and then exit by calling sys.exit().

Important: do not modify the main() function; all of your work should be in the other functions as specified in the various sections. Do not add code that just runs at the "top level". All of your code should be inside functions.

Important: part of our testing procedure is to run functions individually. Make sure that your function names, parameters, and return values don't change from our specification.
Moreover, make sure that your functions do what they are supposed to do, rather than, e.g., trying to write one big function that does the job of several of our specified functions.

Important: ensure that your program's output matches exactly the output given to you, unless otherwise specified.

Important: do not import any additional packages; lines 7-9 of the template.py file already import sys, os.path, and format_list; that's all you need (and all you are allowed).

Data files
The files (as well as your assignment's expected output for each file) are posted on Moodle with the assignment information. Each file ends with the .txt extension. You can open the files in Visual Studio Code or in a text editor to see the contents.

Preamble
The preamble begins with a (comma-separated) list of individuals (without specifying which are humans and which are vampires). These are all of the individuals that may appear in the rest of the dataset. On the next line, the preamble gives the number of days of data that will follow. For example, a valid preamble would be as follows:

Bella, Edward, Jacob, Carlisle, Alice, Emmett, Charlie, Renee, Jessica, Angela
4

N.B. Sometimes spaces are included around names to aid human readability, as in the above example; these are optional, but your code will have to remove them if they appear. The strip() method can help you do this.

Important: names may include a mixture of upper and lowercase letters, as well as some special characters such as dashes. Names will not include numbers, commas, colons, the newline character ("\n"), tabs ("\t"), or the symbol #. Spaces may appear within a name, e.g. "Kuan Yew", but not at its beginning or end, e.g. no " Kuan Yew ".

Day
Unlike in some of the legends, our vampires are able to move around during the day, rather than sleeping in coffins. (Some of them do sleep in coffins at night, but that's not important for this exercise.)
We divide each of our days into two parts (AM, when testing occurs; and PM, when contact occurs). Each day has the following format. On the first line is a comma-separated list of people who have been given a vampirism test in the morning (AM) of that day (perhaps a brief exposure to sunlight); the result of the test ("V" for vampire or "H" for human) is given after each name, separated by a colon. If no one is given a test on a given day, this line will be the special string ##. On the second line is the number of groups of people who had interactions in the afternoon (PM) of that day. On the third and following lines are comma-separated lists of individuals who have been in contact. To keep things simple:
1. Each individual can be in at most one contact group on a given day.
2. On the other hand, there's no need for an individual to meet with anyone on a given day; in that case, their name won't appear in any contact list for that day. When vampires are in the neighbourhood, staying home alone can be the safest option!
3. All people in a group meet simultaneously (perhaps to play a game of baseball).

Here's an example of a valid day:

Edward : V, Bella : H, Jacob : H, Jessica : H
3
Bella, Edward
Charlie, Jacob
Alice, Renee

N.B. Again, spaces can appear to aid human readability; in this case you'll have to remove them. To complete the demonstration of the input, here are the remaining three days promised in the preamble above:

Bella : H, Jacob:H, Alice :V, Charlie: H
2
Bella, Edward, Charlie
Angela, Jessica
Emmett : V, Renee : H, Jessica : V
0
Bella : V, Charlie: H
2
Bella, Edward, Charlie, Jessica, Angela
Jacob, Renee

Notice that on the third day everyone stayed home (0 contact groups). None of the days had zero tests (which we would have indicated with ##).

Note: the above file is in fact one of the input testing files we give (DataSet1.txt).
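One possible way (a sketch only, not the required parse_file implementation, and parse_test_line is a name of my own invention) to turn an AM test line in the format above into a dictionary from names to vampire status:

```python
# Sketch of parsing one AM test line from the format above into a
# {name: is_vampire} dictionary. The function name parse_test_line is
# hypothetical; the coursework specifies parse_file, which must build
# the full nested structure, not just one line.
def parse_test_line(line):
    tests = {}
    if line.strip() == "##":      # special marker: no tests that day
        return tests
    for entry in line.split(","):
        name, result = entry.split(":")
        tests[name.strip()] = (result.strip() == "V")
    return tests

day1 = parse_test_line("Edward : V, Bella : H, Jacob : H, Jessica : H")
# day1 == {'Edward': True, 'Bella': False, 'Jacob': False, 'Jessica': False}
```

Note how strip() handles the optional spaces around names and results, exactly as the N.B. above warns.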
Section 2: Check the file exists (1 mark)
Take a look at the following code in main():

The Python code sys.exit() will cause the program to terminate if the file cannot be opened and read. Important: do not change the code in main(). The function file_exists() takes the file name given as a parameter and checks that the file exists. Your first task is to complete the function file_exists(). The function file_exists() must return True if the file exists and False if it does not. Hint: use the function isfile() from the Python library os.path.

Section 3: Create a structure to hold the input data (6 marks for correctly storing the data + 1 mark for correctly printing the error message when needed)
Complete the function parse_file(). This function takes a file name as a parameter, reads the file line by line, and creates a data structure to represent the input. The structure has the following format: at the "top" it is a pair. The first element of the pair is a list of names (the participants) in the order given in the file. The second element of the pair is a list of pairs; each list cell will correspond to a given day. Within each list-pair, the first element should be a dictionary whose keys are the names of those tested for vampirism and whose values are the Booleans True (that is, a vampire, indicated by "V" in the input file) or False (that is, a human). The second element of a list-pair is a list of lists, with each "outer" list-element being an "inner" list of those groups who were in contact on that day. After you finish processing the file, make sure to close it, and then return the structure you've created.
Here's the structure we'd expect for the sample file given above, as printed by Python:

(['Bella', 'Edward', 'Jacob', 'Carlisle', 'Alice', 'Emmett', 'Charlie', 'Renee', 'Jessica', 'Angela'],
 [({'Edward': True, 'Bella': False, 'Jacob': False, 'Jessica': False},
   [['Bella', 'Edward'], ['Charlie', 'Jacob'], ['Alice', 'Renee']]),
  ({'Bella': False, 'Jacob': False, 'Alice': True, 'Charlie': False},
   [['Bella', 'Edward', 'Charlie'], ['Angela', 'Jessica']]),
  ({'Emmett': True, 'Renee': False, 'Jessica': True}, []),
  ({'Bella': True, 'Charlie': False},
   [['Bella', 'Edward', 'Charlie', 'Jessica', 'Angela'], ['Jacob', 'Renee']])])

If the file isn't formatted correctly – for example, if one of the numbers isn't a number when you convert it to an integer – then print the error message Error found in file, aborting. Then, exit (using sys.exit()).

Important: for this section, and all sections that follow, you should print to the screen (rather than, e.g., writing your results to a file). Also, it is critical that you produce the exact error message we specify to get the mark.

Section 4: Pretty-print the data structure (4 marks)
Please fill in the function body of the function pretty_print_infiltration_data(data). By "pretty," what we mean is a more human-readable format than the Python default shown above. Specifically, we're expecting something in the following exact format (the reason one line is in red is explained below):

Vampire Infiltration Data
4 days with the following participants: Alice, Angela, Bella, Carlisle, Charlie, Edward, Emmett, Jacob, Jessica and Renee.
Day 1 has 4 vampire tests and 3 contact groups.
  4 tests
    Bella is human.
    Edward is a vampire!
    Jacob is human.
    Jessica is human.
  3 groups
    Bella and Edward
    Charlie and Jacob
    Alice and Renee
Day 2 has 4 vampire tests and 2 contact groups.
  4 tests
    Alice is a vampire!
    Bella is human.
    Charlie is human.
    Jacob is human.
2 groups Bella, Charlie and Edward Angela and Jessica Day 3 has 3 vampire tests and 0 contact groups. 3 tests Emmett is a vampire! Jessica is a vampire! Renee is human. 0 groups Day 4 has 2 vampire tests and 2 contact groups. 2 tests Bella is a vampire! Charlie is human. 2 groups Angela, Bella, Charlie, Edward and Jessica Jacob and Renee End of Days There are a lot of parts here, so let’s take them one at a time. As you can see, the printout begins with “Vampire Infiltration Data” and ends with “End of Days”.    The next line, in red, is actually a single line, but to avoid small fonts in this writeup, it appears to be broken over two lines.  (You shouldn’t try to change the printing colour in your solution; it’s just here for clarity on this issue.)  All of the other lines for this dataset are short and thus do not have this issue. There are many lists of participants, separated by commas and with a final “and”.  You have been provided with module format_list.py which contains the function format_list (data). Use the function format_list (data) to format a list as a string in this way. The second line (in red) in the printout gives the information that was in the preamble of the input file.  Afterwards,  we have a series of days, which begin with the day number, the number of vampire tests, and the number of contact groups.  Notice that when there is only 1 test or group, there is no “s” after “test” or “group.”  In general, we make     sure to get all of the plurals correct, in the printout.  Hint: you should too. Within a day, we start by repeating the number of tests.  The line is indented to the right by two spaces.  Then we     give the results of each test, with the participants grouped in alphabetical order.  Participant results are indented by four spaces.  Then we repeat the number of groups (two space indent), and then a list of each of the groups (four space indent). 
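The plural handling and list formatting described above can be captured with two small helpers. Note that format_list() is already provided in format_list.py; the version sketched here is only an assumed stand-in matching the behaviour the handout describes (comma-separated names with a final “and”, and “(None)” for an empty list; its behaviour on a one-element list is a guess):

```python
def format_list(data):
    # Assumed stand-in for the provided helper in format_list.py.
    if not data:
        return "(None)"
    if len(data) == 1:
        return data[0]
    return ", ".join(data[:-1]) + " and " + data[-1]

def pluralize(count, noun):
    # "1 test" but "4 tests": append an "s" only when count != 1.
    return f"{count} {noun}" + ("" if count == 1 else "s")
```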
Hints: There are a number of other things to be careful of, like periods at the end of sentences (but not at the end of lists), and so forth. It’ll take some careful work to make sure you have the format exactly right.

Other than within a list of test results, participants should appear in the same order as the test file. (The test results were stored in a dictionary, which does not guarantee that order will be preserved. Rather than require that you somehow restore the original order, we simply require that you alphabetize the participants. Reminder: if you have a list mylst, then mylst.sort() will sort it; there are other ways to sort it too.)

Important: check the format of the output expected for the files given to you; unless we specify otherwise, we expect you to follow this format exactly. The output expected for each data file can be found in the corresponding file name with the word “out” in the name. For example, the output for DataSet1.txt is given in DataSet1-out.txt.

Section 5: Write a lookup helper function (1 mark for correctness)

To analyse the infiltration carefully, we are going to need a notion of time. Recall that days are divided into two periods: AM, when testing occurs, and PM, when contact occurs. We also have a special “initial” period, before the scenario begins. This can be a bit unwieldy, so it’s better to have a more uniform treatment. Therefore, we represent units of time with integer values as follows:

a. When time = 0, we are in the initial period, also known as day 0.
b. When time = 1, we are just after the AM tests on day 1.
c. When time = 2, we are just after the PM contacts on day 1.
d. When time = 3, we are just after the AM tests on day 2.
e. When time = 4, we are just after the PM contacts on day 2.
f. And so forth …

Accordingly, given d days, we will have 1 + 2d time units. We’ve provided some useful functions to navigate this in the format_list.py file:

1. time_of_day(d, b) converts a day (integer) and AM/PM time (Boolean, with True for AM and False for PM) into one of these time codes (AM/PM doesn’t matter for day 0).
2. day_of_time(t) converts a time unit into a day.
3. period_of_time(t) converts a time unit into AM/PM (will return AM for day 0).
4. is_initial(t) checks whether t is the initial period (i.e., 0).
5. str_time(t) converts a time unit to a string for printing, e.g. “3 (AM)” (i.e., day 3 in the morning). When t = 0 this returns just “0” (since day 0 does not have an AM or PM).

Your task is to write the lookup function contacts_by_time(participant, time, contacts_daily). The first parameter is the name of a participant, e.g. “Bella”. The second parameter is a time unit, e.g. 4 (which represents day 2 PM). The third parameter is a list of the contact groups on each day. Notice that the third parameter is indexed by day, with day 1 in index spot 0; accordingly, part of your task is to convert the time unit parameter to the day, and then adjust to the list index. The other part of your task is to search within the contact groups for that day to find the correct list for the given participant. If the participant didn’t meet with anyone on that day, then their contact list is empty. As a special case, contacts for day 0 should also be considered the empty list for every participant. Once you have the correct list, return it.

To help you test your function, there is some code in main() that examines the initial participant on an early day, in both the AM and PM (which should return the same list). In the specific dataset we’ve used so far, main() will print

Bella's contacts for time unit 3 (day 2) are Bella, Charlie and Edward.
Bella's contacts for time unit 4 (day 2) are Bella, Charlie and Edward.
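Putting those pieces together, contacts_by_time() might be sketched as follows. The day_of_time() conversion is provided in format_list.py; the body shown here is an assumed reimplementation based on the time-code scheme above, included only so the sketch is self-contained:

```python
def day_of_time(t):
    # Assumed stand-in for the provided helper: time units 2d-1 (AM) and
    # 2d (PM) both belong to day d, and time 0 maps to day 0.
    return (t + 1) // 2

def contacts_by_time(participant, time, contacts_daily):
    # Return the contact group containing `participant` at the given time
    # unit, or [] if they met no one that day. The initial period (time 0)
    # always yields the empty list.
    if time == 0:
        return []
    day = day_of_time(time)            # convert the time unit to a day number
    groups = contacts_daily[day - 1]   # day 1 lives at list index 0
    for group in groups:
        if participant in group:
            return group
    return []
```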
Section 6: Create the initial vampire knowledge data structure and a way to pretty print it (1 mark for correctness + 2 marks for pretty-printing)

Our goal will be to identify vampires and humans using logical deduction from data. We have no interest in guesswork, and thus no interest in concluding that someone is probably a vampire (or human); we want certainty. Certainty is hard to come by, but hard does not mean impossible. Over the remainder of this assignment, we will explain some basic logical principles that justify various deductive steps. Our goal in this section is to set up the core vampire knowledge data structure (“vk structure”) we will need to track the information we’ve learned.

Logical principle 1: at a given point in time t, reality is binary. That is, every individual is either a human or a vampire at time t: no one can be both a human and a vampire at the same time. On the other hand, humanness is essentially a state of grace, and thus easy to lose over time (e.g., a human at time t can be a vampire at time t + 1).

However, at a given point in time t, our knowledge about reality is ternary. At time t, every individual has one of three statuses: definitely human (“H”), definitely vampire (“V”), or unclear (“U”). Here, definitely means with certainty: there’s no chance that a definite human is “really” a vampire, or vice versa. This means that at a specific moment in time t, once H or V status is established for an individual, it should never change for that time t for that individual. Unclear status doesn’t mean the individual is both a human and a vampire (that’s impossible!); it just means that we aren’t sure. On the other hand, as we make deductive steps, U status might change (to “V” or “H”).

To represent knowledge at time t, we use a dictionary where the keys are the participants’ names and the values are one of the three strings “H”, “V”, or “U”.
Complete the function create_initial_vk(participants), where participants is the list of participants, and which returns such a dictionary structure. Initially we have no data about who is human and who is a vampire, so everyone’s initial status should be unclear (“U”).

The main() function will create 1 + 2d copies (where d is the number of days) of the initial vk structure to represent the vampire knowledge at each point in time. Initially, as you’ve created it, everyone’s status is unknown at all times.

Complete the function pretty_print_vampire_knowledge(vk) to print the results of a vk structure. Here is an example of what the output should look like on the initial vk structure:

  Humans: (None)
  Unclear individuals: Alice, Angela, Bella, Carlisle, Charlie, Edward, Emmett, Jacob, Jessica and Renee
  Vampires: (None)

Each row should be indented by two spaces. Again, text in red should be understood to be on a single line. Notice that format_list(data) helpfully returns the string “(None)” when the given list is empty. Also notice that individuals are alphabetized within the various lists.

If you do this correctly, main() will use the function pretty_print_vks(), which in turn calls your pretty_print_section_6(), to print out the initial vk structures for each point in time specified in the data.
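A sketch of the two Section 6 functions under the assumptions above. The nested format_list() is only a stand-in mimicking the provided helper (including its “(None)” behaviour on empty lists) so the example runs on its own:

```python
def create_initial_vk(participants):
    # Everyone starts with unclear ("U") status: no deductions made yet.
    return {name: "U" for name in participants}

def pretty_print_vampire_knowledge(vk):
    # Group participants by status, alphabetize, and print each row with
    # a two-space indent, as the expected output shows.
    def format_list(names):
        # Assumed stand-in for the helper provided in format_list.py.
        if not names:
            return "(None)"
        if len(names) == 1:
            return names[0]
        return ", ".join(names[:-1]) + " and " + names[-1]

    humans   = sorted(n for n, s in vk.items() if s == "H")
    unclear  = sorted(n for n, s in vk.items() if s == "U")
    vampires = sorted(n for n, s in vk.items() if s == "V")
    print("  Humans:", format_list(humans))
    print("  Unclear individuals:", format_list(unclear))
    print("  Vampires:", format_list(vampires))
```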


[SOLVED] Commercial Law Assessment 2 Matlab

Module Title: Commercial Law
Assignment Mode: Individual Assignment
Word Count Limit: 1000 words (+/- 10%)
Citation Format: APA
Marks: 100 marks

Assignment Brief: Yusof works as a car salesman at Luxury Car Dealers. On 2nd January Kenny saw an advertisement for Luxury Car Dealers, telephoned Yusof and asked if he could trade in his Range Rover. Yusof said that he was happy to accept Kenny’s car but would need to inspect it before the sale was finalised. Kenny visited Yusof the next day and said, ‘It is yours for $185,000.’ Yusof examined the car and replied, ‘I love it, but $185,000 is too expensive for the Range Rover.’ Yusof said he would buy the car for $135,000. Kenny claimed that his car was in perfect condition and that $135,000 was too low. Yusof said he needed to speak to his employer before he could agree to buy the car for $185,000. Yusof’s employer agreed to buy the car for $185,000. On 4th January Yusof sent a text message to Kenny stating that he would buy the car for $185,000 and would assume this was acceptable unless Kenny told him otherwise. On 3rd January Kenny met his friend Aisyah and offered to sell his car to her for $185,000. On 5th January Kenny emailed Aisyah stating that he had found a buyer for his car. However, she replied stating that she agreed to buy Kenny’s car for $185,000.

Discuss, incorporating your understanding of contract law, the following:
a. Whether a contract was formed. If so, when was it formed and who were the contracting parties?
