N1569 Financial Risk Management Workshop Topic 2

Watch The Big Short film on YouTube here before the workshop. During the workshop you will discuss the following questions:
1. What is the difference between subprime and prime mortgages, and why were subprime mortgages so prevalent in the USA in 2008?
2. What are mortgage-backed securities (MBS) and how did they contribute to the housing bubble?
3. How do collateralized debt obligations (CDOs) work, and why did credit rating agencies not warn banks of their risk?
4. Explain the concept of naked credit default swaps (CDS) and their use by Scion Capital in 2008.
5. How did Goldman Sachs collude with credit rating agencies to offload the credit risk from issuing CDSs on their own CDOs?

Before the workshop, please use the ChatGPT prompt "In relation to the story of the film 'The Big Short'" followed by one of the questions above, and limit the answer so that it is not too long. For example, you could type into ChatGPT: "In relation to the story of the film 'The Big Short', what are mortgage-backed securities (MBS) and how did they contribute to the housing bubble? Give your answer in 100 words." Read the answers before the workshop, where they will be discussed.
PERFORMING IN EXTREME ENVIRONMENTS
Hypoxic-Hyperoxic Performance – Abstract and Discussion Coursework

Assessment due date: Monday 11th November, 13:00.
Assessment submission window: Monday 4th November – Monday 11th November (closes at 13:00). Late submissions will receive a 5% deduction for each day they are late.
Assessment weighting: This assignment will count 50% towards your final mark.

Assessment requirements: You are required to:
1. Read/attend/watch the hypoxic-hyperoxic performance practical class material.
2. Write a structured 300-word abstract using the data collected during your week 4 practical class and the statistical analysis completed during your week 5 lecture.
3. Write a 1000-word discussion reporting the key findings and critically evaluating how these findings relate to the relevant literature.
4. Submit a cover page, including an AI use statement.

Word limit
Abstract: 300 words total.
Discussion: 1000 words total, including in-text citations but excluding the reference list.

Assessment Guidance
Please refer to the marking rubric at the end of this document for more detailed information on mark allocation and the format in which your feedback will be provided.

Abstract
Your abstract should:
- Have a background statement that highlights the rationale for the study. You should also incorporate an aim/hypothesis in this section.
- Have a methods section that briefly outlines the methods used in the experiment.
- Report the key data (e.g., means ± SD) and include statistical analysis (e.g., p-values). In Week 5's lecture we will complete the statistical analysis together in class and these data will be made available to you.
- Have a concluding statement that summarises the key findings from the experiment.
Note: No references are required in your abstract.

Discussion
Your discussion should:
- Outline the key findings from the experiment (opening overview paragraph).
- Demonstrate analytical and critical thinking by:
  - Evaluating each key finding and comparing/contrasting it with key relevant literature.
  - Including a "perspective" section that considers the application of the key findings.
  - Considering the experimental limitations.
- Include a conclusion that summarises the balance of the evidence presented.
- Reference relevant studies included in the recommended reading list (as a minimum).

References
You should use the Harvard referencing system for this assignment. Information on referencing is available here.

Reading list
We expect you to engage in your own independent research on this topic, but below we have provided some references to get you started (also in the Resource List and pre-practical material):
- Deb et al. (2018). Quantifying the effects of acute hypoxic exposure on exercise performance and capacity: A systematic review and meta-regression. European Journal of Sport Science, 18:2, 243-256. DOI: 10.1080/17461391.2017.1410233.
- Cardinale & Ekblom (2018). Hyperoxia for performance and training. Journal of Sports Sciences, 36:13, 1515-1522. DOI: 10.1080/02640414.2017.1398893.

Assessment Support
Your lectures in Weeks 4 and 5, as well as your practical lab class in Week 4, will be key formative opportunities.
- Week 4 (Tuesday): We will discuss the background, rationale, aim and hypothesis for the experiment.
- Week 4: We will collect the data during the 'Hypoxic-hyperoxic performance' practical (you will be timetabled to attend one of four 3-hour practical classes during this week).
- Week 5 (Monday): We will analyse the data and also discuss key elements to help inform your abstract and discussion coursework.
- Week 5 (Tuesday and Friday): We will work through a formative abstract and discussion exercise.

Mark return deadline and feedback opportunities:
- Your marks and feedback will be released on the 3rd of December 2024 (15 working days following the submission date).
- In Week 11, the Tuesday lecture on the 3rd of December will be a general feedback session.
- There will also be a drop-in session (details to be announced later in the term). This session will be an opportunity for you to ask any questions you may have relating to your feedback.

Plagiarism
Please ensure that you understand what constitutes plagiarism and collusion, and the consequences of either. If you are unsure, please see the advice on the University intranet here.

Generative AI
Please ensure that you understand what constitutes appropriate and ethical use of generative AI for this assignment. The level of AI use permitted in this assignment is Level 2.

Marking and feedback rubrics
The rubric below outlines the marking criteria and mark boundaries the assessors will use to rate your coursework – note that there are 3 specific criteria for the discussion section (compared to 1 for the abstract).
Bands: Outstanding (>80%); Excellent (70–79%); Good (60–69%); Sound (50–59%); Adequate to poor (
Project Proposal: Fingerprint Recognition System for Identity Verification

1. Project Overview
This project focuses on developing a fingerprint recognition system for identity verification using a combination of traditional image processing techniques, Support Vector Machines (SVM), and Convolutional Neural Networks (CNN). Fingerprint recognition is widely used in mobile devices and security systems. The project will compare three methods: minutiae-based matching, SVM classification, and CNN-based recognition. Each method will be tested under conditions such as noisy and partial fingerprints to evaluate its accuracy, efficiency, and robustness in real-world scenarios.

2. Project Idea and Originality
Fingerprint recognition has been extensively researched over the years. However, challenges remain, particularly when dealing with partial fingerprints, noisy images, and fingerprint distortions. Modern deep learning approaches, such as CNNs, have shown great promise in overcoming these challenges. However, traditional methods and classical machine learning models like SVM still hold significant potential, especially when combined with advanced preprocessing techniques. The originality of this project lies in:
- Combining traditional and modern methods: We will use the classical minutiae-based method to detect fingerprint features, an SVM model for fingerprint classification, and CNNs for automated feature extraction and matching. The SVM model will provide a machine learning baseline to compare against both the classical and deep learning approaches.
- Performance comparison: We will evaluate and compare the performance of minutiae-based methods, SVM classifiers, and CNN models under various conditions, such as degraded image quality, partial fingerprints, and distortions. This comprehensive evaluation will shed light on the strengths and weaknesses of each method, offering valuable insights into fingerprint recognition technology.

3.
Problem Statement and Motivation

Problem Definition: Fingerprint recognition systems rely on the uniqueness of human fingerprints for identity verification. However, these systems often face several challenges:
- Partial fingerprints: Fingerprints may not always be fully captured due to sensor limitations or user finger positioning, which can hinder accurate identification.
- Image noise: Fingerprint images may contain noise or be of low quality due to environmental factors, affecting the system's ability to match fingerprints accurately.
- Variability and distortions: Fingerprint images can be distorted by different capture angles, finger pressure, or deformation, reducing the performance of traditional recognition algorithms.

Motivation: With the increasing use of fingerprint recognition in mobile devices, security systems, and other applications, improving the robustness and accuracy of these systems under challenging conditions is essential. Support Vector Machines (SVM) and Convolutional Neural Networks (CNN) have shown promising results in other areas of image classification and pattern recognition. By integrating these techniques into a fingerprint recognition system, this project aims to improve performance, particularly in handling noisy, partial, or distorted fingerprints. This will make fingerprint recognition systems more reliable and widely applicable in real-world scenarios.

4. Relevant Prior Work
Fingerprint recognition systems have a strong research foundation. The minutiae-based approach is one of the most well-established methods, relying on detecting key points such as ridge endings and bifurcations in the fingerprint image. These systems perform well with high-quality fingerprint images but struggle with degraded or partial fingerprints. Maltoni et al. (2009) have extensively documented these traditional approaches. In recent years, machine learning techniques such as Support Vector Machines (SVM) have been applied to fingerprint recognition.
SVM is a powerful classifier, capable of distinguishing patterns in high-dimensional spaces. Chikkerur et al. (2006) demonstrated the efficacy of SVM in fingerprint classification, achieving strong results in controlled environments. More recently, Convolutional Neural Networks (CNNs) have gained prominence in fingerprint recognition research. CNNs can automatically learn and extract complex features from fingerprint images, making them particularly effective for handling distortions, partial prints, and noise. Nanni et al. (2015) showed that CNNs outperformed traditional approaches, especially in recognizing incomplete or noisy fingerprints. This project builds on these prior works by comparing minutiae-based methods, SVM, and CNNs on the same dataset, offering a unique perspective on the performance of each method under different conditions.

5. Proposed Methodology
The project will follow these key steps:

Data Collection: We will use the publicly available FVC dataset, a standard benchmark for fingerprint recognition. This dataset includes fingerprint images collected from different sensors and conditions, making it ideal for evaluating the robustness of different recognition methods.

Preprocessing:
- Minutiae-based method: We will perform preprocessing steps such as noise removal, contrast enhancement, and ridge thinning to ensure the minutiae points (e.g., ridge endings and bifurcations) can be clearly extracted.
- SVM method: After preprocessing, we will extract relevant fingerprint features (minutiae points or global features) and convert them into feature vectors, which will be used as input for the SVM classifier. SVM will then classify the fingerprints based on their feature vectors.
- CNN method: We will use minimal preprocessing for CNNs, allowing the network to learn features directly from the raw images. Data augmentation techniques (e.g., rotation, scaling) will be applied to improve model generalization.
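To illustrate the SVM step above, here is a minimal scikit-learn sketch. The feature vectors are synthetic stand-ins for real extracted fingerprint features; the dimensionality, class counts, and kernel settings are illustrative assumptions, not project decisions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for extracted fingerprint feature vectors:
# two hypothetical identities, 32-dimensional features each.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (60, 32)),   # identity 0
               rng.normal(3.0, 1.0, (60, 32))])  # identity 1
y = np.array([0] * 60 + [1] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM baseline; kernel type and C are the hyperparameters
# the proposal says will be tuned later.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In the real pipeline the rows of `X` would come from the preprocessing stage (minutiae or global features), and the same train/test split discipline would be applied to the FVC images.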
Feature Extraction and Classification:
- Minutiae-based matching: The extracted minutiae points will be used for fingerprint matching, using an algorithm such as RANSAC to improve the robustness of the matching process.
- SVM classification: After transforming the fingerprint images into feature vectors, we will train an SVM model to classify the fingerprints. We will tune the SVM hyperparameters (e.g., kernel type, regularization) to optimize performance.
- CNN model training: We will implement a deep learning pipeline using a CNN model (e.g., ResNet or VGG) and fine-tune it for fingerprint recognition tasks. The model will be trained on a portion of the FVC dataset and evaluated on accuracy and F1 score.

Evaluation: The performance of each method will be evaluated using metrics such as accuracy, precision, recall, F1 score, and processing time. In addition, we will assess how each method handles noisy and partial fingerprints to determine the robustness of the different approaches.

Feasibility: This project is feasible within the given time frame. The FVC dataset is publicly available, and the necessary tools (e.g., Python, OpenCV, scikit-learn for SVM, TensorFlow/Keras for CNN) are well-documented and accessible. The scope of the project is appropriate, balancing traditional methods with machine learning and deep learning approaches. Both SVM and CNN models are well-researched, and their implementation does not require excessive computational resources.

6. Tasks and Timeline
1. Data Collection and Preprocessing (Week 1): Gather fingerprint data and apply preprocessing techniques.
2. Feature Extraction and Model Implementation (Week 2): Extract features for SVM and develop the CNN model.
3. Model Training and Hyperparameter Tuning (Weeks 3-4): Optimize SVM and CNN models for accuracy.
4. Evaluation and Comparison (Weeks 5-6): Evaluate model performance on noisy and partial fingerprints.
5.
Final Report and Presentation (Weeks 7-8): Summarize results and prepare the final presentation.

7. Team Roles and Contributions
Our team has two members. Jiayu Du will focus on preprocessing the data, including noise removal and feature extraction for SVM, and will also tune the SVM model to improve accuracy. Zhuoyang Zhou will be responsible for the design and implementation of the CNN model and will apply data augmentation techniques to enhance its performance. Both members will collaborate on evaluating the models and preparing the final report.

8. Conclusion
This project aims to contribute to fingerprint recognition by comparing minutiae-based methods, SVM, and CNN. By testing these methods on the FVC dataset, we aim to determine the best approach for handling noisy and partial fingerprints. The results will help improve the accuracy and robustness of fingerprint recognition systems in real-world applications.
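The evaluation metrics named in Section 5 (accuracy, precision, recall, F1) can be computed directly from match/non-match decisions. A minimal pure-Python sketch with hypothetical labels (in practice these would come from the matchers on the FVC test set):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = match)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical labels: 1 = genuine match, 0 = impostor attempt.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # 0.75 0.75 0.75 0.75
```

Reporting all four metrics side by side matters here because, with far more impostor pairs than genuine pairs, accuracy alone can look high even when recall on genuine matches is poor.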
INTERNATIONAL A-LEVEL CHEMISTRY (9620) Unit 3: Inorganic 2 and Physical 2

One of the characteristics of transition metals is that they form complexes. A solution contains aqueous copper(II) ions. When an excess of chloride ions is added to this solution, a reaction occurs in which there is a change in the co-ordination number of the copper ion.
• Write an equation for the reaction.
• State the type of reaction occurring.
• State the name of the shape of the complex ion formed.
• Give a reason for the change in co-ordination number.
[4 marks]
Equation
Type of reaction
Name of the shape of the complex ion
Reason for change in co-ordination number

Explain why transition metal complexes are coloured. [3 marks]
State why there is often a colour change when there is a change in ligand in a reaction involving a complex. [1 mark]
Figure 1 shows the structure of a complex ion of copper.
Figure 1
State the co-ordination number of copper in the complex ion shown in Figure 1. [1 mark]
Name the species that acts as a bidentate ligand in the complex ion shown in Figure 1. State how this species can act as a bidentate ligand. [2 marks]
Name
How species acts as a bidentate ligand
Identify a reagent that could be used in a test to show that [Fe(H2O)6]3+ is a better proton donor than [Fe(H2O)6]2+. Describe the expected result of your test. [2 marks]
Reagent
Observations

This question is about some Period 3 oxides and chlorides.
Suggest why silicon dioxide can be described as an acidic oxide even though it is insoluble in water. [1 mark]
Table 1 shows the melting points of some Period 3 oxides.
Table 1
Explain, in terms of structure and bonding, why sodium oxide has a high melting point. [3 marks]
Explain why sulfur trioxide has a higher melting point than sulfur dioxide. [2 marks]
A small amount of each of the Period 3 chlorides NaCl, MgCl2, AlCl3 and PCl5 is added to separate samples of deionised water. The pH values of the resulting solutions are measured.
State why NaCl forms a neutral solution. [1 mark]
Both AlCl3 and PCl5 form acidic solutions. The equation for the reaction of AlCl3 with water is
Explain why the solution formed is acidic. Use an equation in your answer. [2 marks]
Identify the two acids formed when PCl5 reacts with water. [1 mark]

Born–Haber cycles can be used to show the enthalpy changes involved in the formation of an ionic compound. Figure 2 shows an incomplete Born–Haber cycle for the formation of potassium oxide (K2O). The Born–Haber cycle is not to scale.
Figure 2
Complete Figure 2 by writing the formulae, including state symbols, of the appropriate species on each of the two blank lines. [2 marks]
Table 2 shows the enthalpy changes involved in the formation of potassium oxide.
Table 2
Give the meaning of the term enthalpy of atomisation. [2 marks]
Suggest why the second electron affinity of oxygen is endothermic. [1 mark]
Use the data in Table 2 to calculate the enthalpy of lattice dissociation of potassium oxide. [3 marks]
enthalpy of lattice dissociation = kJ mol–1
A theoretical value for the enthalpy of lattice dissociation can be calculated using a perfect ionic model. The theoretical enthalpy of lattice dissociation for silver fluoride is +870 kJ mol–1.
Explain why the theoretical enthalpy of lattice dissociation for silver fluoride is different from the experimental value calculated using a Born–Haber cycle. [2 marks]
The theoretical enthalpy of lattice dissociation for silver chloride is +770 kJ mol–1.
Explain why this value is less than the value for silver fluoride. [2 marks]

Ethyne gas (C2H2) is manufactured from methane in a reversible reaction. What is the effect of increasing the pressure on the equilibrium yield of ethyne and on the equilibrium constant (Kp) for this reaction? [1 mark]
Tick (✓) one box.
Write an expression for Kp for the manufacture of ethyne from methane.
[1 mark]
Kp
At a given temperature a sealed flask contains an equilibrium mixture of 0.10 mol of methane, 0.18 mol of ethyne and 0.52 mol of hydrogen. The pressure in the flask at equilibrium is 500 kPa. Calculate the value of Kp under these conditions. Give your answer to three significant figures. State the units of Kp. [4 marks]

Figure 3 represents an alkaline fuel cell.
Figure 3
Give two reasons why it is not correct to describe the cell in Figure 3 as a rechargeable cell. [2 marks]
1
2
Gas B is oxygen. Identify Gas A and Liquid C. [2 marks]
Gas A
Liquid C
It would be cheaper to use air instead of pure oxygen. One disadvantage of using air is that the carbon dioxide in the air would react with the electrolyte and decrease the life of the cell. Complete the equation for the reaction between carbon dioxide and the electrolyte, KOH. [1 mark]
An equation for the reaction at the positive electrode is
Write an equation for the reaction at the negative electrode. [1 mark]
Use the two equations in Question 05.4 to deduce an overall equation for the fuel cell. [1 mark]
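The Kp arithmetic above can be checked numerically. This sketch assumes the manufacture reaction is 2CH4(g) ⇌ C2H2(g) + 3H2(g), the standard methane route to ethyne, so that Kp = p(C2H2)·p(H2)³ / p(CH4)²; verify this against your own Kp expression before relying on it:

```python
# Equilibrium amounts (mol) and total pressure (kPa) from the question.
n = {"CH4": 0.10, "C2H2": 0.18, "H2": 0.52}
p_total = 500.0
n_total = sum(n.values())  # 0.80 mol in the flask

# Partial pressure = mole fraction x total pressure.
p = {gas: amount / n_total * p_total for gas, amount in n.items()}
# p = {"CH4": 62.5, "C2H2": 112.5, "H2": 325.0} kPa

# Assumed reaction: 2 CH4 <=> C2H2 + 3 H2, so the units of Kp are kPa^2
# ((1 + 3) pressure terms in the numerator minus 2 in the denominator).
Kp = p["C2H2"] * p["H2"] ** 3 / p["CH4"] ** 2
print(f"Kp = {Kp:.3g} kPa^2")  # Kp = 9.89e+05 kPa^2
```

Working through the mole-fraction step by hand (0.10/0.80, 0.18/0.80, 0.52/0.80 of 500 kPa) is where most marks in a question like this are earned; the code only confirms the final three-significant-figure value.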
Assessment (non-exam) Brief
Module code/name: MSIN0053 Mastering Entrepreneurship
Academic year: 2024/25, Term 1
Assessment title: Analysis of a high-potential business
Individual/group assessment: Individual

Submission deadlines: Students should submit all work by the published deadline date and time. Students experiencing sudden or unexpected events beyond their control which impact their ability to complete assessed work by the set deadlines may request mitigation via the extenuating circumstances procedure. Students with disabilities or ongoing, long-term conditions should explore a Summary of Reasonable Adjustments. Students may use the delayed assessment scheme for pre-determined mitigation on a limited number of assessments in a year. Check the Delayed Assessment Scheme area on Portico to see if this assessment is eligible.

Return and status of marked assessments: Students should expect to receive feedback within 20 working days of the submission deadline, as per UCL guidelines. The module team will update you if there are delays through unforeseen circumstances (e.g. ill health). All results when first published are provisional until confirmed by the Examination Board.

Copyright note to students: Copyright of this assessment brief is with UCL and the module leader(s) named above. If this brief draws upon work by third parties (e.g. case study publishers), such third parties also hold copyright. It must not be copied, reproduced, transferred, distributed, leased, licensed or shared with any other individual(s) and/or organisations, including web-based organisations, without permission of the copyright holder(s) at any point in time.

Academic Misconduct: Academic misconduct is defined as any action or attempted action that may result in a student obtaining an unfair academic advantage.
Academic misconduct includes plagiarism, self-plagiarism, obtaining help from/sharing work with others (be they individuals and/or organisations) or any other form of cheating that may result in a student obtaining an unfair academic advantage. Refer to Academic Manual Chapter 6, Section 9: Student Academic Misconduct Procedure – 9.2 Definitions.

Referencing: You must reference and provide full citation for ALL sources used, including AI sources, articles, textbooks, lecture slides and module materials. This includes any direct quotes and paraphrased text. If in doubt, reference it. If you need further guidance on referencing, please see UCL's referencing tutorial for students. Failure to cite references correctly may result in your work being referred to the Academic Misconduct Panel.

Use of Artificial Intelligence (AI) Tools in your Assessment: Your module leader will explain to you if and how AI tools can be used to support your assessment. In some assessments, the use of generative AI is not permitted at all. In others, AI may be used in an assistive role, which means students are permitted to use AI tools to support the development of specific skills required for the assessment, as specified by the module leader. In others, the use of AI tools may be an integral component of the assessment; in these cases the assessment will provide an opportunity to demonstrate effective and responsible use of AI. See page 3 of this brief to check which category use of AI falls into for this assessment. Students should refer to the UCL guidance on acknowledging use of AI and referencing AI. Failure to correctly reference use of AI in assessments may result in students being reported via the Academic Misconduct procedure. Refer to the section of the UCL Assessment Success Guide on Engaging with AI in your education and assessment.
Content of this assessment brief
Section A: Core information
Section B: Coursework brief and requirements
Section C: Module learning outcomes covered in this assessment
Section D: Groupwork instructions (if applicable)
Section E: How your work is assessed
Section F: Additional information

Section A: Core information
Submission date: 27/11/2024
Submission time: 10am UK time
Assessment is marked out of: 100
% weighting of this assessment within total module mark: 40%
Maximum word count/page length/duration: 3,000 words (Level 6 students); 4,000 words (Level 7 students)

Section B: Assessment Brief and Requirements
Details of the assessment brief. Generic assessment criteria are included in Section E. Any additional criteria specific to this assessment are detailed in Section F.

This 3,000-word coursework requires you to use the Business Model Canvas, the New-Business Road Test (macro industry analysis) and associated SWOT analysis to assess the potential of an early-stage business start-up that has demonstrated product/market fit. You must select an opportunity from this list of the 2024 Startups 100: UK's Best New Startups – https://startups.co.uk/startups-100/2024/list-in-full/

For your chosen business, you are required to:
• Give a brief overview of the startup.
• Document and explain their business model(s) using an appropriate business model canvas.
• Undertake a macro industry (Porter's Five Forces) analysis.
• Perform a SWOT analysis of the business.
• Summarise your conclusions regarding the overall attractiveness of the opportunity, along with any recommendations for mitigation or exploitation of factors identified in your analysis.
• (Level 7 students only) Reflect on the utility of the tools and frameworks in the context of your chosen opportunity (additional 1,000-word allowance).

IMPORTANT: All data sources (journal articles, market reports, or others) should be clearly acknowledged/referenced.
Section C: Module Learning Outcomes covered in this Assessment
This assessment contributes towards the achievement of the following module learning outcomes:
1. Identify and use frameworks to judge the potential of a high-potential business concept.
2. Understand the difference between a true opportunity and just another "neat" idea.
3. Recognise the effort and dedication needed to make a business succeed.

Section D: Groupwork Instructions (where relevant/appropriate)
Specific requirements for groupwork are available here. If this section is blank, no specific requirements for groupwork are applicable to this assessment.
n/a

Section E: How your work is assessed
Within each section of this assessment you may be assessed on the following aspects, as applicable and appropriate to this assessment, and should thus consider these aspects when fulfilling the requirements of each section:
• The accuracy of any calculations required;
• The strengths and quality of your overall analysis and evaluation;
• Appropriate use of relevant theoretical models, concepts and frameworks;
• The rationale and evidence that you provide in support of your arguments;
• The credibility and viability of the evidenced conclusions/recommendations/plans of action you put forward;
• Structure and coherence of your considerations and reports;
• Appropriate and relevant use, where relevant and appropriate, of real-world examples, academic materials and referenced sources. Any references should use either the Harvard OR Vancouver referencing system (see References, Citations and Avoiding Plagiarism);
• Academic judgement regarding the blend of scope, thrust and communication of ideas, contentions, evidence, knowledge, arguments and conclusions.
Each assessment requirement has allocated marks/weightings.
Student submissions are reviewed/scrutinised by an internal assessor and are available to an External Examiner for further review/scrutiny before consideration by the relevant Examination Board. It is not uncommon for some students to feel that their submissions deserve higher marks (irrespective of whether they actually deserve higher marks). To help you assess the relative strengths and weaknesses of your submission please refer to SOM Assessment Criteria Guidelines, located on the Assessment tab of the SOM Student Information Centre Moodle site. The above is an important link as it specifies the criteria for attaining the pass/fail bandings shown below: At UG Levels 4, 5 and 6: 80% to 100%: Outstanding Pass - 1st; 70% to 79%: Excellent Pass - 1st; 60%-69%: Very Good Pass - 2.1; 50% to 59%: Good Pass - 2.2; 40% to 49%: Satisfactory Pass - 3rd; 20% to 39%: Insufficient to Pass - Fail; 0% to 19%: Poor and Insufficient to Pass - Fail. At PG Level 7: 86% to 100%: Outstanding Pass - Distinction; 70% to 85%: Excellent Pass - Distinction; 60%-69%: Good Pass - Merit; 50% to 59%: Satisfactory - Pass; 40% to 49%: Insufficient to Pass - Fail; 0% to 39%: Poor and Insufficient to Pass - Fail. You are strongly advised to review these criteria before you start your work and during your work, and before you submit. Upon receipt of your mark, you are strongly advised to not compare your mark with marks of other submissions from your student colleagues. Each submission has its own range of characteristics which differ from others in terms of breadth, scope, depth, insights, and subtleties and nuances. On the surface one submission may appear to be similar to another but invariably, digging beneath the surface reveals a range of differing characteristics. Students who wish to request a review of a decision made by the Board of Examiners should refer to the UCL Academic Appeals Procedure, taking note of the acceptable grounds for such appeals. 
Note that the purpose of this procedure is not to dispute academic judgement – it is to ensure correct application of UCL’s regulations and procedures. The appeals process is evidence-based and circumstances must be supported by independent evidence.
MATHS 7107 Data Taming Assignment 3 Trimester 3 2024 1 Background The software company was happy with your work on their previous problem about the amount of debugging time required for their software programs. Their new software project is almost finished, and (as always) things are more complicated than expected. The length of the new project is now expected to be about 100,000 lines of code, and they are looking at putting together a debugging team to work on it before it is released to the customers. Since this is such a large project, they’d like some estimate of whether the program will have any fatal errors (a very serious bug) before they release it. The chief of product development has put together the debugging team for this project, with Bob as the team leader. Bob is quite new to the company — he only completed his computer science degree last year and was immediately employed at the company, just in time to collect his yearly bonus. (This was somewhat controversial, but since Bob is the chief’s son, nothing was done about it.) In this short time he has become very popular, as he has managed to attend every work party and function, and also become the office’s champion foosball player. He has promised to take his team along with him to all of these events, in the interests of “team building”. The team members that Bob will be leading all have at least a moderate amount of experience, and they all met their KPIs last year, so they all received their pay bonuses. Yet, the CEO is a little concerned about Bob being in charge of such an important project, so they’d like you to do some analysis on the probability of Bob’s team missing any fatal errors. They have provided you with 4 data sets, one from each of the company’s worldwide divisions. Using this data, try to help the CEO determine if this new team will be any good. As with the last report, the CIO uses R and R Markdown, and even completed Data Taming in the past. 
So make sure you only use commands from the course, so that the CIO can easily see what analysis you've done. In your R Markdown code chunks, make sure that you do not set echo = FALSE, so that they can see what R code you used to generate your output. But of course, they don't want to see irrelevant warnings or messages. Remember that your report is for the CEO, who is not really a technical person and who certainly doesn't know R, so make sure you include descriptions allowing the average person to understand what you are doing and what the output means.

1.1 Number of digits
When writing your own text, or USING the output from R:
• For integer results, report the whole integer.
• For non-integers with absolute value > 1: use 2 decimal places.
• For non-integers with absolute value < 1: use 3 significant figures.
For example:
◦ 135.5681 ≈ 135.57
◦ −0.0004586 ≈ −0.000459
Exceptions:
• If you're just PRINTING the output from R, then just keep the output as it is.
– But if you have R do the rounding for you, then you need to conform to the conventions listed above.
• If your data has fewer digits of precision than specified above (e.g. because of the way it was stored in the original data, or because of the way it was calculated) then only report that level of precision.

2 The data
The company has four datasets, labelled programs_0.csv, programs_1.csv, programs_2.csv and programs_3.csv (one from each of the divisions). Each dataset contains 6 columns:
• FError: if the program was found to have a fatal error.
• LOC: the number of lines of code in the program.
• XP: the experience level of the debugging team leader.
• foosball: if the office with the debugging team has a foosball table this is yes, otherwise no.
• Parties: the number of office parties that the program's debugging team attended during the debugging project: none, some or many.
• Bonus: yes or no, indicating whether or not the team members received their bonus in the previous year.
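The reporting conventions in Section 1.1 can be sketched as a small formatter. (Python here is purely for illustration of the rule; the assignment itself must be done in R, and the function name is hypothetical.)

```python
def report_number(x):
    """Format a number per the assignment's reporting conventions:
    integers reported whole; |x| > 1 to 2 decimal places;
    |x| < 1 to 3 significant figures."""
    if float(x).is_integer():
        return str(int(x))
    if abs(x) > 1:
        return f"{x:.2f}"      # 2 decimal places
    return f"{x:.3g}"          # 3 significant figures

print(report_number(135.5681))    # 135.57
print(report_number(-0.0004586))  # -0.000459
print(report_number(42.0))        # 42
```

These match the two worked examples given above (135.5681 ≈ 135.57 and −0.0004586 ≈ −0.000459); in R the analogous tools are round() for decimal places and signif() for significant figures.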
Each dataset has data on 36,000 software programs. Luckily, the data itself has already been cleaned and so there should not be any missing or erroneous rows in the data. However, the data cleaner is a typical Gen-Y who has trouble spelling, and a keen interest in deep cuts of Coolio, so be on the lookout for some required data taming.

IMPORTANT! If you remove any data, then make sure you only remove data that you MUST remove. Do not just delete data because it is inconvenient. You must have specific instructions from the client, or it must be an impossible value, before you remove any data from your analysis. Even then, you need to describe why it was removed.

3 Your job
To help the company, we will analyse the data of fatal errors, and then make a prediction about how Bob's team will go.
Note: Make sure you write text to explain what you are doing at each point and why you are doing it. You need to justify all the things you do or claim. Also describe the results.
1. Load the correct dataset and save it as a tibble. Output the first 10 lines of the dataset and the dimensions of the data set.
2. Using dot points, identify what types of variables we now have in our data set, i.e., "Quantitative Discrete", "Quantitative Continuous", "Categorical Nominal", "Categorical Ordinal". (Don't just describe what data type they are in the data set — you need to think about the type of variable in the context of the meaning of the data.) Make sure you provide some justification for your choice of variable types.
• Don't just provide vague statements, but be very concrete about describing this particular set of data.
3. Now it's time to tame our data. But since we are going to fit a logistic regression model, we need to modify our requirements a little bit.
• Fix and tame all column names.
• Convert the fatal error status to a factor data type, with yes for 1 and no for 0.
• Treat the number of lines of code as a quantitative continuous variable.
This is because we want to fit a series of lines, which assumes that the predictor is continuous.
• If you have identified any Categorical Ordinal variables, store them as a factor.
• Make the remaining variables conform to the Tame Data conventions in Module 2 (page 3).
Output the first 10 rows of your data and the dimensions of the data set.
4. Setting the correct seed, split your data into a training set (with 25,000 rows) and a testing set with the remaining rows. Output the first 10 lines of each dataset and the dimensions of each data set.
5. Fit a logistic regression model to your training data, with the fatal error status as the response and all other variables as the predictors. (Just use them individually, don't include any interaction terms.) Output the summary of the model.
6. Since we are using general linear models, the model summary in Question 5 describes linear geometric objects, where the dimension of the geometric object is determined by the number of quantitative continuous predictors. We have only a single quantitative continuous predictor, so our model describes a set of lines. How many lines are described by the model in Question 5? Make sure you give some justification for your answer.
• (Hint: see the Week 8 seminar and pages 12 – 16 of Module 7. The model summary output should help.)
7. Now it is time to get serious with our data. There may be some interactions between the variables in the data set, so fit a new model to your training set using all the individual variables and all the second-order interaction terms. Use Anova() to find the p-values for each of the variables. Identify all interaction terms that meet the 90% significance level.
• (Hint: if you have three predictors x1, x2, x3, then the second-order interaction terms are x1x2, x1x3, x2x3. There is an easy way and a hard way to do this — see the Reminder sheet for the easy way.)
8. We'll now apply backwards stepwise regression.
As we learned in Module 7, best practice is to only remove terms one-by-one, starting with the least significant. However, the CEO has said that we are not to consider the foosball table (since the staff are likely to strike if there is any suggestion that we might recommend removing it).
(a) So ignore anything to do with the foosball table, and fit a new model with all the remaining individual variables and interactions. Show the Anova() output.
(b) Then, ONLY looking at the interaction terms, continue with step-by-step backwards stepwise regression to find a model where all interaction terms meet the 90% significance level. At each step, identify the interaction term that you will remove, and why you will choose that one. Then show the resulting Anova() after you fit each model.
• The "principle of marginality" tells us that a variable shouldn't appear in an interaction term if we don't have the variable appear by itself.
(c) Finally, now focus on the individual terms and finish the backwards stepwise regression so that all terms (individual terms and interaction terms) meet the 90% significance level. At each step, identify the variable that you will remove, and why you will choose that one. Then show the resulting Anova() after you fit each model.
• (Note: since everybody's training set is different, you may find that there are no non-significant individual terms at this point. In which case, just explain that in text and show the Anova() again.)
9. (a) Which interaction terms are significant (at the 90% level) in your final model?
(b) Thinking about the context of the data, provide some reasonable hypotheses for why those interaction terms might represent real effects (and are not just statistical noise).
10. So we have now fit a logistic regression model for the log-odds, which has the general form

log( π̂_i / (1 − π̂_i) ) = b̂_i

where b̂_i is an estimated function of the predictors. Write down the general form of b̂_i for your final model in Question 8.
Keep the coefficients as pronumerals for now, so it should look like:

b̂_i = β̂_0 + ...

Be sure to define all variables in your equation.
11. Looking at Question 10, the geometric situation is slightly more complicated now than in Question 6, although our model should still produce a set of lines.
(a) How many lines does your final model describe? Make sure you provide some justification for your answer.
(b) Are the lines all parallel? If not, explain why not.
12. Now output the summary of your final model showing the estimated coefficients, and use that to write b̂_i with all the estimated coefficients replacing the β̂_j pronumerals.
13. What is our estimate for the log-odds of a program still containing a fatal error if:
(a) the office has a foosball table, and the team is led by a highly experienced staff member, who doesn't allow his team to attend any work functions, but they all received their bonuses last year?
(b) the office has a foosball table, the team leader has a moderate amount of experience, who lets his staff attend some functions, but the staff did not get their bonus last year?
14. Now apply your final model to the testing data. Produce a new tibble containing the true classes, the predicted classes and the prediction probabilities. Output the first 10 lines of this tibble and the dimensions of the data set.
15. Now we need to evaluate our model.
(a) Find the confusion matrix and the accuracy of the model.
(b) If "having a fatal bug" is classified as a success, find the sensitivity and specificity of our model. (Hint: make sure you calculate the values yourself, as R may not choose the right level.)
(c) Plot the ROC curve. You might want to add the following to your autoplot(): + geom_vline(xintercept = ...) + geom_hline(yintercept = ...) to help you identify which is the correct ROC curve.
(d) What is the AUC of this ROC curve?
16. Finally, let's try to answer the CEO's question.
Based on your model, do you predict that Bob’s team will result in a fatal error in the final program? Write some text to interpret your results for the CEO, and make sure you give the probabilities of your predicted class. 4 Submission You must submit your assignment via MyUni. Do not email it to the teaching staff. Detailed instructions are on the assignment submission page in MyUni. Make sure that all your output is relevant to the questions being asked. 5 Deliverable Specifications (DS) Before you submit your assignment, make sure you have met all the criteria in the Deliverable Specifications (DS). The client will not be happy if you do not deliver your results in the format that they’ve asked for.
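As a reminder of the arithmetic behind Questions 13 and 16 (your report itself must use R commands from the course): the fitted model gives you a log-odds value, and the predicted probability follows by inverting the logit. A minimal Python sketch with entirely made-up coefficients and predictor values (your own fitted model supplies the real ones):

```python
import math

def logit_to_prob(log_odds):
    """Invert the logit: p = exp(b) / (1 + exp(b))."""
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

# Hypothetical final model: b = beta0 + beta1 * LOC(thousands) + beta2 * [XP = high]
beta0, beta1, beta2 = -2.0, 0.015, -0.8   # made-up coefficients, NOT from the data
loc_thousands, xp_high = 100, 1           # a 100,000-line project, experienced leader

b = beta0 + beta1 * loc_thousands + beta2 * xp_high
p = logit_to_prob(b)
print(round(b, 2), round(p, 3))
```

A log-odds below zero corresponds to a predicted probability below 0.5, which is the kind of statement the CEO will actually understand when you interpret your results.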
N1572 Services Marketing Academic Year 2024 – 2025 Group Assignment (GWS) Instructions

GOAL
The goal of this coursework is to assess the performance of a real service provider based on online customer reviews and to formulate actionable managerial recommendations for the company. You can choose whether to analyse a negative/dissatisfactory service performance – i.e., review(s) that highlight the issues the consumer(s) experienced in the service encounter(s) – or a positive/satisfactory service performance – i.e., review(s) that highlight what went well in the service encounter(s).

CONTENT/STRUCTURE
1. Select the review(s) and identify the issues (or "what went well") in the service encounter(s).
You are tasked with inferring either the issues (i.e., areas requiring improvement) if you decided to analyse a negative/dissatisfactory service performance, or what went well (i.e., aspects that were successful) if you decided to analyse a positive/satisfying performance, by scrutinising online customer reviews posted on platforms (e.g., Trustpilot, Google Reviews, Booking.com, Yelp.com) regarding (a) previous service encounter(s). This analysis may be based on a single detailed review or a set of reviews that demonstrate a consistent pattern. The reviews cannot be older than one year – i.e., they had to be published between September 2023 and September 2024.
Some tips: The more detailed the review, the easier it will be for you to understand what the issues are (or what went well). Look for specific examples and recurring themes in the customer feedback. Consider how different elements of the service (e.g., staff, facilities, processes) contribute to the issues (or positive aspects) you identify. Being familiar with the industry or company you choose can also ease the identification and analysis.
2.
Diagnose the problems (or "what went well") in the service encounter(s)
To diagnose the dissatisfying or satisfying service encounter you will use:
• Actor-Network Theory (ANT): Describe the human and non-human actors involved in the service encounter that contributed to the (dis)satisfaction and identify their actions.
• Gaps Model: Based on the analysis of the reviews and the ANT analysis, identify the gaps in which the service provider underperformed (in the case of a negative/dissatisfying service experience) or excelled (in the case of a positive/satisfying service experience).
3. Formulate managerial implications
Discuss the most important actions that the management should take in response to the analyses you did. That is, how can the service provider address the gaps (in the case of a negative/dissatisfying service experience) or keep up with the good quality provided (in the case of a positive/satisfying service experience)? You need to provide actionable and clear recommendations.
Some tips: Make sure to use and explicitly identify theories and concepts from the class, textbook or other academic sources to support your diagnosis and suggestions. Your diagnosis and suggestions should draw on the content covered throughout the module, but you can also use additional theories that we did not cover but that you believe are relevant.
4. Description of lessons learned
Please address the following three questions. Be as specific as possible.
• What did you learn from this assignment about yourselves as consumers?
• What did you learn from this assignment that will help you be better (marketing) managers?
• What did you learn from this assignment about working in a group?
5. Reference list (please use Harvard or APA referencing)
6. Appendix: the screenshot(s) of the review(s) and the links to the review(s) you selected and analysed

OUTCOME
• The content discussed above should be contained in the slides.
• The recommended number of slides is 20 – 25 (excluding the title slide, reference list and appendix).
• There should be no text in the notes section.
• The title and the appendix slides should include all the required information (i.e., name of the service provider and students' ID numbers).

Marking Criteria
The group written submission is weighted at 30% of your final grade. Your work will be evaluated based on the following:
[Marking-criteria table not fully legible in this copy. The recoverable criteria are: the quality of the selected review(s) and the identification of the problems (10 marks; Reading & Research, Critical Thinking); the ability to match the service problems with appropriate managerial actions based on your readings (35 marks; Knowledge & Understanding, Application, Critical Thinking, Reading & Research); and presentation rules and style (10 marks; Reading & Research, Presentation).]
You should expect detailed feedback on your work, highlighting what worked, what did not work, and how you can improve in the future. This feedback will relate to the grading criteria mentioned above and be summarised in a small chart covering: Knowledge & Understanding, Application, Critical Thinking, Reading & Research, Presentation & Style, and Teamwork.
COMP3161/COMP9161 Concepts of Programming Languages Sample Exam Session 2 2011

Question 1 [25 Marks]
Consider the following inductive definition of evaluation rules for a restricted form of boolean expressions. Boolean expressions: Evaluation rules:
A) [7 marks] Give the derivation of the evaluation for the following expression:
• and(not(false); and(true; not(true)))
B) [7 marks] Are the rules unambiguous? If so, briefly explain why. If not, give an example expression for which the set of rules allow more than a single derivation.
C) [11 marks] The rules listed above give a small step semantics. List the inference rules which specify an equivalent big step semantics.

Question 2 [25 Marks]
A) [10 marks] In the lecture, we discussed the E-machine as an example of an abstract machine which handles value bindings explicitly by maintaining a value environment. One of the possible return values of the E-machine are function closures.
i) What is a function closure?
ii) Give an example of an expression whose evaluation in the E-machine requires the creation of a closure.
B) [15 marks] We discussed two distinct methods to handle exceptions: the first method required that, when an exception is thrown, the evaluation unrolls the stack until the matching catch-expression is found. The second method made it possible to directly jump to the matching catch-expression. Describe the second method:
i) What are the components of the state of the abstract machine?
ii) How does the state of the machine change when a catch-expression is evaluated?
iii) How does the state of the machine change when a raise-expression is evaluated?
For (ii) and (iii), you do not have to give the exact transition rule — it is sufficient to describe how the state is affected.

Question 3 [25 Marks]
A) [6 marks] For each of the following three pairs of type expressions, determine whether the pair has a most general unifier. If so, please provide it.
i) (a, b) → (b, a) and (int, c) → (c, c)
ii) a → (a, a) and (b, b) → b
iii) int → int and float → int
B) [9 marks] Give the principal type of the following (polymorphic) MinHs expressions:
i) Inr(Inl(True))
ii) letfun f (x) is fst (snd (x));
iii) letfun g (x) is case x of Inl(a) -> a Inr(b) -> b end end
C) [10 marks] Consider the following MinHs types:
• ∀a.∀b.(a * b → c) → (a → b → c)
• ∀a.∀b.(a → b) → (b → a)
• ∀a.∀b.∀c.(a → b) → (b → c) → (c → a)
• ∀a.() → a
• ∀a.a → ()
For which of these types do terminating MinHs functions exist?

Question 4 [25 Marks]
A) [10 marks] Progress and preservation are central concepts for strongly typed languages.
i) Give the definition of progress and of preservation in the context of a strongly typed language.
ii) The presence of partial functions can be problematic with respect to progress. Describe how they can be handled in a strongly typed language such that both progress and preservation still hold.
B) [5 marks] Give an example each of a type constructor which is covariant and a type constructor which is contravariant in at least one of its argument positions.
C) [10 marks] Java's array type is covariant. Why is this problematic?
VHDL Circuit Design and Simulation
VHDL Modelling and Simulation of a Polynomial Evaluator

a. Assignment Aims
This assignment will enable you to become familiarised with, and gain experience of, the industrially relevant software Intel Quartus Prime (digital system design) and ModelSim (simulation software). The design and modelling of digital circuitry will be undertaken using a hardware description language (HDL) called VHDL within the Quartus Prime software. VHDL is an industry standard programming language for representing complex digital circuits, at different levels of abstraction, quickly and easily. The VHDL code will be compiled and simulated using the ModelSim software, to enable the functionality of the design to be verified. An introductory tutorial for the software tools, on how to implement VHDL code and verify the function/operation through simulation, is provided.

b. Learning Outcomes
The following learning outcomes will be assessed by this assessment.
Knowledge and Understanding Outcomes
Apply modern digital CAD tools and Hardware Description Languages (HDLs) to the development and analysis of digital design problems.
Evaluate the methodologies, techniques, performance limitation factors and cost drivers for the design of digital systems.
Ability Outcomes
Demonstrate an awareness of developing technologies related to integrated circuit design.
Analyse data for a design problem (before solving) using an appropriate design methodology and assess the quality of the solution.

c. Assignment Brief
Section 1
The VHDL code provided in Figure 1 models a digital circuit that evaluates a polynomial function for a fixed 'x' parameter value and 4 variable inputs (a0 to a3), using a high level of abstraction. The polynomial being evaluated is:

f(x) = a0 + a1·x + a2·x² + a3·x³

Note: The inputs ai enter in series with a3 first, followed by a2, etc. (a3, a2, a1, a0). The value of x is fixed. ai, x and fx are all integers. The value of fx will be updated every clock cycle.
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.STD_LOGIC_unsigned.all;
use ieee.std_logic_arith.all;

ENTITY Section1_NHE2483 IS
  PORT ( clk, res : IN  BIT;
         ai, x    : IN  INTEGER := 0;
         fx       : OUT INTEGER := 0);
END Section1_NHE2483;

ARCHITECTURE bhv OF Section1_NHE2483 IS
  SIGNAL reg1 : INTEGER := 0;
BEGIN
  PROCESS
  BEGIN
    WAIT UNTIL (clk'EVENT AND clk = '1');
    -- NOTE: the source listing was truncated after "IF res = '1' THEN";
    -- the remainder is a reconstruction consistent with the brief
    -- (serial Horner-style evaluation with a3 entering first).
    IF res = '1' THEN
      reg1 <= 0;               -- synchronous reset clears the accumulator
    ELSE
      reg1 <= reg1 * x + ai;   -- Horner step: multiply by fixed x, add next ai
    END IF;
  END PROCESS;
  fx <= reg1;                  -- fx is updated every clock cycle
END bhv;
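The serial scheme described in the brief (a3 entering first, an accumulator multiplied by the fixed x each cycle) is Horner's rule for polynomial evaluation. A quick cross-check of that recurrence in Python; the coefficient and x values below are arbitrary examples, not taken from the assignment:

```python
def horner(coeffs_high_to_low, x):
    """Evaluate a polynomial by repeatedly computing acc = acc * x + a_i,
    consuming coefficients from a3 down to a0 (as the circuit does)."""
    acc = 0
    for a in coeffs_high_to_low:
        acc = acc * x + a
    return acc

# f(x) = a0 + a1*x + a2*x^2 + a3*x^3 with example values
a0, a1, a2, a3 = 5, 4, 3, 2
x = 3
assert horner([a3, a2, a1, a0], x) == a0 + a1*x + a2*x**2 + a3*x**3
print(horner([a3, a2, a1, a0], x))  # -> 98
```

Hand-computing the same expected values is a useful way to verify your ModelSim waveforms cycle by cycle.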
ACFIM0019 Financial Management December 2024

Overview
• Your summative coursework represents 100% of the final mark for the unit.
• The coursework is in the form of three reports (see the detailed requirements below).
• Late penalties will apply if the coursework is submitted late.
• The coursework is individual work: you must work on this yourself and not as a group. You will be required to make a plagiarism statement and your submission will be tested for originality.
• A maximum word allowance has been set for each of the three reports. You must include a word count for each report on the cover page.

Coursework requirement
You are expected to prepare three separate reports in accordance with the requirements below:

Report A (25%, 700 words maximum):
Sam is an individual investor and he currently holds Amazon's stock (Nasdaq: AMZN). He plans to form a portfolio by investing in a different stock. As a financial consultant, you are expected to host an investment analysis meeting with Sam. You are required to prepare a report to address the following:
• In addition to Amazon, select one firm from the real world; you can pick any firm that is listed on any of the stock exchanges. Collect historical daily price/return data of these two stocks for at least five years. You also need to collect their past two years' accounting information.
Ratio analysis (10%):
• Select one ratio by yourself and illustrate the chosen ratio over the past two years with numerical calculations. You are expected to discuss the purpose and the meaning of your chosen ratio, and compare the ratio of the two companies. Furthermore, you need to discuss one limitation of financial ratio analysis. You don't need to include original financial statements (e.g., statement of profit & loss), but you need to provide a data source link within your report.
Portfolio analysis (15%):
• You are expected to calculate the expected returns and standard deviations of the two firms' stocks (the original price/return data are not required to be shown in your report) and the correlation coefficient between the two stocks. You need to provide a data source link within your report.
• You are then expected to calculate the expected returns and standard deviations (variances) of portfolios with different sets of weights across the two stocks. You are expected to show detailed steps of your calculation. Based on your calculation, discuss the importance of diversification.
• For the above analyses, you can choose how to show the formulas and calculation process (e.g., screenshots from other software, equations in Word, etc.). The formulas and calculation process do not count towards the word limit.

Report B (50%, 1600 words maximum):
Helix, a UK transportation company, is considering investing in a new project in the personal care industry. The project will be partly debt-financed. Helix hired you to evaluate this hypothetical investment project.
• You are expected to apply two different appraisal approaches to conduct an investment feasibility study and produce an evaluation report. Net Present Value is a mandatory approach, and you can choose the other approach yourself.
• Detailed discussion of how relevant elements/factors would potentially affect the evaluation results should be included.
• Illustrations with numerical calculations based on your own estimated data are required. You can create your own estimates without sourcing real-world data. For example, you may set the project length to 5 years with a project size of around 1 million. You are expected to set up your own data for the project-related elements/factors (e.g., project length, project size, potential revenues, relevant costs, financing channels, tax rates, estimated cost of capital, inflation, etc.).
• You need to show and explain all the detailed steps of your numerical calculations.
• You can choose how to show the formulas and calculation process (e.g., screenshots from other software, equations in Word, etc.). The formulas and calculation process do not count towards the word limit.

Report C (25%, 700 words maximum):
Discuss whether the capital market is efficient.
• You are expected to make reference to the market efficiency theories, and discuss the Efficient Market Hypothesis (EMH), relevant theories and the empirical findings in the academic literature.
• A consistent referencing style should be used (e.g., Harvard referencing style). The reference list does not count towards the word limit.
EMS724: Finite Element Analysis
Coursework: Part 1 (70%)
For this part, you are required to analyse either an axisymmetric or 3D (solid) component using static analysis in ABAQUS. The aim is to model a realistic structure, perform a mesh convergence study, and interpret the results. The report should be no more than 5 pages, excluding the title page and, if necessary, appendix and references. Your analysis should be clear, reproducible, and concise, and the report must include the following sections:

Report Structure
1. Abstract (10 marks)
Provide a short and concise summary of the report, detailing the objective, results, and key findings. It should give the reader a clear overview of the work you performed and the conclusions found.
2. Introduction (15 marks)
Define the problem you are solving and provide the necessary background information. This includes:
• A clear statement of the geometry of the component, the material properties, loading conditions, and boundary conditions.
• Ensure that all necessary information is included so that someone can replicate your model.
• If you reference sources for dimensions, materials, or other specifications, ensure they are correctly cited.
• You should also analyse a parameter to improve the design (e.g. different materials, rounded-off corners, or support placements). This is important for your education, but it also makes the exercise realistic, as models are used to improve designs in addition to just analysing them. So explain what you are doing in the introduction section (this should only take a couple of sentences!).
3. Results (25 marks)
Present your analysis results, focusing on:
• Mesh convergence analysis: Demonstrate that the solution converges as the mesh is refined. Include graphs such as max stress vs. mesh size to illustrate this. This is so important in modelling analysis as you must never rely on a single calculation.
Instead, perform multiple runs with increasing mesh resolution, comparing stresses, strains, or deformations at key points, and make sure they converge. Hint: Mention any challenges due to mesh refinement limitations imposed by academic licenses, and explain how this might have affected your convergence results.
• Discuss how changing the mesh type (triangular vs. quadrilateral, or tet vs. hex-dominated meshes) and/or the polynomial order (linear vs. quadratic) affects the convergence results.
• Show relevant output plots and explain the key findings, focusing on areas of high stress or potential failure.
• Show how the parameter you changed affects the performance of the structure. For this, do not repeat the mesh convergence. Use a suitable mesh resolution from the previous analysis and change your problem parameter with this mesh.
4. Conclusions (20 marks)
Summarise the key findings of your analysis.
• Highlight the accuracy of your results and any recommendations for improving the model or conducting future studies.
• If the results didn't fully converge due to software limitations, acknowledge this and suggest what could be done to overcome these issues in future analysis.

Important Hints and Guidelines
• Hint 1: Don't rely on just one calculation! Perform a mesh convergence study by running multiple analyses with increasing mesh resolution, and compare key results like stress or deformation at important points in the structure.
• Use graphs to illustrate convergence, such as max stress vs. mesh size.
• Experiment with different polynomial orders and element types, and discuss which ones work best for your problem.
• Hint 2: Choose a model with moderate complexity, allowing for meaningful results without running into difficulties with mesh refinement due to software license limits.
• Simple structures (like a single bar) will result in penalties due to insufficient complexity, but overly complex models may not converge within license restrictions.
• Hint 3: Presentation matters!
• Avoid unnecessary screenshots from ABAQUS, poor-quality graphs, or untidy figures. Many student reports contain duplicated figures of plots, which is a total waste of space. Personally, I would try to limit it to within 6 solution plots.
• Ensure all plots are properly labelled with axes, and keep the report organised and concise. Make sure I can read the legend in ABAQUS – so often it is left too small.
• Think of your report as something you'd present to a future employer; clarity and professionalism are key.
• Hint 4: Unit consistency is critical! Always use SI units (e.g., mm-N-MPa or m-N-Pa). Failure to use correct units can result in a deduction of up to 20 marks.

Optimisation Coursework: Part 2 (30%)
In this section, you will design and optimise, using ATOM (Abaqus Topology Optimization Method), a 2D structure (e.g., a bracket) that supports a specified loading. Your task is to start from a generic domain (e.g., a rectangle) and apply appropriate boundary conditions and loading. The goal is to maximise stiffness while using only a percentage of the volume of the original domain, with VF (the Volume Fraction, or % of the original structure) ranging from 10% to 40% in increments of 10%. Additionally, you will investigate the effects of mesh resolution for VF = 30%, varying the mesh from, say, coarse (e.g., 20x20 elements) to fine (e.g., 200x200 elements). The report for this part should be no longer than 800 words and should focus on summarising the setup and findings. Often in the workplace you are requested to provide very short reports that are quick to read by busy bosses, so this is a good skill to develop, i.e. fitting information into a limited space.

Part 2 Report Structure
1. Introduction & Results (15 marks)
• Problem definition: Introduce the 2D structure you are analysing (e.g., a bracket or other load-bearing component).
Briefly explain the loading conditions, boundary conditions, and the goal of maximising stiffness under the constraint of reduced volume.
• Clearly state the range of VF values you will explore (from 10% to 40% of the volume of the original domain).
• Topology optimisation results: Present and summarise the results for each VF value (10%, 20%, 30%, and 40%). Focus on key findings, such as the stiffness achieved and the material distribution for each optimisation run.
• Use visual representations (figures showing optimised designs) to illustrate how the structure changes as the volume constraint is varied.
• Mesh resolution analysis for VF = 30%: Discuss the effects of mesh resolution (e.g., coarse vs. fine mesh) on the optimisation results for VF = 30%. Perhaps comment on the trade-offs between computational time and solution accuracy, and whether finer meshes lead to better-optimised designs or convergence.
• You can include graphs or tables comparing key metrics like stiffness or computation time for different mesh resolutions.
2. Conclusion (15 marks)
• Summarise the key outcomes of your optimisation study. Highlight the best design achieved (i.e., which VF value produced the best stiffness-to-volume ratio).
• Mention any challenges encountered during the optimisation process, such as mesh sensitivity or computational limitations, and suggest potential improvements for future analyses.
• If relevant, provide insights into how the optimised structure could be improved or adapted for industrial applications. Hint: Think critically about the practicality of the optimised design in terms of manufacturability, cost, and material efficiency. If the optimised structure would be difficult to manufacture, suggest design improvements.
• Hint 1: Use SI units throughout the report (e.g., mm-N-MPa). Be consistent with units to avoid unnecessary deductions.
• Hint 2: Present your results clearly and concisely, focusing on key findings without overwhelming the reader with unnecessary details. Given the word limit, focus on representative examples (e.g., one optimised design for each VF value and one example of coarse vs. fine mesh).
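The convergence judgement asked for in Part 1 (and the VF = 30% mesh study in Part 2) ultimately reduces to checking that a key output stops changing as the mesh is refined. A small Python sketch of that check, using made-up max-stress values in MPa; your own ABAQUS runs supply the real numbers:

```python
def converged(values, tol=0.01):
    """Return True once the relative change between the last two
    successively refined results drops below tol (e.g., 1%)."""
    if len(values) < 2:
        return False
    prev, last = values[-2], values[-1]
    return abs(last - prev) / abs(last) < tol

# Hypothetical max-stress results (MPa) for finer and finer meshes
max_stress = [182.0, 205.0, 214.0, 216.5, 216.9]
for i in range(2, len(max_stress) + 1):
    print(i, max_stress[i - 1], converged(max_stress[:i]))
```

A common rule of thumb is to accept the mesh once the last refinement changes the result by less than about 1%; whatever tolerance you use, state it explicitly in the report alongside the max stress vs. mesh size graph.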
ITS66704 (Sept 2024) Advanced Programming
Assignment Task 2 & Task 3 (Group Project)
15% (PART A - ANALYSIS AND DESIGN)
15% (PART B - DEVELOPMENT)

OBJECTIVES (MLO3)
The objectives of this assignment are to enable you to:
1. MLO3: Construct program solutions using an object-oriented programming language and approach.

Scenario: Fitness Tracker Application (FTA) Design and Development
In this group project, students will have the opportunity to challenge their knowledge and skills by developing an innovative and usable Fitness Tracker Application (FTA). The development of a Fitness Tracker Application involves creating an intuitive interface that allows users to set and track fitness goals, log activities, and monitor progress over time.

Background: The project will be divided among the team members. One group will be responsible for building the user profile management and fitness goal-setting features. Another group will handle the exercise logging module, allowing users to input details like duration, type of exercise, and calories burned. A third group will focus on data visualization, creating charts and graphs that display progress in key areas like weight, distance run, or calories burned over time. Additional features, such as a nutrition and diet planning module, could be developed by another team, allowing users to log their meals and track nutritional intake. Advanced functionality could include integrating data from wearable devices (optional) and providing real-time updates on user activity. Throughout the project, collaboration will be required to ensure smooth data flow between the different modules, ensuring an efficient and user-friendly fitness tracking experience.

Objectives:
1. To develop a comprehensive understanding of the application development process, from conception to completion.
2. To demonstrate creativity, problem-solving skills, and attention to detail in designing and implementing a minimum viable product (MVP).
3.
UI/UX Design and Implementation: a. Understand and apply principles of user-centered design to create an intuitive, aesthetically pleasing interface that enhances user experience. b. Develop interactive and responsive layouts using JavaFX or Swing, focusing on form inputs, data visualization, and user feedback mechanisms.
4. Event Handling and Data Binding using different UI components: a. Learn how to handle various user interactions such as form submissions, button clicks, and real-time updates. b. Implement data binding to seamlessly connect the user interface with underlying data models for smooth updates and real-time data display.
5. Data Management and Persistence: a. Design and implement data structures for managing user profiles, exercise logs, and fitness goals. b. Learn techniques for persisting data, such as using local storage, file systems, or integrating with databases to save and retrieve user information.
6. Modular Programming and Team Collaboration: a. Gain experience in modular software development by dividing tasks into separate modules (e.g., user profiles, logging activities, data visualization) and integrating them cohesively. b. Practice collaboration using version control systems (e.g., Git) to manage contributions from multiple team members, ensuring smooth project development.
7. Data Visualization: a. Learn how to generate charts and graphs to represent user progress, exploring libraries for data visualization and how to dynamically update visuals based on user input. b. Develop skills in managing real-time data updates for tracking fitness metrics such as calories burned, distance run, and weight loss.
8. Integration with External Devices (Optional): a. Explore how to integrate third-party APIs or wearable devices (e.g., fitness trackers) to import real-time data into the application, enhancing the application's functionality and realism.
9. User Authentication and Security: a. 
Implement secure login and user authentication, applying security measures such as password hashing and role-based access controls to ensure data privacy and protection.
You are required to assemble a team of Java developers (2 – 6 members) to design and implement the Fitness Tracker Application. In this assignment, your team must use an object-oriented approach to model the main application classes, their subclasses (if any) and their corresponding attributes and methods. The application must have a relevant layout, data and corresponding business logic. Each member must contribute to the design of at least one class and/or subclass definition. In addition, group members may collaborate with people from other faculties or industries who have the domain knowledge to define the scope and coverage of your intended FTA. You may want to refer to the article here (Part 1-5) to kick-start your project.
• Note that database usage is not allowed for this assignment. You must use text or binary file I/O for persistent data storage on a local machine. The Data Access Object (DAO) must therefore be redesigned to cater for local file access.
• You do not need to develop the web service endpoints. Instead, you should just develop the common classes and methods in your application as a substitute for the endpoints. The API request will be submitted by method calls.
• For the graphics, you may implement each of the above using suitable UI components available in JavaFX or Swing. Remember to document and justify your UI selection.
Figure 1 below provides an overview of basic fitness app functionality. Figure 1: Basic Fitness App Functionality
The intended system should maximize the implementation of object-oriented concepts such as instantiation, encapsulation, inheritance, and polymorphism. The data storage method is confined to what is covered in the syllabus, i.e., text and binary files only, and does not cover SQL, network, or cloud storage. 
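Because databases are banned, each DAO becomes a class that serializes records to a local file and rebuilds them on load. Your implementation must be in Java (e.g., BufferedReader/BufferedWriter for text files, or ObjectOutputStream for binary files); the sketch below shows the same file-backed DAO pattern in Python purely to illustrate the shape of it. The class name, record fields, and pipe-delimited format are hypothetical, not part of the brief.

```python
import os
import tempfile

class UserProfileDao:
    """Minimal file-backed DAO sketch: one pipe-delimited record per line.
    A Java version would use BufferedReader/BufferedWriter the same way."""

    def __init__(self, path):
        self.path = path

    def save_all(self, profiles):
        # Overwrite the data file with the current in-memory records.
        with open(self.path, "w", encoding="utf-8") as f:
            for name, weight_kg, goal in profiles:
                f.write(f"{name}|{weight_kg}|{goal}\n")

    def load_all(self):
        # Rebuild the records from disk; a missing file means no data yet.
        if not os.path.exists(self.path):
            return []
        profiles = []
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                name, weight_kg, goal = line.rstrip("\n").split("|")
                profiles.append((name, float(weight_kg), goal))
        return profiles

# Round-trip demo with made-up records.
path = os.path.join(tempfile.gettempdir(), "fta_profiles.txt")
dao = UserProfileDao(path)
dao.save_all([("alice", 61.5, "run 5k"), ("bob", 80.0, "lose 3kg")])
print(dao.load_all())
```

Keeping all file access inside the DAO class is what lets the rest of the application stay oblivious to whether the store is a text file, a binary file, or (in other courses) a database.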
You must strictly adhere to this rule. No marks will be awarded to data storage methods that are not within the syllabus coverage.
Step 1: Conceptualization
This step involves brainstorming and defining the core objectives and features of the fitness tracker. The goal is to establish the vision for the app.
Identify Core Features: Begin by defining the key functionality the app will provide. For a fitness tracker, this might include:
• User Profiles for tracking personal data (e.g., name, weight, fitness goals).
• Fitness Goals to allow users to set targets (e.g., weight loss, distance, calorie goals).
• Exercise Logging for users to record their daily workouts (e.g., type, duration, intensity).
• Nutrition and Diet Tracking for users to monitor their meals, calories, and nutrients.
• Progress Visualization through charts and graphs showing fitness trends over time.
• Wearable Device Integration (optional) to automatically sync data from fitness devices.
• User Authentication for security and data protection.
Target Audience: Define who the users are (e.g., fitness enthusiasts, people aiming to improve their health). This helps tailor the features to user needs.
Technology Stack: Decide on the technology to be used, such as:
• JavaFX or Swing for the user interface.
• A local DAO for storing data.
• APIs for integration with fitness devices (optional).
Team Roles: Assign roles to each team member, e.g.:
• UI/UX designer
• Data management
• Class and module developer
• Visualization specialist
• Security expert for authentication and data protection
Step 2: Prototyping
Prototyping involves creating a mock version of the application, usually focusing on the user interface and flow without fully functional code.
Wireframes: Start with low-fidelity wireframes that outline the layout and flow of the app. This can be done on paper or using digital tools like Figma or Adobe XD.
• Include screens for user profiles, logging workouts, diet tracking, and progress graphs. 
• Sketch how users will navigate between screens (e.g., menus, buttons, icons).
Basic UI Flow: Develop a clickable prototype that shows the user's journey through the app:
• Login/Sign-Up page
• Dashboard showing progress and goals
• Screens for logging workouts and meals
• Data visualization page for graphs
Feedback Gathering: Test the prototype with potential users to gather feedback. Ensure that the navigation is intuitive, the layout is user-friendly, and the key features are easy to access.
Step 3: Art and Design
In this step, focus on creating a high-fidelity design of the app, working on both the aesthetics and the technical architecture.
UI/UX Design:
• Visual Design: Choose color schemes, fonts, button styles, and icons that align with the brand identity of the app. Ensure that the interface is clean and visually appealing.
• User Interaction: Map out how users will interact with different elements on the screen (e.g., input forms, buttons, sliders). Ensure the app is responsive and accessible on different devices.
• Responsive Layouts: Consider how the app will adapt to various screen sizes (e.g., desktop, tablets).
Data Access Objects (DAO):
• Entity-Relationship (ER) Diagrams: Design the DAO to store user data (e.g., fitness goals, exercise logs, meal entries). The data store might include:
• User: Stores personal details, login credentials.
• Workouts: Stores logs of each workout.
• Nutrition: Tracks meals and calorie intake.
• Progress: Logs metrics like weight, distance run, calories burned.
• Decide on suitable file types and structure to store the data.
System Architecture:
• Decide on the communication between the UI and the data management.
• Plan how data will be fetched from the DAO and displayed in the UI.
• Consider integrating APIs for device syncing.
Step 4: Implementation
This is the most extensive step, where the actual coding and development of the app happens. 
This stage will be broken into sub-tasks that correspond to the previously designed modules.
User Authentication:
• Implement login and registration forms using JavaFX/Swing.
• Add password encryption for security.
• Implement a session system to keep users logged in.
User Profiles and Fitness Goals: Develop forms for users to input their personal data and set goals (e.g., weight loss, distance targets). Store the data in the local data files and display it on the dashboard.
Exercise Logging Module:
• Create a UI for users to log their workouts (e.g., exercise type, duration, calories burned).
• Include options for adding custom workouts.
• Save workout logs to the data files and retrieve them when needed.
Nutrition and Diet Tracking:
• Build a form to allow users to input their meals (e.g., food items, calories, macronutrients).
• Use a local store of common food items or integrate a third-party API to retrieve nutrition data.
• Display nutritional summaries and allow users to track their daily intake.
Data Visualization:
• Implement charts using libraries like JFreeChart to show users' progress over time.
• Visualize key metrics like weight, calories burned, and distance covered.
• Allow users to filter data by date range and specific metrics.
Wearable Device Integration (Optional):
• Integrate APIs to sync data from fitness devices like smartwatches.
• Automatically log workouts and display them in the app.
Data Persistence and Storage:
• Ensure all user data (workouts, meals, progress) is properly stored in the data files and can be retrieved efficiently.
• Implement backup mechanisms for data safety.
Step 5: Optimization
After the core functionality is implemented, optimize the app for performance, usability, and scalability.
Code Optimization:
• Refactor the codebase to improve efficiency and readability.
• Optimize data-file access to reduce load times for retrieving user data. 
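For the "add password encryption" sub-task above, the standard approach is to store a salted, iterated hash rather than the password itself. Here is a minimal sketch using Python's standard-library hashlib.pbkdf2_hmac; the Java analogue is PBKDF2WithHmacSHA256 via SecretKeyFactory. The iteration count and salt size shown are illustrative choices, not values mandated by the brief.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    # Derive a salted hash; persist salt + iterations + digest, never the password.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    # Re-derive with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("s3cret!")
print(verify_password("s3cret!", salt, iters, digest))  # True
print(verify_password("wrong", salt, iters, digest))    # False
```

The salt defeats precomputed lookup tables and the iteration count slows brute-force attempts, which is why a plain SHA-256 of the password is not enough.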
Performance Tuning:
• Test the app for performance under different conditions (e.g., high number of users, large datasets).
• Address any slow-loading elements, especially in the data visualization module.
User Experience (UX) Refinement:
• Use feedback from initial testing to make the app more user-friendly. For example, reduce the number of steps needed to log a workout or set a fitness goal.
• Ensure the UI is responsive and that the app runs smoothly on different screen sizes.
Security Enhancements:
• Strengthen the authentication system, adding features like two-factor authentication.
• Ensure that sensitive user data (passwords, personal info) is encrypted and stored securely.
Step 6: Presentation
In the final step, the team presents the completed application to stakeholders or peers, showcasing the app's functionality, design, and impact.
Presentation Materials:
• Create a slide deck that summarizes the project's goals, development process, and key features.
• Include demo videos or live demonstrations showing how users can log in, track their workouts, monitor nutrition, and visualize progress.
Highlight Key Features:
• Emphasize unique or standout features such as wearable device integration, real-time data visualization, or the app's seamless user experience.
Technical Walkthrough:
• Explain the system architecture and how the different modules (UI, data storage, security) work together.
• Share code snippets or flowcharts to illustrate key technical decisions.
Challenges and Solutions:
• Discuss the challenges faced during development (e.g., performance issues, security considerations) and how they were resolved.
Future Enhancements:
• Suggest potential future improvements, such as more advanced data analytics, gamification (e.g., fitness challenges), or support for additional wearable devices.
BONUS (Extra credit)
1. Your application has full support for the basic operations of persistent storage for saving user data. 2. 
The use of encryption to secure the user's password is highly desirable. 3. Data is stored in a binary data file.
Follow proper coding style, naming conventions, indentation, and comment the code appropriately. Assessment marks will also take aesthetics (how beautiful the system and user interface are) and uniqueness (how your application differs from that of other groups) into consideration.
Part A – Analysis and Design (30%)
Conduct an in-depth analysis and design of the solution to the problem above. Specifically, you need to provide the following:
1. Work responsibilities and delegation.
2. Documentation of the system including the concepts, design, and prototype.
3. Your UML use case and UML class diagrams.
4. Your user interface (static) prototypes.
Deliverables: A well-structured and properly formatted academic document that contains the detailed specifications for the proposed system, associated solution high-level design diagrams, and interface prototype diagrams. Ensure that your submission includes a cover page which shows your group member names and student IDs. All submissions should be in PDF format (ProjectPartA_GroupNo.pdf).
Part 1 Due Date: 22/11/2024 11:59pm, submitted via the mytimes.taylors.edu.my submission link.
Part B - Development (30%)
Develop the system based on your analysis and design in Part A. Specifically, you need to provide the following:
1. Java source code of the application (system) including the resource files (pictures etc.).
2. Relevant screenshots to prove adequate testing was done.
3. Your OOP documentation to point out where the OOP concepts are implemented in your application.
Deliverables: Your PDF report along with your zipped application project folder. Ensure that your submission includes a cover page which shows your group member names and student IDs. 
File names should be named as follows: “ProjectPartBReport_GroupNo.pdf” – The pdf copy of your report “ProjectPartBProgram_GroupNo.zip” – The application project folder Part 2 Due Date: 13/12/2024 11:59pm submit via mytimes.taylors.edu.my submission link.
Problem Set 3 ECO3121 - Fall 2024. Due 3 PM, November 28, 2024. No late submission is allowed. Please combine your answer, Stata code, and requested output in one pdf file and upload it to Blackboard.
Question 1
Download the aghousehold.dta and village rainfall.dta datasets from the Blackboard site and load them into Stata. The main data we use is the National Fixed Point Survey (NFPS), which is a nationally representative panel dataset (unbalanced) of roughly 5000 households in 88 villages between 1995 and 2002. It is collected by the Ministry of Agriculture and Rural Affairs of China. On Blackboard, we also uploaded a second dataset containing precipitation records for each village in 2001. See the variable list below. Write up the answers to 1) - 6) below. In addition, also attach the do-file that you used to answer the following questions.
variable name | type | format | label
vl id | int | %8.0g | village ID, consistent with household data
lat o | float | %8.0g | latitude of each village
lon o | float | %8.0g | longitude of each village
provname | str24 | %24s | province name in Chinese
cityname | str33 | %33s | city name in Chinese
countyname | str33 | %33s | county name in Chinese
av rain | float | %9.0g | average precipitation in 2001, measured in mm
sd rain | float | %9.0g | standard deviation of precipitation across months in 2001
z rain | float | %9.0g | z-score of precipitation in 2001
First, limit the sample to households in the year 2001 by running “keep if year==2001” in Stata. You can generate the variable yield (output per unit of land) via , and the variable fertilizer (fertilizer application per unit of land) through nh6 3 hx95 nh112 .
1. The variable h95 nh269 indicates how many days household members in each household had been working as temporary migrants (measured in days) in 2001. Generate a binary variable for the household migration decision. It takes the value of 1 if h95 nh269 is greater than zero, otherwise it is 0. 
Generate the natural log of yield and fertilizer application intensity (using log(fertilizer + 1) and log(yield + 1) to smooth zero values). (2 points)
2. Run a linear probability model of the household migration decision in question (1) on the natural log of yield. Interpret the result and comment on its statistical significance. (2 points)
3. List three plausible arguments why the point estimate in question (2) could be biased, and the corresponding bias directions (upwards or downwards) relative to the true causal effect of the household's agricultural production on the household migration decision. (3 points)
4. Now your professor suggests that rainfall (precipitation) could be a valid instrumental variable (IV) for your measure of the household's yield. Merge the household production dataset and the precipitation dataset via vl id, the specific village identifier, using the Stata command merge (many-to-one merge; type “help merge” in Stata for assistance). You decide to use the natural log of average rainfall in 2001 (log(av rain)) as the IV for the natural log of yield. Verify whether the assumption of instrument relevance is satisfied using the first-stage regression, and obtain the results in Stata. Write down the first-stage regression model and interpret the result and the statistical significance of your result. (2 points)
5. Now use Stata to estimate the second-stage IV point estimate (using a linear probability model) as suggested by the professor, and export your result. Write down the second-stage regression model and interpret the result and the statistical significance of your result. (2 points)
6. Now your professor tells you that you can use the ivregress 2sls command directly to replicate the results in question (5). (a) Do you find any difference in the IV estimations (βIV) using the ivregress 2sls command regarding the coefficients and standard errors relative to (5)? 
(2 points) (b) In reference to your answers to questions (2) and (3), is the difference between the linear probability model and IV point estimates as you expected, or rather not? (2 points)
Question 2
Consider the two-way relationship between crop yield and fertilizer usage:
Crop = α0 + β0 Fertilizer + u
Fertilizer = α1 + β1 Crop + v
The first equation models the determinants of crop yield given the amount of fertilizer usage. The second equation models the amount of fertilizer the farmer chooses to apply given the crop yield in the area.
1. What do you expect the signs of β0 and β1 to be? Explain. (2 points)
2. Explain why the OLS estimators for β0 and β1 are biased. If we use the OLS estimator to estimate β0 and β1, what directions are the biases? Explain. (4 points)
3. Suppose the only variables available are Crop, Fertilizer, Sunshine (the sunshine of the area), and Budget (the budget constraint of the farmer). To estimate β0 and β1 by the two-stage least squares estimator, which variables among the data you have should be used as instruments? Be specific: which IV is for Fertilizer and which IV is for Crop? Explain. (4 points)
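The two-stage logic behind both questions — regress the endogenous variable on the instrument (first stage), then regress the outcome on the fitted values (second stage) — can be checked on synthetic data. The sketch below uses pure NumPy on made-up numbers (not the NFPS data, and not Stata): the true coefficient is set to 1.5, and an omitted factor is built in that biases OLS upward while 2SLS recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                    # instrument (think: rainfall)
u = rng.normal(size=n)                    # unobserved confounder
x = 1.0 + 2.0 * z + u + rng.normal(size=n)        # endogenous regressor
y = 0.5 + 1.5 * x + 3.0 * u + rng.normal(size=n)  # outcome; true effect = 1.5

def ols(design, target):
    # Least-squares coefficients for a design matrix with an intercept column.
    return np.linalg.lstsq(design, target, rcond=None)[0]

X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ ols(X1, x)                   # first stage: x on z
X2 = np.column_stack([np.ones(n), x_hat])
beta_iv = ols(X2, y)[1]                   # second stage: y on fitted x

beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]
print(f"OLS:  {beta_ols:.3f} (biased upward by the confounder)")
print(f"2SLS: {beta_iv:.3f} (close to the true 1.5)")
```

One caveat relevant to question 6(a): running the two stages manually like this gives the right point estimate, but its second-stage standard errors are wrong because they ignore that x_hat is itself estimated; ivregress 2sls corrects them.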
EE5434 final project. Data were available on Nov. 5 (see the Kaggle website). Report and source codes due: 11:59PM, Dec. 6th. Full mark: 100 pts.
During the process, you can keep trying new machine learning models and boost the learning accuracy. You are encouraged to form groups of size 2 with your classmates so that the team can implement multiple learning models and compare their performance. If you cannot find any partners, please send a message on the group discussion board and briefly introduce your expertise. If you prefer to do this project yourself, you can get 5 bonus points.
Submission format: The report should be in PDF format. Source code should be in a notebook file (.ipynb); also save your source code as an HTML file (.html). Thus, there are three files you need to upload to Canvas. Remember that you should not copy anyone's code, which can lead to failure of this course.
Files and naming rules: If you have two members in the team, start the file name with G2; otherwise, G1. For example, if you have a teammate and the team members are Jackie Lee and Xuantian Chan, name it G2-Lee-Chan.xxx. 5 pts will be deducted if the naming rule is not followed. In your report, please clearly show the group members.
How do we grade your report? We will consider the following factors.
1. You would get 30% (basic grade) if you correctly applied two learning models to our classification problem. The accuracy should be much better than random guessing. Your report is written in generally correct English and is easy to follow. Your report should include a clear explanation of your implementation details and a basic analysis of the results.
2. Factors in grading:
a. Applied/implemented and compared at least 2 different models. You show good sense in choosing appropriate models (such as some NLP-related models).
b. For each model, clear explanation of the feature encoding methods, model structure, etc. Carefully tuned multiple sets of parameters or feature engineering methods. 
Provided evidence of multiple methods to boost the performance.
c. Consider performance metrics beyond accuracy (such as the confusion matrix, recall, ROC, etc.). Carefully compare the performance of different methods/models/parameter sets. Be able to present your results using the most insightful means, such as tables/figures.
d. Well-written reports that are easy to follow/read.
e. Final ranking on Kaggle.
For each of the factors, we have unsatisfactory (1), acceptable (2), satisfactory (3), good (4), excellent (5). The sum over the factors will determine the grade. For example, student A got 4 good and 1 acceptable for a to e. Then A's total score is 4*4+2=18. The full mark for a to e is 25, so A's percentage is 72%. Note that if the final performance is very close (e.g. 0.65 vs 0.66), the corresponding submissions belong to the same group in the ranking.
Factors that can increase your grade:
1. You used a new learning model/feature engineering method that was not taught in class. This requires some reading and a clear explanation of why you think this model fits this problem.
2. Your model's performance is much better than others' because of a new or optimized method.
The format of the report
1. There is no page limit for the report. If you don't have much to report, keep it simple. Also, minimize language issues by proofreading.
2. To make our grading more standard, please use the following sections:
a. Abstract. Summarize the report (what you did, what methods you used, and the conclusions). (Less than 300 words.)
b. Data properties (exploratory data analysis). You should describe your understanding/analysis of the data properties.
c. Methods/models. In this section, you should describe your implemented models. Provide key parameters. For example, what are the features? If you use kNN, what is k and how did you compute the distance? If you use an ANN, what is the architecture, etc. 
You should separate the high-level description of the models and the tuning of hyper-parameters.
d. Experimental results. In this section, compare and summarize the results using appropriate tables/figures. Simply copying screenshots is acceptable but will lead to a low mark for sure. Instead, you should *summarize* your results. You can also compare the performance of your model under different hyperparameters.
e. Conclusion and discussion. Discuss why your models perform well or poorly.
f. Future work. Discuss what you could do if more time were given.
3. For each model you tried, provide the code of the model with the best performance. In your report, you can detail the performance of this model with different parameters.
The code
The code should include:
1. Preprocessing of the data
2. Construction of the model
3. Training
4. Validation
5. Testing
6. And other code that is necessary
This is the link that you need to use to join the competition: https://www.kaggle.com/t/79178536956041b8acb64b6268afb4de
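As a toy illustration of the "compare at least 2 different models" requirement, the sketch below pits a majority-class baseline against a nearest-centroid classifier on synthetic two-class data and reports accuracy plus a confusion matrix. Everything here is a placeholder for your actual Kaggle pipeline: the data, the two models, and the train/test split are invented, and no external ML library is assumed.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic two-class data: 2-D Gaussians around (-2,-2) and (+2,+2).
X = np.vstack([rng.normal(-2.0, 1.0, size=(200, 2)),
               rng.normal(+2.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

idx = rng.permutation(len(y))
train, test = idx[:300], idx[300:]

def majority_baseline(X_tr, y_tr, X_te):
    # Model 1: always predict the most common training label.
    return np.full(len(X_te), np.bincount(y_tr).argmax())

def nearest_centroid(X_tr, y_tr, X_te):
    # Model 2: predict the class whose training centroid is closest.
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def confusion_matrix(y_true, y_pred):
    # Rows = true class, columns = predicted class.
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

results = {}
for name, model in [("majority", majority_baseline), ("centroid", nearest_centroid)]:
    pred = model(X[train], y[train], X[test])
    results[name] = (pred == y[test]).mean()
    print(name, round(results[name], 3), confusion_matrix(y[test], pred).tolist())
```

The point of the baseline is the "much better than random guess" criterion: any model you report should clearly beat it, and the confusion matrix shows where the remaining errors concentrate.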
LM Data Mining and Machine Learning (2024) Lab 2 – Clustering and PCA Objectives The objective of this lab is to use the methods described in lectures to discover the structure of a particular data set. At the end of the lab your task is to write down an intuitive textual description of the data. The techniques that you should apply are clustering and PCA. What you will need All of the files that you will need are in the zip archive lab2-2024.zip which is on the Canvas page. The Data The data is stored in a text file called lab2Data (contained in the lab2-2024.zip file). The data consists of 1050 points in 6 dimensional space. Each point appears as a row in the data file – have a look at the file to see its structure. There is a ‘header’ at the top of the file that specifies the number of columns and rows. Part 1: Clustering Your first task is to use clustering to try to determine whether there are natural clusters in the data, and if there are, how many. To do this you need to apply clustering to the data. You need two C programs agglom.c and k-means.c. Use the provided .exe files (or compile these two source C programs if needed). The program agglom.c is an implementation of the agglomerative clustering algorithm described in lecture material. You should apply this to the data set to obtain a set of K initial centroids for k-means clustering (see the lecture notes to understand how). Then use k-means.c to locally optimize the centroids. As well as producing a locally optimized set of centroids, k-means.c returns the distortion for that set of centroids relative to the data. I recommend 15 iterations of k-means clustering. Usage of agglom program: agglom dataFile centFile numCent Runs agglomerative clustering on the data in dataFile until the number of centroids is numCent. Writes the centroid coordinates to centFile. 
Usage of k-means program: k-means dataFile centFile opFile numIter
Runs numIter iterations of k-means clustering on the data in dataFile, starting with the centroids in centFile. After each iteration it writes the distortion and the new centroids to opFile.
You should use agglom.c and k-means.c to plot a graph of distortion as a function of K, the number of clusters. Plot distortion for values of K between 1 and 12. To clarify: for K=1 to 12
• Apply agglom.c to the data set to obtain K initial centroids
• Apply 15 iterations of k-means clustering. A list of 15 numbers will appear on the screen. What are they? For each K make a note of the final number.
Plot a graph of these 12 final numbers against K. Note that to analyse the data structure, it might sometimes be useful to plot the distortion on a log scale, or to plot the ratio of the distortion for K clusters to the distortion for K+1 clusters.
Conclusion to Part 1: What does the graph tell you about the structure of the data?
Part 2: Principal Components Analysis (PCA)
To apply PCA to the data you will need to use MATLAB. MATLAB will complain about the header at the start of the data file lab2Data. Therefore I have created a version of this file without the header, called lab2Data-Matlab. Use this file with MATLAB. The procedure for applying and interpreting PCA is described in the lecture material. In brief, the stages are as follows:
1. Load the data into a matrix, X say, in MATLAB.
2. Compute the covariance matrix of the data. You can either do this by implementing the formula for covariance given in the lectures, or you can simply use the MATLAB cov function: >> C = cov(X)
3. Apply eigenvector/eigenvalue decomposition to the covariance matrix: >> [U,D] = eig(C)
Conclusion to Part 2: Write down the eigenvalues. What does the eigenvector/eigenvalue decomposition of the covariance matrix C tell you about the structure of the data set? 
Explain how your Part 2 conclusion is likely to change if each sample in the dataset was modified by adding the value of 15 in the dimension 1 and value of 30 in the dimension 4 (e.g., considering original data sample was [1, 3, 5, 0, 2, 3], the new data sample would be [16, 3, 5, 30, 2, 3]). Finally: Your summary Summarize your findings (with a support for your arguments).
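The Part 1 procedure (get K initial centroids, refine them with k-means, record the final distortion, repeat for each K) can be sketched numerically. The toy below substitutes a simple deterministic farthest-point initialization for agglomerative clustering and runs a basic Lloyd's k-means on synthetic 2-D data with three obvious clusters; everything here (the data, the initialization, the K range) is illustrative and is not the lab2Data file or the provided C programs.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three well-separated 2-D clusters, 100 points each.
data = np.vstack([rng.normal(c, 0.5, size=(100, 2))
                  for c in [(0, 0), (10, 0), (0, 10)]])

def init_centroids(data, k):
    # Crude stand-in for agglom.c: start from the first point, then
    # repeatedly add the point farthest from the chosen centroids.
    centroids = [data[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(data - c, axis=1) for c in centroids], axis=0)
        centroids.append(data[d.argmax()])
    return np.array(centroids)

def kmeans_distortion(data, k, iters=15):
    centroids = init_centroids(data, k)
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centroids[j] = data[labels == j].mean(axis=0)
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())   # total squared distortion

distortions = {k: kmeans_distortion(data, k) for k in range(1, 7)}
for k, dist in distortions.items():
    print(k, round(dist, 1))
```

On data like this the distortion curve drops sharply until K reaches the number of natural clusters and flattens afterwards, which is exactly the "elbow" you are asked to look for in the graph.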
Project: Synthesis Objective: Students will create a cohesive and meaningful artwork that synthesizes two or more themes, styles, or techniques studied during the semester. The project should reflect personal insights and demonstrate an understanding of artistic concepts, styles, and mediums. Project Guidelines: 1. Choose Your Themes: Select at least two projects or themes we explored this semester. Some options include: ○ Identity/Self-Portrait: Explore who you are and your place in the world. ○ Starry Night: Use expressive, dynamic brushstrokes to evoke emotion and movement. ○ O'Keeffe Flower: Focus on magnification, abstraction, and the beauty of natural forms. ○ Thiebaud Cupcake: Highlight repetition, color, texture, and playful realism. ○ Andy Goldsworthy: Utilize natural materials and ephemeral processes to connect with the environment. ○ Ecology/Climate Change Postage Stamp: Address pressing environmental issues through design and symbolism. ○ Artist Podcast ○ Tara Donovan 2. Blend and Innovate: Combine techniques, styles, or conceptual approaches from your chosen themes into one artwork. For example: ○ Create a self-portrait using the color palette and expressive strokes of Starry Night. ○ Design a climate change-themed postage stamp inspired by the scale and abstraction of O'Keeffe’s flowers. ○ Merge Andy Goldsworthy’s use of natural materials with a Thiebaud-inspired focus on pattern and composition. 3. Art Medium: You may work in any medium appropriate for your chosen themes, such as: ○ Painting ○ Drawing ○ Collage ○ Mixed media ○ Photography ○ Sculpture ○ Digital art 4. Reflection Paper: Accompany your artwork with a 1–2 page reflection paper that explains: ○ Why you selected these themes. ○ How your project synthesizes the chosen themes. ○ What techniques or styles you applied and why. ○ Any personal connection or meaning in your work. Project Timeline: ● Proposal Due: Submit a one-paragraph description of your concept by Dec. 3. 
● Work-in-Progress Check: Bring a draft or progress photo of your project to class on Dec. 5 or Dec. 10.
● Final Artwork and Reflection Due: Dec. 12 (there won't be a presentation - this is a work session day).
Assessment Criteria: Your final project will be evaluated based on:
1. Conceptual Depth (30%) – Clear synthesis of two or more themes; thoughtful integration of ideas.
2. Artistic Execution (30%) – Use of techniques, craftsmanship, and attention to detail.
3. Originality and Creativity (20%) – Innovative approach to combining themes and mediums.
4. Reflection (20%) – Insightful analysis of your choices and processes.
Assignment #3 CSCI 201 Fall 2024. 8% of course grade. Title: WeatherConditions.
Topics Covered: Java Classes, HTML, CSS, MySQL, Java Servlets, Databases, JavaScript, AJAX, JDBC.
Overview: Assignment 3 follows the ideas of the previous Assignment 1, Weather Conditions. This assignment focuses both on the web front end and on the database back end of your website, and you will need to use MySQL, the Google Maps API, and the OpenWeatherMap API.
OpenWeatherMap API & Google Maps API: Instead of reading in data through a hard-coded JSON file, you will use the OpenWeatherMap API to retrieve weather data. In other words, the weather data should now be pulled from this API as a JSON data stream, and you will not have to read in data from a JSON file. You can learn more about the API here: https://openweathermap.org/api. You can learn more about the retrieved data at this site: https://openweathermap.org/current. The data will be returned in JSON format. You will use the Google Maps API to generate a map overview when searching by latitude and longitude. You can learn more about the API here: https://developers.google.com/maps/documentation/javascript/tutorial You will need this to create the overlay displayed in Figure 3. You will have to generate API keys for both sites, so make sure you do this early. Validation can take up to a few hours, so do not start this assignment on the last day.
Parsing JSON: The data from the APIs is going to be returned as a JavaScript Object Notation (JSON) file. JSON is a lightweight data-interchange format. In other words, it is a syntax that allows for easy storage and organization of data. It is commonly used to exchange information between client and server, and it is popular because of its language independence and human readability. JSON is built upon two basic data structures that you should already be familiar with: dictionaries (maps) and ordered lists. An object in JSON is represented by an unordered set of name/value pairs (i.e. a dictionary). 
Objects are contained by braces, { }, inside of which the object’s attributes are listed (with the syntax name : value), using a comma as the separator.
As in Assignment #1, you will use GSON to deserialize JSON. GSON is known for its ease and flexibility in converting Java objects into JSON objects (and vice versa), and it is simple and straightforward to use. See Assignment #1 for the steps needed to include the GSON JAR file in your project.

MySQL
In this assignment, you will also track user data. You should construct a simple MySQL database that stores usernames, passwords, search queries for each user, and a timestamp of when each search occurred. You will need to display this data on the Profile page for each user when logged in (see Figure 10).

Images
All images needed for this assignment are included in a ZIP file provided in the assignment folder on D2L Brightspace.

Figure 1 Home Page (City)
Figure 2 Home Page (Lat/Long)
Figure 3 Home Page (Google Maps overlay)

Home Page
At the top right corner of the Home Page, there are two menu items: “Login” and “Register” (Figure 1). Clicking the “Login” menu will direct the user to the Login page (see Figure 4). Clicking the “Register” menu will direct the user to the Registration / Sign Up page (see Figure 5). If the user clicks the “Location” radio button, the page will add a Google Maps icon to the right of the search bar (see Figure 2). The user can click on that icon, which will create a Google Maps overlay for the page (see Figure 3). You can do this by creating three new tags: one for the map with percentage sizing in the center of the viewport, another for the background, and one more to contain the other two. Clicking on the map should make the map disappear, returning the user to Figure 2, and auto-populate the “latitude” and “longitude” fields with the chosen latitude and longitude.
Note: The “Display All” button should be removed.
Figure 4 Login Page

Login Page
Once the user arrives at this page, the user can log in with a pre-existing account. There are three different scenarios to account for:
● Wrong username: This user does not exist.
● Correct username, wrong password: Incorrect password.
● Correct username, correct password: Successful login!
Once the user enters the correct username/password information, the program will redirect the user back to the home page (see Figure 6). Otherwise, the program will keep the user on the Login page and display the error message between the password field and the “Login” button.

Figure 5 Register Page

Registration Page
When the user clicks the “Register” menu, the user will be directed to the Registration page. The user can sign up for a new account on this page. There are three different scenarios to account for:
● Username already taken: This username is already taken.
● Passwords do not match: The passwords do not match.
● Username is unique, and passwords match: Successfully create a new account!
Once the user enters valid registration information, the program will redirect the user back to the home page (see Figure 6). The user is automatically logged into this new account. If the entered information is not valid, the program should leave the user on the Registration page and should display an error message between the “Confirm Password” field and the “Register” button.

Figure 6 Home Page (after logging in)

Home Page (After Login)
Upon successfully logging into an account or successfully creating a new account, the “Login” and “Register” menus should be replaced with “Profile” and “Sign Out”, respectively. Clicking on the “Profile” button redirects the user to the Profile page (see Figure 10). Clicking “Sign Out” will log out the user and redirect the user to the home page (see Figure 1). If the user performs a search while logged in, you should add this data into a MySQL table.
If a search is completed while not logged in, no data is stored in the database. Make sure to add the timestamp of the transaction to the database. This process should be repeated whenever a search is completed from any page of the site.

Figure 7a Search Results

Search Results
The Search Results should be displayed under the Search area as a table displaying the city name and the low and high temperatures for all results, with a dropdown to select the type of sorting, as shown in Figure 7a. The Search Results can be implemented with a section. Selecting from the Sort By dropdown should re-arrange the list as shown in Figure 7b.

Figure 7b Sort Options

The Search area (edit boxes, radio buttons, etc.) is still active, and its functionality remains the same as before. If more than one city matches the search query (e.g. “Springfield” or “Washington”), the OpenWeatherMap API will return data for all cities with the same name. You should use the table to display data for all matching cities.

Figure 8 Details (values definitions)
Figure 9 Details

Details
When clicking on a city name in a row of the Search Results, the Details for that city will replace the Search Results (the section is updated), as shown in Figure 8 and Figure 9.

Figure 10 Profile Page

Profile Page
When the user clicks the “Profile” menu, the user should be directed to a new Profile page. This page will display the user’s search history. When the user is on the Profile page, the menu on top should still read “Home” and “Sign Out”. Using the browser back button should allow the user to go back to searching on the home page. The search history table heading should display the actual username of the currently logged-in user, as shown in Figure 10. The search history list should display all the searches made by the user, either by city or by latitude / longitude. The searches are ordered from newest (at the top) to oldest (at the bottom).
Clicking on a city name or lat/long location should display the Details for that location, under the search history, again using a section.

Grading Criteria
The output must match the screenshots provided above as closely as possible. The maximum number of points earned is 6.

User Login Functionality (0.9)
0.2 - Login and Register links are on the Home page and Register page if a user is not logged in
0.1 - Profile and Sign Out links are on the Home page, if a user is logged in
0.1 - Profile and Sign Out links are on the Profile page, if a user is logged in
0.1 - Login page looks like the screenshot
0.1 - Register page looks like the screenshot
0.1 - Profile page looks like the screenshot
0.1 - Error messages displayed properly on the Login page
0.1 - Error messages displayed properly on the Register page

Home Page (0.8)
0.1 - When the latitude/longitude radio button is clicked, the Google Maps icon is displayed
0.3 - Clicking the Google Maps icon displays the Google Map in the center of the page
0.4 - Clicking on the Google Map makes the map disappear and populates the latitude/longitude text fields properly

Search Results (1.6)
0.5 - Searching by city returns proper results
0.5 - Searching by latitude/longitude returns proper results
0.3 - Sorting functionality works properly
0.1 - Clicking on a city name (and not on the temperatures) shows the city details
0.1 - If no city matches the search, the table and drop-down menu are not shown, but an appropriate message is displayed
0.1 - If only one city matches the search, the Sort By drop-down menu is not shown
Details (1.5)
1.5 - Details area shows proper live data

Top Search Bar (0.1)
0.1 - When the latitude/longitude radio button is clicked, the Google Maps icon is displayed

Profile Page (0.6)
0.1 - Profile page shows all the searches performed by a user
0.1 - Clicking the browser back button on the Profile page brings back the home page
0.1 - Profile page shows the user’s name
0.2 - Results are sorted from newest to oldest
0.1 - Profile page looks like the screenshot
ICS 33 Fall 2024
Project 4: Still Looking for Something
Due date and time: Wednesday, November 27, 11:59pm
Git repository: https://ics.uci.edu/~thornton/ics33/ProjectGuide/Project4/Project4.git

Background
Our recent discussion of Functional Programming alluded to the fact that what makes programming languages different from one another isn't solely their syntax, though that's certainly part of it. Each programming language asks its users to think differently — sometimes dramatically so — about how best to organize our solution to a problem. What's considered normal (or even desirable) in an object-oriented language might be awkward (or even impossible) in a purely functional language, and vice versa. How we'd solve a problem in a data manipulation language like SQL would be radically different from how we'd solve the same problem in a language like Python. Naturally, some kinds of problems will be better solved with one set of tools than another, so we'd expect different programming languages to excel at different tasks; part of why we want some exposure to more than one programming language is so that we can start to develop our sensibilities about the ways that languages can differ, and how we might be able to recognize the kinds of problems that are a better fit for some than others. That way, even if we don't become experts in multiple languages at once, we'll at least have embraced the idea that no single language is the best solution to every problem; that'll open our minds to learning about alternatives when they show promise, rather than falling in love with our first language and never being able to let go of it, or simply riding the waves of hype and fashion wherever they lead, for better or worse. Fortunately, we've already had a head start on that journey, because Project 2 asked you to build a single application that was written using more than one programming language.
We used Python to implement our user interface and the "engine" underlying it, while we instead used SQL to describe our interactions with the database that stored and managed the program's data. The technique of writing systems made up of code written in multiple programming languages is sometimes called polyglot programming, which, like many choices we make in computing, represents a tradeoff: We give up the simplicity of writing everything in a single language, but we gain access to a set of abilities that approach the union of the abilities of all of the languages we're using. As long as we can figure out how to make code in one language work together with code in another — in Project 2, we relied on the sqlite3 library to smoothly communicate between them — and as long as we're careful to use the "best-fit" language for each part of our program, we can sometimes achieve things that are much more difficult to achieve when writing everything in one language. The more complex the system, the greater the chance it may benefit from polyglot techniques.
Among the differences between programming languages are the differences in their syntax, which is to say that different programming languages allow us to use different keywords and symbols in different orders. Where a SQL statement might begin with SELECT or CREATE TABLE, a Python statement might instead begin with class or def. There is some overlap between the words and phrases allowed across programming languages, but there are almost always differences somewhere. We can write a + b in both Python and SQL, for example, but the statements in which it can legally appear would need to be structured differently. As you'll see in later coursework, the ability to describe the syntax of a programming language is a fairly universal need, so we would benefit from understanding a universal solution to it. A grammar is a well-known formalism that can do that job nicely.
Grammars provide a formal way to describe syntax, allowing us to specify the valid orders in which words and symbols can appear. Grammars form the theoretical basis of parsers like the one provided in Project 3, whose main job is to decide whether a sequence of symbols is valid, by inferring the structure from which it derives its meaning. But we can use grammars in the opposite direction, too — generating sequences of symbols that we know are valid, rather than determining whether a given sequence of symbols is valid — and that's our focus in this project. To satisfy this project's requirements, you'll write a program that randomly generates text in accordance with a grammar that's given as the program's input. (Note that parsing and generating text are hardly the only tasks for which we can use grammars; they're recurrent in the study of computer science, so you're likely to see them again in your studies, probably more than once.) You will also gain practice implementing a mutually recursive algorithm in Python, which will strengthen your understanding of our recent conversation in which we were Revisiting Recursion.

Grammars
A grammar is a collection of substitution rules, each of which specifies how a symbol can be replaced with a sequence of other symbols. Collectively, the substitution rules that comprise a grammar describe a set of sentences that we say make up a language. As a first example, consider the following grammar.

A → 0 A 1 A | B
B → #

There are two rules that make up our grammar: One specifying how the symbol A can be replaced, and another specifying a different replacement for B. We say that symbols that can be replaced in this way are variables, which I've denoted here with boldfaced, underlined text. Meanwhile, we say that symbols that cannot be replaced are terminals, and that the sentences that are part of a language described by a grammar are made up only of terminals.
There are two variables in our grammar (A and B) and three terminals (0, 1, and #). The vertical bar (|) symbol in the rule for A indicates a choice, which is to say that we can replace an occurrence of A with one of two options: either with the symbols 0 A 1 A or with the symbol B. Lacking a vertical bar, the rule for B offers only one option: We can only replace B with the terminal #. We consider one of the variables to be the start variable, which is meant to describe an entire sentence. Other variables describe fragments of sentences. For the purposes of this example, we'll say that A is the start variable.

Generating a sentence from a grammar
From a conceptual point of view, a grammar can be used to generate strings of terminals within its language in the following manner. (I should point out that this will not be precisely how your program will generate its output, but we'll start here, since it's a good way to understand the concepts underlying what we're doing.)
1. Begin with a sentence containing only one symbol: the start variable.
2. As long as there are still variables in the sentence, pick one of them, find the corresponding rule with that variable on its left-hand side, and choose one of its options. Replace the variable with the symbols in the option you chose.
A sequence of substitutions leading from the start variable to a string of terminals is called a derivation. When the leftmost variable is always replaced at each step, the derivation is called a leftmost derivation. The sentence 0 0 # 1 # 1 # is in the language described by the grammar above, a fact we can prove using the following leftmost derivation.

A ⇒ 0 A 1 A ⇒ 0 0 A 1 A 1 A ⇒ 0 0 B 1 A 1 A ⇒ 0 0 # 1 A 1 A ⇒ 0 0 # 1 B 1 A ⇒ 0 0 # 1 # 1 A ⇒ 0 0 # 1 # 1 B ⇒ 0 0 # 1 # 1 #

The algorithm described above would be able to produce this same sentence by making the same choices for each application of a rule that was made in this derivation.
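The two-step procedure above can be sketched directly in Python. This is a hedged illustration of the concept only (the names are mine, and it is not the generator-based object design the project requires); it always replaces the leftmost variable, so each run performs a random leftmost derivation of the example grammar:

```python
import random

# The example grammar: A -> 0 A 1 A | B, and B -> #.
# Keys are variables; anything not a key is a terminal.
RULES = {
    'A': [['0', 'A', '1', 'A'], ['B']],
    'B': [['#']],
}

def generate(start):
    """Repeatedly replace the leftmost variable until only terminals remain."""
    sentence = [start]
    while any(symbol in RULES for symbol in sentence):
        # find the leftmost variable still present in the sentence
        i = next(i for i, symbol in enumerate(sentence) if symbol in RULES)
        option = random.choice(RULES[sentence[i]])
        sentence[i:i + 1] = option  # substitute the chosen option in place
    return ' '.join(sentence)

print(generate('A'))  # e.g. "0 # 1 #", or just "#"
```

Every string this produces is a sentence of the grammar's language, since each step applies one of the grammar's substitution rules.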
We would say, generally, that the language of a grammar is the set of all strings of terminals for which such a derivation can be built. It's worth noting that there are two aspects of this problem where infiniteness comes into play.
· The set of strings in a language may be infinite. For example, if a grammar contained the rule X → 1 X | 1, there would be no limit on how many times we could choose the 1 X option instead of the 1 option. Still, if we're generating strings at random, we'll always pick exactly one of these options, and we expect, sooner or later, to choose the 1 option, which would prevent the generated string from becoming any longer.
· A grammar can be written in a way that describes individual strings of infinite length. If the only choice for the symbol Y is the rule Y → 1 Y, a derivation of a string containing Y will never end; any substitution based on that rule will still lead to a string containing Y. (This is similar to the problem we encounter when we have a recursive function with no base case.) In practice, though, a properly written grammar will eventually lead only to sentences of finite length.

The program
The basic goal of your program is to use the description of a grammar to randomly generate sentences that are in the grammar's language. There are a number of details to consider, which are described below.

The format of a grammar file
The program will read a grammar file, which contains the description of a grammar to be used for generating random sentences. To include that feature in your program, though, we'll need to agree on a format for grammar files, which is specified in detail below.
· Each rule starts with a line containing only a left curly brace {. We'll say that each of these lines is called a rule opener.
· Each rule ends with a line containing only a right curly brace }. We'll say that each of these lines is called a rule closer.
· Any line of text that is not between a rule opener and a subsequent rule closer is considered to be a comment (i.e., it's irrelevant from our perspective, but can be a useful way to write a grammar file that would be more understandable to a human reader).
· After a rule opener, the next line specifies the name of the variable for which a rule is being described. This line will consist of only letters and digits, but no whitespace (or other) characters.
· Subsequent lines of the rule are the options for substituting a sequence of symbols in place of the rule's variable. There will always be at least one of these lines, and each of them will be as follows.
o It will begin with a positive integer (i.e., an integer greater than zero) that specifies the option's weight, which determines how frequently we'll choose it, relative to the others. That weight will be followed by a space.
o After that will be zero or more symbols, each adjacent pair separated by a single space. When a symbol consists of letters and digits surrounded by brackets (i.e., [ and ]), it is a variable; otherwise, it is a terminal. (Note that the syntactic meaning of spaces means that symbols cannot contain spaces.)
As we'll see, a grammar file doesn't specify a start variable; that's specified subsequently as input to the program, so that the same grammar file can be used with different start variables in different runs.
Having seen a description of the format, let's take a look at an example grammar file, so we can fully understand the details of what it means.

{
HowIsBoo
1 Boo is [Adjective] today
}
{
Adjective
3 happy
3 perfect
1 relaxing
1 fulfilled
2 excited
}

Let's suppose that HowIsBoo is the start variable. If so, then the grammar describes sentences whose basic structure is always Boo is [Adjective] today, with the [Adjective] replaced with one of five adjectives:
· There's a 3-in-10 (30%) chance of the adjective being happy.
· There's a 3-in-10 (30%) chance of the adjective being perfect.
· There's a 1-in-10 (10%) chance of the adjective being relaxing.
· There's a 1-in-10 (10%) chance of the adjective being fulfilled.
· There's a 2-in-10 (20%) chance of the adjective being excited.
Where did those probabilities come from? The sum of the weights for all of the options for the Adjective variable is 10. (3 + 3 + 1 + 1 + 2 = 10.) Each individual weight is a numerator, and that sum is the denominator; happy has a weight of 3, so its odds are 3-in-10 (30%), and so on. One thing this example demonstrates is that weights have no meaning across rules, but only within a rule. For example, the sum of the weights in the rule for HowIsBoo is 1, while the sum for Adjective is 10, which means that "1 point" of weight means more in the HowIsBoo rule than it does in the Adjective rule.

A more complete example grammar file
To provide you with a more complete example of a grammar file, check out the example linked below.
· grin.txt
That's a grammar file that, when its start variable is GrinStatement, generates random statements written in the Grin language from Project 3. The generated statements will have no syntax errors in them, so it should be possible to run the lexer and parser on them; however, since the statements are generated individually and separately, it's unlikely that you'd be able to run them as a Grin program, because they may have run-time errors or other problems, such as infinite loops, division by zero, or jumping to non-existent labels. Generating semantically valid Grin programs (i.e., ones that you could successfully execute) is a problem that grammars are not equipped to solve, as it turns out.

The input
The program will begin by reading exactly three lines from the Python shell (i.e., using Python's built-in input function).
1. The path to an existing grammar file.
(If only the name of the file is specified, it will need to be located in the program's current working directory, which, by default, is the same directory as your main module.)
2. A positive integer specifying the number of random sentences to be generated. (Note that, as always, zero is not a positive number.)
3. The name of the start variable. (A variable's name does not include the brackets; the brackets are a syntactic device within the grammar file to make clear when an option is referring to a variable.)
You can safely assume that the grammar file exists, that it will be valid (i.e., it will follow the grammar file format described above), and that the program input will be formatted according to the rules specified here; we won't be testing your program with inputs that don't meet those requirements, so your program can do anything (or even crash) if given such inputs. We also will not be testing with a grammar file that describes infinite-length sentences, which means that your program can do anything (or even crash) if given such a grammar file.

The output
The output of your program is simple: If asked to generate n sentences, your program would print a total of n lines of output, each being one of those sentences, and each having a newline on the end of it. No more, no less. Each sentence is a sequence of terminals, separated by spaces. That's it.

A complete example of the program's execution
Let's suppose that we had a grammar file named grammar.txt identical to the shorter example shown above. Given that, an example of the program's execution might look like this.
grammar.txt
10
HowIsBoo
Boo is happy today
Boo is fulfilled today
Boo is relaxing today
Boo is excited today
Boo is perfect today
Boo is happy today
Boo is perfect today
Boo is perfect today
Boo is excited today
Boo is happy today

Don't forget that the output is generated randomly, which means that a subsequent run of the same program with the same grammar file and the same input might reasonably be expected to produce different output. Remember, too, that the grammar file specifies its options as weights that determine probabilities, rather than absolute counts. Consequently, a subsequent run that generates 10 sentences may, for example, have a different number of occurrences of Boo is happy today; just because there's a 3-in-10 chance that happy is chosen in each sentence doesn't mean that exactly three out of every ten sentences will contain happy. (You can flip a coin ten times in a row and it can come up heads all ten times, even though there's only a 1-in-2 chance of heads on each flip. It's not likely, but it's not impossible, either.)

Design requirements
There are a number of ways that this problem could be solved, but we'll focus on an approach that leads to a clean, mutually recursive algorithm for solving it, which you'll be required to implement.

Representing the grammar as objects
From the description of the grammar file, we can see that it's built up from the following concepts.
· A grammar contains a collection of rules.
· Each rule is made up of a variable and one or more options.
· Each option has a weight and a sequence of symbols, each of which is a terminal or a variable.
These facts lead directly to an idea of how to design a combination of objects that can be used to represent a grammar.
· A class representing a terminal symbol.
· A class representing a variable symbol.
· A class representing an option.
· A class representing a rule.
· A class representing a grammar.
This may seem like a heavy-handed approach, but it pays off if we take it a step further.
What if all of these classes implemented the same protocol, which allows us to ask any of their objects to do the same job: "Given this grammar, generate a sentence fragment from yourself"?

Generating random sentences from a grammar
Once you've represented your grammar as a combination of objects as described in the previous section, it is possible to implement a relatively straightforward mutually recursive algorithm to generate random sentences from it. The algorithm revolves around the idea of generating sentence fragments, then putting the fragments together into a complete sentence. Here is a sketch of such an algorithm.
· To generate a sentence from a grammar, look up the rule corresponding to the start variable, then ask that rule to generate a sentence fragment.
· To generate a sentence fragment from a rule, choose one of its options at random (in accordance with their weights), then ask that option to generate a sentence fragment.
· To generate a sentence fragment from an option, iterate through its symbols, generating sentence fragments from each one.
· To generate a sentence fragment from a variable symbol, ask the grammar for the rule corresponding to that variable, then ask that rule to generate a sentence fragment.
· To generate a sentence fragment from a terminal symbol, yield only the value of that terminal; that's its sentence fragment.
This mutually recursive strategy provides a great deal of power with relatively little code; by relying on Python's duck typing mechanism, we can allow the "right thing" to happen quickly and easily. (We say it's a "mutually recursive" strategy because a grammar might use a rule, which uses one of its options, which uses one of its symbols that is a variable, which would, in turn, use another rule.)
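The bullet points above can be sketched as a shared, generator-based protocol. This is a hedged illustration only; the class and method names are assumptions for the example, not a design the project mandates:

```python
import random

# Each class implements the same protocol: generate(grammar) yields the
# terminals of a sentence fragment. Names here are illustrative assumptions.
class Terminal:
    def __init__(self, value):
        self.value = value
    def generate(self, grammar):
        yield self.value  # a terminal's fragment is just its own value

class Variable:
    def __init__(self, name):
        self.name = name
    def generate(self, grammar):
        # ask the grammar for the matching rule, then recurse into it
        yield from grammar.rule_for(self.name).generate(grammar)

class Option:
    def __init__(self, weight, symbols):
        self.weight = weight
        self.symbols = symbols
    def generate(self, grammar):
        for symbol in self.symbols:
            yield from symbol.generate(grammar)

class Rule:
    def __init__(self, variable, options):
        self.variable = variable
        self.options = options
    def generate(self, grammar):
        # random.choices honors the weights when picking one option
        option, = random.choices(self.options,
                                 weights=[o.weight for o in self.options])
        yield from option.generate(grammar)

class Grammar:
    def __init__(self, rules):
        self._rules = {rule.variable: rule for rule in rules}
    def rule_for(self, name):
        return self._rules[name]
    def generate_sentence(self, start):
        return ' '.join(self.rule_for(start).generate(self))

# Assumed usage, hand-building the HowIsBoo example grammar from above:
grammar = Grammar([
    Rule('HowIsBoo', [Option(1, [Terminal('Boo'), Terminal('is'),
                                 Variable('Adjective'), Terminal('today')])]),
    Rule('Adjective', [Option(3, [Terminal('happy')]),
                       Option(3, [Terminal('perfect')]),
                       Option(1, [Terminal('relaxing')]),
                       Option(1, [Terminal('fulfilled')]),
                       Option(2, [Terminal('excited')])]),
])
print(grammar.generate_sentence('HowIsBoo'))  # e.g. "Boo is happy today"
```

Note how the recursion is mutual: Grammar asks a Rule, which asks an Option, which may ask a Variable, which asks the Grammar for another Rule.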
Furthermore, if we implement that algorithm using Python's generator functions — each of these methods yields a sequence of terminal symbols, rather than returning them — we can also do this job while using relatively little memory; our cost becomes a function of the depth of the grammar's rules (i.e., how deeply we recurse), rather than the length of the sentence we're generating, which is likely to be a significant improvement if we're building long sentences.

Your main module
You must have a Python module named project4.py, which provides a way to execute your program in whole; executing project4.py executes the program. Since you expect this module to be executed in this way, it would naturally need to have an if __name__ == '__main__': statement at the end of it, for reasons described in your prior coursework. Note that the provided Git repository will already contain this file (and the necessary if statement).

Modules other than the main module
Like previous projects, this is a project that is large enough that it will benefit from being divided into separate modules, each focusing on one kind of functionality, as opposed to jamming all of it into a single file or, worse yet, a single function. As before, we aren't requiring a particular organization, but we are expecting to see that you have "kept separate things separate." Unlike in Project 2 and Project 3, we are not requiring the use of Python packages, though you are certainly welcome to use them if you'd like.

Working and testing incrementally
As you did in previous projects, you are required to do your work incrementally, to test it incrementally (i.e., as you write new functions, you'll be implementing unit tests for them), and to commit your work periodically into a Git repository, which you will be bundling and submitting to us.
As in those previous projects, we don't have a specific requirement about how many commits you make, or how big a "feature" is, but your general goal is to commit when you've reached stable ground — a new feature is working, and you've tested it (including with unit tests). We'll expect to see a history of these kinds of incremental commits.

Testing requirements
Along with your program, you will be required to write unit tests, implemented using the unittest module in the Python standard library, and covering as much of your program as is practical. As before, write your unit tests in Python modules within a directory named tests. As in previous projects, how you design aspects of your program affects whether you can write unit tests for it, as well as how hard you might have to work to do so. Your goal is to cover as much of your program as is practical; as in recent projects, there is not a strict requirement around code coverage measurement, nor a specific number of tests that must be written, but we'll be evaluating whether your design accommodates your ability to test it, and whether you've written unit tests that substantially cover the portions that can be tested.
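As a hedged illustration of the kind of unittest-based test this calls for, here is a sketch that tests a hypothetical weighted-choice helper. The helper's name and signature are assumptions for the example, not part of the project; the design point is that passing in a seeded random number generator makes randomized code testable:

```python
import random
import unittest

def choose_option(options, weights, rng=random):
    """Pick one option at random, honoring the weights.
    `rng` is injectable so tests can pass a seeded random.Random."""
    choice, = rng.choices(options, weights=weights)
    return choice

class ChooseOptionTest(unittest.TestCase):
    def test_only_listed_options_are_chosen(self):
        options = ['happy', 'perfect', 'relaxing']
        for _ in range(100):
            self.assertIn(choose_option(options, [3, 3, 1]), options)

    def test_seeded_rng_makes_the_choice_reproducible(self):
        first = choose_option(['a', 'b'], [1, 1], rng=random.Random(33))
        second = choose_option(['a', 'b'], [1, 1], rng=random.Random(33))
        self.assertEqual(first, second)

if __name__ == '__main__':
    unittest.main(exit=False)
```

A test module like this would live in the tests directory; designing your classes so that randomness can be injected this way is one example of a design that accommodates testing.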