Data Structures: Huffman Encoder Part 2
November 15, 2024

Last week you wrote code that takes a table of characters and their frequencies and builds a Huffman encoding. The purpose of a Huffman encoding is to represent a chunk of text efficiently by encoding common characters with fewer bits. This week you will use the code you wrote last week to encode and decode text.

You will start by defining a new class called HuffmanConverter, whose constructor takes as input a string and stores it in a variable called contents.

Your first step is to calculate the frequency of each character, including punctuation and whitespace. Each ASCII character corresponds to a number between 0 and 255 (inclusive), which you can get by casting a character c to an integer: int i = (int)c. You can then get the character back as c = (char)i. We will store the frequencies of the characters in an integer array, count, of size 256 such that count[(int)c] = the count of character c. HuffmanConverter will have a method public void recordFrequencies() that stores the counts of the characters in contents in an attribute count. Print the table of frequencies you've created.

Second, you will build a Huffman tree from count using the code you've already written. Your code should build a heap from count and then call HuffmanTree.createFromHeap. The Huffman tree should be stored in an attribute huffmanTree. This should occur in a method public void frequenciesToTree().

Third, you will extract a code from the tree. You will want to store the code in a string array attribute called code such that code[(int)c] = the Huffman encoding of character c. This will be done in a method public void treeToCode(). You will also need to write a private method private void treeToCode(HuffmanNode t, String encoding).
This private method recursively calls itself on the children of the Huffman node t, keeping track of the encoding built so far in encoding; once it reaches a leaf, it adds the character at that leaf, together with its encoding, to code. In treeToCode(), first set every element of code to the empty string "", then call treeToCode at the root of the Huffman tree. Print the code you have created; this can also be done with a call to huffmanTree.printLegend().

Fourth, once you've built code, you can encode contents into a string of bits in a method public String encodeMessage(). Print the encoded message. Also print the message size in the ASCII encoding (8 bits per character) and in the Huffman encoding.

Finally, you will write a method public String decodeMessage(String encodedStr) to decode a given bit string using huffmanTree. To do so, read one bit at a time and use it to navigate through huffmanTree (0 means go left, 1 means go right). Once you reach a leaf, store the character at that leaf and return to the root of the Huffman tree. Call decodeMessage on your encoded message and print the decoded message, which should be identical to your original message.

We will call the main() method of HuffmanConverter from the command line, passing the path to a file of text. To recap, the output should be:
- the list of characters and their frequencies
- the Huffman encodings
- the encoded message
- the number of bits needed to encode your message in the Huffman encoding vs. in ASCII
- the decoding of your encoded message (which should be the same as the initial message)

You are provided a file with a template of HuffmanConverter that includes code to import a text file. You are also provided with two example input and output files. The inputs are two love poems taken totally at random from http://www.lovepoemsandquotes.com. Feel free to try out your own inputs to test your program. Please submit your completed HuffmanConverter.java file on Brightspace.
You should not need to make changes to any of the files from part 1 in order to do this assignment.
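To see how the pieces fit together before writing the Java, here is a compact sketch of the whole pipeline in Python. It is illustrative only: the tuple-based tree and the function names are stand-ins, not the HuffmanTree/HuffmanNode classes from part 1, and your Java version must use the count and code arrays described above.

```python
import heapq
from collections import Counter

def build_code(text):
    # Count character frequencies (the job of recordFrequencies).
    freq = Counter(text)
    # Build the Huffman tree with a min-heap (the job of frequenciesToTree).
    # Heap entries are (frequency, tiebreak, tree), where a tree is either a
    # single character (a leaf) or a (left, right) pair (an internal node).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    _, _, root = heap[0]
    # Walk the tree to extract the code (the job of treeToCode):
    # 0 = go left, 1 = go right; leaves hold the characters.
    code = {}
    def walk(t, encoding):
        if isinstance(t, str):            # reached a leaf
            code[t] = encoding or "0"     # single-character corpus edge case
        else:
            walk(t[0], encoding + "0")
            walk(t[1], encoding + "1")
    walk(root, "")
    return root, code

def encode_message(text, code):
    return "".join(code[ch] for ch in text)

def decode_message(root, bits):
    out, node = [], root
    for b in bits:
        node = node[0] if b == "0" else node[1]
        if isinstance(node, str):         # leaf: emit the character,
            out.append(node)
            node = root                   # then return to the root
    return "".join(out)
```

Round-tripping a string through encode and decode should reproduce it exactly, and the encoded bit string should be shorter than the 8-bits-per-character ASCII baseline whenever character frequencies are uneven.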
HIST 205 Paper 2 Prompts
Due Monday, December 2nd at 2:30 PM (MyCourses assignments tab). Submit as a .doc, .docx, or .pdf file.

Basics: Please respond to one of the following three essay prompts. A compelling essay will directly answer the prompt's question(s) with a specific thesis and present an argument that is supported by ancient textual evidence (quotations, examples, and paraphrased citations from the relevant works). Avoid generalizations: draw your evidence from the text(s), using specific examples! Be specific as well in your use of dates and context. There are many possible answers to each question, and students are encouraged to be creative in their approach. Essays will be judged on how cogently and persuasively they present their argument, and how well students support their argument with analysis of historical sources. Students do not need to consult sources or books beyond the texts specified in each question, though they can use Mathisen and the introductions to the sources for background information and context. Students are encouraged to discuss their ideas with the instructor and/or their conference leader during office hours and over e-mail; in office hours, however, we will only read over sections of papers and detailed outlines.

Requirements: Papers should be 4-5 full pages, typed and double-spaced, in 12-point Times New Roman with normal margins, and are due December 2nd on MyCourses by 2:30 PM. Submit papers as a .doc(x) or .pdf file. Note that you are responsible for making sure that your submitted document is readable. If we can't open your document, late penalties will be applied. A bibliography is only required if you're using translations of the texts not from assigned course readings (such as if you don't use the Penguin translation of the Germania), or if you use outside sources.
Automatic penalties: There will be a grade penalty of five points (from your essay grade out of 100) if the essay is late, and then another 2.5 points off each day at 2:30 PM that the essay is still not in. If the essay is not in a week after the deadline and no extension has been granted, it will count as a 0. There will also be a 5-point penalty if the essay is less than 4 full pages long (including if the essay only reaches this length through large margins/font/spaces between paragraphs, a large title area, etc.). Extensions will only be offered in case of serious medical problems or personal emergencies. If you need to request an extension, e-mail the professor directly as soon as possible. Proof may be required depending on the issue (e.g., for medical problems). There is no penalty for going over 5 full pages, but the grader will stop reading after 6 full pages.

Citation: Please see the document "Citing Ancient Sources" (MyCourses) for a guide to citing ancient literature and bibliography, including a link to a website with conventions for different ancient sources. You may use either footnotes or in-text citations. When paraphrasing or quoting an ancient or modern text, you should always cite it. And remember ancient source citation styles: Tac. Agr. 21; Polyb. 3.23, etc.

Because this paper is due at the end of the semester, if you would like comments on your paper because you intend to look over it and its feedback, please say so in the title area or as you submit it on MyCourses. As a reminder: use of AI chatbots, such as ChatGPT, is considered plagiarism. Keep in mind the feedback you received on paper 1 – this essay will be graded on the same rubric, though the grading standard will be a little higher, especially for providing and citing evidence.

Choice of Essay Questions:

1) In Sallust's Jugurthine War, a Roman history about a late 2nd-century B.C.
Roman war in North Africa, the Numidian King Jugurtha makes the following claim: "the Romans are unjust... suffering from profound greed, the common enemy of all people... They had the same reason for war with Bocchus, with himself, and with all others: lust for empire" (Sall. Jug. 81). Is Jugurtha (through Sallust) right? Either way, why did the Romans go to war so often in the late 3rd century and early 2nd century B.C.? In answering this question, you should consider two case studies, the Second Punic War and the Second Macedonian War, using the sources on these wars we assigned for lecture and conference (Polybius, Plutarch, and Livy).

2) The first emperor of Rome (though he never called himself that), Augustus, claimed that he was restoring the ancestral religious and moral values of Roman society after many years of civil war. One way he did so was by emphasizing the restoration of "traditional" gender roles through legal and moral reforms. Did the life of the woman known as "Turia," both in the civil war period and after peace was restored, conform to Augustus' prescribed gender roles? Do you think that the Laudatio Turiae supports or is critical of Augustus' moral reforms? In answering these questions, for context and comparison you should consider not just the Laudatio Turiae but other sources from and about the Age of Augustus: the Res Gestae readings and the sources I posted on MyCourses about Augustus' laws.

Special citation guidelines: When citing the Laudatio Turiae, note that it has two columns, a left column and a right column, along with line numbers. Please cite like this: Laud. Tur. Right 25-30 (for right column, lines 25-30). In the edition we have assigned, line numbers are only given at the beginning of a new section, so you can be approximate in your citations (e.g., Laud. Tur. Left 3-6 if you're citing something from the first main paragraph).
3) Tacitus' Germania is a description of the customs and territories of the "German" peoples who lived beyond the bounds of the Roman empire. To the Romans, the Germans were often considered the most "barbaric" people imaginable. Is Tacitus' appraisal of the Germans positive or negative? How might the presentation of the Germans in Tacitus' Germania act as criticism or praise of Roman society during Tacitus' lifetime – especially during the reign of the emperor Domitian and soon after? In answering these questions, you should also consider and compare Tacitus' statements about contemporary Roman society in his Agricola to support your interpretations of the Germania.

Overall grading rubric:

A-range essay: answers the question directly with a plausible, specific, and clear thesis/argument. Supports that thesis well by citing and analyzing primary sources, giving relevant examples and textual evidence to support each point. The very best essays will approach the question/sources with nuance, showing awareness of other possible interpretations and the flaws of the primary sources while still making a compelling argument.

B-range essay: answers the question with a good thesis, but does not support its argument well through analyzing and citing primary sources or with examples, or the argument shows structural problems. Or does not have a clear or plausible thesis, but shows skill and effort at analyzing ancient sources. Or is an A-range essay that makes factual mistakes, has writing that is very unclear, etc.

C-range essay: avoids answering the question, or does not have a clear argument, or does not use the sources to support its argument. Or a paper that goes off topic but shows good skills of analysis/argumentation.

D and below: a paper that does not address the question, does not cite/analyze primary sources, has unclear writing, etc.
Faculty of Arts and Science - Statistics
STA304H1 F: Surveys, Sampling and Observational Data (formerly STA322H1)
Fall 2024 Syllabus

Course Meetings
STA304H1 F, LEC0101: Thursday, 3:00 PM - 6:00 PM. Delivery mode & location: In Person, AH 100. Refer to ACORN for the most up-to-date information about the location of the course meetings.

Course Overview
Design of surveys, sources of bias, randomized response surveys. Techniques of sampling: stratification, clustering, unequal probability selection. Sampling inference, estimates of population mean and variances, ratio estimation. Observational data: correlation vs. causation, missing data, sources of bias.

The work of applied statisticians, regardless of their specific job title and area of application, is the most important and exciting work in the world right now. The ability to gather data, analyse it, and communicate your understanding of the underlying process is incredibly valuable. In this course you will learn and apply the essentials of this. We focus on surveys, sampling and observational data. The very stuff of statistical science! We will approach these topics from a practical perspective. You will actually run surveys and learn how messy it is to put one together. You will learn how to think about sampling, how to implement it, and why the details matter. You will forecast an election. And you will conduct original research. More generally, you will learn how to obtain and analyse data and use it to make sensible claims about the world.

Marking Scheme
- In-class group reflection exercise (12%): These largely draw on end-of-chapter "Tutorial" questions. Done in groups randomly created by the instructor. Only the best three of six count. Dates: 2024-09-05, 2024-09-12, 2024-09-19, 2024-10-03, 2024-10-10, 2024-11-14.
- Term papers (30%): Only the best one of two term papers counts, but both should be submitted. Term paper 1 is individual.
Term paper 2 is done in groups of 1-3 and will involve forecasting the US election. Due: 2024-09-27 and 2024-11-04.
- Conduct peer review (6%): You are expected to provide peer review, via GitHub Issues, for three peers. Papers will be distributed by a spreadsheet - add a link to the Issue/PR to a paper and submit your review to Quercus. You will only have 24 hours to do this. Students are not assigning grades to other students; instead, the mark is based on the quality of the feedback they provide to other students. Dates: 2024-09-25, 2024-10-23, 2024-11-27.
- LLMs prompting competition (1%): Experiment to develop a sampling-based approach to pick an optimal LLM prompt. Due: 2024-10-04.
- Mid-term (10%): An in-class mid-term. Questions largely draw on end-of-chapter "Quiz" questions. Date: 2024-10-17.
- Complete survey (1%): Complete a survey to better understand how you used LLMs in this course. Due: 2024-11-29.
- Final paper (40%): Due at 5pm, 2024-11-29.

Please be aware that there are two industry-sponsored awards for final papers written in this class:
• "Investigative Journalism Foundation Award for Best Paper" - The Investigative Journalism Foundation is sponsoring a prize for the best final paper written using their procurement dataset. An IJF reporter will come to class to discuss the details of this later in the term.
• "Open Data Toronto Award for Best Paper" - The City is sponsoring a prize for the best final paper written using data from their Open Data Portal.

Late Assessment Submissions Policy
If no extension has been granted and no accommodation applies, then late submissions will not be accepted.

Policies & Statements

Late/Missed Assignments

Late submissions
If no extension has been granted and no accommodation applies, then late submissions will not be accepted.

Missed submissions
In-class group reflection exercise: Only the best three of six count, so if some are missed it is not an issue.
If you have an accommodation or situation that makes even this impossible, then please consult your faculty/department/college advisor and then email me.

Term Paper I: Please submit something for Term Paper I, even if it gets zero. That one is done individually, while the other term paper is (optionally) done in groups. If you have a situation or accommodation that makes it impossible for you to submit something for Term Paper I, then in that situation Term Paper II must be done individually to ensure fairness with the rest of the class. Again, please consult with your faculty/department/college advisor and then email me.

Term Paper II: If you have an accommodation or situation which means you cannot submit Term Paper II, then the mark for Term Paper I will be used for this 30%. If you submit neither Term Paper I nor Term Paper II, then the 30% will be redistributed: 15% to the mid-term, and 15% to the final paper.

Peer review: There are many opportunities to get the peer review marks through Term Papers I and II and the Final Paper, so if you cannot do any for a particular paper, then just do the others. If you have a situation or accommodation that makes this impossible, then please consult your faculty/department/college advisor and then email me.

Mid-term: If you miss the mid-term due to a medical situation or accommodation, then you will have the opportunity to do it in Week 8.

Final paper: The Final Paper is a critical piece of assessment. If you have a situation or accommodation that makes it impossible for you to submit the final paper, then please email me before Week 11, and cc your faculty/department/college advisor so that we can work out an alternative plan. If you have an emergency situation, then please consult your faculty/department/college advisor and then email me.
LM Data Mining and Machine Learning (2024)
Lab 1 – Text Retrieval

PART 1: TF-IDF BASED TEXT RETRIEVAL

Objective
The objective of this lab session is to apply the text-based Information Retrieval (IR) techniques which we have studied in lectures, namely:
1. Stop word removal
2. Stemming
3. Construction of the index – calculation of TF-IDF weights
4. Retrieval – calculating the similarity between a query and a document

We will apply these techniques to a 'toy' corpus consisting of 112 documents – BEng final year project specifications. These project specifications were submitted by staff in Word format, but I have converted them all into plain text files for the purposes of this lab. However, I did not remove the formatting or the pieces of text which are common to all of the files.

Copy the zip archive lab1-2024 from Canvas and 'unzip' it. You should end up with a new folder called lab1-2024 containing all of the files that you need to complete the lab, including a folder called docOrig which contains 112 text files. The folder lab1-2024 will be the default folder that you work from. Have a look at one of the text files in the docOrig folder. You should be able to identify the common formatting.

Processing of the documents
Before we can do IR we need to apply stop word removal and stemming to each of the documents in our corpus. To do this you will use two executable (.exe) files of the C programs that are in your lab1-2024 folder: stop.exe and porter-stemmer.exe. Note that the source C programs are also provided in case your computer runs a non-Windows operating system – in that case, you will need to compile the source C programs (stop.c, porter-stemmer.c, index.c and retrieve.c).

Task 1: Stop word removal
The first task is to remove stop words from each of the documents. The 50-word stop word list stoplist50 should already be in your lab1-2024 folder. Now run the program stop on one of the documents – AbassiM.txt, for example.
To run the program, just type the following in the Command Prompt window:

stop stoplist50 docOrig/AbassiM.txt

(note that the above includes the path name to tell stop where AbassiM.txt is – it is in the docOrig folder). This should cause a version of AbassiM.txt with stop words removed to be printed on your screen. You need to store this output in a text file AbassiM.stp. To keep the 'stopped' documents separate from the original documents, a folder called docStop has been created in lab1-2024. All of the 'stopped' documents should go in this new folder.

You need to apply stop to all of the project description files. To do this I have created a batch file called stopScript.bat, which you should have in your lab1-2024 folder. In the Command Prompt window just type stopScript followed by 'return'. You need to be in the lab1-2024 folder when you do this. You should now have 112 files in the docStop folder, each with a name of the form filename.stp.

Question 1: What is the percentage reduction in the number of words in a document as a consequence of stop-word removal – specifically, what is the reduction in the case of the file AgricoleW.txt?

Task 2: Stemming
The next task is to apply the Porter stemmer to each '.stp' file. Another folder, docStem, has been created in lab1-2024. This folder will contain a stemmed version of each file from the docStop folder. Basically, for each .stp file you create a .stm file by typing, for example:

porter-stemmer docStop/AbassiM.stp

This causes a 'stemmed' version of AbassiM.stp to be printed on screen. You need this data to be stored in a file called docStem/AbassiM.stm. You need to do this for every .stp file. To do this I have created another batch file called stemScript.bat, which you should have in your lab1-2024 folder. In the Command Prompt window just type stemScript followed by 'return'. You need to be in the lab1-2024 folder when you do this.

Question 2: Find the file AgricoleW.stm.
What are the results of applying the Porter stemmer to the words communications, sophisticated and transmissions?

You should now have:
- 112 original .txt documents in the folder docOrig
- 112 'stopped' documents in the folder docStop
- 112 'stemmed' documents in the folder docStem

Task 3: Create the document index files
If you've forgotten what the document index is, or what it is for, look again at the lecture slides. The next task is to create 3 index files: one for the original .txt documents, one for the .stp documents, and one for the .stm documents. You should have the executable index.exe in your lab1-2024 folder (or compile the program index.c if needed). You should also have a text file called textFileList in your lab1-2024 folder. This is simply a list of all of the original .txt files – one file per line. Type:

index textFileList

followed by 'return'. After a short pause a text version of the index file will be printed on your screen. You need to store this data in a file called textIndex. Type:

index textFileList > textIndex

followed by 'return'. Look at this index file (open it in a text editor such as Notepad) and try to understand the information it contains. The lecture notes will help you. The first part of the file gives the list of documents with their document lengths (this is not the length in bytes – see the lecture notes if you are unclear). The second part of the file gives the list of all words (ordered based on IDF) that occurred in the set of documents, and information related to each word. For each word (its position is indicated in front of the word name), there is the total number of times the word appeared (wordCount), the number of documents it appeared in (docCount), and the IDF value of the word. This is then followed by the list of documents the word appeared in, with the count and the calculated weight.
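If you want to sanity-check what the index contains, the core calculation can be sketched in a few lines of Python. This is an assumption-laden sketch: the exact TF-IDF formula and normalization used by index.c are not specified here, so it uses the textbook tf × idf weighting with idf = ln(N/df), and the function names are made up for illustration.

```python
import math
from collections import Counter

def build_index(docs):
    """docs: {name: list of (already stopped/stemmed) words}.
    Returns an IDF value per word and a TF-IDF weight vector per document."""
    n_docs = len(docs)
    doc_count = Counter()                  # docCount: documents containing the word
    for words in docs.values():
        doc_count.update(set(words))
    idf = {w: math.log(n_docs / df) for w, df in doc_count.items()}
    weights = {}
    for name, words in docs.items():
        tf = Counter(words)                # wordCount within this document
        weights[name] = {w: tf[w] * idf[w] for w in tf}
    return idf, weights

def cosine(q, d):
    """Cosine similarity between two sparse weight vectors (dicts)."""
    dot = sum(q[w] * d.get(w, 0.0) for w in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0
```

With this definition, a word that appears in almost every document gets an IDF close to zero (compare Question 4 below), and retrieval (Task 4) is just the cosine similarity between the query vector and each document vector.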
Now repeat the indexing on the 'stopped' and 'stemmed' files:

index stopFileList > stopIndex
index stemFileList > stemIndex

Question 3: What are the 'document lengths' of the documents docOrig/DongP.txt, docStop/DongP.stp and docStem/DongP.stm? Why are they different? Why is the difference between the document lengths of docStem/DongP.stm and docOrig/DongP.txt greater than the difference between the document lengths of docStop/DongP.stp and docOrig/DongP.txt?

Question 4: The IDF of the term design is approx. 0.009. Why is it so close to zero?
Answer:

Question 5: Find the word algorithm in the three index files. Explain why the entries for this word are different in the three files.

Task 4: Retrieval
The final task in this part of the lab is retrieval. To do this you will need to create a query. This is just a text file containing your query – you can create it using Notepad or WordPad. An example query – in the file query – is in your lab1-2024 folder. This query just contains the text: circuits and devices

Next you need to apply stop word removal and stemming to the query:

stop stoplist50 query > query.stp
porter-stemmer query.stp > query.stm

You should have the executable retrieve.exe of the C program in your lab1-2024 folder (or compile the source C program if needed). You can now do retrieval. Start with the raw text files:

retrieve textIndex query

followed by 'return'. This will return a list of all the documents for which the similarity with the query is greater than 0. It also tells you the identity of the most similar document. Now repeat this for the stopped documents and stopped query, and the stemmed documents and stemmed query:

retrieve stopIndex query.stp
retrieve stemIndex query.stm

Question 6: Compare the results of the above two searches (using .stp and .stm) with the result for the original raw text files. What do you conclude?

Question 7: Repeat Task 4 with one query of your own and report the results.
Answer:

PART 2: LATENT SEMANTIC ANALYSIS

Objective
The objective of the second part of the lab is to apply Latent Semantic Analysis (LSA) to the set of BEng final year project specifications in the docOrig folder. Look at the notes on LSA to remind yourself about the technique, and to put the following sequence of tasks into context.

Task 1: Create the Word-Document matrix
Recall that the Word-Document matrix W is an N x V matrix, where N is the number of documents and V is the vocabulary size (the number of different words in the corpus). The nth row of W is the document vector vec(dn) for the nth document. The executable doc2vec.exe will create the matrix W (or compile the source C program if needed). We will apply this program to the stemmed documents. The command is:

doc2vec stemFileList.txt > WDM

This creates a document vector for each document in the docStem folder and stacks them to create the matrix in the file WDM.

Task 2: Apply Singular Value Decomposition (SVD) to the Word-Document matrix
This is done in MATLAB. You will need the following commands (use single quotes in MATLAB):

>> W = load('WDM');

This reads the data in WDM into the MATLAB matrix W.

>> [U,S,V] = svd(W);

This runs SVD on W, decomposing it as W = U*S*V'.

Question 1: Are the matrices U and V as you would expect? Explain. Verify that the singular values, the diagonal elements of S, are ordered according to size.

Question 2: What are the values of the first 3 diagonal entries of S?

Now recall that the singular vectors, the 'latent semantic classes', correspond to the columns of V. You can access, for example, the first column of V and write it into the vector sv1 by using the MATLAB command:

>> sv1 = V(:,1);

Do this for the first 3 columns of V, creating singular vectors sv1, sv2 and sv3. Now you are going to try to interpret these vectors.
Intuitively, the most important words in determining the interpretation of the vector sv1 are those for which the corresponding coordinate of sv1 is biggest in magnitude (positive or negative). To find the biggest positive value in sv1 we can just use:

>> m = max(sv1);

But we don't just want to know the size of the biggest number, we also need to know its position in the vector so that we know which word it corresponds to. So use:

>> [m,am] = max(sv1);

In this case m is the maximum value in sv1 and am is its index (argmax). Find the words that correspond to the three biggest values in sv1. To achieve this you need to know the order in which the words occur when the document vectors are constructed. The program doc2vec.exe is based on index.exe, and the word order is the same in both programs. So the nth component of a document vector corresponds to the nth word in the corresponding index file. Hint: the most significant word for sv1 turns out to be 'project'.

Question 3: Find the three most significant words for each of the singular vectors sv1, sv2 and sv3. What is your interpretation of the corresponding semantic classes?
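If you would like to check your MATLAB results outside the lab, the same computation can be sketched in Python with NumPy. The matrix and vocabulary below are toy stand-ins, not the real WDM:

```python
import numpy as np

# Toy word-document matrix W: rows = documents, columns = vocabulary,
# entries = word counts (doc2vec builds the real one from the .stm files).
vocab = ["project", "circuit", "signal", "antenna"]
W = np.array([[3, 2, 0, 0],
              [2, 0, 3, 1],
              [1, 1, 2, 2]], dtype=float)

# Full SVD: W = U S V^T, with singular values in decreasing order.
U, s, Vt = np.linalg.svd(W)
V = Vt.T

# The columns of V are the 'latent semantic classes'. For column `col`,
# return the k words whose coordinates are biggest in magnitude.
def top_words(V, vocab, col, k=3):
    order = np.argsort(-np.abs(V[:, col]))
    return [vocab[i] for i in order[:k]]
```

As in MATLAB, numpy.linalg.svd returns the singular values in decreasing order, and with the default full_matrices=True the shapes of U and V match what MATLAB's svd(W) gives for a non-square W (U is N x N, V is V x V).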
LAB 8/9 Financial Statements: Excel FORMAT & Presentation
Due Friday, November 22 by 8pm

BB Corp began its first year of business on January 1, 2020. At the company's fiscal year-end (December 31) the following unadjusted trial balance was prepared:

Unadjusted Trial Balance @ Dec 31, 2020

Account Name                 Dr          Cr
Cash                    $ 17,000
A/R                       11,000
Supplies                   2,200
Prepaid Insurance          1,500
Notes Receivable 2-yr     10,000
Truck                     60,000
Building                 300,000
FV-OCI Investment         10,000*
A/P                                 $ 12,000
Unearned Revenue                      13,000
N/P 6% 8-year                        100,000
Common Shares                        230,000
RE                                         0
AOCI                                  10,000
Dividends                  3,000
Service Revenue                       25,000
Sales Revenue                        132,700
COGS                      40,000
Wages Expense             45,000
Utilities Expense          5,000
Legal Expenses            18,000
TOTALS                  $522,700    $522,700

Prepare the following adjusting journal entries, in good form. (Assume that BB Corp does adjusting entries only once per year, on December 31.)
• 3% interest was earned on the Note Receivable; the note had been in place since January 1. No cash has been received.
• Of the supplies purchased during the year, only $500 of supplies remain in the storeroom.
• A 12-month insurance policy was purchased on August 1 and took effect on that day.
• The truck was purchased on January 1, and has an estimated life of 8 years, with an estimated residual value of $6,000.
• The building was purchased on March 1; 20-year estimated useful life (zero residual value).
• The 8-year loan was signed on June 1; interest accrues each year; the terms of the loan state that a cash payment for principal and all accrued interest will be made on the maturity date of the loan in 2028.
• Wages of $4,000 are paid every Friday for a five-day workweek (Monday to Friday). December 31 falls on a Monday. Ignore the impact of holidays in this adjustment.
• Clients paid cash on September 1 for services to be provided from Sept 1, 2020 to April 30, 2021. Services were completed up to December 31.
• *NOTE (not an adjusting JE): The FV-OCI Investment showed an unrealized loss of $2,000. This needs updating.
• IGNORE/OMIT INCOME TAXES FOR THE INCOME STATEMENT in this lab.

Prepare the Adjusted Trial Balance, as at December 31, 2020, for BB Corp. Prepare the 2020 financial statements (Comprehensive Income Statement, Statement of Changes in Equity and a Classified Balance Sheet) in good form. Your final submission should be edited carefully for organization, column width, clarity, headings, etc.

Present the adjusting journal entries at the top left of your Excel sheet. Omit all instruction wording; omit the Unadjusted Trial Balance. DO NOT include dollar signs in your journal entries!!! Leave one row of space between JEs. To the right of the adjusting JEs, on the Excel sheet, present the Adjusted Trial Balance. Write only a one-line title for each financial statement (name of the statement & "Dec 31, 2020"). Below this, present your Income Statement (include Service Revenue as an "Other" revenue). To the right of the Income Statement, present your Statement of Changes in Equity. BELOW this, present your clearly formatted Balance Sheet. Classify the N/R as a current asset.

Use formulas in all cells (in all financial statements & the Adjusted Trial Balance) where appropriate, to add, deduct and carry over amounts, etc. You will need to plan carefully for column widths to appear in an organized and presentable way. Keep in mind that balance sheet classification headings should ideally be bolded. Account names, below each balance sheet classification heading, should be INDENTED. Format and presentation are a high percentage of the lab grade.
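Most of the adjusting entries above come down to pro-rating an annual amount by the number of months elapsed. As an illustration only (the numbers below are made up, not the lab's figures), the pattern looks like this:

```python
def straight_line_depreciation(cost, residual, life_years, months_in_use):
    """Annual straight-line depreciation, pro-rated by months the asset was in use."""
    return (cost - residual) / life_years * months_in_use / 12

def accrued_interest(principal, annual_rate, months_accrued):
    """Simple interest accrued but not yet paid or received."""
    return principal * annual_rate * months_accrued / 12

# e.g. a $10,000 asset, $1,000 residual, 6-year life, in use a full year:
# straight_line_depreciation(10000, 1000, 6, 12) -> 1500.0
```

The same months/12 pro-rating idea applies to the insurance policy and the unearned revenue: work out the portion of the covered period that falls in the fiscal year.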
202425_40771_MATH1013 24/25(3) MATH1013 Computational Mathematics and Modelling (40771)

MATH1013 - Welcome to the module

Dear Computational Mathematicians,

Welcome to MATH1013! Teaching for this module begins on Tuesday 1st Oct with an online lecture at 10am. You should find that meeting in your online calendar, or you can find the link under "Online Lectures" on Minerva. Apart from attending that lecture, the next steps are:
(a) Get access to Python (details below)
(b) Attend your first weekly computational workshop, in person, in week 1. This should be on your timetable, and will be a two-hour slot in one of the University computer clusters. (Note that for some of you this is soon after the lecture.)
(c) Make sure you can access both the Minerva page (where the first set of lecture notes is already available) and the module's Teams page. We will answer most queries in one of those two places, so please look carefully on both before asking us anything!

Getting Python
In this module we will learn and use the programming language Python to solve mathematical problems. More specifically, we will be using Python 3.x for some x (the value of x does not matter much, but it must not be Python 2.x). To use Python, we need a graphical user interface (GUI). The one we recommend is Spyder, which comes as part of the large free software package Anaconda.

On University cluster machines, you can access Spyder via AppsAnywhere, which you should see on the desktop. If not, go to https://appsanywhere.leeds.ac.uk/ and log in with your university email address ([email protected]). For more details on using the AppsAnywhere service look here: https://it.leeds.ac.uk/it?id=kb_article&sysparm_article=KB0014759. You should find a long list of applications including "Spyder". Make sure when you open it, it says Python 3.x (for some x) at the top. If you have your own machine, you may want to access Python on that too.
There are different ways to do this; we recommend option (1): (1) Download and install the Python installation from Anaconda. It is free, available on Windows, Mac and Linux, and can be downloaded from: https://www.anaconda.com/products/individual (2) Use the AppsAnywhere service, as above. (3) If (and only if) you cannot do (1) or (2), another option is to sign up for a (free) account at https://www.cocalc.com. This offers the same Python language as Anaconda, although with a different-looking interface (not Spyder). CoCalc allows you to write and run Python code entirely in a web browser.
Engineering Skills 1 – Design Build & Test - 2024 Design Build & Test Assignment From OrCAD 1 and Advanced OrCAD, you should now have all the necessary skills for the ‘Design Build & Test’ project. This constitutes the main part of your Engineering Skills 1 grade (60%, subject to change). You will now have had feedback on your first project, and have had practice at creating schematics, PCBs, a bill of materials (BoM), assembly drawings, and photomasks. You should be familiar with adding libraries to schematics, setting up PCB boards and including the correct footprints before exporting the netlist. You have also had the opportunity to learn soldering skills. For Design Build & Test, you will first build the board you designed in Advanced OrCAD. This will give you a chance to see how your design impacts the practicalities of populating a PCB. You might find there are some things you would do differently after the experience. After this you will then design, build, and test the final circuit: the ‘Electronic Dice’. The Design Build & Test course will be graded on: • Your completed, populated XMAS lights board – 5 % of grade • Your electronic dice schematic, PCB design and associated files (e.g. BoM, assembly drawing, photomasks) – 25 % • Your completed, populated Electronic Dice board – 10 % • A verbal communication test – 10 % (Details will be provided on Moodle) • A critical review document – 10 % (Details will be provided on Moodle) Note: These contribution percentages are subject to change. Electronic Dice PCB Here you are building an ‘Electronic Dice’: a PCB containing LEDs arranged in the pattern that mimics the face of a dice. When you hold a button, this is equivalent to ‘rolling’ the dice, i.e. it is constantly changing value at a speed you won’t be able to see. When you release the button it will stop on a ‘random’ value. From the above figure, you can see that you would need 7 LEDs (a – f) to be able to capture all possible numbers of the dice.
By inspection, you can also see that certain LEDs are always lit in pairs; for example, when f is on, c is also always on. This allows us to simplify the design and use fewer signals to drive the LEDs. The below Figure (Fig. 2) shows the circuit diagram, and the associated ‘truth table’ (Table 1) which indicates which LED pairs are lit for a given dice number. Here, the number ‘1’ means the LED is lit, and a ‘0’ means it is off. Electronic Dice Schematic Below is the schematic you should use for your electronic dice design. There is a circuit description in the following section. NOTE: The 74HC175 schematic part has pins in a different order to what is shown below, but it is the same component. Take care here so that you make the correct connections in the schematic, i.e. pay attention to which pins are connected in the below circuit, not just the way the wires are drawn. Basic Circuit Overview The design contains various circuits which will be briefly discussed here. At the top left of the schematic, you can see a power switch for the board, which comprises a battery (in the schematic this is the J1 connector, but you’ll use the footprint for a small lithium battery cell holder) and a switch, with power and ground connections set up through the labels so you can connect the rest of the circuit appropriately. There are two different logic chips used in this schematic. The first is the 74HC175, which is a chip containing 4 clocked, positive-edge-triggered D-type flip-flops (you’re not expected to know what this means yet!). Simplistically, each of these flip-flops is a device that samples the voltage at its input (i.e. whether it’s ‘high’ or ‘low’), and it evaluates this input on the rising edge of a clock signal (CLK), i.e. when the clock signal voltage goes from low to high. At this point in time, the output of the flip-flop, Q, then assumes the same state as the input (i.e. either high or low). The complementary output, Q̄, outputs the opposite state of Q.
You can chain these flip-flops together using feedback that forces them to toggle through different combinations, or ‘states’, over time as the clock ‘ticks’ (i.e. the outputs of different flip-flops change in a particular way as they continuously receive rising edges from the clock). You can see the chip and the feedback between flip-flops in the centre of the schematic. This circuit has been designed so that you can use these particular flip-flop output states to drive the LEDs, and cycle through all combinations relating to the face of a dice (see Table 1). However, you can’t use the outputs of the flip-flops directly; you require some logic gates so that the LEDs turn on at the correct time. In essence, you’re using logic gates to convert the signals from the flip-flops into the correct signals to drive the LED combinations. This might be confusing for now, but you will learn much more about these ‘state machines’ in Digital Electronics courses. The logic devices used here (i.e. to translate states of the flip-flops to certain LED outputs) are ‘NAND’ gates (NOT-AND). An ‘AND gate’ outputs a high voltage only when both its inputs are high; a NAND gate is its inverse, i.e. it outputs a low voltage only when both its inputs are high, and a high voltage otherwise. In this design, the 74HC132 chip is used, which contains 4 independent NAND gates. You’ll notice that (just like in the XMAS lights design) you have 4 gates that appear separately on the schematic but relate to one physical chip. The NAND gates have been used to generate the correct logic to drive the LEDs, and also as part of the clock generation circuit (see U1B in the schematic above) and in the reset circuit (see U1A in the schematic above). Notice that there is a push button switch that connects the clock circuitry to the state machine.
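The NAND behaviour (output low only when both inputs are high, high otherwise) is easy to check for yourself in software. The following is a minimal, illustrative Java sketch of a single NAND gate's truth table; the real gates are of course hardware (the 74HC132), and the class and method names here are hypothetical.

```java
// Illustrative sketch of one NAND gate: output is LOW only when
// both inputs are HIGH, and HIGH in every other case.
public class NandDemo {

    // One NAND gate: NOT (a AND b)
    public static boolean nand(boolean a, boolean b) {
        return !(a && b);
    }

    public static void main(String[] args) {
        boolean[] levels = {false, true};
        // Print the full 2-input truth table (0 = low, 1 = high)
        for (boolean a : levels) {
            for (boolean b : levels) {
                System.out.printf("a=%d b=%d -> out=%d%n",
                        a ? 1 : 0, b ? 1 : 0, nand(a, b) ? 1 : 0);
            }
        }
    }
}
```

Running this prints the four truth-table rows: the output is 0 only for the a=1, b=1 case.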
When you press this, you’re causing this ‘state machine’ to rapidly toggle through all its possible states (this is like rolling the dice), and when you let go of the switch it appears to stop randomly on a value, which is displayed on the LEDs. It only appears random because it’s changing through the states so quickly that you couldn’t possibly control which result it lands on! Create the schematic You will need to find the appropriate libraries. In the Advanced OrCAD assignment pdf, details were given about a trick for searching for parts. Initially, add the common libraries (i.e. as in XMAS lights) when starting the project, but then use this search tip to find parts you’re missing. Be careful: the ‘push button’ in the schematic has four pins. When laying out the four logic gates of the 74HC132 you must lay all the gates out at the same time, otherwise OrCAD tends to lay down multiple chips – see the OrCAD guide. Again, this was explained in the XMAS lights submission. Make sure the components have the correct values. Check the pins of the LEDs against the pins of the LED footprints and ensure the anode and cathode correspond appropriately. You’ll also notice that not all the LEDs face the same way in the schematic! Check the pin numbers on the switches – both the push button and the slide on-off switch – and how they match with their footprints. If you’re confused about this, ask a lab demonstrator! Check that you have got the proper power and ground names so that the chips are connected correctly, as they have hidden power and ground links (this is explained in the OrCAD tutorial videos for editing a part). Use all the tips you’ve been given during feedback for OrCAD 1 and Advanced OrCAD, make the schematic neat and easy to read, and add your name, GUID and a name for your design. Footprints For the footprints, you will have to decide which to use in some instances; for example, the logic chips are different sizes.
Google the parts and see which GU footprints they correspond to. You can find the footprints pdf on the OrCAD training moodle page. The pdf files required are called Page 1, Page 2 and Page 3. Table 1: Suggested footprints for circuit components (double check and select the correct footprints for the logic chips): Resistors – GU-RC400; Capacitors – GU-RC200; LEDs – GU-LED5mm; Slide switch – GU-sw-slide-spdt; Push switch – GU-PUSH52 (check pin assignment carefully). PCB Board Once the schematic is properly checked and annotated, proceed to start PCB Edit and create an empty board. Set up the padpaths and padstacks as in Advanced OrCAD. If you’ve done this properly in the previous assignment, it should still point to the correct directories. If you had problems selecting these paths, speak to a demonstrator. Make the board 4” x 2” (100 mm x 50 mm). REMEMBER, IT IS USELESS TO CONTINUE WITH PCB EDIT IF THE NETLISTER SHOWS ERRORS. You will need to use both sides of the board for tracks, called ‘double-sided routing’. Follow the guides in Moodle. You should be able to route this board manually. • Remember to lay out the pairs of LEDs in their proper pattern (i.e. to look like the face of a dice!) • Place a ground plane on top and bottom layers (to reduce overall etching time) • Use the correct ‘Gu-via80’ vias and not ‘through hole plating’ – minimise the number of vias you use! The fewer the better. • When using vias, make sure you terminate tracks in a way that allows components to be soldered (see Introduction to Advanced OrCAD assignment). • Have an identifier on the board (i.e. your name or initials, or GUID for example) with text size of at least 5. • Make sure you don’t have design rule check (DRC) errors. • Use the correct track widths as recommended in the video tutorials, and use wider tracks for power and ground than for signals. Details of the constraints to use are found on the OrCAD PCB Edit and PSPICE training page. • Use sensible ‘keepin’ distances.
• Design the board so that components can be easily mounted and soldered. • Minimise the number of vias used in the design. Deliverables & dates In the remaining lab sessions you will need to build and test your XMAS board, and design, build and test your Electronic Dice board. Use your time carefully and work on the Dice board in your spare time, to maximise the time to build and test PCBs in the lab. The deadline for the Dice board submission is specified on the Moodle page (where you downloaded this document from!). It is strongly recommended that you complete the assignment prior to this cut-off date, to maximise the time you have to build and test the PCB. You need to submit your design files for assessment. You can do this on the Engineering Skills 1 Moodle page, going to the EEE Design Build & Test page. Submit the following in a zipped folder: o PDF of the schematic using the guidance described above and in tutorials. o PDF of separate PCB layers, which have been laid out in accordance with advice in this document and in tutorials o PDF of the assembly drawing Use landscape orientation and scale to page for the above documents o PDF of the BoM – neatly presented o Separate PDFs for the top and bottom photomasks (properly mirrored). Note: please print to pdf at 600 dpi resolution as otherwise ground planes may appear to be segmented. These need to match the board size on the page (1:1 scaling) o The .brd file Other assessments Following the lab sessions, there will be a short (10 minute) verbal assessment, where you will be asked to talk about the process of designing, testing and debugging PCBs. You will also need to provide a 1-page critical review document – more details will be provided on Moodle. These will be required next semester, but speak to the lecturer or a GTA if you’ve finished your lab book early. Picture of the board Below is a picture of the completed Electronic Dice design (note that the ground plane is missing!)
ITS66704 (Sept 2024) Advanced Programming Part A – Analysis and Design 2. Concepts Our guiding principles include ongoing motivation, scientific advice, and the user's health experience. Everybody has a distinct body, lifestyle, and set of goals, and our production team is in agreement that fitness is a personal journey. Personalization "for each person" must be the main emphasis of our software design. Our goal is to provide customers with a personalized and distinctive sports experience. Core features The software is divided into three modules: User Management, Fitness Program and Tracking, and Data Analysis and Feedback. The user management module includes Sign Up (new users enter their name and password to complete the registration), Log In (registered users enter their name and password to verify their identity), and User Profile (management of the user's basic information such as name, weight, date of birth, gender, body fat percentage, etc.). The fitness program and tracking module contains the following functions: Workout Tracking (responsible for recording the user's workout data, such as title, date, duration, workout area, and calculating the total calorie consumption), Workout Plan (which can assist the user in determining their weekly exercise goals and body weight target), Diet Tracking (which keeps track of the user's daily calorie intake as well as the meals and drinks they consume), and Online Coach (which provides links to a variety of exercise instructional videos: HIIT, Yoga, Stretching, Cardio, etc.).
The Data Analysis and Feedback module includes Fitness Trends (which generates different images, such as a bar graph that shows more clearly the number of hours the user has trained in the month, a pie chart that shows the distribution of body parts the user has worked out in the month, as well as a graph that shows changes in the user's body weight and a bar chart that shows the trend in calorie consumption). It also has Customer Support, which makes it easy for users to give feedback and contact the developers. Targeted audience The software covers running, yoga, meditation, cycling and other sports, as well as user-recorded sports progress, diet and other aspects, generating the user's own exclusive body and health trends. Daily fitness training courses are diversified, and users can divide them according to usage scenarios, intensity, training needs, equipment, etc., which can basically cover all the needs of daily training. Quantitative recording and analysis of sports data and visualization of effects can help improve users' enthusiasm for exercise. The software provides online coaches, customized intelligent training plans according to personal needs, and voice guidance training throughout the process, which is very friendly to novice fitness enthusiasts. The software is suitable for people who value appearance, love health, pursue quality of life, have the need to lose weight, want to record life, and pursue self-discipline. Technology Tools and technologies used: 1. IntelliJ IDEA: Chosen for its robust Java development environment and excellent JavaFX support. Its code completion and debugging tools significantly improved our development efficiency. 2. JavaFX: Selected as the primary framework due to its powerful UI capabilities and cross-platform compatibility. It allowed us to create a visually appealing application with smooth animations. 3. Scene Builder: Utilized to design the application's UI layouts visually.
This tool accelerated our UI development process and ensured consistency across different screens. 4. JDK 21: The latest stable version at the time of development, offering performance improvements and new language features that enhanced our code quality and efficiency. 5. Libraries: an external charting plug-in is used to make charts. It processes, maps, and renders user data, and finally outputs it into a visual chart to provide users with changes in body movement in recent months or weeks, as well as the trend of changes in exercise volume. It allows users to compare and analyze their real-time physical status, which helps to improve users' enthusiasm for exercise. 6. Multithreading: Multithreading is a programming technique that allows a program to run multiple threads at the same time. Threads are the smallest units that the operating system can independently schedule and execute. They share the memory space and resources of the same process, but can execute code independently. Most importantly, if one thread fails, the others can still run normally. 7. API (Application Programming Interface): an interface between one system and another, defining a set of rules and protocols to allow communication and interaction between different software or services. It can generate a link for users to copy, paste and send to other friends. 3. Design 3.1 OO concepts Abstraction: In Java, abstraction simplifies system complexity by hiding complex implementation details and showing only the functions required by users (Rahul Bangari, 2024a). The developer can define a User abstract class in the software. The User abstract class contains two basic attributes: name and password, which represent the basic identity of the user. There are two behavior methods: login(): provides the login function to verify whether the name and password entered by the user match.
updateProfile(): allows users to modify their personal information (such as updating user names or passwords). To increase the flexibility of the system and ensure that the interface logic is separated from the data processing, the developer chooses to implement the abstraction with an interface, hiding the methods and basic properties in the abstraction and displaying only the necessary functionality: signUp(), login(), updateProfile(). Interface: In the Java programming language, an interface is an abstract type that is used to define a class's behavior. Java interfaces provide abstract methods and static variables (Kumar and Nitsdheerendra, 2016). The developer declares three methods by defining a UserActions interface: signUp(), logIn(), and updateProfile(). These methods are abstract, meaning that any class that implements the interface needs to provide them, otherwise a compile-time error occurs. The developer can then create an abstract class User and have it implement the UserActions interface. In the User class, the implementation methods are provided: signUp(): responsible for user registration, setting the name and password of the user. logIn(): responsible for user login; checks whether the name and password entered by the user are correct. If correct, the login succeeds. updateProfile(): allows users to update personal information, such as changing their name or password. All classes that implement the UserActions interface have the same basic behavior and methods, ensuring code consistency and maintainability. And if you need to add new user types or new features in the future, you don't need to modify the existing code; you just need to extend the interface or implementation class. Encapsulation: Encapsulation in Java provides data protection to avoid the disclosure of sensitive user information (GeeksforGeeks, 2017).
To protect the user's personal information and data records, sensitive information (such as the user's password) and data records (such as daily calorie intake and water intake) can be made private fields, in an encapsulated manner that allows secure access or modification only through public methods such as setPassword() and getPassword(). This design effectively avoids direct manipulation of data, reduces the risk of data leakage or tampering, and ensures consistent access control. Inheritance: an essential component of Java OOP, inheritance avoids duplicate code and enables extensible interface functionality (Rahul Bangari, 2024). Inheritance applications in this software can be seen as follows: the developer can define a base class called User that represents properties and functionality common to all users. Each User object contains name (username) and password properties, in addition to two methods: signUp() for registering the user, and logIn() for logging in. These methods are fundamental features shared by all users. Then define the RegularUser class, which inherits from the User class. The RegularUser class is a subclass tailored to ordinary users, such as fitness enthusiasts. Not only does it inherit all the attributes and methods of the User class, but it also adds some functionality specific to ordinary users. For example, the RegularUser class adds attributes such as diet_tracking (diet tracking), Workout_Plan (exercise plan), workout_Tracking (exercise tracking), etc. These attributes enable RegularUser objects to better track and customize an individual's fitness progress and diet plan. In addition, the RegularUser class provides additional methods: dietData() for recording diet information and exerciseData() for recording exercise information. These features are required by ordinary users when using the system. Through this inheritance structure, the code achieves function reuse and extension.
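The interface, abstraction, encapsulation and inheritance ideas described above could fit together roughly as follows. This is a minimal sketch, not the application's real code: the type and method names (UserActions, User, RegularUser, signUp, logIn, updateProfile, dietData, exerciseData) come from the text, but the method bodies and the helper recordCount() are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Interface: the contract that every user type must fulfil.
interface UserActions {
    void signUp(String name, String password);
    boolean logIn(String name, String password);
    void updateProfile(String newName, String newPassword);
}

// Abstract class: shared state and behaviour for all user types.
// Encapsulation: name and password are private, reachable only via methods.
abstract class User implements UserActions {
    private String name;
    private String password;

    public void signUp(String name, String password) {
        this.name = name;
        this.password = password;
    }

    public boolean logIn(String name, String password) {
        return this.name != null
                && this.name.equals(name)
                && this.password.equals(password);
    }

    public void updateProfile(String newName, String newPassword) {
        this.name = newName;
        this.password = newPassword;
    }

    public String getName() { return name; }
    public void setPassword(String password) { this.password = password; }
}

// Inheritance: a RegularUser is a User with extra fitness-specific data.
class RegularUser extends User {
    private final List<String> dietRecords = new ArrayList<>();
    private final List<String> workoutRecords = new ArrayList<>();

    public void dietData(String entry) { dietRecords.add(entry); }
    public void exerciseData(String entry) { workoutRecords.add(entry); }

    // Hypothetical helper: total number of stored records.
    public int recordCount() { return dietRecords.size() + workoutRecords.size(); }
}
```

A future CoachUser could extend User in the same way, reusing signUp()/logIn() while adding coach-specific methods, without touching the existing classes.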
As a superclass, the User class provides basic user management functionality, while the RegularUser class further enhances the functionality for specific users. If we need to provide coach users (CoachUser) with the ability to manage other users in the future, we can continue to inherit from the User class and add specific functions to CoachUser, so as to achieve the extensibility of the code. This design allows users in different roles to share common functions, while also providing specific customization for different roles. Composition is an object-oriented design principle that refers to building more complex objects by combining objects together instead of using inheritance. It is necessary to create a basic interface that can be implemented by different training actions, such as PushUp, Plank, Squat. In composition, the relationship between the classes is inseparable: the part cannot exist without the whole. (Rahul Bangari, 2024a) It can divide the software into different categories (such as Workout Plans, Workout Tracking, etc.) and provide different and personalized training plans, each of which may contain multiple training modes (such as aerobic exercise, strength training, etc.), which are combined to create a complete training plan. Benefit: composition can improve code reusability and maintainability, meet the diverse needs in fitness scenarios, and facilitate the rapid update of system functions according to business needs. It can not only better support user needs, but also maintain high development efficiency and quality. Aggregation is a design pattern that represents the loose relationship between the whole and the part. It is used to represent the "whole-part" relationship, where the lifecycles of the whole and the part are independent. Aggregation is different from composition: although the classes in aggregation are related, they can exist separately. However, they are still important to each other.
(Rahul Bangari, 2024a) A training plan can include multiple fitness movements, and users can manage their own fitness data records. Multiple coaches can teach the same course, and the lifecycles of the course and the coach are independent, allowing for arbitrary combinations. Aggregation can improve the modularity, reusability and flexibility of the software. Because each part can be used as an independent module, it is easy to count and display, and the user data is separated from the achievement module, making the system more flexible; it can therefore more efficiently manage the relationships between objects, reduce coupling, and improve the flexibility and maintainability of the system. Association represents an interactive or cooperative relationship between two or more objects, without lifecycle dependency or ownership. Association, composition, and aggregation are distinct concepts. In association, although the related classes exist with each other, they are completely independent, and one class is dispensable to the others (Rahul Bangari, 2024a). For example, there are many-to-many association relationships between courses and users, coaches and users, and users and exercise records. This relationship is not strongly dependent, and usually manifests as "related but not part of the whole", which is different from the nature of aggregation. Association can provide flexible interaction between objects, enhancing the scalability, maintainability and understandability of the system. Choosing different design methods can more clearly express the relationships between classes and enhance the flexibility and maintainability of the code. 3.2 Patterns Introduction to Patterns Design patterns are reusable solutions to common software design problems. By applying design patterns, the system can achieve modularity, maintainability, and scalability. In this section, the patterns used in our application are explained in detail. 1.
Model-View-Controller (MVC) Purpose: The MVC pattern is implemented to separate concerns in the application, ensuring a clear distinction between the presentation layer, the business logic, and user interactions (New York, 2011). Application in the System: Model: encapsulates the application's data and business logic. For instance, the Diet model handles the user's diet tracking data, such as meals, calories, and total water intake. View: represents the user interface, displaying the input forms and results for modules like Diet Tracking and Workout Tracking. Controller: manages user interactions (e.g., submitting diet data) and updates both the Model and View accordingly. Justification: keeps the user interface separate from the business logic, allowing independent modifications; promotes testability, as the Model and Controller can be tested independently from the View. 2. Data Access Object (DAO) Purpose: DAO is used to abstract and encapsulate all database interactions, such as saving user diet and workout data. This isolates persistence logic from business logic (UKEssays, 2021). Application in the System: DAO is used to manage user records in modules like Diet Tracking and Workout Tracking. For example, a DietDAO class handles database operations for storing and retrieving diet data (Feyza Nur, 2018). Justification: separates the persistence logic from the application's business logic, ensuring cleaner code; makes it easier to switch database implementations if needed. 3. Data Transfer Object (DTO) Purpose: DTO is used to transfer data between the different layers of the application in a structured manner (WODA '07, IEEE, 2007). Application in the System: DTOs are used in the application to pass data between Controllers and Views, such as transferring user diet details or workout details. Example classes: DietDTO, WorkoutDTO. Justification: reduces the complexity of data transfer between layers.
Improves code readability by grouping related attributes into one object. 4. Singleton Purpose: The Singleton pattern ensures that a particular class has only one instance and provides a global point of access to it. This is commonly used for managing shared resources like database connections (John Wiley & Sons, 2013). Application in the System: The DatabaseManager class is implemented as a Singleton to manage database connections across the entire application. Justification: prevents the creation of multiple database connections, ensuring efficient resource usage; provides a centralized control point for database operations. 5. Dependency Injection Purpose: Dependency Injection is used to decouple object creation and usage. It allows objects to be injected into a class instead of being created within the class. Application in the System: controllers like DietController and WorkoutController receive dependencies such as DietService and WorkoutService through dependency injection. Justification: enhances modularity by allowing the injection of mock objects for testing purposes; simplifies maintenance by decoupling classes and their dependencies.
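The Singleton and Dependency Injection patterns described above can be sketched as follows. The class names (DatabaseManager, DietService, DietController) follow the text, but the method bodies are illustrative placeholders under assumed behaviour, not the real application code.

```java
// Singleton: exactly one DatabaseManager instance, shared globally.
class DatabaseManager {
    private static final DatabaseManager INSTANCE = new DatabaseManager();
    private int connectionsOpened = 0;

    private DatabaseManager() {}  // private constructor: no outside construction

    public static DatabaseManager getInstance() { return INSTANCE; }

    public void openConnection() { connectionsOpened++; }
    public int getConnectionsOpened() { return connectionsOpened; }
}

// A service that uses the shared database manager.
class DietService {
    private final DatabaseManager db;

    // The manager is injected, not created inside the class.
    DietService(DatabaseManager db) { this.db = db; }

    public String saveDiet(String entry) {
        db.openConnection();               // placeholder for real persistence
        return "saved: " + entry;
    }
}

// Dependency Injection: the controller receives its service from outside,
// so a mock DietService could be injected when testing the controller.
class DietController {
    private final DietService service;

    DietController(DietService service) { this.service = service; }

    public String submit(String entry) { return service.saveDiet(entry); }
}
```

Because DietController only depends on whatever DietService it is handed, tests can pass in a stub service, while production code wires in one backed by the single DatabaseManager instance.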
Assignment Remit Programme Title Economics suite of programmes Module Title PASDE A Module Code 33969 Assignment Title Literature review: writing skills Level LC Weighting 100% Hand Out Date 21st of October 2024 Deadline Date & Time 3rd of December 2024 12pm Feedback Post Date 13/01/25 Assignment Format Other Assignment Length 1500 words (max) Submission Format Online Individual Module Learning Outcomes: This assignment is designed to assess the following module learning outcomes. Your submission will be marked using the Grading Criteria given in the section below. LO 1. Demonstrate engagement with own personal, academic and professional development activities and planning. LO 2. Apply reflective practice to their own personal, academic and professional development. LO 3. Define critical thinking and demonstrate basic practice of critique of the academic work of others and themselves. LO 4. Undertake independent academic study and writing to produce a basic literature review on an Economic issue. Assignment: This assessment consists of three parts, each requiring a maximum of 500 words. The objective is to engage with both GenAI tools and traditional research methods to produce a comprehensive literature review and reflection. Please see detailed guidance on a separate page below. Grading Criteria / Marking Rubric Your submission will be graded according to the following criteria: 1. Identification of Knowledge Gap and Research Question 2. Reflection and Evaluation of Research Methods, including a critical comparison of GenAI and traditional methods. 3. Critical and Thematic Integration of Sources See the marking rubric at the end of the remit for more information on how your work will be marked and graded. Ethical Use of Generative AI (GenAI) You are required to use GenAI to support your submission for this assessment.
You may use it for the following activities: • Researching and refining your ideas • Information retrieval or background research • Drafting an outline to organise or summarise your thoughts • Refining research questions • Checking spelling and grammar Applying GenAI tools should be done with human oversight and control. You should carefully review the results before using them, as AI can generate authoritative-sounding output that can be incorrect, incomplete, uncritical, or biased. You may not submit any work generated by an AI tool as your own. Where you include any material generated by an AI tool, it should be properly declared just like any other reference material. Alongside your assignment you should also provide a commentary in the Cover Sheet detailing how GenAI has been used to develop your final submission. If you have not used GenAI tools, you should clearly state so. Plagiarism, including that which results from using GenAI, is a form of academic misconduct that will be dealt with under the University’s Code of Practice on Academic Integrity. https://intranet.birmingham.ac.uk/as/registry/policy/conduct/plagiarism/index.aspx University guidance on ethical use of GenAI can be found here: https://intranet.birmingham.ac.uk/as/libraryservices/asc/student-guidance-gai.aspx Further Guidance: Feedback to Students: Both summative and formative feedback is given to encourage students to reflect on their learning and feed forward into following assessment tasks. The preparation for all assessment tasks will be supported by formative feedback within the tutorials/seminars. Written feedback is provided as appropriate. Please be aware to use a web browser and not the Canvas App, as you may not be able to view all comments. Plagiarism: It is your responsibility to ensure that you understand correct referencing practices.
You are expected to use appropriate references and keep carefully detailed notes of all your information sources, including any material downloaded from the Internet. It is your responsibility to ensure that you are not vulnerable to any alleged breaches of the assessment regulations. More information is available in the University’s Code of Practice on Academic Integrity: https://intranet.birmingham.ac.uk/as/registry/policy/conduct/plagiarism/index.aspx
Wellbeing, Extensions and Extenuating Circumstances: The processes for extensions and extenuating circumstances (ECs) exist to support students who have experienced unforeseen issues that have impacted their ability to engage with their studies and/or complete assessments. Students should notify Wellbeing of any extenuating circumstances as soon as possible via the online form, following the guidance provided. https://intranet.birmingham.ac.uk/social-sciences/college-services/wellbeing/index.aspx

Writing a literature review in 2024

Part 1: Literature Review Using Copilot
For the first part, select one of the provided topics and utilise Copilot as your primary research tool and assistant. Narrow down your chosen topic with the assistance of Copilot and compose a literature review. Your review should identify a gap in the existing body of knowledge and conclude with a clear research question. Ensure that you integrate sources thematically and critically rather than merely summarising them. Provide a reference list for this Part with at least 5 references.

Part 2: Literature Review Using Specialist Tools and Google Scholar
In the second part, build upon the literature review from Part 1. Write an additional literature review of at most 500 words on the same topic. Employ specialist literature review tools and Google Scholar to gather additional information, refining and narrowing down your topic further. You are expected to read at least the introduction and conclusion of five original articles.
Your literature review should once again identify a gap in the existing knowledge and conclude with a refined research question. Remember to adopt a critical and analytical approach. Provide a reference list for this Part with at least 7 references.

Part 3: Reflection
This part of the assessment requires you to reflect on your work in Parts 1 and 2. Firstly, compare and critically evaluate both parts (Part 1 and Part 2) according to the following criteria:
- Relevance and appropriateness of integration of sources;
- Identification of research gaps and appropriate formulation of the research question;
- Literature review provides a critical and in-depth analysis, with some originality.
Secondly, taking into account your evaluation of Part 1 and Part 2, provide a reflection on the process of writing these two parts. In your reflection, consider the following themes: (i) Usefulness of GenAI Tools (e.g. How useful were GenAI tools in researching the original topic? Did they help you narrow down your topic effectively?); (ii) Comparison of Research Methods (e.g. How does reading original articles compare with using summaries from GenAI tools? Which method provided more depth and clarity?); (iii) Accuracy and Clarity of GenAI Tools (e.g. Were GenAI tools incorrect, vague, or providing unsubstantiated statements? Did this lead you to write a weaker literature review?).
Note: you do not need to provide any additional references for Part 3, unless you want to give one or two examples of key articles missing from Parts 1 & 2.

List of Topics:
1. The Effects of Fiscal Policy on Economic Growth
2. Microfinance and Economic Development
3. The Economics of Entertainment
4. Economic Impact of Tourism
5. Income Inequality and Economic Mobility
HUDM 5123 - Linear Models and Experimental Design
HW 08 Higher-way Designs

Instructions.
● Unless otherwise noted, assignments are due before the next lab class (i.e., in a week).
● You are encouraged to discuss problems with classmates, but all work you submit must be your own.
● If applicable, any plots should have appropriate axis and overall labels.
● In general, do not include computer output (either SPSS or R) in your write-up. Instead, summarize relevant points using text, tables, or plots.
● When in doubt about formatting issues (e.g., for references, tables, notes, etc.), use APA style.

Data. "HW_08_data.sav"

Task. Analyze these data to answer any questions you believe would be of theoretical interest, and interpret your findings.

Study Design. A clinical psychologist is interested in comparing three types of therapy for modifying snake phobia. However, she does not believe that one type is necessarily best for everyone; instead, the best type may depend on the degree (i.e., severity) of phobia. Undergraduate students enrolled in an introductory psychology course are given the Fear Schedule Survey (FSS) to screen out participants showing no fear of snakes. Those displaying some degree of phobia are classified as either mildly, moderately, or severely phobic on the basis of the FSS. One-third of females and one-third of males within each level of severity are then randomly assigned to a treatment condition: either systematic desensitization, implosive therapy, or cognitive behavior therapy (CBT). The data are obtained using the Behavioral Avoidance Test (higher scores indicate less phobia).

Y: the Behavioral Avoidance Test score, bat
Factor A: treatment condition, cond
Factor B: phobia level according to the FSS results, phobia
Factor C: gender, gender
Mathematical Biology Homework Assignment 5, 2024–25
Please submit solutions to the following two questions as Homework Assignment 5 by 16:00 on Monday, November 25, 2024.

I. Consider the reaction-diffusion equation
∂u/∂t = ∂²u/∂x² − au, (1)
where a is some positive constant, under the additional conditions that u(0, t) = 0 = ∂u/∂x(π, t).
(a) Assuming that Equation (1) admits separable solutions of the form u(x, t) = X(x)T(t), show that X(x) and T(t) have to satisfy the differential equations
X'' = λX and Ṫ = (λ − a)T, (2)
where λ is a real constant.
(b) Solve Equation (2) for the functions X(x) and T(t) under the given conditions.
(c) Deduce that any function of the form
u_n(x, t) = C_n e^(−((n + 1/2)² + a)t) sin((n + 1/2)x), (3)
with n = 0, 1, 2, . . . and C_n constant, is a solution of (1).

II. The Burgers-Fisher-Kolmogorov-Petrowskii-Piscounov (FKPP) advection-reaction-diffusion equation can be written in rescaled form as
u_t + kuu_x = u_xx + u(1 − u), (4)
where k > 0 is a real constant.
(a) Determine the homogeneous – i.e. time- and space-independent – rest states of Equation (4).
(b) Let z = x − ct, with c positive, and derive the travelling wave equation corresponding to (4) that is satisfied by U(z).
(c) Rewrite that equation as the first-order system
U' = V, (5a)
V' = −cV + kUV − U(1 − U); (5b)
then determine the equilibria thereof, and decide their stability.
(d) Given your findings in item (c), deduce that monotonic front solutions to (4) only exist for c > 2.
(e) Verify that, for c = 2/k + k/2 with k > 2, a heteroclinic connection between the equilibria is given explicitly by
V(U) = −(k/2)U(1 − U). (6)
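The separation-of-variables step in part I(a) can be written out as follows; this sketch assumes Equation (1) is the linear reaction-diffusion equation u_t = u_xx − au, the form consistent with the two ODEs in (2):

```latex
% Substitute u(x,t) = X(x)T(t) into u_t = u_{xx} - a u:
%   X(x)\dot{T}(t) = X''(x)T(t) - a\,X(x)T(t).
% Divide through by X(x)T(t):
\frac{\dot{T}}{T} = \frac{X''}{X} - a.
% The left-hand side depends only on t and the right-hand side only on x,
% so both must equal the same constant; writing X''/X = \lambda gives
X'' = \lambda X, \qquad \dot{T} = (\lambda - a)\,T.
```

Applying the boundary conditions X(0) = 0 and X'(π) = 0 then forces λ = −(n + 1/2)², which is where the family of solutions in part (c) comes from.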
Econ7100 Project Evaluation
Final Class Project

The main purpose of the final project is for students to estimate the impact of a real randomized control trial (RCT). The randomized control trial you will examine provided information to first-time mothers living in poverty in Nashville, TN. The information was provided in books, hereafter called “Baby-Books.” Mothers in the intervention group received new baby-books every two months. The hypotheses of this RCT were that the new information provided to mothers via “Baby-Books” would:
1) Improve parenting skills,
2) Improve nutritional practices with their babies,
3) Prevent unintentional baby injuries,
4) Reduce post-partum depression and stress levels, and
5) Impact labor participation.
Overall, helping young women overcome the anxiety of becoming first-time mothers should therefore improve their own and their babies’ health in the short and long run. Each group will estimate an intent-to-treat (ITT) and a treatment-on-the-treated (TOT) effect of the randomized educational book intervention. To measure the effect of this intervention, first-time mothers were randomly assigned to one of three groups: 1) an educational book group (the intervention), 2) a non-educational book group, or 3) a no-book group. Participants were interviewed by either phone (12 times) or personal interviews (7 times). In the interviews, participants self-reported all the outcomes of interest, such as type and occurrence of their child’s unintentional injuries, knowledge learned by reading the given books, labor market outcomes, post-partum depression, stress, and many other important outcomes. Each one of the groups in class will examine the impact of this randomized intervention on one or two outcomes of interest. Each group will write a 10-15 page report. In addition, each group will submit a do file, a log, a report in Word, and a PowerPoint presentation (about 15-20 slides).
The report and PowerPoint presentation should include the following sections: Introduction, Data Description, Methodology, Main Results, Conclusions, and Recommendations. There will be five groups; each group should answer the following questions:
1) What is the intervention about? Explain the study design and sample size by group at baseline. Write down the results chain describing the inputs, activities, outputs, and outcomes of this intervention. (10 points)
2) Did the intervention have a valid counterfactual? (10 points)
3) Estimate the outcomes of interest by subject, group, and time. (Suggestion: make sure you estimate the outcomes of interest at baseline and create fixed time intervals, following the baby’s age at 0, 6, 12 and 18 months old. Some outcomes were not asked at the exact desirable time intervals, but you can estimate the mean or sum of the outcomes at the nearest given time). (10 points)
a. Identify all the variables you will use in your analysis and create two final datasets: 1) a wide cross-sectional dataset including household demographics, the outcome of interest, and time-fixed variables; 2) a panel dataset. (10 points)
b. Estimate the impact of the intervention by using the following methods: 1) Pre-Post analysis (10 points), 2) Randomized Assignment (10 points), 3) Difference-in-Difference (10 points). Explain and compare your results. For each estimation, explain whether the outcome improved or got worse, and interpret the results.
4) What characteristics (mother’s or babies’) correlate with the outcome of interest? For instance, were children’s injuries a function of their own developmental growth (i.e. increasing as children grow older), children’s gender (i.e. do boys have more injuries than girls?), mother’s education, or mother’s income? (5 points)
5) Estimate the TOT intervention effect by using a Randomized Assignment and a DID methodology. Explain and compare your results with the previous ITT.
(10 points)
6) 15% creativity, group organization, attendance, consistency.

Data and Documentation: All questionnaires used to gather the information, the dataset, and codebooks are available on BrightSpace.

Due date: The final report is due Dec 11th. The project presentations are due Dec 2 and 4 (two group presentations per class). Attendance at group presentations is mandatory (plan accordingly). Each student should ask at least one question of the group presenting their work.

Meetings with Professor: Starting Nov 4th, each group will meet with your professor during class time to review questions you might have AND to show progress. Meetings with your professor will take place at Calhoun 403 during regular class time, following this schedule:
Monday Nov 4: 2:35-2:55pm Group 1; 3-3:20pm Group 2; 3:20-3:40pm Group 3
Wednesday Nov 6: 2:35-2:55pm Group 4; 3-3:20pm Group 5; 3:20-3:40pm Any Group
Monday Nov 11: 2:35-2:55pm Group 5; 3-3:20pm Group 4; 3:20-3:40pm Group 3
Wednesday Nov 13: 2:35-2:55pm Group 2; 3-3:20pm Group 1; 3:20-3:40pm Any Group
Monday Nov 18: 2:35-2:55pm Group 1; 3-3:20pm Group 2; 3:20-3:40pm Group 3
Wednesday Nov 20: 2:35-2:55pm Group 4; 3-3:20pm Group 5; 3:20-3:40pm Any Group
All group members should attend the meetings. If a group does not have any questions or any progress to show, please notify your professor and cancel the scheduled meeting. If you attend a group meeting, come prepared, never empty-handed, and have concrete questions and possible solutions.

List of outcomes - randomly distributed among the groups:
1. Nutrition, Safety knowledge, and total Mother’s knowledge
2. Parenting, Development, and Total Mother’s knowledge
3. Total Mother’s knowledge, Mother’s depression and Mother’s stress
4. Total Mother’s knowledge and Unintentional children’s injuries
5. Total Mother's knowledge and Labor Supply outcomes
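For one pre-period and one post-period, the difference-in-difference comparison requested in 3(b) reduces to the change in the treated group's mean minus the change in the control group's mean. A minimal sketch with made-up numbers (the data and function name are illustrative, not from the Baby-Books study; in practice you would run this as a regression in your Stata do file):

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Two-period, two-group difference-in-difference estimate:
    (treated group's change over time) minus (control group's change over time)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))

# Illustrative outcome scores (e.g., a knowledge index) at baseline and 12 months:
treat_pre = [10, 12, 11, 9]
treat_post = [16, 18, 17, 15]
control_pre = [10, 11, 12, 9]
control_post = [12, 13, 14, 11]

# Treated mean rose by 6, control mean rose by 2, so the DID estimate is 4.
print(did_estimate(treat_pre, treat_post, control_pre, control_post))
```

The subtraction of the control group's change is what nets out common time trends, which is the sense in which DID relies on a "parallel trends" counterfactual.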
This lab assignment requires you to compare the performance of two distinct sorting algorithms to obtain some appreciation for the parameters to be considered in selecting an appropriate sort. Write a Heap Sort and a Shell Sort. They should both be recursive or both be iterative, so that the overhead of recursion will not be a factor in your comparisons. In this case, iteration is recommended. Be sure to justify your choice. Also, consider how your code would have differed if you had made the other choice.

The strategy behind a Shell Sort is to create a more nearly optimal environment for a simple, relatively inefficient sort technique, namely Simple Insertion Sort. This optimal environment allows the simple strategy to be efficient. Use the following sets of increments:
1, 4, 13, 40, 121, 364, 1093, 3280, 9841, 29524 (Knuth’s sequence)
1, 5, 17, 53, 149, 373, 1123, 3371, 10111, 30341
1, 10, 30, 60, 120, 360, 1080, 3240, 9720, 29160
plus one or more sets of increments of your choice. You will have four different Shell sorts to run.

Heap Sort is a practical sort to know and is based on the concept of a heap. It has two phases: build the heap, then extract the elements in sorted order from the heap. Altogether, you will have five sorts: 1 Heap and 4 Shell.

Create input files of four sizes: 25, 50, 200, 500 integers. For each size file, make three versions. On the first, use a randomly ordered data set. On the second, use the integers in reverse order. On the third, use the integers in normal ascending order. (You may use a random number generator or shuffle function to create the randomly ordered file. It is important to avoid too many duplicates; keep them to about 1%.)

Your program should access the system clock to get time values for the different runs. The call to the clock should be placed as close as possible to the beginning and the end of each sort.
If other code is included, it may have a large, fixed cost, which would tend to drown out the differences between the runs, if any. Why take a chance! If you get too many zero time values or any negative time values, then you must fix the problem. One way to do this is to use larger files than those specified. Another solution is to perform the sorting in a loop, N times, and calculate an average value. You would need to be careful to start over with unsorted data each time through the loop.

Turn in an analysis comparing the two sorts and their performance. Be sure to comment on the relative runtimes of the various runs, the effect of the order of the data, the effect of different size files, and the effect of different increment sizes for the Shell Sort. Which factor has the most effect on the efficiency? Be sure to consider both time and space efficiency. Be sure to justify your data structures.

As time permits, consider implementing a Straight Insertion Sort to compare with Shell Sort. Also, consider files of size 10,000 or additional random files – perhaps with 15-20% duplicates. Your write-up must include a table of the times obtained. The source code you turn in needs to print out the sorted values.
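To make the Shell Sort and timing setup concrete, here is a sketch (not a required implementation) of an iterative Shell Sort parameterized by an increment sequence, timed with clock calls placed immediately around the sort as the lab asks. The function names are illustrative:

```python
import random
import time

def shell_sort(a, increments):
    """Shell Sort: run gapped insertion sorts with decreasing gaps.
    `increments` is given smallest-first (e.g. Knuth's 1, 4, 13, ...);
    only the gaps smaller than len(a) are applied, largest first."""
    for gap in sorted((g for g in increments if g < len(a)), reverse=True):
        for i in range(gap, len(a)):
            key, j = a[i], i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]   # shift larger element one gap to the right
                j -= gap
            a[j] = key
    return a

knuth = [1, 4, 13, 40, 121, 364, 1093, 3280, 9841, 29524]

# Randomly ordered data set with unique values (so duplicates are not an issue):
data = random.sample(range(100_000), 500)
t0 = time.perf_counter()          # clock call right before the sort
shell_sort(data, knuth)
elapsed = time.perf_counter() - t0  # clock call right after the sort
print(f"sorted 500 ints in {elapsed:.6f}s, sorted={data == sorted(data)}")
```

The same timing pattern applies to the Heap Sort, and substituting the other increment lists gives the remaining Shell Sort runs.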
Tower of Hanoi Puzzle
COMP 273, Fall 2024, Assignment 4

Introduction
This assignment will give you more practice with functions and recursion in MIPS. In addition, it gives you some experience of how to make your code efficient, both in terms of the number of instructions as well as with the use of the cache. All the functions you write in this assignment must respect register conventions. Your code must also include useful comments to make it readable. You will need to use two MARS tools in this assignment:
• Data Cache Simulator: This tool allows you to set different cache sizes and types, and measures the number of memory accesses and cache misses.
• Instruction Counter: This tool counts the number of true MIPS assembly instructions that are executed during your program.
Each tool needs to be connected to MARS, and you will want to use a combination of breakpoints and the reset button on each tool to make careful measurements of your code performance. You should go over these tools in "MARS tutorial 3".

Assignment Objectives (40 marks total)
Provided code will help you get started with this assignment. The code lets you run two different tests by changing AlgorithmType in the .data section at the top of the code.
- Algorithm Type 0 will help you test the first objective of this assignment (Tower of Hanoi - recursive method).
- Algorithm Type 1 will help you test the second objective of this assignment (Tower of Hanoi - non-recursive method).

1- Tower of Hanoi - recursive method (20 marks)
The tower of Hanoi is a puzzle invented by French mathematician Édouard Lucas. The puzzle consists of three rods: A, B and C, and n disks (labeled with numbers 1..n) with different radii stacked on rod A. The disks are arranged from top to bottom in the order of decreasing radius. Figure 1 shows an example with three disks. You can move the disks between rods, but smaller disks have to be on top of bigger disks, and you can only move one disk at a time.
The goal is to move all the disks from rod A to rod C with the minimal number of moves.

Figure 1: An example of the tower of Hanoi with n = 3

Alice is trying to solve this puzzle (with 3 disks). She cannot solve it directly, so she asks Bob to move disks 1 and 2 from rod A to B; then she can simply move disk 3 from A to C, and ask Bob again to move the two disks from B to C, and solve the problem! These two tasks are still a bit difficult for Bob, so for the first task, he asks Charlie to move the smallest disk from A to C so that he can move the second disk from A to B, and then asks Charlie to move the smallest disk from C to B. Similarly, for the second task, he asks Charlie to move the smallest disk from B to A before he moves the second disk from B to C, and asks Charlie to move the smallest disk from A to C. All the tasks for Charlie are simple enough so he doesn’t need help from others. By asking others to solve the sub-problems, Alice successfully solves the puzzle:
1. Charlie moves disk 1 from A to C;
2. Bob moves disk 2 from A to B;
3. Charlie moves disk 1 from C to B;
4. Alice moves disk 3 from A to C;
5. Charlie moves disk 1 from B to A;
6. Bob moves disk 2 from B to C;
7. Charlie moves disk 1 from A to C.
This idea can be formalized using the recursive algorithm shown in Algorithm 1, which gives the solution to move n disks from the source rod to the target rod, with the help of the auxiliary rod.

Algorithm 1: Recursive algorithm for the tower of Hanoi
procedure MOVE(n, source, target, auxiliary):
    if n = 1 then
        move disk 1 from source to target
    else
        MOVE(n - 1, source, auxiliary, target)
        move disk n from source to target
        MOVE(n - 1, auxiliary, target, source)
    end if
end procedure

Implement the recursive algorithm described above in the provided hanoi.asm. The program should read an integer n (already implemented), i.e., the number of disks. The output should be the steps to solve the problem, one step per line, printed to the standard output.
Each line should have the format: Step <i>: move disk <disk> from <source> to <target>. Here is an example of the output for the problem with n = 3:
Step 1: move disk 1 from A to C
Step 2: move disk 2 from A to B
Step 3: move disk 1 from C to B
Step 4: move disk 3 from A to C
Step 5: move disk 1 from B to A
Step 6: move disk 2 from B to C
Step 7: move disk 1 from A to C
You can assume that the input n is a valid integer and 1 ≤ n ≤ 15. Make sure you strictly follow the output format!

2- Tower of Hanoi - non-recursive method (15 marks)
Write a non-recursive algorithm for the tower of Hanoi, replacing the above recursive algorithm with a loop. You have the same input assumption and output requirements as indicated for the recursive algorithm in 1.

3- Measure cache performance (5 marks)
Complete and submit the provided .csv (comma-separated values) file with entries summarizing the cache performance (i.e., number of cache misses and hit rate) and the instruction count of the recursive and non-recursive versions of the Tower of Hanoi algorithms implemented in 1 and 2, for an input n (number of disks) = 15. For both solutions, fix the cache size at 1024 bytes and examine performance by varying the block size and number of blocks (with a fixed cache size). Include results for at least two configurations of block size and number of blocks for each solution. Use LRU as the replacement policy for all the tested configurations. Collect data only from the final version of your implementation, which you will submit for grading. Ensure that you do not modify your code once data collection begins, as the TA will verify accuracy and may deduct marks if the data collected is inaccurate. The filename must have the form <student number>.csv; that is, it should consist of your student number and have the file extension .csv, for instance, “260123456.csv”. To best ensure you respect the file format, rename and edit the provided csv file.
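Before writing the MIPS version, it can help to prototype Algorithm 1 in a higher-level language to confirm the expected move sequence. A Python sketch (the function name is illustrative) that returns the moves in order and prints them in the required format:

```python
def hanoi_moves(n, source, target, auxiliary):
    """Recursive Tower of Hanoi (Algorithm 1): list of (disk, from, to) moves."""
    if n == 1:
        return [(1, source, target)]
    return (hanoi_moves(n - 1, source, auxiliary, target)   # clear n-1 disks onto auxiliary
            + [(n, source, target)]                         # move the largest disk
            + hanoi_moves(n - 1, auxiliary, target, source))  # bring the n-1 disks back on top

for i, (disk, src, dst) in enumerate(hanoi_moves(3, "A", "C", "B"), start=1):
    print(f"Step {i}: move disk {disk} from {src} to {dst}")
```

For n = 3 this reproduces the seven steps listed above; in general the move count is 2^n − 1, so n = 15 produces 32,767 lines.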
You may include comments in the file by starting a line with “#”, but otherwise complete the entries in the provided file with comma-separated values, or fields, on each line. These fields consist of your student number, the test name, the block size, the number of blocks, the instruction count, the number of memory accesses, the number of cache misses, and the hit rate. The file should only contain ASCII. You can use the MARS text editor to load and edit the provided csv file. Take care to write your student ID on each line of the provided csv file.

Follow these steps to measure the required performance of your Tower of Hanoi solutions:
1. Set two breakpoints at the locations specified in the comments within the provided .asm file for each implemented algorithm.
2. Ensure the cache simulator is configured correctly, then connect it to MIPS.
3. Ensure the instruction counter is connected to MIPS.
4. Run your code up to the first breakpoint.
5. Press the reset button on the cache simulator.
6. Press the reset button on the instruction counter.
7. Press the run button to continue execution.
8. Once the simulation stops at the second breakpoint just before exiting the program, make note of the instruction count and the cache performance.
9. Repeat steps 1 to 8 for each algorithm and chosen cache configuration (i.e., block size and number of blocks).
To evaluate cache performance, you must record the memory access count, the number of cache misses, and the hit rate. The file contains four entries to demonstrate these measurements for the two algorithms of the Tower of Hanoi puzzle, with at least two different configurations of the block size for each.

Submission Instructions
Submit exactly two files: your “hanoi.asm” and the “.csv” file containing the measurements. Do not use a zip file or any other kind of archive. Include your name and student number in all files submitted as instructed above.
Add to the comments at the top of the provided hanoi.asm file anything you would want the TAs to know (i.e., treat the comments at the top of the asm file as a README, which should also include a request to apply the one-time penalty waiver for this submission if it has not been used before). All work must be your own and must be submitted via MyCourses. Double-check that you have submitted the correct version of your assignment, as you will NOT receive marks for work incorrectly submitted.

HOW IT WILL BE GRADED
• Your program must execute to be graded.
• The grader should test your code against different values of the input n for both implemented algorithms and compare the output with the expected one (partial marks will be given).
• Not following the submission instructions will result in losing 3 marks.
• Not following the MIPS register conventions will result in losing 5 marks.
School of Electrical Engineering and Telecommunications
ELEC4632 Computer Control Systems
Final Examination, Term 3, 2023

Question 1 (10 marks) Sample the continuous-time system with output y(t) = (−3 4 −2) x(t) using the sampling interval h = 0.25.

Question 2 (10 marks) A discrete-time control system is described by a state-space model in which the parameter a varies from −∞ to +∞. Determine for which values of the parameter a this system is (a) reachable; (b) controllable.

Question 3 (10 marks) Given the system with output y(k) = (1 0) x(k), design a deadbeat output feedback control law.

Question 4 (10 marks) The characteristic equation of a discrete-time control system is given by z³ − 0.25z² + Kz − 0.25K = 0, where the parameter K varies from −∞ to +∞. Determine the range of the parameter K for stability.

Question 5 (10 marks) Consider the following nonlinear discrete-time system: x(k+1) = −x(k)³ − y(k) − y(k)³, y(k+1) = −x(k) − x(k)³ − y(k)³. (a) Is this system globally asymptotically stable? (b) Is the singular point (0, 0) of this system asymptotically stable? (c) Is the singular point (0, 0) of this system stable in the sense of Lyapunov?

Question 6 (10 marks) Consider the optimal control problem for the system x(k+1) = 3x(k) + u(k) with initial condition x(0) = 1. Determine the optimal control strategy and the optimal value of the cost function for each of the cost functions (a), (b), and (c) (each to be minimized).
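For Question 4, note that the characteristic polynomial factors as z³ − 0.25z² + Kz − 0.25K = (z − 0.25)(z² + K), so its roots are z = 0.25 and z = ±√(−K), and asymptotic stability requires every root strictly inside the unit circle. A quick numerical sanity check of this factorization (a sketch, not a substitute for the analytical derivation expected in the exam):

```python
import cmath

def roots(K):
    """Roots of z^3 - 0.25 z^2 + K z - 0.25 K = (z - 0.25)(z^2 + K)."""
    s = cmath.sqrt(complex(-K, 0))  # the two roots of z^2 + K = 0
    return [0.25, s, -s]

def is_stable(K):
    """Schur stability: every characteristic root strictly inside |z| = 1."""
    return all(abs(z) < 1 for z in roots(K))

for K in (-1.5, -0.5, 0.0, 0.5, 1.5):
    print(K, is_stable(K))
```

Since |±√(−K)| = √|K|, the check confirms that the stable range is −1 < K < 1 (the root z = 0.25 is always inside the unit circle).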
LIN476H5 F 2024 Language Diversity and Language Universals
Homework Assignment 4
Due: Thursday 11/28 by 11:59pm
Recommended: complete at least a full draft by Tuesday, 11/26, before class (1pm)

Submit your homework on Quercus. Neat typing is required. To type IPA symbols, consider one of the following tools:
- Online IPA keyboards, e.g. https://ipa.typeit.org/full/
- Install an IPA keyboard on your computer (supports Windows, Mac OS, Ubuntu Linux): https://scripts.sil.org/cms/scripts/page.php?item_id=UniIPAKeyboard
To draw nice trees, try out one of these tools:
- https://ironcreek.net/syntaxtree/
- https://mshang.ca/syntree/

Research: Cross-categorial generalizations in the generative syntax (“Chomsky-style” syntax) theoretical framework
With Greenberg’s Universals 2-4, we introduced one of the earliest research interests in typology: what correlations across different syntactic categories exist across the world’s languages? As you saw, Greenberg did discover some non-trivial cross-categorial generalizations that are not immediately obvious, and therefore scientifically interesting. (If you don’t know what I’m talking about, now is a good time to go back and review. Read a few of Greenberg’s Universals to remind yourself.) At different points in the course, we also briefly touched on the distinction between description and explanation. The universals that Greenberg formulated are ultimately descriptive, because they state facts about human language. However, at least according to some linguists, facts are not theories. Facts only tell us what things are like, but they do not tell us why things are the way they are: knowing that feature A in one category correlates strongly with feature B in a different category does not mean that we know why that correlation exists. Linguists have attempted to formulate theoretical explanations.
Among them is Chomsky, whose theoretical framework (“generative syntax,” which you learned in LIN232) is very much represented by many North American linguists today, including here at the UofT. I introduced a little bit of how Chomsky attempted to incorporate Greenberg’s cross-categorial universals into his generative syntax framework. I would like you to read more about it in this task, and give a summary of some of its key insights.

Background review
The basic theoretical tool that Chomsky employs from his framework is the idea that different syntactic categories are modelled as different phrases—so book is an NP, a book or the book or John’s book is a DP, in a book is a PP, etc. And, in addition, every phrase (no matter what kind of phrase it is) has a head, and a complement—so the head of an NP is the N, the head of a DP is the D, the head of a PP is the P, etc., and the thing that is sister to the head is the complement. (Yes, aside from the head and the complement, there is also the specifier. But we won’t have to worry about that for now.) With this, you are now in a position to read about Chomsky’s proposal to explain Greenberg’s cross-categorial universals. He calls this proposal “the head directionality parameter,” often also called “the head parameter” for short. Wikipedia is not always reliable—but, luckily for us, the article on the head directionality parameter is actually very well-written, and, if you read it carefully, not hard to understand. Don’t read the whole thing at once. Let me guide you step by step.
1. Read the section on English (under “head-initial languages”)—focusing first on the paragraphs, as well as the tree structure, on the VP in English.
a. “English is head-initial in its VPs.” Explain what this statement means—you can do it in 1-2 sentences.
b. What would a language that is head-final in its VPs be like?
2. Then, read the paragraphs, and the tree structure, on the PP in English. The questions are the same:
a.
“English is head-initial in its PPs.” Explain what this statement means (again, 1-2 sentences!)
b. What would a language that is head-final in its PPs be like?
3. Then, read about the DP in English.
a. State the head directionality of DPs in English. Explain this head directionality, using the example given in the article.
b. Now think back to a genitive structure like John’s book. As you learned in LIN232, we will take a genitive element like John’s to also be a D in syntax. Do “genitive DPs” in English have the same head directionality as the “determiner DPs” used in the article?
c. What would a language that is head-final in its DPs be like?
4. Now you know what this is about! Read through the remaining parts about English, and
a. State the head directionality of the other syntactic phrases that English has. Explain each with an example.
b. On NPs in English: I am supposed to say ‘a red apple’ and ‘a stormy day,’ and not ‘an apple red’ and ‘a day stormy’. Therefore, NPs in English are right-headed, right?
5. Now, review what you have read about head directionality in all these different kinds of syntactic phrases (or syntactic categories) in English. If I ask you to summarize the head directionality using a cross-categorial (i.e. not about any of the specific kinds of phrases, but about PHRASES in general) phrase, called XP, whose head is X, and whose complement is YP—how would you draw “the cross-categorial XP” with a tree structure?
6. Finally, consider the sentence John reads a book. You know that the dominant word order for English main clauses is SVO.
a. Draw the syntax tree for this sentence, up to the TP level (i.e. you don’t need to include the CP level).
b. Does the cross-categorial XP structure in English explain the “VO” part of SVO?
c. Does the cross-categorial XP structure in English explain the “SV” part of SVO?
FINA2320 Fall 2024 - Assignment 2
Due: 6pm Sunday, November 24, 2024

Please hand in an Excel sheet with your answers. Please make sure that your Excel sheet is clear and well-structured! Be creative in the way you present your answers to improve clarity (creativity in presenting results and clarity are part of the grade).

Assignment goal: The goal of the assignment is to show you how you can use Excel to build an optimal portfolio, implement the CAPM, and implement the market index model. The assignment also aims at helping you learn Excel and develop your Excel skills, as solid knowledge of Excel is important in many jobs.

Instructions: Cindy is seriously considering hiring your team even before analyzing the performance of your portfolio. However, she would like to make sure that you have chosen the appropriate stocks and portfolio weights. Therefore, she has decided to ask you to use Markowitz portfolio theory to find the optimal weights in the assets that you had selected for the investment competition. Please proceed as follows:

Data
1) Data:
a. Use the Excel formula “=STOCKHISTORY()” to download the monthly close prices for the two stocks you selected for the investment competition as well as for Meta Platforms (ticker: META), HSBC (ticker: HSBC), and iShares MSCI World ETF (ticker: URTH). Download the data for the Sept. 1, 2019 - Sept. 1, 2024 period.
b. Assume the monthly risk-free rate is constant and equal to 0.15%.
2) Compute the monthly simple returns and the monthly excess returns of the five stocks.

Markowitz Portfolio Selection with 2 risky assets and 1 risk-free asset
3) Compute the following statistics for each of the two stocks that you selected for the competition:
a. The expected monthly return, E[R_Monthly]. (Hint: use the arithmetic mean)
b. The annualized expected return. (Hint: the annualized expected return of stock A is given by Annualized E[R_A] = (1 + E[R_A,N])^N − 1, where N is the frequency.
For example, to annualize the expectation of daily returns, one could use Annualized E[R_A] = (E[R_A,Daily] + 1)^365 − 1.)
c. The annualized risk premium. (Hint: the annualized risk premium of stock A is given by Annualized RP_A = (E[ExcessReturn_A,N] + 1)^N − 1, where N is the frequency. For example, to obtain the annualized daily risk premium, one could use Annualized RP_A = (E[ExcessReturn_A,Daily] + 1)^365 − 1.)
d. The annualized standard deviation of the excess returns. (Hint: the annualized standard deviation of the excess return of stock A is given by Annualized Std. Dev.(ExcessReturn_A) = Std. Dev.(ExcessReturn_A,N) × √N, where N is the frequency. For example, to annualize the standard deviation of daily excess returns, one could use Annualized Std. Dev.(ExcessReturn_A) = Std. Dev.(ExcessReturn_Daily) × √365.)
e. The annualized variance of the excess returns (i.e. the square of the annualized standard deviation).
f. The Sharpe ratio. According to the Sharpe ratios, which of the two stocks is the best? (Hint: use your results from questions 3.c and 3.d.)
g. The annualized covariance between the two stocks’ excess returns. (Hint: the annualized covariance between ExcessReturn_A and ExcessReturn_B is given by Annualized Cov(ExcessReturn_A, ExcessReturn_B) = Cov(ExcessReturn_A,N, ExcessReturn_B,N) × N, where N is the frequency. Therefore, to annualize the covariance of daily returns, one could use Annualized Cov(ExcessReturn_A, ExcessReturn_B) = Cov(ExcessReturn_A,Daily, ExcessReturn_B,Daily) × 365.)
4) Using the annualized expected returns, risk premia, and standard deviations of excess returns that you computed in question 3, compute the Sharpe ratio of an equally-weighted risky portfolio (w_A = 50% and w_B = 50%).
5) Using the annualized expected returns, risk premia, and standard deviations that you computed in question 3, find the weights of the Markowitz optimal risky portfolio (tangent portfolio) when short-selling is not allowed.
(Hint: these are the weights that maximize the Sharpe ratio. You will need to use the Excel Solver; see appendix.)
6) Using the annualized expected returns, risk premia, and standard deviations that you computed in question 3, find the weights of the Markowitz optimal risky portfolio (tangent portfolio) when short-selling is allowed. (Hint: these are the weights that maximize the Sharpe ratio. You will need to use the Excel Solver; see appendix.)
7) Using the annualized expected returns, risk premia, and standard deviations that you computed in question 3, find the weights of the Markowitz optimal risky portfolio (tangent portfolio) when short-selling is allowed but the weights need to be between -400% and 400%. (Hint: these are the weights that maximize the Sharpe ratio. You will need to use the Excel Solver; see appendix.)
8) Using the annualized expected returns, risk premia, and standard deviations that you computed in question 3, find the weights of the Global Minimum Variance risky portfolio when short-selling is allowed. (Hint: these are the weights that minimize the variance of the portfolio returns. You will need to use the Excel Solver; see appendix.)
9) Draw the Mean-Variance Frontier and the Capital Allocation Line. Proceed as follows:
a. Using the annualized expected returns and annualized standard deviations of excess returns that you computed in question 3, compute the annualized standard deviation of excess returns and the annualized expected return for 31 risky portfolios in which stock A has the following weights: w_A = −1, w_A = −0.9, w_A = −0.8, …, w_A = 2.
b. Draw the Efficient Frontier using the Excel “Scatter with Smooth Lines and Markers” plot. (See appendix.)
c. Using the annualized expected return and annualized standard deviation of excess returns of the tangent portfolio (that you computed in question 6), compute the annualized standard deviation of excess returns and the annualized expected return for 31 “complete” portfolios (i.e.
portfolios that may contain the two risky assets and the risk-free asset) where the weights of the optimal risky portfolio are as follows: w_risky pf. = 0, w_risky pf. = 0.1, w_risky pf. = 0.2, …, w_risky pf. = 2.8, w_risky pf. = 2.9, w_risky pf. = 3.
d. Add the CAL to the “Scatter with Smooth Lines and Markers” plot that you have drawn.
e. Does the tangent portfolio seem to be the portfolio that you computed in question 6?
10) Compute (in Excel) the weight of the optimal risky portfolio in the optimal “complete” portfolio if the risk aversion of the investor equals 8, when short-selling is allowed.
11) Compute (in Excel) the risk aversion of a mean-variance investor who invests 95% in the optimal risky portfolio, when short-selling is allowed.
12) Given the above results, are you satisfied with the weights that you had selected for the investment competition? What would you have done differently? (Explain briefly, max. 100 words.)

Capital Asset Pricing Model (CAPM)
While Cindy is now convinced that you can find optimal weights, she is still critical of the stocks you selected for the competition. To convince her, you have decided to use the CAPM. Proceed as follows:
13) Compute the following statistics:
a. The annualized expected return of the market (use the iShares MSCI World ETF as a measure of the market portfolio).
b. The annualized variance of the market excess returns.
c. The annualized covariances between the market excess returns and the excess returns of each of the two stocks you selected for the investment competition.
d. The market risk premium.
e. The betas of the two stocks. Interpret the betas (max. 100 words).
14) Use the betas of the two stocks to compute their annualized expected returns (under the CAPM). Compare these beta-implied annualized expected returns to the annualized expected returns that you computed in question 3.
Based on this comparison, was your investment competition strategy consistent with the CAPM (in other words, were you right to buy/sell these stocks according to the CAPM)? Do the stocks that you had selected lie on the security market line?
15) Critical thinking: Researchers often find that well-diversified portfolios have an alpha that is different from zero, which implies that the CAPM does not work. Why do you think that the CAPM performs poorly in practice? (max. 100 words)

Market Index Model
Remember that if your portfolio had more assets, the number of covariance terms to estimate could grow very quickly. Therefore, you would like to test whether using the market index model would help you solve this issue. Please proceed as follows:
16) Use the betas and the variance of the market excess returns that you computed in questions 13.b and 13.e to obtain an estimate of the covariance between the two stocks that you selected for the investment competition. Is this estimate of the covariance different from the estimate of the covariance that you computed in question 3.g? If yes, briefly explain why you think this is the case (max. 100 words). (Hint: question 17 may help you.)
17) Compute the R² of the market index model for the two stocks. Interpret and compare the two R² values (max. 100 words).

Markowitz Portfolio Selection with 4 risky assets and 1 risk-free asset
18) Most of the time, portfolios include more than two risky assets. Therefore, compute the weights of the optimal risky portfolio that includes the two stocks that you selected for the investment competition as well as Meta and HSBC. As in question 6, do not include constraints on the weights, besides their sum being 1. When computing the covariance terms, please do not use the market index model. Is the Sharpe ratio of this new optimal risky portfolio higher than the Sharpe ratio of the optimal risky portfolio found in question 6?
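Although the assignment is to be done in Excel, the key formulas in questions 3-17 are easy to cross-check outside it. The sketch below (Python with NumPy) mirrors the main computations; the monthly excess-return series are made-up illustrative numbers, not output from the STOCKHISTORY data, and the two-asset closed form for the tangent portfolio is offered only as a hand check on what the Excel Solver should return.

```python
import numpy as np

# Hypothetical monthly EXCESS returns for stock A, stock B, and the
# market index (all values assumed for illustration only).
ex_a = np.array([0.025, -0.015, 0.035, 0.005, -0.025, 0.030, 0.005, 0.020])
ex_b = np.array([0.030, -0.020, 0.050, 0.020, -0.040, 0.040, -0.010, 0.030])
ex_m = np.array([0.020, -0.010, 0.030, 0.010, -0.020, 0.025, 0.000, 0.015])
N = 12  # frequency: months per year

# Q3: annualized risk premium (compounding) and variance (N-scaling)
def ann_rp(x):  return (x.mean() + 1) ** N - 1
def ann_var(x): return x.var(ddof=1) * N

rp_a, rp_b = ann_rp(ex_a), ann_rp(ex_b)
var_a, var_b = ann_var(ex_a), ann_var(ex_b)
cov_ab = np.cov(ex_a, ex_b, ddof=1)[0, 1] * N   # Q3.g

def port_sharpe(w):
    """Sharpe ratio of a portfolio with weight w in A and 1-w in B (Q4)."""
    rp = w * rp_a + (1 - w) * rp_b
    var = w**2 * var_a + (1 - w)**2 * var_b + 2 * w * (1 - w) * cov_ab
    return rp / np.sqrt(var)

sharpe_ew = port_sharpe(0.5)   # Q4: equally weighted risky portfolio

# Q6: tangent portfolio; with two risky assets the Sharpe-maximizing
# weights have a closed form (cross-check on the Solver result)
w_a = (rp_a * var_b - rp_b * cov_ab) / (
    rp_a * var_b + rp_b * var_a - (rp_a + rp_b) * cov_ab)
w_b = 1 - w_a
rp_p = w_a * rp_a + w_b * rp_b
var_p = w_a**2 * var_a + w_b**2 * var_b + 2 * w_a * w_b * cov_ab

# Q10-11: mean-variance investor's share in the risky portfolio,
# y* = RP_p / (A * var_p), and the same relation solved for A
y_star = rp_p / (8 * var_p)          # risk aversion A = 8
A_implied = rp_p / (0.95 * var_p)    # investor who chooses y* = 95%

# Q13, 16, 17: betas, index-model covariance, and R^2
var_m = ann_var(ex_m)
beta_a = np.cov(ex_a, ex_m, ddof=1)[0, 1] * N / var_m
beta_b = np.cov(ex_b, ex_m, ddof=1)[0, 1] * N / var_m
cov_ab_index = beta_a * beta_b * var_m   # Q16: single-index estimate
r2_a = beta_a**2 * var_m / var_a         # Q17: systematic share of variance
```

Note that the index-model covariance (Q16) generally differs from the sample covariance (Q3.g) because the single-index model ignores any correlation between the two stocks' residuals; the R² of Q17 tells you how much of each stock's variance the market factor captures.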
SP and RO

Problem 1
A company is considering producing a chemical C, which can be manufactured with either process II or process III, both of which use chemical B as raw material. B can be purchased from another company or manufactured with process I, which uses A as a raw material. Process II and process III are exclusive, i.e. at most one of them can be built. There are five possible outcomes of demand and selling price combinations for chemical C. The probability of each scenario, as well as the corresponding demand and selling price, are listed below:

             Demand of C (million tons)   Selling Price ($/ton)   Probability
Scenario 1              5                        2000                 10%
Scenario 2              8                        1900                 20%
Scenario 3             10                        1800                 40%
Scenario 4             12                        1600                 20%
Scenario 5             15                        1000                 10%

Given the specifications, formulate a two-stage stochastic MILP model to maximize the expected profit and solve it with GAMS/Pyomo to determine:
a) Which process to build, and what would be the corresponding capacity?
b) How to obtain chemical B, and how much product C should be sold in each scenario?
c) What would be the “value of perfect information” for this problem?

Hints:
a) Facility selection (binary 0-1 decisions) and capacity (continuous decisions) are first-stage decisions (independent of the demand/price realization); purchase, production, and sale are second-stage decisions (dependent on the demand/price realization).
b) The production in each scenario should not exceed the capacity, which is scenario-independent.
c) If production/sale exceeds the demand for a certain scenario, unsold product cannot generate any revenue (but incurs production cost). The sale amount should not exceed the demand and should not exceed the available/production amount.
Data:

Investment Costs:
              Fixed capital cost (million $)   Variable capital cost ($/ton of product)
Process I                100                                  250
Process II               150                                  400
Process III              200                                  550

Operating Costs:
              Variable operating/production cost ($/ton of product)
Process I                100
Process II               150
Process III              200

Prices:
A: $250/ton
B: $450/ton

Conversions:
Process I: 90% of A to B
Process II: 82% of B to C
Process III: 95% of B to C

Maximum supply of A: 16 million tons

Problem 2
Consider the newsboy problem: the newspaper unit cost is $0.60/piece and the selling price is $1.50/piece; the newsboy needs to order the newspapers tonight, before knowing tomorrow’s demand. Assume there are 100 possible outcomes of demand, ranging from 1 piece to 100 pieces with a uniform probability distribution. In other words, there are 100 scenarios for demand and a 1% chance for each scenario (i.e. a 1% chance for the scenario of 1 piece, a 1% chance for the scenario of 2 pieces, …, and a 1% chance for the scenario of 100 pieces). The objective is maximizing the expected profit over these 100 demand scenarios while managing the risk. Formulate the following five risk management models and solve them with GAMS/Pyomo to determine the optimal number of newspapers that should be ordered under each risk management strategy.
a) Maximizing the mean/expected profit (i.e. risk-neutral)
b) Optimizing mean-variance (using equal weight for expectation and variance, i.e. ρ = −1)
c) Minimizing probabilistic financial risk for the threshold of 0 profit (i.e. Ω = 0)
d) Minimizing downside risk for the threshold of 0 profit (i.e. Ω = 0)
e) Minimizing CVaR for the 20% “loss” quantile (i.e. α = 20% for the “low” profit)

Hints: The example in the lecture slides is for cost minimization, while this problem is for profit maximization. Please make sure to revise the model formulations accordingly to account for the “maximization” problem for low-profit risk (instead of minimizing “high cost” risk).
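Before setting up the GAMS/Pyomo models, the risk-neutral case (a) can be sanity-checked by brute-force enumeration, because once the order quantity is fixed the profit in every demand scenario is determined. The sketch below is plain Python, offered only as an independent check, not as the required GAMS/Pyomo formulation:

```python
# Risk-neutral newsboy (part a) by enumeration: for each candidate order
# quantity q, average the profit over the 100 equally likely demand
# scenarios and pick the q with the highest expected profit.
COST, PRICE = 0.60, 1.50
DEMANDS = range(1, 101)   # scenarios: 1 to 100 pieces, 1% chance each

def expected_profit(q):
    # Unsold papers earn no revenue but were still paid for.
    return sum(PRICE * min(q, d) - COST * q for d in DEMANDS) / 100

best_q = max(DEMANDS, key=expected_profit)
```

The classical critical-fractile condition (order the q-th paper while PRICE × P(D ≥ q) ≥ COST, i.e. 1.50 × (101 − q)/100 ≥ 0.60) predicts an optimum around q = 60, with q = 60 and q = 61 tied in expectation, and the enumeration should agree up to floating-point rounding.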
Problem 3
Derive the robust counterpart for the following problem:
max_{x1, x2} 10x1 + 5x2
s.t. (6 + u1)x1 + (2 + u2)x2 ≤ 80, ∀ (u1, u2) ∈ U
a) U is an ellipsoidal uncertainty set: U = {(u1, u2) : u1² + u2² ≤ 1}.
b) U is a box uncertainty set: U = {(u1, u2) : |u1| ≤ 1, |u2| ≤ 1}.

Problem 4
Consider the following two-stage adaptive robust optimization (ARO) problem:
min  0.5x1 + 100x2 + max_{u ∈ U} min_{y1, y2} (−6y1 − 5y2)
s.t. x1 ≤ 200x2
     y1 + y2 ≤ x1
     y1 ≤ u1
     y2 ≤ u2
     x1, y1, y2 ≥ 0, x2 ∈ {0, 1}
where the uncertainty set U =
Use a linear decision rule to solve the two-stage ARO problem.

Problem 5
Consider the following robust optimization problem with implementation errors:
min_x max_{Δx ∈ U} f(x + Δx)
where f is a nonconvex function defined by,
a) Find the robust optimal solution x* when U = {Δx : −0.5 ≤ Δx ≤ 0.5}.
b) Find the robust optimal solution x* when U = {Δx : −1.5 ≤ Δx ≤ 1.5}.
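For the box uncertainty set in Problem 3, the worst case of the uncertain constraint is attained at a corner of the box with u_i = sign(x_i), which is what turns (6 + u1)x1 + (2 + u2)x2 ≤ 80 into the robust counterpart 6x1 + 2x2 + |x1| + |x2| ≤ 80. A quick numerical check of this corner argument (the sample points are arbitrary):

```python
import itertools

# Worst case of (6 + u1)*x1 + (2 + u2)*x2 over the box |u1| <= 1,
# |u2| <= 1. The expression is linear in u, so it is maximized at
# one of the four corners of the box.
def worst_case(x1, x2):
    return max((6 + u1) * x1 + (2 + u2) * x2
               for u1, u2 in itertools.product((-1, 1), repeat=2))

# Left-hand side of the robust counterpart: 6*x1 + 2*x2 + |x1| + |x2|
def counterpart(x1, x2):
    return 6 * x1 + 2 * x2 + abs(x1) + abs(x2)
```

The same maximize-over-the-set argument, with a Euclidean unit ball in place of the box, yields a counterpart of the form 6x1 + 2x2 + sqrt(x1² + x2²) ≤ 80 via the Cauchy-Schwarz inequality.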