Assessment Proforma 2024-25

Key Information

Module Code: CM2108
Module Title: Secure Communication Networks
Assessment Title: Portfolio on Secure Communication Networks
Assessment Number: 1 of 1
Assessment Weighting: 100% of a 10-credit level 5 module
Assessment Limits: An individual portfolio comprising one report of 1,500 words and a 4-minute video demonstrating a functional network simulation.

The Assessment Calendar can be found under ‘Assessment & Feedback’ in the COMSC-ORG-SCHOOL organisation on Learning Central. This is the single point of truth for (a) the handout date and time, (b) the hand-in date and time, and (c) the feedback return date for all assessments.

Learning Outcomes

The learning outcomes for this assessment are as follows:
1. Evaluate issues involved in deploying communication networks and their potential security, performance, and dependability implications and trade-offs.
2. Describe the fundamental principles and protocols of wired and wireless communication networks.
3. Demonstrate an understanding of performance and dependability evaluation approaches for secure communication networks.
4. Demonstrate an understanding of the principles of secure communication, including network vulnerabilities and security controls.
5. Use software tools to analyse network traffic.

Submission Instructions

The coversheet can be found under ‘Assessment & Feedback’ in the COMSC-ORG-SCHOOL organisation on Learning Central. All files should be submitted via Learning Central. The submission page can be found under ‘Assessment & Feedback’ in the CM2108 module on Learning Central.
Your submission should consist of multiple files:

1. Coversheet (compulsory): one PDF (.pdf) file, named Coversheet.pdf
2. Network design and discussion report (compulsory): one PDF (.pdf) file, named CM2108_[student number].pdf
3. Network simulation video (compulsory): one video file, or a link to a video, clearly showing the working network simulation; named CM2108_[student number].mp4, or a link included in a PDF

Evidence for Task 2 (Network Simulation) must be submitted as a link to a video, or as a video file showing a screen recording. The video must not exceed four minutes and must convey authenticity, e.g. through a voiceover. No marks will be awarded if the simulation itself is uploaded instead of a video.

If you are unable to submit your work due to technical difficulties, please submit your work via e-mail to [email protected] and notify the module leader, Elaine Haigh, at [email protected] before the deadline.

Assessment Description

Scenario: You are part of the IT team at a medium-sized company, "Teclyn", which specialises in providing technology solutions to various clients. The company has recently expanded, and with this growth comes the need to overhaul and upgrade its existing network infrastructure to support the increased demand and to ensure high availability, security, and performance. The company has an office split over two floors, and its 150 employees benefit from flexible, remote working. You are tasked with designing a scalable and secure networking solution for the company that meets its business needs.

Based on this scenario, complete the following tasks:

1) Network Design (20%)

Produce a network design/prototype for the Teclyn network. You will be expected to include your IP addressing scheme and to justify your design choices.

a) Produce a network diagram outlining your network solution, including services and facilities (e.g., printing services, data storage, servers). Include a short justification of your network design choices, showing how your design meets the requirements of the business. [10 marks]
b) Provide a subnetting scheme based on the network address 172.64.0.0/16. You should include at least one subnet. Include a short justification of your network addressing decisions, including subnetting or VLAN configuration. [10 marks]

2) Network Simulation (20%)

a) Implement a simulation of your network and test connectivity between devices. Your network does not have to include all devices, but it should demonstrate the main components and functions of the network. You will need to submit a screencast or video of a running network simulation tool such as Packet Tracer, or of a small network built using Raspberry Pis or similar (not provided). [20 marks]

3) Networking Services and Protocols (15%)

a) Explain, using worked-through examples, how protocols are used to transfer data across the above network to perform tasks such as sending an email, printing a document, uploading files to a server, or accessing the network remotely. Outline common protocols used at all seven layers of the OSI 7-layer model. Explain encapsulation, referring to protocol header information, and how this is used to control protocol behaviour, giving worked-through examples of common networking protocols. You may use a packet capture tool such as Wireshark to generate data packets to use as illustrative examples. All discussion should be related to the scenario, and while you do not have to implement features in your simulation in order to include them in the discussion, you should make it clear if you have done so and show screenshots as evidence. Indicative length: 500 words. [15 marks]

4) Network Security (15%)

a) Discuss your network in relation to access control and other security issues. Explain how your network upholds the principles of information security, justifying your choices and evaluating the effectiveness of security controls, using examples.
You should include, as recommendations, any additional control measures not covered in your network design that could mitigate common vulnerabilities, but ensure that you highlight any that you have implemented. Indicative length: 500 words. [15 marks]

5) Network Performance (20%)

a) Explain how your network upholds high performance, using examples of potential performance issues and showing how your network mitigates them. Describe how performance can be measured. You do not have to include these measures in your simulation, but ensure that you highlight any that you have implemented. Indicative length: 250 words. [10 marks]

b) Explain how your network upholds dependability, using examples of potential service issues and showing how your network mitigates them. Describe how network dependability can be measured. You do not have to include these measures in your simulation, but ensure that you mention any that you have implemented. Indicative length: 250 words. [10 marks]

Report writing and referencing: 5 marks will be awarded for the clarity and structure of your report, and 5 marks for accurate and consistent referencing.
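As an illustration of the kind of addressing arithmetic Task 1(b) calls for, the sketch below uses Python's standard ipaddress module to carve 172.64.0.0/16 into equal subnets. It is an illustration only, not part of the required submission, and the split into per-floor /24 subnets is an invented example, not a prescribed scheme:

```python
import ipaddress

# The block given in Task 1(b). The subnet assignments below are
# invented purely for illustration, not a required design.
block = ipaddress.ip_network("172.64.0.0/16")

# Splitting a /16 into /24s yields 256 subnets of 254 usable hosts each,
# comfortably enough for 150 employees spread across two floors.
subnets = list(block.subnets(new_prefix=24))

floor1, floor2, servers = subnets[0], subnets[1], subnets[2]
print(floor1)                     # 172.64.0.0/24
print(floor1.num_addresses - 2)   # 254 usable host addresses
print(servers.broadcast_address)  # 172.64.2.255
```

A justification in the report would then explain why the chosen prefix length, VLAN mapping, and number of subnets fit the business requirements.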
ELEN30009 ELECTRICAL NETWORK ANALYSIS & DESIGN
Semester 1, 2018

1 Transient Analysis in LTI Circuits

1. Consider the following circuit, in which the current in an inductor is driven by a DC source. An ammeter is used to measure the current iL through the inductor, and the current-time graph is displayed below. Assume the ammeter has zero impedance.
(a) Estimate the time constant τ of the transient, as well as the inductance L.
(b) Estimate the voltage level vL of the DC source.
(c) Estimate the energy stored in the inductor at time t = ∞.

2. Consider the circuit shown below, in which the switch has been open for a long time prior to t = 0.
(a) What is the value of vC before the switch closes?
(b) Obtain a differential equation for vC and solve it to find vC(t) for t ≥ 0.
(c) What is the steady-state value of vC after the switch closes? Determine how long it takes after the switch closes before vC is within 1% of its steady-state value.
(d) Repeat part (a) by first obtaining a Thévenin equivalent at the capacitor's terminals, then use this to more easily find the time constant τ and then vC(t) for t ≥ 0.

3. The switch in the circuit below has been in the left position for a long time. At time t = 0 it moves to the right position and stays there.
(a) Find v(0+), the capacitor voltage just after the switch changes position.
(b) Write the expression for the capacitor voltage, v(t), for t ≥ 0.
(c) Write the expression for the current through the 40 kΩ resistor, i(t), for t > 0.
(d) What percentage of the initial energy stored in the capacitor is dissipated by the 40 kΩ resistor?

4. The switch in the circuit below has been in position “a” for a long time. At t = 0, it moves instantaneously to position “b”.
(a) Find vo(t) for t ≥ 0.
(b) Find io(t) for t > 0.
(c) Find v1(t) for t ≥ 0.

5. The switch in the circuit below has been in position x for a long time. The initial charge on the 60 nF capacitor is zero. At t = 0, the switch moves instantaneously to position y.
(a) Find v0(t) for t > 0.
(b) Find v1(t) for t ≥ 0.

6. The circuit elements in the circuit shown below are R = 125 Ω, L = 200 mH, and C = 5 μF. The initial inductor current is −0.3 A and the initial capacitor voltage is 25 V.
(a) Calculate the initial current in each branch of the circuit.
(b) Find v(t) for t ≥ 0.
(c) Find iL(t) for t ≥ 0.

7. The initial value of the voltage v in the circuit from Problem 6 is 0 V, and the initial value of the capacitor current, iC(0+), is 45 mA. The expression for the capacitor current is known to be iC(t) = A1·e^(−200t) + A2·e^(−800t), t > 0, when R is 250 Ω. Find:
(a) the values of Q, ω0, L, C, A1, and A2;
(b) the expression for v(t), t ≥ 0;
(c) the expression for iR(t), t ≥ 0;
(d) the expression for iL(t), t ≥ 0.

8. The switch in the circuit below has been in position “a” for a long time. At t = 0, the switch moves instantaneously to position “b”.
(a) What is the initial value va(0+) just after the switch changes position?
(b) What is the initial value dva(0+)/dt?
(c) What is the numerical expression for va(t) for t > 0?

9. The two switches in the circuit below operate synchronously. When switch 1 is in position “a”, switch 2 is closed. When switch 1 is in position “b”, switch 2 is open. Switch 1 has been in position “a” for a long time. At t = 0, it moves instantaneously to position “b”.
(a) Find vc(0+) and the initial inductor current (left-to-right).
(b) Find vc(t) for t ≥ 0.

10. In the circuit shown below, a switch is connected between a DC voltage source and a network of five passive devices with the following component values: R1 = 1 Ω, C1 = 125 mF, L1 = 0.8 H, R2 = 2 Ω, C2 = 125 mF. The switch is in the open position before time t = 0. At time t = 0−, there is no energy stored in either capacitor, and the inductor current is given to be iL(0−) = 1 A. At time t = 0, the switch is instantaneously moved to the closed position.
(a) Derive a differential equation for the voltage vC(t) for t > 0.
(b) Derive a differential equation for the current iL(t) for t > 0.
(c) What type of step response does this circuit exhibit? Why?
(d) Using either of your differential equations from (a) or (b), find vC(t), iL(t), and i1(t) for t > 0, assuming that the switch remains closed after t = 0.
(e) Now assume that at t = 1 second the switch is moved back to the open position. Find vC(t) and iL(t) for t > 1, assuming that the switch remains open beyond t = 1 second.

11. The circuit below shows a series RLC circuit with an input voltage vin(t). You are given the following component values: R = 100 Ω, C = 62.5 μF, L = 100 mH.
(a) For the input vin(t) = 3u(t), find both vC(t) and i(t) for t > 0. Assume for this part that zero energy is stored in the circuit at time t = 0−.
(b) Now assume the input is as shown below. Find both i(t) and vC(t) for t ≥ 0.

2 Convolution

12. A rectangular pulse vi(t) = [u(t) − u(t − 1)] V is applied to the RL network shown below.
(a) Find the impulse response h(t) of the network.
(b) Use the convolution integral to find vo(t).
(c) Interchange the positions of the inductor and resistor in the circuit and repeat (a).

13. Consider the same RL network from the previous problem, shown in the figure above, as well as the same input vi(t) = [u(t) − u(t − 1)] V. You are going to consider an alternative way to find vo(t) using convolution and superposition. You are to perform the following steps:
(a) Consider the input to the circuit to be v1(t) = u(t) and apply the convolution integral to determine the resulting output vo1(t). Do not specify “for t ≥ 0” in your answer, but instead use u(t) to signify this, as it will be important for the next part.
(b) Now consider the input v2(t) = u(t − 1). This is just a time-shifted version of v1(t), and because the circuit is an LTI system, the output is a time-shifted version of vo1(t). Apply time shifting to the function vo1(t) to create the output vo2(t) = vo1(t − 1).
Be sure to time-shift the u(t) function used to indicate validity for t ≥ 0 in vo1(t).
(c) Use the principle of superposition to find the output vo(t) when the input is vi(t) = v1(t) − v2(t), and verify that this answer is equivalent to that in part (b) of the previous problem.

14. Consider the RC network shown below, where the input voltage is the rectangular pulse shown to its right.
(a) Find the impulse response h(t) of the network.
(b) Use the convolution integral to find vo(t) and sketch it for 0 ≤ t ≤ 100 ms.
(c) Now assume that the resistor’s value decreases to 200 Ω. Repeat parts (a) and (b).
(d) Does decreasing the resistor’s value increase or decrease the memory of the circuit? Hint: use your two sketches and consider which network comes closer to transmitting a replica of the original input voltage.

15. The input voltage in the network shown below is vi(t) = 5[u(t) − u(t − 0.5)] V.
(a) Find the impulse response h(t) of the network.
(b) Use the convolution integral to find vo(t).
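Every first-order problem above (Problems 1–5) reduces to the same standard form, so it is worth writing once. The block below is a general reference formula, not the solution to any specific numbered problem (the circuit figures are not reproduced here):

```latex
% Standard first-order transient, for t >= 0:
%   tau = R_th * C for an RC circuit, or tau = L / R_th for an RL circuit,
%   where R_th is the Thevenin resistance seen by the storage element.
x(t) = x(\infty) + \left[x(0^{+}) - x(\infty)\right] e^{-t/\tau}

% "Within 1% of steady state" (as in Problem 2(c)) means
% e^{-t/\tau} = 0.01, so
t = \tau \ln 100 \approx 4.6\,\tau
```

The same settling-time reasoning gives t ≈ 2.3τ for 10% and t ≈ 6.9τ for 0.1%.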
ASSESSMENT 1 BRIEF

Subject Code and Title: FIN600 Financial Management
Assessment Task: Scenario Analysis Report
Individual/Group: Individual
Length: 1,000 words
Learning Outcomes: The Subject Learning Outcomes demonstrated by successful completion of the task below include:
a) Appraise the functions of a financial manager in contemporary businesses and within the current regulatory and legal context.
d) Evaluate cost structures and budgets and their impacts on short- and long-term business decisions.
Submission:
12-week duration: due by 11:55 pm AEST/AEDT Sunday, end of Week 4
6-week duration: due by 11:55 pm AEST/AEDT Sunday, end of Week 2
Weighting: 20%
Total Marks: 100 marks

Assessment Task

Drawing on your understanding of the fundamentals of financial management, as well as costs, budgets and management control, write a 1,000-word scenario analysis report identifying, measuring and analysing the budget variances of a fictitious company. More detailed information about this fictitious company can be found in the Assessments area of the subject site. This assessment is a practical task involving the application of budgeting skills, comparison of actual performance with the budget and analysis of the observed variances. The aim of this assessment task is to consolidate your knowledge and skills relating to the financial management of business operations.

Context

Budget variances refer to the differences between the actual financial results of a company and the budgeted or planned financial results for a specific period. These variances can be either favourable or unfavourable, depending on whether the actual results are better or worse than what was planned. Understanding and analysing budget variances is important for companies because it allows them to identify areas of their business that may be underperforming or overperforming.
By comparing actual results to planned results, businesses can identify where they need to make changes in order to improve profitability, reduce costs or increase revenue. Performing variance analysis is one of the many activities that a management accountant performs when providing financial and non-financial information to aid effective decision-making in an organisation.

This assessment task will focus on three types of variances: direct materials, direct labour and overhead variances. Interpreting these variances successfully requires you to both analyse and evaluate. This includes breaking down each variance into its components, as follows:

Direct materials variance
• Direct materials price variance
• Direct materials usage variance

Direct labour variance
• Labour rate variance
• Labour efficiency variance

Overhead variance
• Overhead spending variance
• Overhead efficiency variance

The formulas for the variances above are found in the Costs, Budgets and Management Control module, under the Variance Analysis topic.

Instructions

In this assessment task, you will take the role of the management accountant of the company and analyse variances based on a comparison of the actual performance of the business operations relative to its budgeted performance. To be successful in this assessment task, you are required to follow the steps below.

1) Read and understand the topics covered in the modules:
• Introduction to Financial Management
• Costs, Budgets and Management Control

2) Analyse the additional information in the Assessment 1 Variance Analysis Template (MS Excel file), which is provided in the Assessments area of the subject site. The template contains the following sheets:
• Background Information – includes details about the fictitious company and further instructions about how to use the sheets in the template.
• Data – includes information on the standard budget and actual performance.
• Template – contains empty cells in which you are to insert the correct formula and calculate the required item.
• Variance Report Format – outlines the information required in the variance report that you are required to submit to support your scenario analysis report.

3) Write a scenario analysis report supported by a variance report, based on the data and operational details of the fictitious company provided in Step 2. Your variance report should follow the format provided in the Assessment 1 Variance Analysis Template (MS Excel file).

4) Using the information from the business operations, the budget and the actual production and costs data, you are required to identify, measure and analyse the following variances and investigate their sources:
• Direct material variances
• Direct labour variances
• Overhead variances

5) In your scenario analysis report, you are required to:
• summarise and present your variance analyses
• evaluate your variance calculations in Steps 2–4, discuss the potential causes for the variances and provide any remedial measures that can be taken to minimise the variances
• recommend to the client what position to take in relation to each of the three (3) variances determined, based on your evaluation of the variance calculations.

Scenario Analysis Report Structure and Format

Your 1,000-word scenario analysis report should follow industry standards and be structured as shown in detail below:

Executive Summary – In this section (100–150 words), you are required to summarise your entire report. As a minimum, you will need to discuss the purpose of your report, the methodology you have used, the key findings and your recommendations. The executive summary is not included in the word count.

Table of Contents – This section is not included in the word count.

1. Background (around 150 words) – Provide a description of the task, the tools and methods that will be applied, and how the report is organised.

2. Standard budget (around 50 words, in addition to the tabulation) – Using information on the production design and process, as provided within the Assessments area of the subject site, tabulate a standard budget for the specified production volumes.

3. Comparison of the actuals with the budget (around 50 words, in addition to the tabulation) – Calculate and tabulate the variances, distinguishing between material, labour and overhead variances, and their sources.

4. Analysis of Variances (around 400 words) – Discuss the potential reasons for the variances, their significance and possible ways to resolve the more critical variances.

5. Recommendations and Overall Assessment (around 350 words) – Use the variances of the company, as revealed by the analysis you have undertaken, to recommend what position the client should take in relation to each of the three (3) variances listed in Step 4 of the Instructions section.

6. References – A minimum of six (6) academic and related references (e.g. journal articles, book chapters, conference papers) and at least two (2) non-academic and related references (e.g. company websites and annual reports, newspaper articles, government websites, consultant reports, etc.) are required to support the discussions in your scenario analysis report. Please make sure your references are current. All references should be in the current APA style. Using Wikipedia, Investopedia and similar sources should be avoided. This section is not included in the word count.

Read the assessment rubric, which is an evaluation guide with the criteria for grading your assessment. It explains what features a successful scenario analysis report should exhibit.

Referencing

It is essential that you use the current APA style for citing and referencing the sources that you use. Please see more information on citing and referencing guidelines on the Academic Success webpage.
Assessment Support

For a range of additional resources and support to help you complete your assessment, please consult the Study Support page on the Student Hub.
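The variance components listed in the Context section follow standard managerial-accounting formulas of the kind covered in the Costs, Budgets and Management Control module. As a hedged illustration (the figures below are invented, and the sign convention of positive = unfavourable for a cost item is one common choice, not a requirement of the template), the calculations look like this:

```python
# Standard textbook variance formulas. All inputs below are invented for
# illustration; they are not figures from the assessment template.

def materials_variances(aq, ap, sq, sp):
    """aq/ap: actual quantity and price; sq/sp: standard quantity and price."""
    price_variance = (ap - sp) * aq   # paid more/less per unit than standard
    usage_variance = (aq - sq) * sp   # used more/fewer units than standard
    return price_variance, usage_variance

def labour_variances(ah, ar, sh, sr):
    """ah/ar: actual hours and rate; sh/sr: standard hours and rate."""
    rate_variance = (ar - sr) * ah
    efficiency_variance = (ah - sh) * sr
    return rate_variance, efficiency_variance

# Invented example: 1,100 kg bought at $6/kg against a standard of
# 1,000 kg at $5/kg.
price_var, usage_var = materials_variances(aq=1100, ap=6, sq=1000, sp=5)
print(price_var)  # 1100  (unfavourable: paid $1/kg over standard)
print(usage_var)  # 500   (unfavourable: used 100 kg over standard)
```

Variable-overhead spending and efficiency variances follow the same rate/hours pattern as the labour variances above.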
ASSESSMENT 1 BRIEF

Subject Code and Title: MKT600/MKT6002 Marketing
Assessment Task: Situation Analysis
Individual/Group: Individual
Length: Up to 1,500 words
Learning Outcomes: The Subject Learning Outcomes demonstrated by successful completion of the task below include:
a) Outline and implement the marketing research process and identify a range of methods of gathering, storing and using data.
b) Critically evaluate the relation of the marketing process with the resources of an organisation and client needs with regard to the creation of value for the organisation.
Submission:
12-week delivery: due by 11:55 pm AEST/AEDT Sunday, end of Week 4
6-week delivery: due by 11:55 pm AEST/AEDT Sunday, end of Week 2
Weighting: 25%
Total Marks: 100 marks

Assessment Task

This assessment requires you to complete a 1,500-word situation analysis of a company that includes an analysis of the external marketing environment, the internal marketing environment and the industry. Please refer to the instructions for details on how to complete this assessment.

Note: The company you select for analysis in Assessment 1 will also be used to complete Assessments 2 and 3. The situation analysis you create in Assessment 1 will be used to complete Assessment 3.

Context

A situation analysis is the process of collecting information on the internal and external environments to assess a firm's current strengths, weaknesses, opportunities and threats, and to guide its goals and objectives. A situation analysis comprises an analysis of the micro-environment, the macro-environment and the industry. The micro-environment analysis involves scrutinising a company’s internal environment and specifying strengths and weaknesses. In contrast, macro-environment analysis involves examining a company’s external environment, identifying political, economic, social, technological, legal and environmental dimensions that will assist in specifying opportunities and threats.
Changes within the macro-environmental forces are outside of an organisation's direct control, requiring it to adjust its marketing strategies to capture emerging opportunities and minimise potential threats. The industry analysis involves using Porter’s Five Forces model to analyse a company’s position relative to other companies.

This assessment is designed to help you develop and demonstrate your ability to:
• Clearly articulate your understanding of marketing concepts and principles within the context of evaluating an organisation's overall situation.
• Critically analyse the relationship between marketing theory and real-world practice.
• Conduct research to generate meaningful marketing and strategy insights.
• Apply appropriate business report writing conventions.
• Identify and recommend strategies based on a SWOT analysis.

Please refer to the Instructions for details on how to complete this task.

Instructions

You have two options for selecting a company for your situation analysis.

• Option 1: You may use a company you’re currently working for, or have worked for previously, on either a full-time or part-time basis. You must de-identify the company name in your assessment if using an existing company. Note: Any material submitted as part of your assessment must not breach any of the Host Organisation’s rights, including with respect to maintaining confidentiality and sensitivity to the information shared.

• Option 2: Use the company case study provided in the Assessment 1 section on MyLearn.

Write a situation analysis in business report format based on either Option 1 or Option 2. Your analysis should demonstrate your understanding of marketing principles and your ability to apply them to real-world business contexts.
Your report should include the following sections:

Introduction
Provide a brief overview of the chosen company, including its core business activities, its products or services, and the industry it operates within.

Body
1. Value proposition
Discuss the company’s value propositions, core brand values, and buyer behaviour relevant to its target market.

2. Microenvironment analysis
Analyse the company’s microenvironment, including internal factors (the company itself), suppliers, intermediaries, relevant publics, customers, and competitors.

3. Macroenvironment analysis
Use the PESTLE framework (Political, Economic, Social, Technological, Legal, and Environmental) to analyse external macro-environmental forces that may impact the company’s strategy and capabilities to serve its target markets.

4. Industry analysis
Apply Porter’s Five Forces Model to assess the competitive dynamics of the industry in which your company operates.

5. SWOT analysis and strategic implications
Identify key strengths, weaknesses, opportunities, and threats (SWOT) for the company. Based on your findings, recommend appropriate strategies to leverage strengths, overcome weaknesses, seize opportunities, and mitigate threats.

Conclusion
Summarise the key insights from your analysis, particularly those derived from your SWOT, and reflect briefly on their strategic implications for the company.

Your report should include a reference list on a separate page. Use appropriate in-text citations using the APA 7th referencing convention.

You are strongly advised to review the Assessment Rubric before you begin. The rubric outlines the specific criteria your work will be graded against and provides a clear understanding of what constitutes a successful situation analysis. Referring to it as you plan, write, and review your report will help ensure your submission meets expectations and aligns with the learning outcomes.
Before you begin writing, take time to collect relevant and up-to-date data on your chosen company and its external environment. Support your analysis with high-quality academic references from reputable marketing journals and textbooks to strengthen your arguments, insights, and recommendations. Use the suggested structure throughout your report to maintain logical flow and cohesion.
In this PA (Programming Assignment) you will:
- learn about implementing a 6-ary tree
- learn to design more complex recursive algorithms
- learn about a possible method of image compression using space-partitioning trees

Part 1: The HexTree Class

A HexTree is a 6-ary tree whose nodes represent rectangular regions of a PNG image. The root represents the entire image. The six children (referred to as A, B, C, D, E, and F, if present) of any node nd represent up to six rectangular partitions of nd's image region. Every node also contains:
- a pixel representing the average colour of the pixels in its rectangular region
- the upper-left image coordinates of its rectangular region
- the lower-right image coordinates of its rectangular region

Building a HexTree

The constructor of the HexTree receives a PNG image of non-zero dimensions. The region for each node is split vertically as evenly as possible into upper and lower regions, and each upper and lower region is split horizontally and symmetrically into three (or two, or even one) smaller regions. See the documentation in hextree.h for details about how to determine the dimensions and coordinates of the split pieces.

A freshly constructed HexTree will have a leaf node corresponding to each individual pixel of the original image. For example, a large image will be progressively split into the regions shown in the image below. Each different colour group represents the children of a larger node. Note that, for clarity, not all children of all nodes are shown, but nodes will continue to be split in this way as long as they are able.

Note that in many cases (especially regions with extremely small widths and/or heights), it may be impossible to assign image regions to all upper and lower pointers, or to all left/middle/right pointers. Refer to the documentation in hextree.h for splitting requirements in these situations.
To complete this part of the assignment, you should complete (or modify) the following functions in class HexTree:

- HexTree(const PNG& imIn)
- HexTree& operator=(const HexTree& rhs)
- PNG Render(bool fulldepth, unsigned int maxlevel) const
- void FlipHorizontal()
- void Clear()
- void Copy(const HexTree& other)
- Node* BuildNode(const PNG& img, pair ul, pair lr)

Many of the functions above will require private recursive helpers. It is up to you to add the declarations to hextree-private.h, giving them appropriate signatures, and to implement them in hextree.cpp.

Advice: The constructor (which will use BuildNode) is critical for all of the other tree functionality. It is recommended to focus your efforts on first correctly implementing BuildNode, which will be used in all of the testing functions.

Part 2: Image compression using HexTrees

As a result of the hierarchical structure of the tree, along with each node storing the average colour of its rectangular region, we can trim away portions of the tree representing areas without fine pixel-level detail. This is achieved by the Prune function, which receives a tolerance parameter. This function attempts, starting near the top of a freshly built tree, to remove all of the descendants of a node if all of the leaf nodes below the current node have colour within tolerance of the node's average colour. In this way, areas of the image with little colour variability can be replaced with a single rectangle of solid colour, while areas with fine pixel detail can remain detailed. The image quality may be reduced, but fine details will still be visible, and the size of the structure used to store the image data will also be reduced. An example of the pruning effect is demonstrated in the pair of images below.

Original image (upscaled):
Rendered image after pruning at tolerance 0.05 (upscaled):

The tree constructed from the original image contains 107,635 nodes in total, of which 57,344 are leaf nodes; one for each pixel in the image.
To complete this part of the assignment, you should complete (or modify) the following function in class HexTree:

- void Prune(double tolerance)

It may be helpful to write two recursive helpers for this function.

Testing

We have provided a file main.cpp which writes your output images to the outputs folder, along with some diagnostic prints to the console. This is not a complete test of functionality! You are highly encouraged to add your own testing code to this file.

The executable can be compiled by typing:

make

You can run the given tests using the command:

./pa3

Grading Information

The following files are used to grade this PA:
- hextree-private.h
- hextree.cpp

All other files (including any testing files you create) will not be used for grading.

Getting the Given Code

Download the source files here: pa3-20250603-0020.zip, and follow the procedures you learned in lab to move them to your home directory on the remote Linux machines.

Handing in your code

Pair work is recommended for this project. Document your group membership using PrairieLearn's groupwork feature.
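The assignment itself is in C++, but the two-helper recursion suggested for Prune can be sketched language-agnostically. In the Python sketch below, the node structure, field names, and colour model (a single float standing in for an average RGBA colour) are all simplifications invented for illustration; they do not match the actual declarations in hextree.h:

```python
class Node:
    """Simplified stand-in for a HexTree node: one float in place of an
    average colour, and up to six children (A-F)."""
    def __init__(self, avg, children=None):
        self.avg = avg
        self.children = children or []   # empty list => leaf

def leaves_within(node, avg, tol):
    """Helper 1: are ALL leaves below `node` within `tol` of colour `avg`?"""
    if not node.children:
        return abs(node.avg - avg) <= tol
    return all(leaves_within(c, avg, tol) for c in node.children)

def prune(node, tol):
    """Helper 2: starting near the top, drop a node's descendants when every
    leaf below it is within tolerance of the node's own average colour."""
    if not node.children:
        return
    if leaves_within(node, node.avg, tol):
        node.children = []               # node becomes a solid-colour leaf
    else:
        for c in node.children:
            prune(c, tol)

# Tiny example: a near-uniform region collapses; a detailed one survives.
flat = Node(0.50, [Node(0.52), Node(0.49), Node(0.51)])
busy = Node(0.50, [Node(0.10), Node(0.90)])
root = Node(0.50, [flat, busy])
prune(root, 0.05)
print(len(flat.children), len(busy.children))  # 0 2
```

The key design point carries over to C++: the tolerance check must examine leaves only (not intermediate averages), and pruning must stop recursing into a subtree as soon as it is collapsed.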
Assignment Instructions:
1. You are required to watch and follow current activities broadcast on NASA Live TV.
2. Refer to the NASA TV Daily Program Schedule available on the NASA Live TV website to plan your viewing.
3. After watching, share your reflections in a short video and post it on Padlet.

Video Content Guidelines:
1. Introduction
- Course Code
- Logos: UKM, Universiti Watan, CITRA, Sustainable Development Goals (SDG) + SDG 4
- Assignment Title: NASA TV Live
- Group members' names
- Lecturer(s) name(s)
2. Activity Review
Choose one or more topics to highlight:
- Activities on the International Space Station (ISS)
- Latest space technologies being used
- Any other meaningful or interesting content that stood out to you
3. Reflection
How did this NASA TV Live assignment help enhance your understanding of, and interest in, space technology?

Video Format
1. Duration: 60 seconds to 3 minutes
2. Language: Bahasa Malaysia or English
3. Content: must be respectful of all religious, racial, and national sensitivities
4. Upload your video to YouTube, Google Drive, or Dropbox

Submission Instructions
- [Group Representative] Upload the video link to Padlet.
- [Individual - 25%] Upload the shared group video link to your own UKMFolio.
- [Individual - 5%] By 15 July 2025: join the Virtual Gallery Walk; watch and interact with at least 3 other student videos; leave comments or questions; take screenshots of your comments; and complete and upload the feedback form. Submit all screenshots and the form to UKMFolio.
Optimization of risk prioritization for agile projects: a model based on machine learning

Research objectives
1. Clarify common issues and requirements for risk prioritization in agile projects.
2. Evaluate the applicability of existing machine learning techniques for risk prioritization in agile projects.
3. Propose an improved strategy for risk prioritization in agile projects based on existing technologies.

Research questions
1. How can risks be prioritized in Agile Project Management?
2. How can machine learning be applied to risk prioritization in Agile Project Management?
MEDA 37028. Post-Production Supervisor
Post-Production Manual: Remaining Components

Brief
Generate the remaining four components of the post-production manual for the project that you are post-production supervising. See the Sample for each on Slate.

Deadline
Due Week 13: April 11th, 2025 by 11:59pm
- Submit each as a Word or Excel document onto Slate: Assessment – Assignments – Post Manual: Remaining Elements
- Save as: Last name_name of the item (e.g. McGylnn_Supers)

Components
Music Cue Sheet
- Include a title at the top of the list (production title, music cue title)
- Add in/out timecode & duration for each music piece used
- List artist / music name / source / music type
Final Script / Transcript
- Include a title at the top of the list
- Add a timecode for each line
- Transcribe the entire piece (all audio heard; includes VO, interview & sound ups)
Supers / Titles List
- Include a title at the top of the list
- List all titles (opening, lower 3rd or credits) in the project
- Indicate font style/size and any other notes for the online (e.g. blur)
Final Credits
- Include a title at the top of the list
- Indicate the Role / Title in bold
- List the person(s) name (spell-check all text and names)
- List Special Thanks or logos
- Format: centre text & keep groups together (avoid spilling onto the next page)

Objectives
- Organize each element
- Follow instructions
- Pay close attention to details
- Deliver post-production components

Evaluation
The assignment is worth 10% of your final grade:
3% Music Cue Sheet
3% Final Script / Transcript
2% Supers / Titles List
2% Final Credits
Late submissions will result in a penalty of 10% per day. If delivered after 7 calendar days (1 week late), it will result in a zero.
COMP9444 Assignment
In this assignment, you will be implementing and training neural network models for three different tasks, and analysing the results. You are to submit two Python files kuzu.py and check.py, as well as a written report hw1.pdf (in pdf format).

Provided Files
Copy the archive hw1.zip into your own filespace and unzip it. This should create a directory hw1, subdirectories net and plot, and eight Python files kuzu.py, check.py, kuzu_main.py, check_main.py, seq_train.py, seq_models.py, seq_plot.py and anb2n.py. Your task is to complete the skeleton files kuzu.py and check.py and submit them, along with your report.

Part 3: Hidden Unit Dynamics for Recurrent Networks
In Part 3 you will be investigating the hidden unit dynamics of recurrent networks trained on language prediction tasks, using the supplied code seq_train.py and seq_plot.py.

2. [1 mark] Train an SRN on the anbn language prediction task by typing

python3 seq_train.py --lang anbn

The anbn language is a concatenation of a random number of A's followed by an equal number of B's. The SRN has 2 inputs, 2 hidden units and 2 outputs. Look at the predicted probabilities of A and B as the training progresses. The first B in each sequence, and all A's after the first A, are not deterministic and can only be predicted in a probabilistic sense. But, if the training is successful, all other symbols should be correctly predicted. In particular, the network should predict the last B in each sequence as well as the subsequent A. The error should be consistently in the range of 0.01 to 0.03. If the network appears to have learned the task successfully, you can stop it at any time using ctrl-c. If it appears to be stuck in a local minimum, you can stop it and run the code again until it is successful. After the training finishes, plot the hidden unit activations by typing

python3 seq_plot.py --lang anbn --epoch 100

Include the resulting figure in your report.
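As an aside, the structure of anbn strings, and which symbols in them are predictable, can be sketched in a few lines of Python. The actual data generation is done by the supplied seq_train.py; the helper names below are hypothetical, for illustration only:

```python
import random

# Illustrative sketch only; the assignment's sequence generation lives
# in the supplied seq_train.py, and these helper names are hypothetical.
def anbn(n):
    """One string from the a^n b^n language: n A's followed by n B's."""
    return "A" * n + "B" * n

def deterministic_positions(s):
    """Indices whose symbol is forced by the prefix before them.
    Within an a^n b^n string, every symbol after the first B is a
    forced B: the remaining B count equals the A's already seen."""
    first_b = s.index("B")
    return list(range(first_b + 1, len(s)))

random.seed(0)
s = anbn(random.randint(1, 5))  # a random-length training string
```

For example, in "AABB" only the final B (index 3) is forced: the first B could have been another A, but once a B appears the counts determine the rest of the string, and the A that starts the next sequence.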
The states are again printed according to the colormap "jet". Note, however, that these "states" are not unique but are instead used to count either the number of A's we have seen or the number of B's we are still expecting to see. Briefly explain how the anbn prediction task is achieved by the network, based on the generated figure. Specifically, you should describe how the hidden unit activations change as the string is processed, and how the network is able to correctly predict the last B in each sequence as well as the following A.

3. [2 marks] Train an SRN on the anbncn language prediction task by typing

python3 seq_train.py --lang anbncn

The SRN now has 3 inputs, 3 hidden units and 3 outputs. Again, the "state" is used to count up the A's and count down the B's and C's. Continue training (and re-start, if necessary) for 200k epochs, or until the network is able to reliably predict all the C's as well as the subsequent A, and the error is consistently in the range of 0.01 to 0.03. After the training finishes, plot the hidden unit activations at epoch 200000 by typing

python3 seq_plot.py --lang anbncn --epoch 200

(you can choose a different epoch number, if you wish). This should produce three images labeled anbncn_srn3_??.jpg, and also display an interactive 3D figure. Try to rotate the figure in 3 dimensions to get one or more good view(s) of the points in hidden unit space, save them, and include them in your report. (If you can't get the 3D figure to work on your machine, you can use the images anbncn_srn3_??.jpg.) Briefly explain how the anbncn prediction task is achieved by the network, based on the generated figure. Specifically, you should describe how the hidden unit activations change as the string is processed, and how the network is able to correctly predict the last B in each sequence as well as all of the C's and the following A.

4. [3 marks] This question is intended to be more challenging.
Train an LSTM network to predict the Embedded Reber Grammar, by typing

python3 seq_train.py --lang reber --embed True --model lstm --hid 4

You can adjust the number of hidden nodes if you wish. Once the training is successful, try to analyse the behavior of the LSTM and explain how the task is accomplished (this might involve modifying the code so that it returns and prints out the context units as well as the hidden units).
LATI 180. Reparations Politics
Comprehensive Annotated Bibliography Assignment

Instructions: This assignment is divided into three distinct sections, each with its own specific guidelines. Please carefully read and follow the instructions for each section, as they are tailored to different aspects of your research on reparations politics.
● Section I: Historical Background involves annotating at least three secondary or tertiary scholarly sources and requires you to provide context for your chosen topic by focusing on key historical events, social factors, and reparations efforts.
● Section II: Analytical Framework involves annotating at least three primary or secondary scholarly sources, analyzing their relevance, context, and contribution to your understanding of reparations politics.
● Section III: Primary Sources involves annotating at least three primary sources, summarizing and analyzing connections to your topic or themes of reparations politics.

Research: You are encouraged to research and combine different types of sources for each section of the annotated bibliography. See the Handout for more: LATI 180_Reparations Politics_Research Tips and Guidelines

Formatting: The final version of each Annotated Bibliography section will have two parts. It is recommended to complete each section as a separate task. Start by selecting and annotating your individual sources, then write up a synthesis of combined sources at the beginning of each section (copy/paste the Model/Template below, p. 5, for formatting).

Section I: Historical Background
Objective: The goal of this section is to provide a tailored historical background that contextualizes the theme of reparations politics in relation to your chosen topic. This background should illuminate the specific injustices of the past and consider efforts to draw attention to or address them.
Length: Approx. 250-750 words for the Historical Background Synthesis; approx.
75-150 words per annotation entry
Citations and Format: Following the MLA style, include proper in-text and Works Cited citations. For guidance on reliable sources, see the attached Handout above.
Content Guidelines: The final version of this section will have two parts (see Template Model below for formatting). The first is a synthesis of combined sources that identifies the historical relevance, social contexts, and reparations efforts. The second part is the annotations, which will include a brief summary and analysis for each individual source.
● Historical Background Synthesis involves identifying key events, examining social contexts, identifying measures taken in reparations efforts, and concluding with your focus questions. In this writing, you should combine information from each of your listed sources and analyze how they are relevant to each other and your topic.
○ Historical Relevance: Identify key historical events directly related to your specific topic of research. How have these events impacted the social, economic, and political realities for the communities affected by the issues you are investigating?
○ Social Context: Examine the social contexts surrounding these historical events and their relevance to your topic. How did societal attitudes and structures contribute to the injustices you are focusing on?
○ Reparations Efforts:
■ Identify and discuss any measures taken in the context of reparations efforts. Consider symbolic actions (e.g., apologies, memorials), legal measures (e.g., lawsuits, legislative initiatives), economic reparations (e.g., direct payments, land restitution), or putative efforts (e.g., educational programs, community investments).
■ Identify the various groups and individuals involved in reparations efforts. Discuss differing interests, agendas, and positionalities concerning your specific area of focus.
This may include victims, activists, policymakers, and institutions, as well as indirect beneficiaries of injustices in society.
○ Focus Question(s): Conclude your synthesis by formulating your own focus questions that will guide the rest of your annotated bibliography. These questions should reflect your specific interests and areas of inquiry related to reparations politics. State what you hope to explore further and how these questions will shape your analysis.
● Historical Background Annotations involves creating short annotations for your chosen sources that summarize and analyze the text. Address the 5Ws and/or the 5As in each entry: Who, What, When, Where, Why and/or Aim, Approach, Argument, Author, and Audience

Section II: Analytical Framework
Objective: The goal of this section is to construct an analytical framework you can apply to your topic to analyze and frame findings.
Length: Approx. 250-750 words for the Analytical Framework Synthesis; approx. 100-200 words per annotation entry
Citations and Format: Following the MLA style, include proper in-text and Works Cited citations. For guidance on reliable sources, see the attached Handout above.
Content Guidelines: Your final version of this section will include two parts (see Template Model below for formatting). The first is a synthesis of combined sources that evaluates their relevance in relation to your topic. The second part is the annotations of individual sources, which will include a brief summary and analysis for each entry.
● Analytical Framework Synthesis involves discussing the analytical frameworks you have found in your research and critically analyzing their relevance and relationship to each other and your topic. Consider:
○ What specific examples, evidence, or data used to support their arguments did you find compelling from the texts?
○ What specific examples, evidence, or data did the arguments leave out of their texts?
○ How do the texts, frameworks, and key concepts relate to each other? ○ How can you apply frameworks offered by scholars to analyze questions related to your own topic? ● Analytical Framework Annotations involve contextualizing each entry with the following items: ○ Background: What is the historical context in which the text was written or created? Who is the intended audience? Where is the author writing “from”? What are some main objectives for producing the text? What issues were they seeking to understand or draw attention to? ○ Summary: Summarize the key points or the main argument of the text with your own words, focusing on key points most relevant to your topic. ○ Key Concepts and Terms: List and define 3-4 concepts or terms that the author(s) use to elaborate their argument or analyze their findings. Section III: Primary Sources Objective: This section should provide short summaries and contextual analyses as well as discuss the relevance of the primary sources to your specific topic and/or general themes related to reparations politics. Length: Approx. 250-750 words for the Primary Sources Synthesis Approx. 100-200 words per annotation entry Citations and Format: Following the MLA style, include proper in-text and Works Cited citations. For guidance on reliable sources, see the attached Handout above. Content Guidelines: The final version of this section will have two parts (see Template Model below for formatting). The first is a synthesis of the listed sources that evaluates the source for relevance to your topic. The second part is the annotations which will include a brief summary and analysis for each entry. ● Part 1. Primary Source Synthesis involves contextualizing the main themes and messages and highlighting their relevance to your specific topic. Consider: ○ Why are these sources significant to the topic of reparations politics? ○ Explain how the sources contribute to your understanding of reparations politics. 
■ What insights do they offer that are particularly relevant to your research question or focus?
■ How might you use them in relation to the frameworks and Key Concepts discussed in Section II: Analytical Framework?
○ Consider the complexities and nuances the sources highlight about the reparations discourse, including differing viewpoints or tensions among groups involved. How might the sources interact with or challenge each other?
● Part 2. Primary Source Annotations involve contextualizing each entry with the following items:
○ Provide a brief overview of the primary source using the 5Ws and/or the 5As, as much as is possible:
■ What (e.g., a testimony, report, artwork), Who, When, Where, Why
■ Aim, Argument, Approach, Author, Audience
○ Summarize the main themes conveyed in the source.
○ As much as possible, identify the author or creator of the source and discuss their background or possible motivations or perspectives related to the issue. Reflect on any limitations or biases present in the source.
○ Discuss the historical and social context of the source in relation to specific events, actors, agendas, etc. discussed above, in Section I: Historical Background.
ECON6008 Sample 2 Numerical Group Project

Question 1
It is demonstrated that with a 1% shock to technology, consumption increases in the same proportion. This is because marginal cost, w_t/a_t, has to remain constant: in the face of a 1% increase in productivity, the wage also increases by the same percentage magnitude. However, since the utility function is in log specification, the income effect and substitution effect cancel each other out, rendering the level of labour supply constant (as shown in graph 3). With labour supply, N, constant, the 1% shock is passed on to output by a_t N_t = y_t, so output increases by the same magnitude. Lastly, because output equals consumption in equilibrium, consumption also increases by the same magnitude, 1% (as shown in graph 1).

Graph 4 shows that inflation does not respond to the technology shock. It is trivial that inflation does not deviate from its steady-state value, as there are no prices in the equations. Also, in the presence of zero inflation, the real interest rate and the nominal interest rate are equal and therefore track each other precisely in the impulse response function. In response to the productivity shock, both interest rates fall forty basis points (0.4 percentage points per annum) below steady state. They then steadily return in the direction of the steady state, reaching a deviation of approximately 5 basis points below the steady state at period 20.

The effects of the productivity shock first flow through to consumption. This then affects λ, the marginal utility of consumption, inversely by the same amount (since λ_t = 1/c_t under log utility). Since consumption slowly returns to the steady-state level, future consumption is expected to be lower, and therefore the marginal utility of consumption in the next period, λ_{t+1}, is greater than in the current period, λ_t. The intertemporal Euler equation demands that present-value marginal utilities be equal across periods, and therefore the interest rate falls.
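The argument in this paragraph can be written compactly. A sketch, assuming the standard log-utility setup in which λ_t = 1/c_t:

```latex
% Log utility implies \lambda_t = 1/c_t.
% A standard consumption Euler equation (sketch):
\lambda_t = \beta \,\mathbb{E}_t\!\left[(1 + r_t)\,\lambda_{t+1}\right]
% After the shock, c_t is high but expected to fall back towards steady
% state, so \lambda_{t+1} > \lambda_t; for the condition to hold, the
% real rate r_t must fall.
```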
But as the contraction in consumption diminishes with time, the interest rate also moves back towards the steady-state level.

Question 2, Part 1
With a zero-inflation policy, the standard New Keynesian model is virtually identical to RBC, since sticky prices have no significance in an economy with no price adjustment at all. So the optimal price and the relative price distortion are constant. When productivity increases by 1%, output also increases by 1%, which in turn increases consumption by 1%. The shock, output and consumption then steadily decline until they reach a 0.2% deviation 20 periods after the initial shock. Following the same argument as Question 1, with zero inflation (shown in graph 4), the real rate equals the nominal rate, and the initial decline of 40 basis points (per annum) from the steady-state level is due to the fact that the marginal utility in the current period is greater than in the following period. As the consumption contraction becomes smaller and consumption approaches the steady-state level, the interest rate also steadily increases towards the steady-state level (shown in graph 2). Labour supply is also constant in this model due to the log-utility specification. However, notice that the dynare output shows a fluctuation of 2x10^-7. This is assumed to be a computational artefact and does not affect our interpretation.

Question 2, Part 2
Compared to the RBC model in Question 1, due to the relative-price distortion, a 1% increase in productivity brings about a dampened increase in consumption, at close to 0.8% (shown in graph 1). The increase in consumption (and hence output) is lower than that of a flexible-price economy (RBC), which opens up a negative output gap that is responsible for the approximately 40 basis point per annum decline in inflation in period 1 (shown in graph 4).
The Taylor-like rule prescribes that the nominal interest rate be lowered in order to close the output gap and increase inflation, as shown in graph 2, with a close to 70 basis point per annum decrease in the nominal interest rate. This is a large decrease in the nominal rate compared to the RBC model, but the real rate falls by less than in RBC (35 basis points per annum compared to 40 in RBC). The productivity shock is also responsible for a persistent employment decline, as graph 3 demonstrates a 0.15% decrease from steady state in labour, compared to no fluctuation in labour in RBC. All variables in the graphs steadily move towards their respective steady states due to the accommodating central bank policy and the decay of the effects of the temporary technology shock.

The difference between this model and the previous models can be attributed to the fact that inflation is no longer stable, and this allows the relative-price distortion to affect the economy. With a predetermined policy, the optimal level of inflation is zero in the steady state, as it minimises both the relative-price and average-markup distortions; therefore the zero-inflation policy yields higher welfare for households. This is also supported by the fact that consumption is higher with the zero-inflation policy than under the Taylor-like rule. Further, τ_π and τ_y need to be above certain values in order for the model to be uniquely determined. Bullard and Mitra (2002) give a necessary and sufficient condition for a unique solution: K(τ_π − 1) + (1 − β)τ_y > 0, where K ≡ λ(σ + φ). In order to stabilize the system, the monetary authority should act aggressively. However, a larger τ_y would generate greater fluctuations in the output gap and inflation, and hence a larger welfare loss. Moreover, the smallest welfare loss is achieved when the monetary authority responds to changes in inflation only. As τ_π increases, the welfare loss becomes lower.
Therefore, by setting τ_y = 0 and letting τ_π be great enough, the Taylor-like rule can mimic the allocation under the optimal Ramsey policy.

Question 3
With a 1% markup shock, the relative-price distortion reduces output, and therefore consumption, by 0.1% (shown in graph 1). This again opens up a negative output gap of 0.13% compared to a flexible-price equilibrium (shown in graph 5). However, with the markup shock firms now set higher prices, creating a positive level of inflation, 40 basis points per annum above steady state (shown in graph 4). While graph 2 shows that the central bank adopts a contractionary monetary policy, it is in fact incapable of closing both gaps at the same time, since such a policy would reduce inflation but also lower output, further widening the output gap. The fact that all variables return to their respective steady states, and more quickly than previously, can be attributed to the small and less persistent (ρ_v = 0.5) markup shock.
Project 2: A new version of the event study

1 Overview

1.1 Project Goals
In this project, we will explore an alternative implementation of the event study discussed in class. This version supports multiple tickers and assumes stock and market return data come from "dirty" data files provided by various sources. In the first part of this project, you will create functions to compute stock and market returns for this new version of the event study. In the second part, you will answer questions regarding the implementation of the code required to compute CARs and t-statistics.

1.2 Overview of Part 1: Calculating stock and market returns
Our implementation of the event study involves five steps:
• Step 1: Download the data
• Step 2: Obtain/calculate stock and market returns (to compute CARs)
• Step 3: Select events of interest
• Step 4: Calculate CARs for each event
• Step 5: Calculate t-stats for downgrades and upgrades using the CARs from Step 4.
In the first part of this project, we will discuss how to implement "Step 2" in this updated version of the event study. This version differs from the one discussed in class in two key ways: first, it allows for multiple tickers; second, it utilizes data from different providers.

1.2.1 The output data frame
The original version of the event study discussed in class focused on a single ticker, TSLA. The result of "Step 2" was a data frame with three columns: date, ret, and mkt. Here, ret and mkt represent returns on the TSLA stock and the market, respectively. That data frame was constructed from two CSV files. To account for multiple tickers, this version of "Step 2" will produce a data frame with a date column, one return column per ticker, and a mkt column, where each ticker column holds the stock returns for the corresponding ticker.
For instance, if we include AAPL and MSFT in our sample, the output data frame would have the columns date, aapl, msft, and mkt.

1.2.2 Data sources
For the purposes of this project, the relevant data comes from two different providers. We assume that "Step 1: Downloading Data" has already been modified to accommodate these new providers. In the revised Step 1, data from these providers is stored in .dat files. There are two types of .dat files, referred to here as prc files and ret files:
• prc files contain historical price and volume data for various tickers, obtained from the first data provider.
• ret files contain historical return and volume data for various tickers, as well as market returns, obtained from the second data provider.
Unfortunately, the data in these files is often unreliable and improperly formatted, requiring data cleaning before use. For example, column headers in .dat files lack a standardized format. We will discuss additional known issues with this data later in this document. Both file types include columns with dates, tickers, and volume; prc files additionally include adjusted prices, and ret files include returns. The actual headers in both files may vary, as may the column order.

1.2.3 Obtaining stock returns
We will assume that data from the first provider is more reliable. Therefore, we will prioritize data from the prc files whenever available, using data from the ret files only when necessary. Specifically, we will calculate returns as follows:
• If a ticker is present in the prc data: compute returns from its adjusted prices, ignoring any data for that ticker in the ret data.
• If a ticker is absent from the prc data: use the return column from the ret data.
This approach excludes all information from the ret files for any ticker found in the ticker column of the prc files, regardless of the values observed there. Data from the ret files will only be used if a ticker is missing from the ticker column of the prc files.

1.2.4 Obtaining market returns
Market returns are only available from the second provider.
Assume there is a special ticker, MKT, which never appears in any prc file but is always included in ret files. This ensures that market returns can consistently be obtained from the ret data.

1.2.5 Summary
Once the data in the prc and ret files has been cleaned, stock and market returns are computed as follows:
1. For each ticker in the prc data, stock returns are calculated from the adjusted closing prices.
2. For each ticker of interest that is not found in the ticker column of the prc data, stock returns are obtained from the return column of the ret data.
3. Market returns are derived from the return column of the ret data using the special ticker MKT.

1.2.6 Example
Suppose that, after cleaning and processing all data in the prc and ret files, the prc data contains the rows
Date(0) A . . . P(A, 0)
Date(1) A . . . P(A, 1)
and the ret data contains the rows
Date(1) B . . . Ret(B, 1)
Date(1) MKT . . . Ret(MKT, 1)
In this case, the output data frame with stock and market returns will contain the single row:
Date(1) Ret(A, 1) Ret(B, 1) Ret(MKT, 1)
Where:
• Ret(A, 1) = P(A, 1)/P(A, 0) - 1 is computed from the prices in the prc data.
• Ret(B, 1) and Ret(MKT, 1) are returns taken from the ret data.
• The column labels are formatted versions of the date, ticker, and market-return labels.

1.3 Overview of Part 2: Short Answers
In the first part of this project, you implemented the new version of "Step 2". In this part of the project, you will answer questions about the new versions of steps 4 and 5. See the section entitled Part 2: Short Answers for more information.

2 Part 1: Completing and submitting your codes

2.1 Preparing PyCharm
You should develop your code within PyCharm. Submission, however, will be through Ed. You will need to copy the main.py file from your project into Ed. Unlike the code challenges, Ed will not provide feedback on your code. You can still submit multiple times before the deadline – only your final submission will be marked.

2.1.1 The Source Files
All required files are included in a zip archive. Please unzip these into your toolkit project folder (toolkit/).
• read_prc_dat(pth: str) -> pd.DataFrame: This function produces a data frame
with volume and returns from a single prc file. It takes the location of this file as its single parameter and produces a data frame with the following columns (in any order): a datetime64[ns] column of dates, an object column of tickers, and two float64 columns of stock returns and volume, all with formatted column names. The original data is unreliable and should be cleaned; see the Cleaning the data section for more information. Column labels and tickers should conform to the format specified by the functions fmt_col_name and fmt_ticker above. Returns should be computed using adjusted closing prices from the original file. Assume that there are no gaps in the time series of adjusted closing prices for each ticker.
• read_ret_dat(pth: str) -> pd.DataFrame: This function produces a data frame with volume and returns from a single ret file. It takes the location of this file as its single parameter and produces a data frame with the same column layout as above: a datetime64[ns] column of dates, an object column of tickers, and two float64 columns of stock returns and volume, all with formatted column names. The original data is unreliable and should be cleaned; see the Cleaning the data section for more information. Column labels and tickers should conform to the format specified by the functions fmt_col_name and fmt_ticker above.

2.4.3 The mk_ret_df function
This function has the following signature:

def mk_ret_df(
    pth_prc_dat: str,
    pth_ret_dat: str,
    tickers: list[str]) -> pd.DataFrame

• Parameters:
— pth_prc_dat, pth_ret_dat: The locations of the prc and ret files, respectively.
— tickers: A list of tickers to be included in the output data frame.
• Output: A data frame with a DatetimeIndex and, in any order, one float64 column per ticker in the list tickers (with formatted ticker labels) and a float64 column with the formatted label representing market returns.
Only observations with non-missing market returns should be included.

2.5 Cleaning the data
Below are some known issues with the prc and ret files.
• Column headers lack a standardised format. For example, the column with adjusted closing prices in prc files may be labeled inconsistently as "adj_close," "Adj close," or "Adj_close."
• Numerical data requires cleaning. For example, the number 0.1234 might appear in .dat files as either 0.1234 or "0 .1234" (with quotes). Additionally, typos are common: the number 0 may be mistakenly recorded as the (uppercase) letter O, and some price columns in prc files contain negative numbers that should be interpreted as errors.
• Null values (NaN) are inconsistently represented. For example, the integer -99 or the float -99.9 is used instead of an empty string.
There may be other issues with the two files provided to you. Your code should deal with any other data issue you encounter.

2.6 Formatting column labels
Assume that original column headers in the .dat files meet the following criteria:
• Column names include only alphanumeric characters and underscores.
• White spaces and underscores may be used to separate words in the original column header, and words can be separated by any number of spaces and underscores (e.g., "Adj Close", "Adj_Close", and "Adj _ Close" all separate the words "Adj" and "Close").
• Column names may include leading or trailing white spaces.
Column names should be formatted according to the following rules:
• Formatted column names should disregard any leading or trailing spaces found in the original column name.
• Words in the formatted column name should be separated by a single underscore, regardless of how they are separated in the original column name.
• Formatted column names should not include uppercase characters.
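A minimal sketch of how the cleaning and label-formatting rules above might be implemented. The clean_number helper and its sentinel list are assumptions (your cleaning must handle whatever issues the actual files contain); fmt_col_name follows the stated rules but is not the official solution:

```python
import re
import numpy as np

# Hedged sketch of the cleaning described above; the exact rules should
# follow the spec and the issues actually present in the files.
def clean_number(x):
    """Parse a messy numeric cell: strip quotes and internal spaces,
    repair O->0 typos, and map the -99 / -99.9 sentinels to NaN."""
    s = str(x).strip().strip('"').replace(" ", "")
    s = s.replace("O", "0").replace("o", "0")
    try:
        val = float(s)
    except ValueError:
        return np.nan
    return np.nan if val in (-99.0, -99.9) else val

def fmt_col_name(name: str) -> str:
    """Format a column label per the stated rules: drop outer spaces,
    collapse any run of spaces/underscores into one underscore,
    lowercase the result."""
    return re.sub(r"[\s_]+", "_", name.strip()).lower()
```

For example, fmt_col_name maps "  Adj  __Close " to "adj_close", and clean_number maps the string '"0 .1234"' to the float 0.1234.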
2.7 Formatting tickers

Ticker values should be formatted according to the following rules:
• Formatted ticker values should disregard any leading/trailing spaces or quotes found in the original ticker.
• Formatted tickers consist of uppercase letters only.
NOTE: Tickers also appear as column labels in the data frame produced by mk_ret_df. In this case, tickers should be converted to lowercase (i.e., column label formatting rules take precedence).

2.8 Example files and test functions

NOTE: The files in the data folder provide examples of the types of files your code needs to handle. Specifically, the data folder includes two example files, prc0.dat and ret0.dat. Keep in mind that these files may or may not include the data issues described above.
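The ticker rules in section 2.7 might be sketched like this (a guess at fmt_ticker's behaviour; the module's own specification of that function takes precedence):

```python
def fmt_ticker(ticker: str) -> str:
    # Drop surrounding whitespace and quote characters, then uppercase.
    return ticker.strip().strip("'\"").strip().upper()
```

For the column labels in the mk_ret_df output, the same cleaned value would then be lowercased instead, per the NOTE above.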
Workshop 7

Q1. Regulators of the Grow1 gene

From a deletion analysis of the regulatory region of the Grow1 gene, you identify a 12bp palindromic sequence (G1) that is important for regulation of the Grow1 gene. You identify 2 new transcription factors that bind this G1 sequence – GA and GB – which have almost identical amino acid sequences in their DNA-binding domains, but little homology to each other elsewhere in the protein. GA is 400 amino acids long. GB is 250 amino acids long.

1.1 What does a palindromic sequence in the G1 binding site suggest?

1.2 What does the amino acid conservation between the DNA-binding domains of GA and GB suggest?

1.3 You clone 4 copies of either a wild type G1 site or a mutant G1 (G1m) site upstream of a TATA box and a luciferase reporter gene to make the plasmids G1-luciferase or G1m-luciferase. You then co-transfect either G1-luciferase or G1m-luciferase reporter genes along with increasing amounts of a plasmid to express either GA or GB. The data are shown below. What can you conclude from these data about the function of GA and GB?

1.4 You then fuse the strong transcriptional activation domain from the VP16 viral transcriptional activator to GB, and co-transfect increasing amounts of GB-VP16 with either G1-luciferase or G1m-luciferase reporter genes. The data are shown below. What can you conclude about the function of GB from these data along with the data above?

1.5 Finally, you transfect a standard amount of GA and increasing concentrations of GB with either G1-luciferase or G1m-luciferase reporter genes. The data are shown below. There are several different explanations for these data – perhaps as many as 5. Describe 2 different models that are consistent with the data, and describe an experiment that would test one of the models that you have proposed. What would your experiment show if the model is correct? And what would it show if the model is incorrect?

Q2.
Regulated activation domains

Signaling pathways that increase cAMP activate PKA, which then moves to the nucleus and phosphorylates and activates the CREB transcriptional activation domain. Activated CREB then increases transcription of c-fos and other genes.

2.1 If you add a protein synthesis inhibitor at the same time as increasing cAMP, will c-fos mRNA and protein levels still increase?

2.2 c-fos transcription is also activated by growth factors that activate a different signaling pathway that activates the MAP Kinase protein kinase. Activated MAP Kinase moves to the nucleus (like PKA), where it phosphorylates the SAP1 transcriptional activation domain and converts SAP1 from an inactive to an active transcriptional activator. You are studying the SAP1 activation domain and make an expression vector that has DNA encoding the Gal4 DNA-binding domain (DBD) fused to the SAP1 activation domain. You transfect this Gal4-SAP1 expression vector into fibroblasts in culture with a reporter gene with Gal4 binding sites, and a co-transfection control. You do this for 2 plates – one to which you add growth factors that activate MAP Kinase, and one to which you do not add any growth factors. Your labmate says that you also need 2 more plates as controls. What should these 2 control plates be transfected with, and should you add growth factors to 0, 1 or both of these plates?

2.3 There are 2 serines in the 100 amino acid SAP1 activation domain (SAP1 AD) that your labmate mutates to alanine in one construct, or aspartic acid in another. These are then fused to the Gal4 DBD to make Gal4-SAP1 AD Ser->Ala and Gal4-SAP1 AD Ser->Asp respectively. You transfect 2 plates of fibroblasts with Gal4-SAP1 AD, 2 plates with Gal4-SAP1 AD Ser->Ala and 2 plates with Gal4-SAP1 AD Ser->Asp and add growth factors to one of the 2 plates for each Gal4 fusion.
You finish the experiment but become confused about which plates you transfected with Gal4-SAP1 AD Ser->Ala and to which you added Gal4-SAP1 AD Ser->Asp. However, you are sure which plates you added growth factors to and which plate had the normal Gal4-SAP1 AD. Which of the expression vectors (m1 or m2) is Gal4-SAP1 AD Ser->Ala and which is Gal4-SAP1 AD Ser->Asp? Briefly explain your answer.

Q3. ChIP

3.1 What is ChIP an abbreviation for?

3.2 Briefly explain the main differences between ChIP and a band shift (gel shift) assay.

3.3 R represses transcription of gene Z via site Z1 in the Z regulatory region. Using ChIP, you find that R immuno-precipitates on the Z1 site in mammalian cells in culture. However, you cannot obtain a gel shift with purified R protein on the Z1 site even though you try all kinds of variations in the reaction buffer to help R bind DNA (e.g. adding zinc etc.). You are concerned, but the head of your lab tells you that you are doing the gel shift correctly, and that your data are actually informative about how R represses transcription. What model could explain your data?

Q4. One more gel shift question

You are using a 150 bp labelled DNA as a gel shift probe and you add increasing concentrations of a DNA-binding protein (DB1) from lanes 2 to 4. You see the following pattern on the autoradiograph of the gel shift. You show this to 2 labmates: one suggests that the pattern is because DB1 can form tetramers on a single site, while the other labmate suggests that the pattern is because there are two different binding sites for DB1 in the 150bp DNA fragment. You decide to test the idea by cutting the 150bp DNA probe into two equal-size 75bp pieces which you use as probes (Probe 1 and Probe 2).

4.1 Draw what you would see if DB1 can form tetramers on a single site

4.2 Draw what you would see if there is one DB1 binding site on each 75bp probe
FASH 303 // The Business of Fashion

Project 3: Design Simulation Project
20% of final grade
Introduced: Class 1
Due: Class 20

WHY (PURPOSE)
The Fashion Scholarship Fund (FSF) is a major national competition that pushes students to research, explore, and provide solutions for major trends influencing our industry today. Completing this project per the brief provided by the FSF (posted to Blackboard) will challenge students to consider deeply the strategic purpose and business intention of their creative vision. This project will require in-depth business analysis, making use of a diverse range of sources and materials, which will enhance students’ research skills and strengthen their preparedness for employment.

WHAT (OVERVIEW)
Students are required to follow the latest FSF brief, posted to Blackboard and accessible via the “Savannah School of Fashion Competitions” community on Blackboard as well. Through in-depth research of markets, business strategies, upcoming product/materials trends, and exploration of current socio-cultural context, you will develop a range of products that mark a new, inspirational and innovative direction for your chosen brand. Your final range must comprise 10-15 forward-thinking products, presented with thorough evidence of research, exploration, and technical information. Refer to the full list of slide deck requirements provided in the FSF brief.

During the development of this project, students are required to attend at least 1 competition extra help session offered by Prof. Andrew Fionda, Competitions Coordinator for the School of Fashion.

SUBMISSION REQUIREMENTS // 20% of final grade
· One PDF file of the completed slide deck, including all required components stipulated by the FSF brief, to be submitted via Blackboard.
· Prepare and deliver a 5-minute live presentation (supported by a PowerPoint or PDF slide deck). The slide deck must be saved on the class dropbox prior to the beginning of class 20.
The completed body of work must be submitted to Blackboard as a single PDF file, no larger than 10MB, using the appropriate link in the “Submissions” section of Blackboard. Formats other than PDF will not be accepted. All necessary files must be uploaded to Blackboard BEFORE the beginning of class presentations.

FEEDBACK SCHEDULE:
· Work-in-Progress Feedback
Feedback will be offered throughout the development of this project through regular in-class reviews. These reviews will include individual progress discussions with faculty, as well as peer-to-peer group discussions. Please refer to the schedule of in-class work-in-progress review activities posted in the course syllabus. Students are welcome to email drafts of work-in-progress to Faculty for one-on-one guidance. When emailing work-in-progress for faculty review, please allow 48hrs for reply.
· Critique and post-submission feedback
Peer and Faculty feedback will be shared verbally during in-class presentations. Particular focus will be drawn to highlight both positive reinforcement and constructive guidance. All students are expected to participate and engage during this process to maximize the learning value of the formal presentation and discussion process. Assignment rubric score and written feedback from Faculty will be posted to Blackboard following submission of the completed project.
· The rubric used for the scoring of this project is accessible under the “Submissions” section on Blackboard.
COMP0233 Planning Delivery Tours - Coursework 1 (Individual) - #001

1 Foreword and Summary

Please READ THIS ASSIGNMENT CAREFULLY. If you have any questions about the assignment, please post them on the Q&A forum in Moodle or email the module leaders. Do not post samples of your code concerning the assignment to public forums such as Moodle.

This assignment asks you to write some code for choosing the location for a new depot to minimise delivery times to the surrounding area. We will describe how the code must behave, but it is up to you to fill in the implementation. Besides this, you will also need to create some tests, and demonstrate your ability to use version control (via git). The exercise will be semi-automatically marked, so it is very important that your solution adheres to the correct file and folder name convention and structure, as defined in the rubric below. An otherwise valid solution that doesn’t work with our marking tool will not be given credit.

For this assignment, you can only use:
• The Python standard library,
• numpy,
• matplotlib, and
• pytest.
Your code should work with Python 3.10 or newer. Some modules in the standard library that might be useful to keep in mind are:
• csv,
• string,
• timeit.

This document is laid out as follows:
• First, we provide the setting of the problem that you will be solving.
• Next, we outline the functionality that should be implemented, and any other tasks you should complete.
• Finally, to assist you in creating a good solution, we state the marking scheme we will use.

2 Setting

You have been contacted by Courier Ltd. (CLtd), a (fictional) delivery company who are looking for help in choosing where to construct their newest delivery depot. CLtd want you to help them decide where to build their delivery depot so they can best serve the major towns and cities, and have provided you with information on the location of these places.
To provide them with an answer, you will need to write some code to make it easier to load, analyse, and visualise the data. We have provided you with a Jupyter notebook (deciding_depot_location.ipynb) containing the workflow you should be able to run once your code is complete.

CLtd have identified a number of locations that they want their new delivery depot to be able to serve, as well as a number of sites that would be suitable for building the depot on. They use the following terminology:
• Location means any town, city, or potential building site for the new depot. It is a broad term for any place of interest that CLtd has identified.
• Settlement will be used to mean “a location that is not a potential building site for the depot”. Think of these as towns, cities, or other industrial areas that will need deliveries sent to them.
• Depot (or “building site”) will be used to mean “a location that is a potential building site”. These are the places where CLtd is thinking of potentially setting up their delivery depot.
• Region is a “collection of locations”. Locations that belong to the same region are typically related in some administrative manner.
• Country is a “collection of regions”. Like how England is a country, which has several counties (regions). A Country of course still has locations in it - those locations that belong to the regions.

CLtd want to build their new depot at a building site that allows them to provide fast delivery times to the surrounding settlements. They have decided that they want to select the building site that minimises the time taken for a delivery to travel from the depot, visit every settlement, then return to the depot. Essentially, CLtd want to minimise the time it would take a single delivery-horse to deliver to every settlement, in a single trip. It will be up to you to write some reliable, reproducible code to determine which location they should build at.
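One way to picture the terminology above in code is the following minimal sketch. This is illustrative only: the assignment specifies the real Location and Country classes elsewhere, and their actual attributes and constructors will differ.

```python
from dataclasses import dataclass

# Hypothetical, stripped-down stand-ins for the assignment's classes.
@dataclass
class Location:
    name: str
    region: str
    depot: bool  # True if this location is a potential building site

@dataclass
class Country:
    locations: list  # every location, across all of the country's regions

    @property
    def settlements(self) -> list:
        # A settlement is exactly "a location that is not a building site"
        return [loc for loc in self.locations if not loc.depot]

    @property
    def depots(self) -> list:
        return [loc for loc in self.locations if loc.depot]
```

The point is only the relationship: every place is a Location, the depot flag partitions them into settlements and depots, and a Country aggregates them.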
2.1 Format of the Data File(s)

The data files that CLtd expect to handle are all comma-separated value (.csv) files, and have a particular structure. Each location in the country is one row in the file, and you are given information about:
• Its name. Note that this comes under the column that is headed “location”.
• Its region. Regions will be important when it comes to working out how quickly a delivery can move between locations. Every location belongs to precisely one region.
• Its polar coordinates.
• Whether the location is being considered as a potential site for the new depot. This is reflected by the value in the depot column, which is either TRUE (if the location is being considered by the company as a depot) or FALSE (it is not).
You can read more about how the dataset was created in the appendix. We have provided you with the data file (locations.csv) concerning the country CLtd is interested in.

2.2 Travel Between Locations

In reality there are a vast number of routes that can be taken between any two given locations, and CLtd has not provided us with any information about the road or transport network. Instead, they have told us to assume that all locations are directly connected to each other. Travelling between two locations takes an amount of time that depends on the physical distance between the locations, and whether or not they share the same region. Travel time in hours between two locations is given by the formula (1), where
• D is the physical distance between the locations, in metres.
• S is the travel speed between the two locations, in metres per second.
• Rdiff (“Different Regions”) is equal to 1 if the locations belong to different regions, and 0 otherwise.
• Nlocs (“Locations in Destination Region”) is equal to the number of locations in the same region as the destination location (including the destination itself).
This has the effect of making travel into busier regions more time consuming than leaving those regions, and represents the interregional “border control” that CLtd expect to encounter.

2.3 Tours

A tour is a sequence of journeys between locations that starts at one particular depot, visits every settlement exactly once, then returns to the depot it started at. Tours are written by listing the locations that the delivery-horse visits, in the order of travel. CLtd need us to find the depot that provides the shortest tour for the country data provided in locations.csv.

For example, consider the country defined below. Within this country, the following are all tours:
• Brightwater -> Sapphire Plaza -> Cobalt Mine -> Cinnabar Beach -> Cardinal’s Rest -> Brightwater.
• Rosewood -> Sapphire Plaza -> Cinnabar Beach -> Cardinal’s Rest -> Cobalt Mine -> Rosewood.
• Bluebell Meadow -> Sapphire Plaza -> Cardinal’s Rest -> Cinnabar Beach -> Cobalt Mine -> Bluebell Meadow.

Also notice that, if there are s settlements within a country, there will be s! possible tours that start at each depot (s! is read as “s factorial”). So, if there are d depots in the country, there are (d × s!) possible tours that can be undertaken. In the example shown above with d = 3 depots and s = 3 settlements there are 3 × 3! = 18 possible tours. If we take the country in the locations.csv file as an example, this provides d = 5 depots and s = 14 settlements, which gives 4.35 × 10^11 (just over 400 billion) possible tours - so you can quickly see why we need a computer to tell us which depot is the best one!

The time taken to complete a tour is calculated by summing the travel times of the individual journeys that make up the tour.

2.3.1 Computing Tours

As mentioned above, there are multiple tours available starting at any given depot in the country, so we need a way to compute tours given a starting depot.
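The tour-count arithmetic from section 2.3 can be checked directly, since d depots and s settlements give d × s! possible tours:

```python
import math

def total_tours(d: int, s: int) -> int:
    # d possible starting depots, s! orderings of the settlements each.
    return d * math.factorial(s)
```

With d = 3 and s = 3 this gives 3 × 3! = 18, and with d = 5 and s = 14 it gives 435,891,456,000, the "just over 400 billion" quoted above.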
To compute an appropriate tour from each depot, CLtd would like you to use the Nearest-Neighbour Algorithm (NNA), which we outline now. We define
• The current location as the location we last moved to, and now want to move out of.
• The starting depot as the depot we want this tour to start and end at.
• The target settlement as the location we are considering moving to next.
The algorithm then proceeds as follows:
1. Set the current location to be the starting depot, and initialise the path as a list that only contains the starting depot.
2. Label all of the settlements we want to visit as part of the tour as unvisited.
3. Select the unvisited settlement that has the smallest travel time from the current location. Set it as the target settlement.
4. Add the target settlement to the end of the path. Set the current location to be the target settlement.
5. Mark the current location as visited.
6. If there are unvisited settlements remaining, go to step 3.
7. Add the starting depot to the end of the path.
Once the final step of the algorithm completes, the path is the computed tour. The travel time to complete the tour can be computed by summing the travel time of the individual journeys.

2.3.1.1 Edge Cases in the NNA

Some operational notes on edge cases in the NNA:
• If there are no locations to visit in the first place, that is if in step 2 there are 0 locations marked as unvisited, then the tour is just given as the starting location (we don’t need to go anywhere). The travel time of the tour should be interpreted as 0.
• It is possible for there to be two locations that are equidistant from the current location, which both happen to have the minimal travel time. We will specify what to do in this case when we discuss your coding tasks.
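The steps above can be sketched in Python as follows. This is a minimal illustration, not the method the assignment asks you to implement: travel_time is assumed here to be a caller-supplied callable, and ties in step 3 are left unbroken (the tie-break rule is specified with the coding tasks).

```python
def nn_tour(depot, settlements, travel_time):
    """Nearest-Neighbour sketch: depot -> each settlement once -> depot."""
    path = [depot]                   # step 1
    unvisited = set(settlements)     # step 2
    if not unvisited:
        return path                  # edge case: nothing to visit, time 0
    current = depot
    while unvisited:                 # steps 3-6
        # step 3: nearest unvisited settlement (ties resolved arbitrarily)
        target = min(unvisited, key=lambda s: travel_time(current, s))
        path.append(target)          # step 4
        unvisited.remove(target)     # step 5
        current = target
    path.append(depot)               # step 7
    return path
```

The tour's total travel time would then be the sum of travel_time over consecutive pairs in the returned path.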
2.3.1.2 Example

Suppose that we have the following locations in a country with only a single region:
• Brightwater (depot)
• Bluebell Meadow (depot)
• Sapphire Plaza
• Cobalt Mine
• Azure Observatory
and the locations have the following travel times to get between each other. We can illustrate this country using a network diagram, as in fig. 1.

Figure 1: The network representing the example country with a single region. Arrows indicate the connections between locations (note every location connects to every other), and the numbers beside the arrows represent the travel time.

In this example we will break any ties in our NNA by selecting the location that comes alphabetically before the other(s). Using the NNA to find a tour starting from Brightwater:
1. The path starts at Brightwater. Brightwater is the current location.
2. Sapphire Plaza, Cobalt Mine, and Azure Observatory are all unvisited settlements. Note that Bluebell Meadow is not a settlement, so we don’t want to visit it on this tour.
3. Sapphire Plaza and Cobalt Mine are both 5 hours travel time from Brightwater, which is the shortest travel time available from this location. Since Cobalt Mine is alphabetically before Sapphire Plaza, Cobalt Mine is set as the target settlement.
4. The path becomes Brightwater -> Cobalt Mine. Cobalt Mine is marked as visited, and the current location moves to Cobalt Mine.
5. From Cobalt Mine, the nearest unvisited settlement is Azure Observatory with a travel time of 2, so we go there next.
6. Then, from Azure Observatory the only remaining unvisited settlement is Sapphire Plaza, which we go to next.
7. Finally, since we have no remaining unvisited settlements, we return to the starting depot (Brightwater).
The final tour is thus Brightwater -> Cobalt Mine -> Azure Observatory -> Sapphire Plaza -> Brightwater. The travel time of the tour is 5 + 2 + 4 + 5 = 16 hours.
If we had chosen to find a tour from Bluebell Meadow instead, we would have obtained the tour: Bluebell Meadow -> Sapphire Plaza -> Cobalt Mine -> Azure Observatory -> Bluebell Meadow. This would have given a total travel time of 2 + 3 + 2 + 3 = 12 hours. In this country, CLtd’s best site for their new depot would thus be at Bluebell Meadow, as it provides the shortest tour via the NNA.

2.4 Goal

Your goal for this assignment is to provide CLtd with some code that will enable them to determine the best site to place their new depot at, given the data provided in locations.csv. What you produce will also need to be general enough that CLtd can use it to perform similar analysis in the future, on different (and potentially much bigger) datasets. As such, your code will need to be reliable, accurate, efficient, and version-controlled. Our marking script will be testing the functionality of your code over the locations.csv file, as well as other data files of the same format but which describe different countries. Therefore, do not assume that testing on the locations.csv file is sufficient proof that your code performs correctly.

3 Project-Wide Tasks

This section covers tasks that you’ll need to be doing throughout your time working on this assignment. They are not restricted to one function or one feature, and affect your submission as a whole. You may find it useful to take a look at the getting started section before you actively start working on the tasks in this section.

3.1 Use of Git

To track your changes as and after you make them, you should work in a git repository. You should initialise this repository inside a folder named depot_locations. You will place the .py files containing your code in this folder. In addition to the code directly inside depot_locations, you should also track the content of both the data and report subfolders (once you create them).
You should make commits as you work through the assignment and add functionality, or complete the various written tasks. When you submit, your repository should contain everything needed to run the code and tests (see below), but no files that are not necessary. In particular, you should not commit “artefact” files that are produced by your code or IDE. Please refer to the course notes for information on how to exclude such files from your repository! As this is an individual assignment, you may work on one or multiple branches, as you prefer. However note that we will only mark the latest commit on the main branch of the repository, and will run git switch main before grading, so be careful about leaving changes on other branches or uncommitted changes to the HEAD of main. Due to our automated marking tool, only work that has a valid git repository, and follows the folder and file structure described in the relevant section of this assignment, will receive credit. You must use GitHub for your work, you need to use the repository that you’ll get access to by accepting this invitation. After you accept the permissions, this will create a repository named depot_locations- . To prevent plagiarism, avoid uploading your work to any public git repository. Failure to comply with this requirement would be considered academic misconduct. You have to submit a link to that GitHub classroom repository to Moodle in order to be considered a valid submission. To do so, write the repository link into a plain text file called submission.txt and submit that file only. 3.1.1 Some Reminders about the Use of Git • Make sure that you give your commits meaningful descriptions! The goal is that someone (you or others) can look at your commit history in the future and get a rough idea of what changes have happened. The messages should guide them in finding when a particular change was made – for example, if they want to undo it or fix a related bug. 
Therefore, avoid vague messages (e.g., “Fixed a bug”, “Made some changes”) or ones that don’t describe the changes at all (e.g., “Finished section 1.2.1”). Prefer the use of concrete messages such as “Check the type of the arguments” or “Add tests for reading data”.
• Do not be afraid to roll back (git revert) your own commits! A big part of why version control is useful - even in solo work - is the ability to revert back to something you know is working.
• If you accidentally commit an artefact file, and notice later, do not panic - there is no need to delete everything and start over. Simply make a new commit that deletes the artefact from the repository, and give it a clear commit message such as: “deleted artefact that was accidentally committed in ”. Do not conflate such commits with other changes though (that is, do not remove an artefact and in the same commit make additional changes). You will not be penalised for committing files that you later identify as artefacts and remove (though it might be better to use .gitignore to avoid accidentally committing them in the first place).

3.2 Testing

We expect you to provide unit tests for the code you write as part of the assignment tasks, which extends to any functions or methods that you break out into smaller chunks. You can write all of these at the end, but we recommend you write them as you go along. You should provide all tests inside a test_country.py file, and use the pytest testing framework. test_country.py should include tests for functions from utilities.py, as well as the Location and Country classes. Running pytest in your submission directory should find the tests inside test_country.py, and the tests should pass:

(my-environment) $ pwd
path/to/my/work/depot_locations/
(my-environment) $ pytest
platform
linux -- Python 3.11.5, pytest-7.4.0, pluggy-1.0.0
rootdir: path/to/my/work/depot_locations/
plugins: None
collected X items

Make sure that your submission includes all files that are needed to run the tests. You are welcome to use the locations.csv file in your test cases, or write your own custom country data for your tests. If you do include your own country data, make sure that you include the data files for your tests in your submission - see the section on your submission file format for where you should place these files.

NOTE: You should also be aware that the locations.csv dataset is quite large - comprising 400 connections between 20 locations. As such, you might find it difficult to manually compute tours on this network for testing purposes.

3.2.1 Code that does not Require Testing

Code that does not require testing will be explicitly indicated in the assignment text; otherwise it should be assumed that tests are required as part of the coding task in question. You do not need to write tests for the functions and methods that are provided to you by us, and you will not receive credit for such tests if they are included. Specifically, the following functions and methods do not require you to write tests for them:
• All functions within plotting_utilities.py.
• Location.__repr__
• Country.plot_country
• Country.plot_path
• regular_n_gon
You do not need to write unit tests for your execution_time.py script. You will not receive credit for such tests if they are included.

3.2.2 Designing and Writing your Tests

At a minimum, you should have at least one test for each of the units and/or cases below:
• Errors with appropriate error messages are thrown when invalid values are encountered during creation of Locations and Countrys. See the requirements for creating Locations and creating Countrys.
• Within the Location class;
∘ The distance_to function.
∘ The use of the == operator.
• For the Country class;
∘ The depots and settlements are correctly identified by the respective properties.
∘ The travel_time method functions as intended.
∘ fastest_trip_from works as intended, including when passed its optional argument.
∘ nn_tour correctly implements the NNA in a case you have manually verified.
∘ best_depot_site correctly identifies the best building site.
However, remember that when writing unit tests your objective is to check;
• the core functionality of the unit,
• error-cases,
• edge-cases,
• deliberate exceptions the unit might have to its normal behaviour.
For the longer, more complex methods and functions we ask you to write, you will need more than one test to provide adequate coverage. Fixtures and test parametrisations, as well as other pytest features, might be good ideas to look into.

Don’t forget that the purpose of unit tests is to “build up”; you do not need to test aspects of your methods that call your other methods / functions that already have unit tests! For example, best_depot_site is likely to depend on nn_tour. Provided your test coverage of nn_tour is sufficient, you don’t need to check that the best depot found the correct tour, since this is covered by nn_tour’s unit tests. You only need to verify that best_depot_site is returning the depot you expect assuming all the tours were computed correctly (the details of how it obtained those tours is then covered by your other methods and their tests)!

3.3 Written Answers, Code Styling, and Documentation

Some tasks will ask you to provide written answers. Place these answers inside the report/report.md file, clearly heading the answers using their section name in the assignment. Any plots you are asked to produce should be committed to the report folder too, saved as .png files. You are expected to present your submission in a clear, readable fashion. You should stick to a consistent coding style as you work, maybe even using a code linter to help you.
We also expect you to write docstrings and leave appropriate comments in your codebase to help a new user understand how to use it.
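As a hedged illustration of the parametrisation suggestion in section 3.2.2, here is a pytest-style test for a hypothetical helper. straight_line_time is invented for this sketch and is not part of the assignment's API; the real travel_time formula also includes the regional penalty terms.

```python
import pytest

def straight_line_time(distance_m: float, speed_ms: float) -> float:
    # Hypothetical helper: travel time in seconds, converted to hours.
    if speed_ms <= 0:
        raise ValueError("speed must be positive")
    return distance_m / speed_ms / 3600.0

@pytest.mark.parametrize(
    "distance_m, speed_ms, expected_hours",
    [(3600.0, 1.0, 1.0), (7200.0, 2.0, 1.0)],
)
def test_straight_line_time(distance_m, speed_ms, expected_hours):
    # Core functionality, checked over several parameter combinations.
    assert straight_line_time(distance_m, speed_ms) == pytest.approx(expected_hours)

def test_invalid_speed_raises():
    # Error-case: a non-positive speed should be rejected.
    with pytest.raises(ValueError):
        straight_line_time(100.0, 0.0)
```

The same pattern (one parametrised happy-path test plus explicit error- and edge-case tests) scales to the Location and Country units listed above.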
ECON6008 Sample 1 Numerical Group Project (Report)

Q1. RBC, under a 1% productivity shock

Interpret the impulse response

The graphs show a 1% increase in productivity under the flexible-pricing equilibrium with perfect competition (RBC). The shock leads to a 1% increase in output according to the production function. The output gap remains zero since there is no deviation of output from potential output. Besides, there is no change in the level of inflation. The productivity shock increases the real wage. According to the model, this leads consumption to increase by the same amount as productivity, which is 1%. Labor is independent of consumption in the household’s utility function, which means it remains constant. However, there is a decrease in the shadow price of consumption. As a result, the real interest rate falls (by around 40 basis points) as well. As the productivity shock is a stationary AR(1) process, the effect diminishes toward zero in the limit and all real variables gradually converge to zero deviation from the steady state.

Q2. (1) NK with zero-inflation policy, under a 1% productivity shock

Interpret the impulse response

In the standard New Keynesian framework, in the special case where the monetary policy authorities manage to keep net inflation at zero, the model is identical to the RBC model in question 1 and monetary policy does not affect any real variables. Hence the impulse responses of the variables are the same as those in the RBC model. The shock also leads to a 1% increase in output. The output gap and inflation remain zero since the equilibrium is identical to that under flexible pricing. The increase in productivity increases the real wage. Consumption increases by the same amount as productivity, which is 1%. Labor is independent of consumption in the household’s utility function, which means it remains constant. However, there is a decrease in the shadow price of consumption.
As a result, the real interest rate falls as well. To accommodate the change in the real interest rate, the central bank fully adjusts the nominal interest rate.

Q2. (2) NK with Taylor-like rule, under a 1% productivity shock

Interpret the impulse response

The graphs show the responses to a 1% increase in productivity under the NK model with Calvo sticky prices, monopolistic competition, and a Taylor-like monetary policy rule. Because the productivity shock follows a stationary AR(1) process, its effect again diminishes toward zero in the limit. Compared with the RBC model, this model incorporates price stickiness and firms' monopoly power. Here, a 1% increase in productivity leads to a less-than-1% increase in real output and consumption (around 0.85%). Inflation also falls, by less than 1% (around 37 basis points). This is because monopoly power leads to a relatively lower output level, and some prices are sticky and remain unchanged for a period of time. Since potential output increases by 1%, the output gap becomes negative. Labour input decreases, since the income effect of the fall in prices dominates the substitution effect. Since the output gap and inflation both fall, the monetary authority responds by lowering the nominal interest rate according to the policy rule. The real interest rate falls as well (though by less than the nominal rate, because inflation is negative). As a result, the output gap rises back toward zero, which in turn pushes inflation back toward zero.

Which policy rule yields higher welfare for the household? What can you say about the role of the values of τπ and τy for the ability of the above Taylor rule to mimic the allocation under the optimal Ramsey policy?

Under a Ramsey policy, monetary policy affects welfare by minimising, where possible, both the level of and the variation in these distortions.
Under a simple interest rate rule, the authorities approximate household welfare around the Ramsey steady state. Given the equilibrium equations of the model and the assumed policy rule, they search for the values of τπ and τy that maximise household welfare. As a result, the second rule yields higher welfare. In the NK model with a Taylor-like rule above, we need to choose the coefficients so that the allocation is as close as possible to that under the Ramsey policy. The two coefficients, τπ and τy, show how important the inflation deviation (relative-price distortion) and the output gap (markup distortion) are to the central bank in stabilising the economy, and different coefficients have different stabilising effects. For example, if τπ is smaller than 1, a fall in inflation leads to a less-than-one-for-one fall in the nominal interest rate, and hence to a rise in the real interest rate. As a result, the negative output gap and negative inflation fall further, and this is not a stationary equilibrium. If τπ is larger than 1, a fall in inflation is met by a more-than-proportionate fall in the nominal interest rate, which lowers the real interest rate. The lower real interest rate shifts the aggregate demand curve to the right, raising output for any given inflation rate; higher output in turn raises inflation. As a result, the output gap and inflation rise, and this response stabilises the economy. If the policy authorities respond strongly enough to the productivity shock, the output gap and inflation can both be stabilised at zero, so the productivity shock creates no policy trade-off between the output gap and inflation. In contrast, in the flexible-price (RBC) model money is neutral: the central bank can affect only the short-term nominal interest rate.
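The Taylor-principle argument above reduces to one line of arithmetic: with the Fisher relation d_r = d_i − d_π and a rule d_i = τπ · d_π, the real-rate response is d_r = (τπ − 1) · d_π. A minimal sketch (the −0.37 percentage-point inflation move echoes the figure quoted earlier; the function name is ours):

```python
def real_rate_response(tau_pi, d_inflation):
    """Change in the real rate when inflation moves by d_inflation,
    under a rule d_i = tau_pi * d_pi and the Fisher relation d_r = d_i - d_pi."""
    return (tau_pi - 1.0) * d_inflation

d_pi = -0.37  # inflation falls by 37 basis points, as in the IRF above

# tau_pi < 1: the real rate RISES after a disinflation -- destabilising
print(real_rate_response(0.5, d_pi))   # +0.185

# tau_pi > 1: the real rate FALLS -- stabilising (the Taylor principle)
print(real_rate_response(1.5, d_pi))   # -0.185
```

The sign of (τπ − 1) is all that matters here, which is exactly the τπ > 1 condition discussed in the text.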
Any change in the nominal interest rate is absorbed one-for-one by inflation and expected inflation, so it does not affect the real interest rate; the real interest rate is determined only by real factors. Monetary policy therefore has no effect on output and can only affect inflation.

Q3. NK with Taylor-like rule, under 1% markup shock

Interpret the impulse response and policy trade-off

Here we consider a 1% positive mark-up shock in the NK model with Calvo sticky prices, monopolistic competition, and a Taylor-like rule. Because the shock follows a stationary AR(1) process, its effect again diminishes toward zero in the limit. The mark-up shock raises the cost of production and the price level, which lowers demand. As a result, the economy faces higher inflation together with lower real output and consumption. Labour input falls, since the wage falls and the substitution effect dominates the income effect. Because the temporary shock does not affect long-run potential output, the output gap becomes negative. With a negative output gap and positive inflation, the monetary authority faces a trade-off between closing the output gap and reducing inflation. If it decides to stimulate the economy, i.e. to lower the nominal interest rate, the negative output gap shrinks, but inflation rises further. If it decides to conduct a contractionary policy, i.e. to raise the nominal interest rate, inflation is brought down while the output gap becomes more negative, and output and consumption fall further. So it is not possible to stabilise both at the same time.
Assignment 5: Portfolio

You will use your skills in Illustrator, Photoshop, and InDesign to create a portfolio that can be viewed online. If you do not have a portfolio, I recommend doing the best you can on this one, since it can be sent to hiring managers if you plan to apply for jobs/internships. We will be going over this assignment one step at a time. Make sure to stay on top of this assignment or it will be difficult to catch up!

Research/Sketches/Layout

You may gather your portfolio pieces from wherever you wish; they can be from this class or somewhere else. I suggest gathering work that is related to what you wish to do as a career, and I recommend having around 6-9 portfolio pieces: you do not want too much or too little work, and this is a good balance.

Start with some sketches of the layout of your portfolio. I suggest mapping out which page will go where and which button will lead to each page. Research a variety of websites that you like and see what they did. You will only have one sketch, but take your time with it and make it look the best you can. Make it 1920 by 1080 pixels.

After your sketch, create your layout in InDesign. You may create assets in the other programs and transfer them to InDesign. Make all the pages you wish to create.

Layout Part 2

After creating the first layout, we will be working with Object States, Buttons, and Animation to bring this portfolio to life. Make sure that the portfolio is easy to navigate from anywhere: have a button that directs the viewer wherever you wish them to go. I MUST NOT BE FORCED TO CLICK THE BACK BUTTON! This will be saved as a PDF with a link that gives access to your portfolio (Publish Online).
COMP5318/COMP4318 Machine Learning and Data Mining, s1 2025

Week 10 Tutorial exercises: Clustering 2

Exercise 1. DBSCAN clustering

Use the DBSCAN algorithm to cluster the items A1, A2, ..., A8. The distance matrix is given below. Assume that Eps=2 and MinPts=2.

Exercise 2. Evaluating clustering quality using the silhouette coefficient

Given are 4 items P1, P2, P3 and P4, which were clustered using a clustering algorithm. The cluster labels and the distance matrix are shown below. Evaluate the quality of the clustering by computing the silhouette coefficient for each point, for each of the 2 clusters, and for the overall clustering.

Distance matrix:
Cluster labels:

Exercise 3. Evaluating clustering quality using correlation

For the data from the previous exercise, evaluate the clustering quality using the correlation between the similarity matrix derived from the distance matrix (given below) and the similarity matrix derived from the clustering results (i.e. the matrix whose ij entry is 1 if the two objects belong to the same cluster and 0 otherwise). The similarity matrix derived from the distance matrix is given below; it was computed from the distance matrix as s = 1 - (d - dmin)/(dmax - dmin), where dmin and dmax are the minimum and maximum distances in the matrix: dmin=0.1 and dmax=0.7.

Similarity matrix:
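As a sketch of the calculations in Exercises 2 and 3, the snippet below computes per-point silhouette scores, s(i) = (b - a) / max(a, b), directly from a distance matrix, and applies the sheet's similarity transform s = 1 - (d - dmin)/(dmax - dmin) before correlating with the cluster-incidence similarities. Note the 4x4 matrix and labels here are illustrative stand-ins, NOT the tutorial's actual data (which is only given as figures on the sheet).

```python
import numpy as np

# Illustrative 4-point distance matrix and cluster labels (NOT the tutorial's data).
D = np.array([
    [0.0, 0.1, 0.6, 0.7],
    [0.1, 0.0, 0.5, 0.6],
    [0.6, 0.5, 0.0, 0.2],
    [0.7, 0.6, 0.2, 0.0],
])
labels = np.array([0, 0, 1, 1])

def silhouette_scores(D, labels):
    """s(i) = (b - a) / max(a, b), where a is the mean distance to the point's
    own cluster and b the smallest mean distance to any other cluster.
    (Singleton clusters are not handled in this sketch.)"""
    scores = []
    for i in range(len(labels)):
        own = (labels == labels[i])
        own[i] = False                      # exclude the point itself from a
        a = D[i][own].mean()
        b = min(D[i][labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return scores

scores = silhouette_scores(D, labels)

# Exercise 3 style: similarity over the distinct pairs (upper triangle),
# correlated against the 0/1 same-cluster incidence similarities.
iu = np.triu_indices(len(labels), k=1)
d = D[iu]
sim = 1 - (d - d.min()) / (d.max() - d.min())
incidence = (labels[:, None] == labels[None, :]).astype(float)[iu]
corr = np.corrcoef(sim, incidence)[0, 1]
```

With this toy data every silhouette score is strongly positive and the correlation is close to 1, i.e. both measures agree the clustering is good; the tutorial's own matrices may of course give different values.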