Department of Digital Humanities
5AAVC204 Management for the Digital Domain I, 2024-2025
Assignment 1

Format: Written assignment (max. 3,000 words excluding references).
Weight: 100%
Deadline: Due 23 April 2025 before 2pm

Brief
Essay questions. Choose one of the five options below. Answer it using the theories discussed in lectures and seminars, and use illustrative examples to support your arguments.
1. To what extent does the nature of managerial practice change in the digital domain?
2. Which is more important to the long-term success of a digital organisation: corporate or competitive strategy?
3. Describe the debate between the planning and learning schools of strategy. Which is better suited to digital organisations?
4. Only a small fraction of organisations are big tech companies or platform monopolies. Do we place too much emphasis on them when we study digital organisations and management?
5. Create your own question. You must focus on one of the weekly topics covered this semester and demonstrate a critical understanding of the topic. You must send your question to the module convenor for approval by Monday 10 March 2025 and have it approved by Friday 21 March 2025. You can only use this option if your question has been approved by that date.

Guidance
This assessment allows for a limited/selective usage of generative AI. When answering your chosen essay question, make sure that you reflect on material from the course and the wider literature. You should develop an argument based on that reflection that answers the question, drawing on both relevant literature and theory. When developing your argument, you can use phrases such as "In this essay I will argue that...". Developing a clear argument is important because it encourages you to narrow down your focus. A well-crafted argument can also help you engage critically with the module's key themes, debates, and literatures. Illustrative examples should be used to support points made.
Guidance on structure
It is up to you how you structure your essay. The goal should be to develop a structure that best supports your argument. We recommend that you introduce your argument through a thesis statement. A thesis statement frames your argument in such a way that the reader understands how it relates to the essay question. A thesis statement:
• makes an argumentative assertion that directly relates to the essay question; it states the conclusions that you have reached about your topic.
• makes a promise to the reader about the scope, purpose, and direction of your paper.
• is focused and specific enough to be fully developed within the boundaries of your paper.
• is generally located near the end of the introduction.
• identifies the relationships between the main components of your argument.
• adheres to the stylistic requirements of academic writing. For example, the argument is nuanced and contextualised. You are not expected to "prove" something to be a statement of fact, but to substantiate why your claim is convincing.
You should focus on fully developing the points and arguments made in the essay. Given that the essay's word limit is 3,000 words, be careful not to introduce more concepts, theories, and points than you are able to cover. Your argument should be critical (i.e., nuanced and focused), draw on appropriate materials from class and readings, and be positioned within a set of existing academic debates.

Support
Students will have the option to ask questions during seminars and in the final lecture. For any further questions, students can also book office hour meetings with their module convenor and seminar leader, or use the dedicated discussion areas on Keats. Essay writing tips and an FAQ page with answers to commonly asked questions about the assessment are available in the assessments section of the Keats page.
The department also runs a weekly Writing Lab where students can learn more about writing essays, arguments, and thesis statements.

Resit information
Students should revise their original submission in accordance with the marker's comments. Parts that have been changed or added in response to the marker's comments should be highlighted in yellow. The marker's comments on the original submission should be added as an appendix to the resubmission, for the marker of the resubmission to consult. Students taking the reassessment should contact either module convenor to arrange an office hour meeting to discuss the feedback received on the original submission and how to address it in the resubmission.
Assignment 5
STAT 321, Winter 2025
Due: Friday April 4, 5:00 PM
Accepted: Monday April 7, 11:59 PM

In this final assignment, you will perform a complete linear regression analysis of an example dataset. You may choose from one of three preloaded datasets. You can use the following commands to load the data and view the documentation of each dataset, replacing [name] by one of chredlin, seatpos, or teengamb.

data
EEL6537 Class Project
Due 04/15/2025

Instruction: You can email me the PDF file of the project report. Your report should be 4 pages, double column, conference paper style. The report should include the following components:
1. A concise description of the problem or application related to spectral estimation. Why is the problem important?
2. A summary of previous solutions to the problem, including their merits and limitations. Your report should have at least a few references to papers in the literature that you have read to help you implement your project.
3. A detailed description of your solution to the problem.
4. Discussion and analysis of your results and how your solution differs from previous work. Please include numerical examples to validate your theoretical analysis.
5. Ideas for how your project could be extended or improved if you had more time.
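Component 4 asks for numerical examples that validate your theoretical analysis. As a minimal illustration of what such a self-check might look like (Python, standard library only; the signal length N, frequency f0, and the use of a noiseless sinusoid are all illustrative assumptions for this sketch, not project requirements), the code below computes a classical periodogram and verifies that the spectral peak lands on the true frequency:

```python
import cmath
import math

# Hypothetical test signal: unit-amplitude sinusoid whose frequency sits
# exactly on a DFT bin (N = 256 samples, f0 = 32/256 cycles/sample)
N = 256
f0 = 32 / N
x = [math.sin(2 * math.pi * f0 * n) for n in range(N)]

def dft_bin(sig, k):
    """One bin of the DFT, computed directly for clarity.

    In practice an FFT routine would be used instead of this O(N^2) form.
    """
    M = len(sig)
    return sum(sig[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M))

# Classical periodogram: |DFT|^2 / N over the non-negative frequencies
periodogram = [abs(dft_bin(x, k)) ** 2 / N for k in range(N // 2 + 1)]

# The spectral peak should land on bin 32, i.e. at frequency f0
peak_bin = max(range(len(periodogram)), key=periodogram.__getitem__)
print(peak_bin, peak_bin / N)  # 32 0.125
```

A real validation section would of course use the estimator under study (e.g. a high-resolution method) and compare its output against the known ground-truth parameters in the same spirit.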
ISE 537 Homework 5 (100 pts in total)
Due April 30, 2024 by 10:00 pm. We expect you to submit two Python notebooks for this assignment.

1. (Value Iteration Algorithm for the Student Dilemma.) Let us consider the so-called Student Dilemma. It is a Markov decision process with 7 states, among which states 5, 6, 7 are terminal states (observe that there are no arrows going out from these states). All the transitions are Markov and are labeled on the picture. For example, in state 1, we can choose to either work or rest. If we rest, then with probability 0.5 we will transition to state 2, and with probability 0.5 we will stay in state 1. The rewards marked in red are the rewards we can get as long as we stay in the state. For example, we will collect reward 0 as long as we are in state 1, no matter if we choose to rest or work. Consider an infinite-horizon decision-making problem (without discount, i.e. γ = 1) until reaching one of the terminal states. The goal is to find the policy that maximizes the expected sum of rewards before reaching a terminal state. Recall the Bellman equation.
(a) The optimal value functions of states 1, 2, ..., 7 satisfy equation (1), with blanks to be filled in.
Question: Please explain how to fill in the blanks.
(b) We also covered in class the Value Iteration Algorithm to calculate the numerical solution of (1):
i. Initialization at iteration 0: let V(0) be any function V(0) : S → R.
ii. Given V(i) at iteration i, we compute V(i+1)(s) = max_a [ r(s, a) + γ E_{s'~P(s,a,s')} V(i)(s') ].
iii. Terminate when V(i) stops improving; e.g., when max_s |V(i+1)(s) - V(i)(s)| is small.
iv. Return the greedy policy π(K)(s) = argmax_a [ r(s, a) + γ E_{s'~P(s,a,s')} V(K)(s') ].
Question: Please implement the algorithm to solve (1) (ideally in a Python notebook). Stop when max_s |V(i+1)(s) - V(i)(s)| drops below 10^-2.
i. How many iterations does it take to reach this error level?
ii. Please report your estimated values of V(1), ..., V(7).
iii. Describe the output greedy policy.

2. (Q-Learning for Cab Driving.)
We use Q-learning to train a smart cab driver (Smartcab). The Smartcab's job is to pick up passengers at one location and drop them off at another. Here are a few things that we'd love our Smartcab to take care of:
• Drop off the passenger at the right location;
• Save the passenger's time by taking the minimum time possible for the drop-off;
• Take care of the passenger's safety and traffic rules.
There are different aspects that need to be considered while modeling an RL solution to this problem: rewards, states, and actions. Please read the IPython notebook "Q Learning for Cab Driving.ipynb" for a more detailed demonstration and explanations.
(a) Implement the Q-learning update at the indicated place in the IPython notebook "Q Learning for Cab Driving.ipynb". You should follow the Q-learning update equation to implement your code: Q(s, a) ← (1 - α) Q(s, a) + α [ r + γ max_{a'} Q(s', a') ].
i. What are the values of q_table[1,1] and q_table[51,3]?
ii. Visualize the array sum_q_arr and explain whether the algorithm has converged or not.
(b) What are the average penalties reported under the evaluation module? Explain the idea of the design.
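The Q-learning update referenced above can be sketched end-to-end on a toy problem. The following is a minimal illustration (Python, standard library only; the 3-state deterministic MDP, learning rate, and discount are invented for this sketch and are unrelated to the Smartcab environment):

```python
import random

# Toy MDP (illustrative): states 0 and 1 are non-terminal, state 2 is terminal.
# Action 0 moves forward one state with reward 1; action 1 stays put, reward 0.
n_states, n_actions = 3, 2
alpha, gamma = 0.5, 0.9          # learning rate and discount (assumed values)
q_table = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Deterministic environment dynamics: (next state, reward)."""
    if a == 0:
        return s + 1, 1.0
    return s, 0.0

random.seed(0)
for _ in range(200):                      # episodes
    s = 0
    while s != 2:                         # state 2 is terminal
        a = random.randrange(n_actions)   # explore uniformly at random
        s_next, r = step(s, a)
        # Q-learning update: Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max_a' Q(s',a'))
        target = r + gamma * max(q_table[s_next])
        q_table[s][a] = (1 - alpha) * q_table[s][a] + alpha * target
        s = s_next

# Greedy policy for the non-terminal states: always move forward
greedy = [max(range(n_actions), key=lambda a: q_table[s][a]) for s in range(2)]
print(greedy)  # [0, 0]
```

Here the optimal values are known in closed form (Q(1, move) = 1, Q(0, move) = 1 + 0.9 = 1.9), so you can confirm that the table converges to them; the Smartcab notebook applies exactly the same update inside a much larger state space.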
EECS 31L: Introduction to Digital Logic Laboratory (Spring 2025)
Lab 1: Logic Block Design (Revision: v1.0)
Due: April 13, 2025 (11:59 PM)

In this lab, you will design a few basic logic blocks in Verilog and then verify your designs to see if they work as expected. For each block, you should create a new project, write the code in Verilog (in *.v source files), and run the simulation in Vivado. Please make sure you complete Lab 0 before working on this lab. We use the first block, the half adder (HA), as an example to show you the whole process of hardware design and simulation. You should follow the same process for the other blocks. Please read this lab manual carefully and in its entirety.

1 Half Adder (HA)
Design a half adder (HA) using Verilog logical operators. As you know, a half adder has two 1-bit inputs (A and B) and two 1-bit outputs (Sum and Cout). The block diagram and truth table for a half adder are shown below. As you see, the Boolean equations for the two outputs (Sum and Cout) are as follows:
• Sum = A XOR B
• Cout = A AND B
The circuit schematic for a half adder is shown below. Follow these steps to design and simulate a half adder in Vivado:
• Create a new RTL project in Vivado.
• Add a design source and choose "Create File" (ha.v). Make sure to change the file type to Verilog. You can skip the I/O port definition (you will manually define the I/O ports by commands in your code). Your new source code (*.v) should now be under "Design Sources" in the "Sources" window.
• Double click on it and enlarge the window. Write the code below and save the file.
Code 1: Half Adder (ha.v).
`timescale 1ns / 1ps  // time-unit = 1 ns, precision = 1 ps (for simulation)

// Define the module
module ha (A, B, Sum, Cout);

// Define the input and output signals
input A;
input B;
output Sum;
output Cout;

// Define the module's behavior
assign Cout = A & B;  // bitwise AND
assign Sum = A ^ B;   // bitwise XOR

endmodule // ha

• Now, to test the functionality of our design, we need to write a testbench. Create a new Simulation Source (ha_tb.v) and write the code below.
Code 2: Testbench for Half Adder (ha_tb.v).

`timescale 1ns / 1ps  // time-unit = 1 ns, precision = 1 ps (simulation)

module ha_tb ();

// Define the input and output ports
reg A_tb = 0;
reg B_tb = 0;
wire Sum_tb;
wire Cout_tb;

// Port mapping
ha instant (
  .A(A_tb),
  .B(B_tb),
  .Sum(Sum_tb),
  .Cout(Cout_tb)
);

// Test samples
initial  // initial block executes only once
begin
  A_tb = 1'b0; B_tb = 1'b0;
  #10;  // wait for 10 time units (10 ns in this example)
  A_tb = 1'b0; B_tb = 1'b1;
  #10;  // wait for 10 time units (10 ns in this example)
end

endmodule // ha_tb

• Save the code and run the simulation. To simulate the ha_tb module, right click on it, click "Set as Top", then run the simulation.
• Check the waveform. From time 0 to 10 ns, the inputs are A=0, B=0, and the outputs are Sum=0 and Cout=0, as expected. For the next 10 ns, we have A=0, B=1, and the outputs are Sum=1 and Cout=0. Add a few more tests to the testbench code and see the result on the waveform. Put a screenshot of the waveform for two more test cases, with explanation, in your report.

2 1-bit Full Adder
Now try to design a 1-bit full adder using Verilog logical operators. Below you see the diagram, the truth table, and the circuit schematic for a 1-bit full adder.
As you see, the Boolean equations for the two outputs (Sum and Cout) are as follows:
• Sum = A XOR B XOR Cin
• Cout = (A AND B) OR (A AND Cin) OR (B AND Cin)
Please implement a 1-bit full adder using Verilog logical operators. Define the input and output signals and complete the module behavior.
Code 3: 1-bit Full Adder (fa.v).

`timescale 1ns / 1ps

// Module definition
module fa (A, B, Cin, Sum, Cout);

// Define the input and output signals

// Define the full adder module's behavior.

endmodule // fa

Simulate your code. Write a testbench for your design and run the tests below.
Test 1 (run for 20 ns): A='0', B='1', Cin='0'
Test 2 (run for 20 ns): A='1', B='1', Cin='0'
Test 3 (run for 20 ns): A='1', B='1', Cin='1'
Put a screenshot of the waveform in your report.

3 4-bit Full Adder
Design a 4-bit full adder in Verilog. Below, you see the block diagram for a 4-bit full adder, followed by a circuit schematic. As you can see in the schematic, a 4-bit full adder can be implemented with four instances of 1-bit full adders. Blue lines inside the box are not connected to any input or output ports; we need to define them in the code as wires. Define the signals, complete the code, and run the simulation. Write a testbench for your design and run the tests below.
Test 1 (run for 20 ns): A="0110", B="0100", Cin='0'
Test 2 (run for 20 ns): A="1000", B="1001", Cin='1'
Test 3 (run for 20 ns): A="1110", B="0010", Cin='0'
Test 4 (run for 20 ns): A="1010", B="1011", Cin='0'
Check the outputs (Sum and Cout) to see if they are correct. Put a screenshot of the waveform in your report. Your screenshot should show both input and output signals.

Code 4: 4-bit Full Adder (fa4.v).

`timescale 1ns / 1ps

// Module definition
module fa4 (A, B, Cin, Sum, Cout);

// Define the input and output signals

// Define the full adder module's behavior

endmodule // fa4

4 2:1 Multiplexer
Design a 1-bit, 2-to-1 multiplexer.
You should have a 1-bit select input S that chooses between two 1-bit inputs (D1 and D2). When S is '0', the output equals D1; when S is '1', the output equals D2. Below is the block diagram for the 2:1 multiplexer. Use the code skeleton below for your module declaration.
Code 5: 2:1 Multiplexer (mux21.v).

`timescale 1ns / 1ps

// Module definition
module mux21 (
  input S,
  input D1,
  input D2,
  output Y
);

// Define the MUX 2:1 module's behavior

endmodule // mux21

Write a testbench (mux21_tb.v) for your design and run the tests below.
Test 1 (run for 20 ns): D1='0', D2='1', S='0'
Test 2 (run for 20 ns): D1='0', D2='1', S='1'
Check the output (Y) to see if it is correct. Put a screenshot of the waveform in your report.

5 4:1 Multiplexer
Design a 1-bit, 4-to-1 multiplexer. You should have a 2-bit select input S to choose from four 1-bit inputs. Below is the block diagram for the 4:1 multiplexer. Use this code skeleton for your design.
Code 6: 4:1 Multiplexer (mux41.v).

`timescale 1ns / 1ps

// Module definition
module mux41 (
  input [1:0] S,
  input D1,
  input D2,
  input D3,
  input D4,
  output Y
);

// Define the MUX 4:1 module's behavior.

endmodule // mux41

Write a testbench (mux41_tb.v) for your design and run the tests below.
Test 1 (run for 20 ns): D1='0', D2='1', D3='0', D4='1', S="00"
Test 2 (run for 20 ns): D1='0', D2='1', D3='0', D4='1', S="01"
Test 3 (run for 20 ns): D1='0', D2='1', D3='0', D4='1', S="10"
Test 4 (run for 20 ns): D1='0', D2='1', D3='0', D4='1', S="11"
Check the output (Y) to see if it is correct. Put a screenshot of the waveform in your report.

6 Assignment Deliverables
Your submission should be a *.zip file submitted to Gradescope. The ZIP file should include the following items:
• Source code: module designs and testbenches (fa.v, fa_tb.v, fa4.v, fa4_tb.v, mux21.v, mux21_tb.v, mux41.v, mux41_tb.v). (Remember: in Verilog, the file name does NOT have to be the same as the module name.)
• PDF report: a report in PDF format including the simulation results.
Note 1: Start working on the lab as early as possible.
Note 2: Compress all files (8 *.v files + report) into one ZIP file named "lab1_UCInetID_firstname_lastname.zip" (note: your UCInetID is your email user name; it is an alphanumeric string), e.g., "lab1_sitaoh_sitao_huang.zip", and upload this ZIP file to Gradescope before the deadline.
Note 3: Use the code skeletons given in the lab description. The module part of your code (module name, module declaration, port names, and port declaration) should not be changed.
Note 4: It is fine to discuss the lab with others, but please write the code by yourself.
Note 5: Make sure that your code has good readability, with proper variable naming and comments. You may lose points if your code lacks readability.
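When debugging waveforms, it can help to compare against a quick software "golden model" that predicts the expected outputs for each test vector. The sketch below is purely illustrative (Python, not part of the required Verilog deliverables) and mirrors the ripple-carry structure of the 4-bit adder in this lab:

```python
def full_adder(a, b, cin):
    """1-bit full adder: Sum = A ^ B ^ Cin, Cout = AB + ACin + BCin."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def full_adder_4bit(a, b, cin):
    """Ripple-carry 4-bit adder built from four 1-bit full adders (LSB first)."""
    total, carry = 0, cin
    for i in range(4):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        total |= s << i
    return total, carry

# Test vector from the lab manual: A="0110", B="0100", Cin='0'
s, cout = full_adder_4bit(0b0110, 0b0100, 0)
print(f"{cout}{s:04b}")  # 01010  (6 + 4 = 10, no carry out)
```

Running the remaining test vectors through this model gives the Sum/Cout values your Vivado waveform should show; any mismatch points to a bug in either the Verilog or the testbench stimulus.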
NE/PS 212: Exam 2 Extra Credit Question

1 Extra Credit Question
For exam 2, here is the extra credit question. It is worth 6 points. Please write a function that performs two-sample t-tests with equal or unequal variances. The inputs to the function should be the following:
1. X: the first vector of numbers
2. Y: the second vector of numbers
3. type: equal variances or unequal variances. You can use numeric values for type (say 1 for pooled and 2 for unpooled).
The function must return the t-statistic, the degrees of freedom, and the p-value for the test. The function must also print out whether you reject the null hypothesis that the means are equal. Please name the function newttest and provide appropriate comments. Here is a skeleton of my function:

function [tStat, pValue, DF] = newttest(X, Y, vartype)
% This function performs an unpaired two-sample t-test.
% It can perform both pooled and unpooled t-tests.
%
% more comments here...
%
% CC, BU, 2020
if vartype == 1
    % Performing a pooled variance t-test
else
    % Performing an unpooled variance t-test
end

Please test your code with the following data generated using these commands:

rng('default');
X = 3 + 4*randn(1,20);
Y = 6 + 7*randn(1,30);

Run your code with X and Y as defined above, and also make sure you check them with the built-in MATLAB functions for t-tests:

[h,p,ci,st] = ttest2(X,Y,'vartype','unequal')
[h,p,ci,st] = ttest2(X,Y,'vartype','equal')

1.1 Expected Results
• For the unpooled case, I get the DF to be 46.0894, the p-value to be 0.7696, and my t-value to be -0.2946.
• For the pooled case, I get the DF to be 48, the p-value to be 0.7789, and my t-statistic to be -0.2823.
Here are the equations for the t-statistic for equal and unequal variances.

1.2 Pooled Variance
The pooled variance/standard deviation case assumes that the true standard deviation of both groups is the same, and that we can use both sample standard deviations together to get a better estimate of that true standard deviation:

s_p^2 = [ (n1 - 1) s1^2 + (n2 - 1) s2^2 ] / (n1 + n2 - 2)

t = ( x̄1 - x̄2 ) / ( s_p * sqrt(1/n1 + 1/n2) )

The degrees of freedom in this case are n1 + n2 - 2.

1.3 Unpooled Variance
The assumption that the underlying variance for both groups is the same is not applicable in many settings. The t-statistic with unpooled variance is given by:

t = ( x̄1 - x̄2 - (μ1 - μ2) ) / sqrt( s1^2/n1 + s2^2/n2 )

The null hypothesis is μ1 = μ2, so this reduces to:

t = ( x̄1 - x̄2 ) / sqrt( s1^2/n1 + s2^2/n2 )

The degrees of freedom for this case (the Welch-Satterthwaite approximation) are:

DF = ( s1^2/n1 + s2^2/n2 )^2 / [ (s1^2/n1)^2 / (n1 - 1) + (s2^2/n2)^2 / (n2 - 1) ]
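The expected results above depend on MATLAB's rng('default') stream, so they cannot be reproduced outside MATLAB, but the formulas themselves are easy to cross-check. A minimal Python sketch (standard library only; the tiny data vectors are hypothetical hand-checkable examples, and the p-value step via the t distribution CDF is omitted for brevity):

```python
import math

def two_sample_t(X, Y, vartype):
    """Two-sample t statistic and degrees of freedom.

    vartype 1 = pooled (equal variances), 2 = Welch (unequal variances).
    """
    n1, n2 = len(X), len(Y)
    m1, m2 = sum(X) / n1, sum(Y) / n2
    v1 = sum((x - m1) ** 2 for x in X) / (n1 - 1)   # sample variances
    v2 = sum((y - m2) ** 2 for y in Y) / (n2 - 1)
    if vartype == 1:
        # Pooled variance: weighted average of the two sample variances
        sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
        t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
        df = n1 + n2 - 2
    else:
        # Welch: unpooled standard error, Welch-Satterthwaite df
        se2 = v1 / n1 + v2 / n2
        t = (m1 - m2) / math.sqrt(se2)
        df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Tiny hand-checkable example (illustrative data, NOT the exam's MATLAB data):
# means 2 vs 5, variances 1 vs 20/3, so Welch t = -3/sqrt(2)
t_w, df_w = two_sample_t([1, 2, 3], [2, 4, 6, 8], vartype=2)
print(round(t_w, 4), round(df_w, 2))  # -2.1213 4.08
```

Exporting the MATLAB-generated X and Y and running them through the same formulas should reproduce the t-statistics and DF values quoted in the expected results.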
QBUS6320 S1 2025 Assignment 1

This is an individual assignment. It is worth 10% of your final grade. It consists of three questions, each worth different marks. It is due on Friday 4 April at 11:59pm and must be submitted through Canvas using Turnitin. The submission will comprise two separate parts:
1. A typed report (PDF please) that addresses all questions and contains images of all relevant tables, charts, and decision trees within the report. The report must be able to be read as a standalone document.
2. An Excel file containing all the original tables, charts, and decision trees. The Excel file is provided for backup and corroboration purposes.
Failure to submit both files by the due date will result in late penalties being applied. Additional instructions appear after the questions.

Question 1 (25 marks)
The Australian Institute of Sport (AIS) is considering whether to introduce mandatory drug testing for all athletes. Knowing that drug tests are not completely reliable, they want to use decision tree analysis to see whether the benefits outweigh the costs.
Probabilities: If an athlete is tested for a particular drug, the test result will be either positive or negative. However, as these tests are not completely accurate, some athletes who are drug free test positive (a false positive) and some athletes who are drug users test negative (a false negative). The best data we have available suggests that 8% of all athletes use drugs, 2.5% of all tests on drug-free (DF) athletes result in false positives, and 7.5% of all tests on drug-using athletes result in false negatives.
Monetary values: The monetary values are difficult to assess but include the following:
• The benefit, B, from correctly identifying a drug user and banning them.
• The direct cost, C1, of a test.
• The cost, C2, of violating a non-user's privacy by performing the test.
• The cost, C3, of falsely accusing a non-user and banning them.
• The cost, C4, of not identifying a drug user and allowing them to participate.
We measure all benefits and costs relative to C1 (the direct cost of a test). We assign C1 a value of -1 to indicate that it represents a cost. All other monetary values are expressed as multiples of C1's magnitude. For example, a value of -2 represents a cost twice as large as the direct testing cost, while a value of +25 represents a benefit 25 times larger than the testing cost. The index values are shown in Table 1.1.

Table 1.1 Cost/benefit index
C1   -1
C2   -2
C3  -20
C4  -10
B   +25

Questions:
1. In Excel, create two net-benefit pay-off tables that map the net benefit of either testing (four different states) or not testing (two different states) against the decision to ban or not ban an athlete. Include the pay-off tables in your report. Note: the first table should be expressed in index notation (+B, -C1, -C2, etc.) while the second table should state the net benefit in numerical terms based on the values in Table 1.1. For example, if a positive test is obtained for a non-drug user and this athlete is banned, there are three associated costs: the cost of the test (-1), the cost of violating the athlete's privacy (-2), and the cost of falsely accusing the athlete (-20). (6 marks)
2. Calculate the relevant posterior probabilities. Include any Bayes tables generated in Excel in your report. (4 marks)
3. Based on the values in the net-benefit pay-off table and the Bayesian probabilities, create a decision tree using Precision Tree that will help the AIS decide whether they should implement mandatory drug testing. Note: be careful to avoid double counting the costs (for example, do not include the C1, C2, C3, or C4 costs at multiple decision points if they have already been accounted for in earlier calculations). Include your decision tree in your report. (8 marks)
4.
For the given assumptions around the costs and benefit, outline the best strategy and its net benefit, and discuss this solution. (2 marks)
5. Conduct a brief sensitivity analysis, giving reasons why you might change the relative index values. Discuss how this might impact the original solution. (5 marks)

Question 2 (35 marks)
The University of Sydney's procurement office has invited Intelligent Computing (iC) to tender on a new contract. The contract calls for the supply of 200 generic desktop computers and associated accessories, which will be used for digital in-place exams. All vendors must fulfil the order within 6 weeks of contract award. Despite the urgency, the contract specifications are generic, and so the university has informed all bidders that the low bid will win the contract. iC believes that the cost of preparing the bid will be $10,000 and the cost of supplying the computers will be $190,000. The bids are sealed, so iC has no information about the value of the bids their competitors will submit. However, in the last 12 months iC has managed to poach several key employees away from vendors who are competing for the contract, and so iC has a good understanding of how the competitors may behave. In summary, iC believes that the size and probability of a low competitor bid will be:

Table 2.1
Low bid                          Probability
Less than $230,000               0.20
Between $230,000 and $240,000    0.40
Between $240,000 and $250,000    0.30
More than $250,000               0.10

In addition, because of supply chain constraints, iC thinks there is a 30% chance there will be no rival bid. Further, assume iC's bid will never equal any competitor's bid.

Part A (20 marks)
A.1. Based on this information, create a pay-off table in Excel which outlines iC's most logical bid prices given potential competitor bid options. Include relevant probabilities. (8 marks)
A.2. Based on the pay-off table and any other relevant information, use Precision Tree to create a decision tree which sets out the problem.
Include an image of the whole tree in your report. (10 marks)
A.3. Using this decision tree, indicate the strategy that maximises EMV for iC. What is that optimum value? (2 marks)

Part B (15 marks)
Use the sensitivity analysis function on the Precision Tree toolbar to vary the following inputs:
• Bid preparation costs: +/-10% in 1% increments
• Supply cost: +/-10% in 1% increments
• No-competing-bid percentage: a minimum of 0% to a maximum of 60% in 5% increments
B.1. Run a one-way sensitivity analysis on the entire decision tree model, selecting either the tornado graph or the spider graph. For the tornado graph: show how each variable impacts the expected value. For the spider graph: display variable values on the x-axis and expected value on the y-axis. Include an image of the chosen graph in your report and provide a short interpretation of the graph. (5 marks)
B.2. Run a second one-way sensitivity analysis on the bid value decision node (not the entire tree) and create a strategy region graph. Include an image of the graph - which plots the probability of no competing bids against expected value - in your report, along with a short interpretation of the graph. (5 marks)
B.3. Run a two-way sensitivity analysis on the bid value decision node and create another strategy region graph, including an image of the graph in your report along with a short interpretation. (5 marks)
Note: Precision Tree's sensitivity functionality has not been explicitly covered in lectures. Part B provides an opportunity for students to explore this for themselves and differentiate the quality of their submissions based on their responses.

Question 3 (40 marks)
The following table outlines the potential pay-offs for three separate investments.
Table 3.1 (pay-offs in $'000)
Outcome   Investment A         Investment B         Investment C
          Pay-off   Prob.      Pay-off   Prob.      Pay-off   Prob.
1         18.0      10%        27.0      20%        18.0      20%
2         36.0      30%        45.0      30%        45.0      40%
3         61.0      30%        61.0      20%        72.0      20%
4         90.0      30%        99.0      30%        90.0      20%

Write a report for potential investors which ranks the three investments in terms of their attractiveness. Your report should be approximately 500 words. All diagrams, tables, and risk profiles used to support your analysis should be generated in Excel using Precision Tree. As you do not know the individual risk preferences of investors, you might wish to consider multiple risk perspectives:
- Expected monetary value (EMV)
- Risk measures (variance, standard deviation)
- Risk attitudes (risk-neutral, risk-averse, risk-seeking)
- Potential for extreme outcomes (downside risk)
- Dominance

Other instructions
Word count: 1,200 words +/-10%, excluding tables, decision trees, charts, and references. Any words beyond 1,320 will not be marked. Submissions below 1,080 words may be penalised.
Style: This is a business report, not an essay. The report should:
- Have a suitable cover page.
- Be divided into 3 distinct sections.
- Be concise. Using bullet points is acceptable.
- Be professionally and logically laid out, with good grammar and spelling.
Marks will be deducted for submissions that do not meet these requirements.
Precision Tree: If Precision Tree is not used where required, the best mark possible will be 50% of the available marks.
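The risk measures suggested for Question 3 are mechanical once a pay-off distribution is written down. A minimal sketch of the arithmetic (Python; the pay-off distribution below is hypothetical, not one of the Table 3.1 investments, so it does not pre-empt your analysis):

```python
# Hypothetical discrete pay-off distribution ($'000, probability)
payoffs = [(20.0, 0.25), (50.0, 0.50), (80.0, 0.25)]

# Expected monetary value (EMV): probability-weighted average pay-off
emv = sum(x * p for x, p in payoffs)

# Variance and standard deviation around the EMV
variance = sum(p * (x - emv) ** 2 for x, p in payoffs)
std_dev = variance ** 0.5

# A simple downside measure: probability of falling below the EMV
downside = sum(p for x, p in payoffs if x < emv)

print(emv, variance, round(std_dev, 3), downside)  # 50.0 450.0 21.213 0.25
```

The same quantities computed for each of the three investments (in Excel / Precision Tree, as required) give the raw material for ranking them under risk-neutral, risk-averse, and risk-seeking perspectives.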
Rubric
There is a general rubric relating to this assignment in the Assignment 1 Canvas module where this brief is located; it provides detailed information regarding the quality expectations for the submission.

Late penalties
Reports submitted after the due date will incur a late penalty of 5% per day or part thereof. Reports more than 10 days late will not receive a mark.
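For orientation, the Bayes-rule mechanics behind the Question 1 posterior probabilities can be sketched as follows (Python, using the rates stated in the brief; this illustrates the calculation only - the assignment still requires the Excel Bayes tables and the Precision Tree model):

```python
# Prior and conditional test accuracies from the Question 1 brief
p_user = 0.08              # P(athlete is a drug user)
p_pos_given_free = 0.025   # false positive rate on drug-free athletes
p_neg_given_user = 0.075   # false negative rate on drug users

p_free = 1 - p_user
p_pos_given_user = 1 - p_neg_given_user

# Marginal (total) probability of a positive test
p_pos = p_user * p_pos_given_user + p_free * p_pos_given_free

# Posterior probabilities via Bayes' rule
p_user_given_pos = p_user * p_pos_given_user / p_pos
p_user_given_neg = p_user * p_neg_given_user / (1 - p_pos)

print(round(p_pos, 3), round(p_user_given_pos, 4), round(p_user_given_neg, 4))
```

An Excel Bayes table reproduces exactly these joint and marginal products row by row, which is the format the marking expects.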
EECS 31L: Introduction to Digital Logic Laboratory (Spring 2025)
Lab 1: Logic Block Design

This is a very brief lab report template. It provides a possible outline for your report. Your lab report should clearly explain your design and testing results - readers of your report should be able to reproduce your design solely by reading your report. NOTE: Please refer to each lab manual for detailed instructions on what to include in your report. You will lose points if you do not include all the required content.

1 Overview
Give a summary or a high-level overview of the design. Use a block diagram of your design to explain inputs, outputs, and the relation between them. If your design has more than one module, explain how they are related.

2 Hardware Design
In this section, describe how you designed your hardware modules. If necessary, you may show some Verilog code samples. A truth table helps in finding the Boolean equation (if you want to use logical operators) or, in general, the relation between inputs and outputs. For more complex designs you can also draw the schematic of your design, which includes the components of the design and their connectivity. You can use any software, such as PowerPoint, to draw your figures.

3 Simulation Results
Show your simulation results here. Explain your test cases and how you designed them to cover all combinations of inputs. Put screenshots of your simulations here; you may need more than one image if necessary. Also, define the signals in the screenshots and explain how and when input changes cause changes in the output. If there is something different from what you expect, explain why.
BAFI1005 Financial Markets and Institutions
Assessment 3: Research Report - Case Study

Overview
The assessment will involve a case study pertaining to different financial markets and instruments. Students will be required to prepare a comprehensive Investment Strategy Research Report for the client.

Learning Outcomes
The targeted Course Learning Outcomes for this assessment are:
• CLO1: Identify the nature and key components of financial systems domestically and globally to apply in diverse contexts.
• CLO2: Identify the nature, role, and determinants of the structure and level of interest rates in economic and financial contexts.
• CLO3: Analyse the characteristics and functions of the main financial intermediaries and the role of regulatory bodies in financial systems in a global context.
• CLO4: Assess the operations of the foreign exchange market, including the nature and determinants of exchange rates and relevant investment strategies.
• CLO5: Explore the main features and theorems of capital markets, institutions, and securities, including debt securities, equity, and derivative products.

Assessment details
• The assessment is a scenario-based research analysis report. You are required to conduct independent research and write a 3,000-word (+/-10%) report.
• This assessment includes all the content covered in Topics 1 to 10.
• The assessment is worth a total of 50 marks and accounts for 50% of the total grade for this course.

Formatting
• The report must be presented in standard report structure.
• The report must be presented and submitted as a Microsoft Word document.
• You may use hand-drawn diagrams where applicable.
Include the image of the diagram - use balanced resolution so the information in the image is legible. • Guidelines for text formatting: o Font style: Arial o Font size: 12 (14 for headings) o Spacing: 1.5 line spacing o Page No: Page x Final Assessment – Research Report Case Study Background: As a financial advisor, you have the responsibility to educate and empower clients with a deep understanding of market and economic systems to remove the mystery and fear associated with investing. This approach fosters confidence and informed decision-making, enabling clients to invest wisely regardless of external economic conditions. Your task is to prepare a comprehensive Investment Strategy Report for your new client, Mr. Cheong. This report should align with Mr. Cheong's client profile, including his wealth, risk preferences, and investment objectives. The client information is available in the Appendix of this document. Report requirements: Your report should be well-supported with examples and credible sources, including peer-reviewed journal articles, papers, books, industry reports, institutional reports, regulatory standards, and official materials. Do not rely solely on general websites for information. A minimum of 8 references is expected. Detailed instructions for each section: Executive summary: Provide a brief summary of the key findings, recommendations, and the client's profile and objectives. Introduction: Offer context for the report and its purpose. Section 1: Market conditions and Monetary Policy • Discuss the current market conditions both domestically and internationally, highlighting factors that may influence investment opportunities. • Discuss differences in the Monetary Policy Implementation Process in Singapore compared to Australia. • Explain the intermediate target for monetary policy set by the MAS and highlight the objectives and tools that the MAS employs to achieve this target.
• Offer examples of economic indicators that provide insights into future stages of the business cycle. • Explain how changes in key economic indicators influence the Singapore Economy, referencing the brief of recent Economic Development in Singapore (See additional material in Appendix 3). Section 2: Asset class discussion • Select four distinct asset classes that you are knowledgeable about and conduct a comprehensive analysis of their defining characteristics, associated risks, potential returns, and provide illustrative examples for each asset class. You are encouraged to utilise diagrams, charts, tables and figures in the discussion. • Explain to Mr. Cheong the pros and cons of the rights issue, assuming he owns 1000 Tesha shares and the scenario below: In 2024, Tesha, Inc. conducted a rights issue to raise additional capital for its growth and expansion plans. Shareholders were given the opportunity to purchase additional Tesha shares at a discounted price on a ratio of 1:5. Tesha's stock price before the rights issue announcement was approximately $1000 per share. The discount price is $800 per share. Assume Tesha has 1,000,000 shares and all shareholders exercise their rights. Section 3: Funds under management • Considering Mr. Cheong's preference for cost-effective index funds and his pursuit of promising returns, provide an overview of funds in his portfolio. • Investigate and evaluate the historical performance, risk, and return of these funds compared to benchmark indexes. • Introduce two new funds aligned with Mr. Cheong's needs. • Using the provided data for Marina Horizon Fund (Appendix 4), explain fund performance by calculating the Coefficient of Variation, Sharpe ratio and Jensen's Index and provide an interpretation of the results. Section 4: Hedging using Derivatives • Discuss the potential use of derivatives (e.g., options or futures) to hedge Mr. Cheong's portfolio against adverse market movements.
• Explain the benefits and risks associated with derivatives-based hedging strategies. • Assess the alignment of these strategies with Mr. Cheong's risk tolerance and overall investment plan. Conclusion: Summarize key takeaways from the report. Emphasise the importance of informed decision-making in achieving Mr. Cheong's investment objectives.
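As a numerical cross-check for the Tesha rights-issue discussion in Section 2, the standard theoretical ex-rights price (TERP) arithmetic can be sketched directly from the figures in the brief (the use of the TERP formula is our assumption; confirm the required approach against the course material):

```python
# Figures taken from the Tesha scenario in the brief.
shares_out = 1_000_000   # shares outstanding before the issue
cum_price = 1000         # market price before the announcement ($)
sub_price = 800          # discounted subscription price ($)
ratio = 1 / 5            # 1 new share offered per 5 shares held

new_shares = int(shares_out * ratio)   # shares issued if all rights are exercised
terp = (shares_out * cum_price + new_shares * sub_price) / (shares_out + new_shares)

holding = 1000                          # Mr. Cheong's current holding
entitlement = int(holding * ratio)      # new shares he may buy at $800
print(f"TERP = ${terp:.2f}; entitlement = {entitlement} shares")
```

The gap between the $1000 cum-rights price and the TERP is the dilution effect a shareholder avoids only by exercising (or selling) the rights, which is the core of the pros-and-cons discussion.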
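For the Marina Horizon Fund calculations in Section 3, the three measures have standard textbook forms. The sketch below uses placeholder inputs only, since Appendix 4 is not reproduced here; every numeric value is an assumption to be replaced with the appendix data:

```python
# Placeholder inputs -- substitute the Appendix 4 figures.
fund_return = 0.10      # hypothetical mean fund return
fund_sd = 0.15          # hypothetical standard deviation of fund returns
risk_free = 0.03        # hypothetical risk-free rate
beta = 1.2              # hypothetical fund beta
market_return = 0.08    # hypothetical benchmark (market) return

# Coefficient of variation: risk per unit of return.
cov = fund_sd / fund_return
# Sharpe ratio: excess return per unit of total risk.
sharpe = (fund_return - risk_free) / fund_sd
# Jensen's Index (alpha): return above the CAPM-required return.
jensen = fund_return - (risk_free + beta * (market_return - risk_free))
```

Interpretation then follows directly: a lower coefficient of variation, a higher Sharpe ratio, and a positive Jensen's alpha each indicate stronger risk-adjusted performance.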
MKT101A Marketing Assessment 3: Design Project Assessment Key Information Weighting 40% Due date Week 11, Friday 23.55 AEST/AEDT NOTE: late submission or non-submission may attract penalties. Refer to Assessment, Submission, Extension and Feedback Procedures and Grading Policy for details. Group or Individual? This is an individual assessment Word Count/Length 1500 words +/-10% NOTE: the word count of this assessment should NOT include the table of contents, reference list, appendices and any other tables or figures. Subject Learning Outcomes In this assessment, you will be tested on whether you have successfully met the following Subject Learning Outcomes (SLOs): c) Explain the various contemporary marketing strategies and how they may be used in the industry you work in d) Review the key components of a marketing plan Assessment Type and Required Format The type of assessment you will be completing is a: • Report It should cover the following sections: • ICMS Assessment Cover Sheet • Table of Contents • Introduction • Body/Findings • Conclusion • References Suggestion: Students are to use the following headings and sub-headings to craft the Body/Findings section of their report: 1. The Company (Insert table) 2. The SWOT Matrix for the selected Company (Insert matrix/table) 3. The Contemporary Marketing Issue 3.1. Introduction of the Contemporary Marketing Issue 3.2. Validation and justification as to why it was chosen/or is needed 4. The New Market Offering 4.1. Brief description 4.2. Feasibility 4.3. Infographic/Poster Assessment Track o Track 1 conducted in supervised in-person settings or synchronously online. o Track 2 conducted in unsupervised settings.
Assessment Details Assessment Purpose The purpose of this assessment is to develop your ability to investigate, analyse and evaluate a company's marketing environment with a view to proposing a new marketing idea or revised marketing offering (development) which involves the implementation of one (1) of the Contemporary Marketing Issues that has been discussed in this subject. This assessment provides the opportunity for you to showcase what you have learnt in this subject. These competencies are highly relevant to a career in Marketing, as well as to anyone entering the general business arena. Artificial Intelligence (AI) Use AI USE AUTHORISED In this assessment there are certain elements authorised for use of artificial intelligence (AI) tools to support achievement of the learning outcomes. This is in accordance with the guidelines and the assessment instructions below. Please contact your Lecturer for further information. Guidelines for authorised use of AI You are recommended to use AI tools from the list below: • Co-Pilot • Canva • Microsoft Designer • Chatbot App You can use these AI tools in the following ways: • Conducting preliminary research about your new product/service • Generating passages of text that must then be transformed to apply to a student's particular context • Producing a definition that provides a basis for further discussion and evaluation • Assisting in the creation of a new product/service prototype, your infographic/poster, or other visual components of your design project report, including representations of your findings and theories. Any use of AI tools must be appropriately acknowledged. If the AI tools you wish to use are not included on the list provided, you should discuss this with your Lecturer first. Follow the ICMS Style Guide for how to reference and acknowledge the use of AI: ICMS Style Guide. You CANNOT use AI tools to generate your whole assessment submission.
IMPORTANT NOTE: • Refer to the Artificial Intelligence in Education (AIED) Framework and Use of Artificial Intelligence (AI) in Assessment Guidelines for further information. • Engaging with AI tools without authorisation can result in a breach of academic integrity. • Using AI in an unethical or irresponsible manner, such as copying or paraphrasing the output without citation or evidence; or using the output as your own work without verification or integration; or using the output to misrepresent your knowledge or skills, is considered a form of academic misconduct. Refer to the Academic Integrity Policy and Procedures for more details.
Excel Assignment (Commerce 1DA3) Please make sure to review the guidelines on this page before reviewing the questions. The deadline to submit the Excel file is 11:59PM, on Tuesday April 08. Before you start the assignment 1. Problem 2 of the assignment uses the Data Analysis Toolpak. Please download the instructions to load it in MS Excel here. Note: If you do not have access to MS Excel, you can use MS Excel Online on Office Online. Download and review the instructions to access Office Online here. Submission Guidelines: 1. Download the Excel file (template) for the assignment from the Avenue to Learn folder. 2. For Problem 1, to customize the data for each student, please enter the last 4 digits of your student ID in the Cells F2, F3, F4 and F5 before starting the analysis and creating the PivotTable. Your mark for this question will be based on this step! For example, if your student ID is 400568015, you need to enter 8015 in each one of the 4 cells (F2, F3, F4, F5) before starting the analysis. If the last 4 digits of your student number start with 0s, Excel will display only the significant digits. For example, 0002 will display only as 2, and that is ok. 3. For Problem 1, make sure that the pivot table you created (for any of the questions) is included in the submission file. For each question, copy the data generated by the PivotTable, paste it into the sheet named “Problem 1 – Visualizations” and create the visualization in that sheet. Do this for each of the three questions asked in Problem 1. This means the data generated by the PivotTable and the visualization attached to it for all three questions can be found in the sheet “Problem 1 – Visualizations”. 4. For Problem 1, make sure that the data based on which you create the visualization is there and the visualization is live and attached to the data. Please do not paste screenshots of the visualizations. 5.
For Problem 2, to customize the data for each student, please enter the last 4 digits of your student ID in the Cells A2, A3, A4 and A5 before conducting the test. Your mark for this question will be based on this step! For example, if your student ID is 400560218, you need to enter 0218 in each one of the 4 cells (A2, A3, A4, A5) before starting the analysis. If the last 4 digits of your student number start with 0s, Excel will display only the significant digits. For example, 0218 will display only as 218, and that is ok. 6. For Problem 2, make sure the report is placed in Cell D15. Also, provide your answers in the textbox that is already placed in the Excel sheet. If for any reason you cannot add your answer to the textbox, add a new textbox and enter your answers in that. 7. Please note that this assignment is an individual component, and no sharing of files and results is allowed. 8. You can find the submission folder on Avenue to Learn under Assessments > Assignments. 9. Make sure your submission to Avenue is a single MS Excel file (extension .xls or .xlsx). We will not be able to open files in other formats. It is your responsibility to ensure that the file you upload is in the correct format and can be opened without any issues. You may want to download the file from Avenue to Learn after uploading it, and make sure it can be opened and contains your answers. Problem 1 As the marketing manager of a fast fashion online retailer, you need to prepare key data points for an upcoming visit from the Chief Marketing Officer (CMO). The CMO is particularly interested in analyzing the company's spending patterns. You have been provided with the purchase data in an Excel file for the period January 1, 2025, to January 31, 2025. The database includes six data variables: PO Number, Vendor, Purchasing Department, Contracted Vendor, Emergency Purchase, and PO Amount.
Their descriptions are as follows: • PO Number: Each time the company makes a purchase, the relevant department must issue a purchase order (PO). A PO number is a unique identifier assigned to each purchase order (PO) for tracking purposes. • Vendor: The supplier from which the purchase is made. • Purchasing Department: The department responsible for initiating the PO and procuring goods or services from the vendor. • Contracted Vendor: A categorical variable indicating whether the vendor is contracted or non-contracted. o Contracted vendors have pre-negotiated agreements with the company, ensuring standardized terms, pricing, and reliability. o Non-contracted vendors do not have such agreements, leading to potentially higher costs and variable terms. • Emergency Purchase: A categorical variable indicating whether the purchase was made under urgent circumstances. o Emergency purchases occur when there is an immediate need for goods or services due to unforeseen events, bypassing standard procurement procedures. • PO Amount: The total amount payable to the vendor upon receipt of the goods or services by the purchasing department. For this problem, you need to create three visualizations. Each visualization will be created based on data generated by the Pivot Tables in Excel. Using the PivotTable tool, generate a visualization for each of the following questions. You do not need to add axis labels or a plot title: a) Create a bar graph to display the sum of PO Amount for each purchasing department. Make sure the variable Purchasing Department is placed on the horizontal axis. b) Create a bar graph to display the average of PO Amount for each vendor. Make sure the variable Vendor is placed on the horizontal axis.
c) Create a pie chart to display the sum of PO Amount for each category of emergency procurement, i.e., the total PO Amount for orders where Emergency Procurement was equal to Yes and the total PO Amount for orders where Emergency Procurement was equal to No. Problem 2 (requires Data Analysis Toolpak) As the district manager for a retail clothing chain, you manage two different stores in your district. You want to determine if the average daily sales differ between the two stores: Store X, located in a shopping mall, and Store Y, situated in a downtown area. Using the data provided in the Excel sheet named “Problem 2 – data”, conduct a two-sample t-test and provide your conclusion at the 5% significance level. Make sure to provide the test result and the answer to the questions directly in the Excel sheet. For the two-sample t-test, assume unequal variance. Note: MS Excel uses the scientific exponential format to show very small and very large numbers. For example, 1.23E-12 means 1.23 × 10^-12. Similarly, 1.23E+12 means 1.23 × 10^12. The p-value may be represented using this format.
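For Problem 2, the quantity Excel's Toolpak computes is Welch's two-sample t statistic (unequal variances). As an independent cross-check of your Excel output, the statistic and the Welch–Satterthwaite degrees of freedom can be computed by hand; the sample data below is made up purely for illustration (replace it with the two stores' daily sales, and read the p-value off Excel's report):

```python
from statistics import mean, variance

# Hypothetical daily sales figures -- placeholders, not the assignment data.
store_x = [520, 480, 510, 495, 530, 505]
store_y = [470, 465, 490, 455, 480, 460]

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)   # sample variances / n
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(store_x, store_y)
```

If the numbers match what the Toolpak reports for "t Stat" and "df", your Excel setup is correct; the conclusion at the 5% level then comes from comparing Excel's two-tail p-value to 0.05.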
Operating Systems COMP 310 – ECSE 427 Project Part #3: Memory Management Due: April 4, 2025 at 23:59 1. Assignment Description This is the final assignment. After implementing the memory manager, you will have built a simple, simulated OS. So far, our simulated OS supports simple Shell commands and is capable of doing process management according to different scheduling techniques. The assumption in the second assignment was that processes fully fit in the Shell memory (like what we saw in the Virtual Memory lecture). In this assignment, we will extend the simulation with demand paging. Similar to Project Part 2, this assignment is larger than Part 1 and MP. Plan your time wisely and don’t hesitate to ask questions on Discord if you get stuck. 1.1 Starter files description: You have two options: • [Recommended] Use your solution to Assignment 2 as starter code for this assignment. If your solution passes the public unit tests, it is solid enough to use as a basis for the third assignment. • Use the official solution to Assignment 2 provided by the OS team as starter code. 1.2 Your tasks: Your tasks for this assignment are as follows: • Add scaffolding for paging. • Design and implement demand paging. • Implement the LRU replacement policy in demand paging. On a high level, in this assignment we will allow programs larger than the shell memory size to be run by our OS. We will split the program into pages; only the necessary pages will be loaded into memory and old pages will be switched out when the shell memory gets full. Programs executed through both the source and the exec commands need to use paging. In addition, we will further relax the assumptions of exec by allowing the same program to be executed multiple times by exec (remember that in Assignment 2 the programs run by exec needed to be different).
For simplicity, in our tests for this assignment we will only consider the RR (round robin) policy, with a time slice of 2 instructions, with no multithreading and no background execution (the # parameter). However, the paging mechanism will work with the other scheduling policies as well, if it is implemented correctly. The API for exec does not change, so you still need to specifically pass the RR argument. More details on the behavior of your memory manager follow in the rest of this section. Even though we will make some recommendations, you have full freedom for the implementation. In particular: • Unless we explicitly mention how to handle a corner case in the assignment, you are free to handle corner cases as you wish, without getting penalized by the TAs. • You are free to craft your own error messages (please keep it polite). • Just make sure that your output is the same as the expected output we provide in the test cases in Section 2. • Formatting issues in the output such as tabs instead of spaces, new lines, etc. will not be penalized. Let’s start programming! 1.2.1. Implement the paging infrastructure We will start by building the basic paging infrastructure. For this intermediate step, you will modify the source and exec commands to use paging. Note that, even if this step is completed successfully, you will see no difference in output compared to the source/exec commands in Assignment 2. However, this step is crucial, as it sets up the scaffolding for demand paging in the following section. As a reminder from Assignment 1, the source API is: source SCRIPT. Executes the commands in the file SCRIPT. source assumes that a file exists with the provided file name, in the current directory. It opens that text file and then sends each line one at a time to the interpreter. The interpreter treats each line of text as a command. At the end of the script, the file is closed, and the command line prompt is displayed.
The exec API is: exec prog1 prog2 prog3 POLICY • As in Assignment 2, we will not be testing recursive exec calls. • Unlike Assignment 2, exec now supports running the same script (i.e., exec can take identical arguments, so exec prog1 prog1 is a legal instruction). You will need to do the following to implement the paging infrastructure. 1. Set up process sharing. o When exec is passed the same script name twice, we will want to share memory between them. Modify your existing code loading setup from A2 so that two scripts with the same name can share their loaded code. If you loaded code contiguously, you have something like base+bounds virtual memory. Allow multiple processes with the same name to use the same base and bounds. 2. Partitioning the Shell memory. The shell memory will be split into two parts: o Frame store. A part that is reserved to load pages into the shell memory. The number of lines in the frame store should be a multiple of the size of one frame. In this assignment, each page consists of 3 lines of code. Therefore, each frame in the frame store has 3 lines. If you implemented code memory separately from variable memory in A2, then there’s probably very little to do here. o The frame size of 3 is an unfortunate complication (a power of two would usually make more sense). We have set it up this way so that we can test interesting cases of demand paging without having you implement more scheduling policies. For address translation, you will probably want to use the / and % operators to separate the page number and offset respectively. o Variable store. A part that is reserved to store variables. You are free to implement these two memory parts as you wish. For instance, you can opt to maintain two different data structures, one for loading pages and one for keeping track of variables.
Alternatively, you can keep track of everything (pages + variables) in one data structure and keep track of the separation via the OS memory indices (e.g., you can have a convention that the last X lines of the memory are used for tracking variables). We think maintaining separate data structures is easier. For now, the sizes of the frame store and variable store can be static. We will dynamically set their sizes at compile time in the next section. 3. Code loading. The shell will load script(s) into the frame memory as follows. o The given script files are used to load program pages into the frame store. If exec receives the same filename more than once, it is only loaded once. o At this point, you will load all the pages into the frame store for each program. This will change in the next section. Unlike Assignment 2, where you were encouraged to load the scripts contiguously into the Shell memory, in this assignment the pages will not be contiguously loaded. For example, you can load into memory as follows for 2 programs that each have 2 pages (i.e., 6 lines of code per program). Experiment with different orders to check that your paging implementation is working. Frame store: lines 0-2: prog2-page0 lines 3-5: prog1-page0 lines 6-8: prog2-page1 lines 9-11: prog1-page1 o However, for grading purposes, when a page is loaded into the frame store, it must be placed in the first free spot (i.e., the first available hole). 4. Creating the page table. For each script, a page table needs to be added to its PCB to keep track of the loaded pages and their corresponding frames in memory. You are free to implement the page table however you wish. A possible implementation is adding a page table array, where the values stored in each cell of the array represent the frame number in the frame store.
For instance, in the example above, the page tables would be: Prog 1: pagetable[0] = 1 //frame 1 starts at line 3 in the frame store pagetable[1] = 3 //frame 3 starts at line 9 in the frame store Prog 2: pagetable[0] = 0 //frame 0 starts at line 0 in the frame store pagetable[1] = 2 //frame 2 starts at line 6 in the frame store You may want to also keep validity information separately, or you could use an unreasonable frame number like -1 to indicate that an entry is invalid. Note that you will also need to modify the program counter to be able to navigate through the frames correctly. For instance, to execute prog 1, the PC needs to make the transitions between the 2 frames correctly, accessing lines: 3,4,5,9,10,11. Assumptions (for now): o The frame store is large enough to hold all the pages for all the programs. o The variable store has at least 10 entries. o An exec/run command will not allocate more variables than the size of the variable store. o Each command (i.e., line) in the scripts will not be larger than a shell memory line (i.e., 100 characters in the reference implementation). o A one-liner is considered as one command. If everything is correct so far, your source/exec commands should have the same behavior as in Assignment 2. You can use the existing unit tests from Assignment 2 to make sure your code works correctly. 1.2.2. Extend the OS Shell with demand paging We are now ready to add demand paging to our shell. In Section 1.2.1, we assumed that all the pages of all the programs fit in the shell memory. Now, we will get rid of this assumption. 1. Setting shell memory size at compile time. First, you need to add two compilation flags to adjust the frame store size and variable store size at compile time as follows. o In gcc, you will need to use the -D compilation option, which replaces a macro with a value: -D NAME=VALUE. o In make, you can pass the value of the variable from the command line.
Example: At the command line: make xval=42 In the Makefile: gcc -D XVAL=$(xval) -c test.c In test.c: int x=XVAL; o Using the technique described above, your shell will be compiled by running make mysh framesize=X varmemsize=Y where X and Y represent the number of lines in the frame store and in the variable store. You can assume that X will always be a multiple of 3 in our tests and that X will be large enough to hold at least 2 frames for each script in the test. The name of the executable remains mysh. o Print the following message at shell startup, instead of the version message: ”Frame Store Size = X; Variable Store Size = Y” Where X and Y are the values passed to make from the command line. o To reiterate: this is printed instead of the version message. Please make sure your program compiles this way and that the memory sizes are adjusted. 2. Code loading. Unlike the previous Section, in this section the code pages will be loaded into the shell memory dynamically, as they become necessary. o In the beginning of the source/exec commands, only the first two pages of each program are loaded into the frame store. A page consists of 3 lines of code. In case the program is smaller than 3 lines of code, only one page is loaded into the frame store. Each page is loaded in the first available hole. o The programs start executing, according to the selected scheduling policy (in our case, RR with a time slice of 2 lines of code). 3. Handling page faults. When a program needs to execute the next line of code that resides in a page which is not yet in memory, a page fault is triggered. Upon a page fault: o The current process P is interrupted and placed at the back of the ready queue, even if it may still have code lines left in its “time slice”. The scheduler selects the next process to run from the ready queue. o If you want your A3 solution to work with policies other than RR, you might not want to always place P at the back of the ready queue.
Again, such implementation details are up to you! o The missing page for process P is brought into the frame store from the file. P’s page table needs to be updated accordingly. The new page is loaded into the first free slot in the frame store if a free slot exists in the frame store. o If the frame store is full, we need to pick a victim frame to evict from the frame store. For now, pick a random frame in the frame store and evict it. We will adjust this policy in Section 1.2.3. Do not forget to update P’s page table. You also need to update any page tables that were using the frame you evicted! To accomplish this, you will need a mapping from frames to the page table(s) that use them. Make sure you keep this bookkeeping structure up-to-date whenever you modify the contents of frames! Upon eviction, print the following to the terminal: ”Page fault! Victim page contents:” ”End of victim page contents.” Upon page faults when the frame store is not full, print the following: ”Page fault!” o P will resume running whenever it comes next in the ready queue, according to the scheduling policy. o When a process terminates, you should not clean up its corresponding pages in the frame store. Note that, because the scripting language is very simple, the pages can be loaded in order into the shell memory (i.e., for a program, you can assume that you will first load page 1, then pages 2, 3, 4, etc.). This greatly simplifies the implementation, but be aware that real paging systems also account for loops, jumps in the code, etc. Also, our pages are always read-only, so you never have to worry about backtracking in the backing files to overwrite updated data. Obviously, real paging systems have to account for that as well. Also note that if the next page is already loaded in memory, there is no page fault. The execution simply continues with the instruction from the next page, if the current process still has remaining lines in its “time slice”.
This will happen when the same program is used by multiple processes. If you are especially astute and thinking very hard, you might notice that it’s possible for one process to load some pages of a program, and then another process running the same program uses them so much later that they have been evicted. This would require backtracking in the backing file. This is very difficult to implement correctly, since our pages do not have a fixed number of bytes, and so the tests will not cause this to happen. You can simply pretend that it will not happen. If you do choose to try and handle it, beware. 1.2.3. Adding Page Replacement Policy The final piece is adjusting the page replacement policy to Least Recently Used (LRU). As seen in class, you will need to keep track of the least recently used frame in the entire frame store and evict it. You must implement accurate LRU, not an approximation. Note that, with this policy, a page fault generated by process P1 may still cause the eviction of a page belonging to process P2, so all of the same bookkeeping is necessary. 2. TESTCASES We provide 5 testcases and expected outputs in the starter code repository. These are the public tests. There may be hidden tests for this assignment. We have not decided yet. Please run the testcases to ensure your code runs as expected, and make sure you get similar results in the automatic tests. You are strongly encouraged to add more of your own tests as well. If there are hidden tests, it is highly unlikely that you will fail any hidden tests if you are passing all the public ones. As with A2, you need to run the given test cases from the A3/test-cases/ directory, and your code for the assignment should remain in the project/src directory as with the other project parts. IMPORTANT: The grading infrastructure uses batch mode, so make sure your program produces the expected outputs when testcases run in batch mode.
You can assume that the grading infrastructure will run one test at a time in batch mode, and that there is a fresh recompilation between two testcases. 3. WHAT TO HAND IN The assignment is due on April 4, 2025 at 23:59. Your final grade will be determined by running the code in the GitLab repository that is crawled by our grading infrastructure. We will take into account the most recent commit that happened before the deadline, taking any requested late days into account, on the main branch of your fork. The project must compile on our server by running make clean; make mysh framesize=X varmemsize=Y The project must run in batch mode, i.e. ./mysh < testfile.txt Feel free to modify the Makefile to add more structure to your code, but make sure that the project compiles and runs using the commands above. Note: You must submit your own work. You can speak to each other for help, but copied code will be handled according to McGill regulations. Submissions are automatically checked via plagiarism detection tools. 4. HOW IT WILL BE GRADED Your program must compile and run on our server to be graded. If the code does not compile/run using the commands in Section 3, in our grading infrastructure you will receive 0 points for the entire assignment. If you think your code is correct and there is an issue with the grading infrastructure, contact the instructor. Your assignment is graded out of 20 points. You were provided 5 testcases, with expected outputs. If your code matches the expected output, you will receive 2 points for each testcase. There might be an additional 5 hidden test cases, in which case there will be 2 points per hidden testcase. (If we decide not to have hidden tests, you will receive 4 points for each of the given test cases.) You will receive 0 points for each testcase where your output does not match the expected output. Differences in whitespace in the output, such as tabs instead of spaces, new lines, etc. will not be penalized.
The TA will look at your source code only if the program runs (correctly or not). The TA looks at your code to verify that you implemented the requirements as requested. Specifically:
• Hardcoded solutions will receive 0 points for the hardcoded testcase, even if the output is correct.
• You must write this assignment in the C programming language; otherwise, the assignment will receive 0 points.
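The accurate-LRU bookkeeping required in Section 1.2.3 can be prototyped before committing to C. Below is a minimal sketch, in Python purely to illustrate the logic (your submission itself must be in C); the frame-store size, key format, and page contents are invented for illustration.

```python
from collections import OrderedDict

class LRUFrameStore:
    """Toy model of a frame store with accurate (not approximate) LRU.

    Frames are shared across processes, so a fault raised by one
    process may evict a frame belonging to another, as the handout
    notes. Keys and contents here are hypothetical.
    """

    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()  # key -> contents, oldest first

    def access(self, key, contents):
        """Touch a frame; return the evicted victim key, if any."""
        if key in self.frames:
            self.frames.move_to_end(key)  # mark as most recently used
            return None
        victim = None
        if len(self.frames) == self.nframes:
            victim, _ = self.frames.popitem(last=False)  # evict true LRU
        self.frames[key] = contents
        return victim

store = LRUFrameStore(2)
store.access(("P1", 0), "page a")
store.access(("P1", 1), "page b")
store.access(("P1", 0), "page a")          # P1 reuses its first page
print(store.access(("P2", 0), "page c"))   # P2's fault evicts ("P1", 1)
```

The key point the sketch demonstrates is that recency is tracked over the whole store, not per process: the victim belongs to P1 even though the fault came from P2.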
CSCI 421 Numerical Computing, Spring 2025
Homework Assignment 7

1. (20 points) Implement the (normalized) power method (AG p. 289), inverse iteration (AG p. 294) and the Rayleigh quotient iteration (AG p. 295). They all use the notation μk = vk^T A vk for the Rayleigh quotient (note that the usual Rayleigh quotient denominator vk^T vk is always one). In inverse iteration, for efficiency you need to do the LU factorization of the matrix A − αI outside the loop and then use forward and back substitution (using \ is fine) inside the loop. This is not possible for the Rayleigh quotient iteration because the matrix depends on the iteration counter k. In all cases compute and display the eigenvalue residual norm ‖Avk − μk vk‖ at the end of the loop and terminate when either the residual norm is less than 10^-14 or the iteration counter k reaches 100. Also print k, μk and the eigenvalue residual norm at the end of the loop using fprintf. Run all 3 iterations on the tridiagonal matrix T computed from calling tridiagExample.m, with inputs N=100 and nonsym=0.1. Set v0 to the vector of all ones. For inverse iteration, use α = 4. Using semilogy, plot the residual norms of all 3 methods on the same plot. Explain carefully how the plot demonstrates what we discussed about the convergence of these 3 methods in class. Note that the Rayleigh quotient iteration does not find the largest eigenvalue of T, unlike the other 2 methods.

2. (20 pts) As I mentioned in class, if we first compute the eigenvalues of a matrix A and then later we decide we want the eigenvector corresponding to a computed eigenvalue λ, we can do this very effectively by solving the equation (A − λI)x = b, where b is chosen randomly. This amounts to just one step of inverse iteration (the inverse power method) using the shift λ. The reason it works is that λ is not exactly an eigenvalue of A and hence A − λI is not exactly singular.
It is very badly conditioned, so there is a lot of rounding error incurred in solving the equation, but luckily, and very surprisingly, the rounding error is almost entirely in the magnitude of the solution x, not the direction. But we don’t care about the magnitude because we are going to normalize x anyway. And the reason the resulting x is very close to an eigenvector is the same reason that the inverse power iteration converges so fast: see AG p. 293 and my notes. Verify these claims as follows:
• choose a random square matrix with odd order, say 11 or 101, so that it must have at least one real eigenvalue
• compute its eigenvalues with evalues = eig(A)
• choose one of the computed real eigenvalues for λ (using real or imag) and solve (A − λI)x = b using \, where b is chosen randomly – note that Matlab gives you a warning about A − λI being nearly singular
• normalize x by dividing by its 2-norm
• check the eigenvalue-eigenvector residual ‖Ax − λx‖2: it should be around machine precision
• another much more expensive way to find the eigenvector is to compute the SVD of B = A − λI. The eigenvector of A corresponding to the eigenvalue λ is one of the left or right singular vectors of B: which one and why?

3. (30 pts) This concerns the SVD Block Power method for approximating the largest r singular values and corresponding singular vectors of an m × n matrix A, where r is less than n, usually much less:

Initialize V0 to an n × r matrix
For k = 1, 2, . . .
    Set Û = A Vk−1
    Get reduced QR factorization Û = QU RU
    Set Uk = QU
    Set V̂ = A^T Uk
    Get reduced QR factorization V̂ = QV RV
    Set Vk = QV

This is also given on p. 22-6 of my lecture notes for April 14, but it is not discussed by AG. (a) The signs of the diagonal entries of R in a QR factorization are arbitrary: although Gram-Schmidt (which we are not using) chooses them to be positive, Householder (which Matlab’s qr uses) does not.
Since we’d like them to be positive in the next part, write a short Matlab function [Q,R]=qrPosDiagR(A) which computes the reduced Q and R factors of A by calling Matlab’s qr(A,0) and then modifies the Q and R factors so that the diagonal entries of R are all positive (or zero, but that’s unlikely), by changing the sign of the corresponding row of R and the corresponding column of Q, so that the equation A = QR still holds (check this!). One way to see that this “works” is to observe that QR = QS²R = (QS)(SR), where S is a diagonal matrix of ±1’s. (b) Implement the SVD Block Power method given above. Use qrPosDiagR for both QR factorizations in the loop. It turns out that for random matrices, both the upper triangular matrices RU and RV converge to diagonal matrices, so compute the matrix 2-norm of the strictly upper triangular parts of both RU and RV (either by using triu(.,1) or by subtracting off the diagonal entries) and terminate the loop when the maximum of these two norms is less than a tolerance, tol. The inputs to this function should be an m × n matrix A (it should not matter whether m > n, m = n or m < n
BUSM208 Strategic Marketing 2024/2025

Module Level Learning Outcomes to be assessed
1. A1: Examine and evaluate marketing strategies and apply them to specific business contexts
2. A4: Examine key issues associated with the implementation of marketing strategies and marketing campaigns
3. B3: Apply marketing knowledge to analyse, deconstruct and solve strategic marketing problems
4. B5: Be able to develop and present a market-led strategy of sustainable competitive advantage
5. B6: Demonstrate an understanding of the practicalities and limitations of marketing strategy implementation
6. C1: Develop and further strengthen the ability to think creatively and reflect critically
7. C4: Demonstrate market research and sensing skills, developed through web search exercises, independent study and interaction with peers

Assurance of Learning (selected modules only): contribution to Programme Level Learning Outcomes
1. LO 1.4: Advance qualitative and quantitative research skills
2. LO 1.6: Engaging with notions of business innovation, entrepreneurial behaviour and enterprise development, and the management and exploitation of intellectual capital
3. LO 2.4: Apply conceptual and practical strategic interventions to the global marketplace
4. LO 2.7: To creatively, and in instrumental terms, improve business and management practice, including within an international context
5. LO 2.8: Establish criteria, using appropriate decision-making techniques, including identifying, formulating and solving business problems
6. LO 3.6: Development of lifelong learning skills, including engendering an enthusiasm for business and continuing personal and professional development

Assessment instructions for students (as per QMPlus ‘Assessment Information’ tab)
1. The module learning outcomes being assessed: see above.
2.
Instructions and guidance

Assessment: Individual Report (100%), 3000 words excluding appendices (4000 words including appendices)
Submission date: Wednesday 16th April 2025, 15:00

Part 1 (85%)
The length of the report should be no more than 2500 words excluding references and appendices. Marks may be deducted if you overshoot the word limit. The stated word counts may be exceeded by a maximum of 5%. Please pay attention to writing and referencing style. The preparation of this individual report and the exchange of experiences in the classrooms or group meetings are major learning aspects of the Markstrat simulation. This assessment requires you to prepare a report which may be considered as a briefing to the new CMO who will take over the management of the firm that you have been handling during the simulation. The report should demonstrate your understanding and application of the conceptual frameworks and topics discussed in this module, and how they have been applied in the MarkStrat simulation. The report should include the following elements:
• (Review) analysis of your performance
• (Key Decisions) main strategies pursued
• (Adjustments to Strategy) main adjustments made in response to changes in the environment
• (Learnings) key points learned through past successes and failures
• (Recommendations) for the future
The coherence of the report structure and clarity in the overall presentation of the arguments, as well as the appropriate use of evidence and cases to support your arguments, are essential.

Part 2 (15%)
Individual reflection on your MarkStrat learning experience during this module (500 words max).
1) What have you learned, comparing before and after your MarkStrat simulation experience, especially when it comes to practical applications of marketing strategy concepts?
2) In retrospect, what would you have done differently to improve your learning experience?
3.
Assessment rubric with weighted criteria
• Part 1: Simulation score (15%)
• Part 1: Main analysis (60%)
• Part 1: Structure & use of evidence, graphs and appendices (10%)
• Part 2: Individual reflection (15%)

4. Assurance of Learning measures: performance thresholds for assessment criteria
“Significantly exceeds expectations” [outstanding/excellent] at equivalent of 70+; “exceeds expectations” [good] at equivalent of 60-69; “meets expectations” [average] by achieving equivalent of 50-59; “does not meet expectations” [poor/outright fail] at equivalent of 49 or less.
AMS2320 Business Regression Analysis
Semester 2, 2024-2025
ASSIGNMENT 2

There are two questions. Please answer all questions and show your steps clearly. Correct your numerical answers to 4 decimal places. You may use SAS and/or R to do the calculations. Please include the program codes in your answer script. Combine all materials in a single pdf file with file name: AMS2320 Assignment 2 - [Student ID].pdf (e.g. AMS2320 Assignment 2 - s123456.pdf) and submit it via Moodle.

Question 1. Is there a structural break in the price level data?
It is suggested that there is a structural break in the price level. The period before 2007 is one regime. The period 2007 and after is another. Consider the model

CPI = β0 + β1(Year) + β2D + β3(D)(Year),

where D is a dummy variable: 0 for before the break and 1 for after. We want to test if the so-called structural break is significant. We have the following computer output:

Full model      d.f.  SS           MS           F            Sig.
Regression      3     3047.5       1015.833333  4353.571429  2.11752E-10
Residual        6     1.4          0.233333333
Total           9     3048.9

Reduced model   d.f.  SS           MS           F            Sig.
Regression      1     3012.148485  3012.148485  655.6787599  5.80084E-09
Residual        8     36.75151515  4.593939394
Total           9     3048.9

Test if the structural break is significant. [20 marks]

Question 2. We consider the data (prostate.txt) from the study of Stamey (1989). It was a study on 97 men with prostate cancer who were about to receive a radical prostatectomy (an operation). The relationship between the level of prostate-specific antigen and a number of clinical measures was studied.

Variable  Meaning
X1  lcavol   log(cancer volume)
X2  lweight  log(prostate weight)
X3  age      age
X4  lbph     log(benign prostatic hyperplasia amount)
X5  svi      seminal vesicle invasion
X6  lcp      log(capsular penetration)
X7  gleason  Gleason score
X8  pgg45    percentage Gleason scores 4 or 5
Y   lpsa     log(prostate specific antigen)

(a) [20 marks] Consider the full model Y = β0 + β1X1 + ... + β8X8 + Error.
Here, the error terms are assumed to be independent and identically distributed random variables N(0, σ²). Suppose that the value of X of a new patient is given as follows:

X1 = 1.1474025, X2 = 3.4194, X3 = 59, X4 = -1.386294, X5 = 0, X6 = -1.38629, X7 = 6, X8 = 0

You are interested in predicting Y, the logarithm of the amount of prostate specific antigen. Give the predicted value of Y and express the variance of the prediction error in terms of σ².
(b) [20 marks] Now, you do not want a model with eight predictors as the prediction error is not satisfactory. You prefer a model with only five predictors. Select a model with the BACKWARD selection approach. Report the t statistic of each remaining variable in each step.
(c) [20 marks] Now, you prefer a model with only four predictors. Select a model with the FORWARD selection approach. Report the t statistic of each unselected variable in each step.
(d) [20 marks] Consider the reduced model obtained in part (c). You are interested in predicting Y, the logarithm of the amount of prostate specific antigen. The value of X for a new patient is given in (a). Give the predicted value of Y and express the variance of the prediction error in terms of σ². Comparing (a) and (d), which model tends to give smaller estimation error?

DUE DATE: Friday, 11th April 2025.
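One standard way to read the two ANOVA tables in Question 1 is through the nested-model (partial) F statistic. The sketch below shows only the arithmetic of how the printed residual sums of squares combine (Python is used purely for illustration; the assignment allows SAS and/or R, and the conclusion still requires comparison with the appropriate F critical value):

```python
# Nested-model (partial) F test for the structural break in Question 1,
# using the residual sums of squares from the ANOVA tables above.
sse_full, df_full = 1.4, 6      # full model: CPI ~ Year + D + D*Year
sse_red = 36.75151515           # reduced model: CPI ~ Year
q = 2                           # restrictions tested: beta2 = beta3 = 0

F = ((sse_red - sse_full) / q) / (sse_full / df_full)
print(round(F, 4))              # compare with an F(2, 6) critical value
```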
ISyE 6339 Physical Internet Engineering
Casework 2: Hyperconnected Road-Based Freight Transportation in Continental USA

These caseworks focus on truck-based freight transportation across the Continental United States of America, with an emphasis on contrasting the current system with a Physical Internet enabled hyperconnected system. Examination of the tasks will clearly reveal the distinctions of focus and scope between caseworks 2.1 and 2.2. The addressed trucking flow includes flow from and to ports as well as railyards to support multimodal transfer, without exploring the intensification of multimodality, to contain casework complexity. The casework relies on statistics and forecasts compiled by the Federal Highway Administration of the US Department of Transportation, within its Freight Analysis Framework (FAF). We suggest that students carefully study the FAF website at https://ops.fhwa.dot.gov/freight/freight_analysis/faf/. The Freight Analysis Framework simplifies the freight transportation demand by aggregating it by specific mode and commodity (among a set of 42) between 132 FAF zones.
• The list of FAF zones can be found in 2017_CFS_Metro_Areas_with_FAF_Table.xlsx at https://faf.ornl.gov/faf5/
• FAF zone shape files can be obtained at https://faf.ornl.gov/faf5/data/2017_CFS_Metro_Areas_with_FAF.zip
FAF provides numerous Tables and Maps depicting the estimated freight flow. Below is an example focused on mapping the truck-based estimated average daily flow volumes on the national highway system in 2030. To simplify data processing and analysis in this casework, as depicted and tabled below, we have aggregated the truck-based flows along the main highways: North-South 5, 15, 25, 35, 55, 65, 75, 85, and 95; East-West 10, 20, 40, 70, 80, and 90; and used a set of 39 highway intersection proxies.
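Several tasks below (notably the distance lower bounds in task 3.a) need point-to-point distances computed from the zone latitude/longitude coordinates. A great-circle (haversine) helper is one way to obtain such lower bounds; here is a minimal sketch, assuming distances in kilometres and using illustrative, approximate coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; a lower bound on road distance."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Illustrative: roughly Atlanta to roughly Chicago (coordinates approximate)
print(round(haversine_km(33.75, -84.39, 41.88, -87.63), 1))
```

Actual road distances (as in the “Distance_Intersections” worksheet) will of course exceed this great-circle figure.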
Multi-source and multi-destination routing of flows within a specific FAF zone is not addressed in this casework, each zone being treated as a single point, as done in the FAF modeling. The creation of the regional flow file utilizing Freight Analysis Framework (FAF) data involved a series of steps to model the movement and volume of freight across different regions. Here's a comprehensive overview of the steps taken to achieve this:
1. Data Acquisition: We first downloaded the FAF origin-destination tonnage data specifically for FAF regions. This foundational step ensured access to a rich dataset detailing the quantities of goods moved between various geographic locations.
2. Commodity Selection: To refine the data for practical analysis, we focused on commodities that could logically be grouped together, intentionally excluding categories such as live animals, oil, and liquid chemicals due to their unique transportation requirements and regulations.
3. Payload Factor Estimation: Leveraging the payload factors provided by FAF, we estimated the number of trucks required for each commodity type from each origin to destination. This critical step involved applying specific coefficients that represent the average load carried by trucks, thus enabling the conversion of tonnage data into a more tangible measure of freight traffic in terms of truck movements.
4. Flow Calculation: We finally aggregated these estimated truck movements across all origin-destination (O-D) pairs to derive a total flow metric. This comprehensively represents the entirety of freight movement within the dataset's scope, providing a clear picture of regional freight dynamics.

We are providing the “FAF based casework Data_File” workbook, composed of three worksheets described hereafter. The first worksheet is called “FAF_Regional_Flows”. Below is an overview of the columns present in the worksheet:
1. Origin FAF Zone: Indicates the origin zones of freight.
2.
Origin_lat & Origin_long: Geographical coordinates (latitude and longitude) of the origin zones.
3. Destination FAF Zone: Indicates the destination zones of freight.
4. Destination_lat & Destination_long: Geographical coordinates (latitude and longitude) of the destination zones.
5. Thousand tons in 2022: The volume of freight in thousand tons for the year 2022.
6. Thousand tons in 2025: Predicted volume of freight in thousand tons for the year 2025.
7. Thousand tons in 2025_high: High-growth prediction for the volume of freight in thousand tons for the year 2025.
8. Thousand tons in 2030: Predicted volume of freight in thousand tons for the year 2030.
9. Thousand tons in 2030_high: High-growth prediction for the volume of freight in thousand tons for the year 2030.
10. Trucks in 2022: The conversion of the 2022 freight volume into the equivalent number of trucks, based on the payload of the commodities.
11. Trucks in 2025: The conversion of the predicted 2025 freight volume into the equivalent number of trucks, based on the payload of the commodities.
12. Trucks in 2025_high: The conversion of the predicted high-growth 2025 freight volume into the equivalent number of trucks, based on the payload of the commodities.
13. Trucks in 2030: The conversion of the predicted 2030 freight volume into the equivalent number of trucks, based on the payload of the commodities.
14. Trucks in 2030_high: The conversion of the predicted high-growth 2030 freight volume into the equivalent number of trucks, based on the payload of the commodities.

The second worksheet, named “Intersection_ID”, is structured with the following columns:
1. Intersection ID: Unique identifier for each intersection.
2. Interstate_Intersection: Describes which two interstates intersect at this junction, providing crucial data for understanding traffic flow and planning.
3. Location: The name or description of the location of the intersection, for easy identification.
4.
Lat & Long: Geographical coordinates (latitude and longitude) pinpointing the precise location of the intersection.
This sheet also contains the visualization of the intersections along with the identifiers.

The third worksheet, named “Distance_Intersections”, is structured with the following columns:
1. Intersection_A & Intersection_B: Identifiers for the paired intersections being analyzed.
2. A_lat & A_long, B_lat & B_long: Geographical coordinates for intersections A and B, respectively.
3. Duration (mins): Travel time between the intersections.
4. Distance (km): Road distance between the intersections.

We are also providing the “Georgia 10 days” workbook, composed of two worksheets described hereafter. Both start with the same four columns as the worksheet “FAF_Regional_Flows”. The worksheet “Atlanta Co. A, B, & C 10 days” focuses on three companies and their freight flow demand to/from the Atlanta FAF zone from/to each other FAF zone (each corresponding to a row) on simulated days 1 to 10 (each corresponding to a column), expressed in truck fractions. The worksheet “Georgia 10 days” similarly focuses, at a higher degree of aggregation, on the overall freight flow demand to/from Georgia (its three FAF zones) from/to each other FAF zone (each corresponding to a row) on simulated days 1 to 10 (each corresponding to a column), expressed in truck fractions.

Tasks
1. Proceed to a Pareto analysis of the inter-zone flows, first accounting for unidirectional flows from one zone to another, then for bidirectional flows between zones, for 2025 and 2030 (average and max). Rank and plot flow-from-to pairs and flow-between pairs, as well as zones (and zone groups as you may deem pertinent), accounting for all their incoming and outgoing flows. Depict results in terms of tons and truck shipments per day, hour, minute, and then second. Analyze your results, aiming to provide key insights.
2.
Provide vivid inter-zone flow maps depicting through direct links the flows between pairs of zones in 2025 and 2030 (average and max). For clarity and emphasis, you are encouraged to produce distinct maps, for example for groups of zone pairs distinguished through the Pareto analysis. Include a FAF-zone-specific zooming on a small representative set of FAF zones, including the Atlanta FAF zone. Also, make sure you emphasize the total bidirectional flows, the unidirectional flows (A to B, B to A), as well as the flow imbalances. Analyze your results, aiming to provide key insights.
3. For each trip between each pair of zones with non-zero origin-destination (O-D) FAF freight flow estimate:
a. Compute the lower bound on traveled distance by assuming the trucks take a direct path between the zones, so the distances are computable using the provided longitude and latitude coordinates.
b. Given your answer to (3.a), assuming your best 2025 and 2030 estimates based on published literature, compute the lower bound on:
o Energy consumption assuming internal combustion engine (ICE) or electric (E) trucks are used:
1. Kilowatt-hours (kWh) for ICE and E trucks
2. Diesel gallons (for ICE trucks)
o Transport time assuming a steady 60 miles/hour speed (excluding any stop);
o Travel time including transport and reenergization, assuming autonomous ICE or E trucks are used, given your estimates for:
1. Truck autonomy of each type
2. Fueling time of an ICE truck
3. Charging time of an E-truck
4. Battery swapping time of an E-truck (assuming charged batteries are available when needed for reenergizing an E-truck).
o Travel time as in (3.b.iii) assuming single-driver and two-driver trucking, respecting the current state-specific trucking time regulations, assuming that the usual 11-hour-maximum regulation applies in all states (see Georgia regulations for details).
c. Contrast and analyze your results, aiming to provide key insights.
4.
Considering the estimated freight transportation demand between each pair of zones, and assuming loaded trucks are in fact loaded to 60% of their capacity on average for each daily end-to-end O-D interzone transportation, compile 2025 and 2030 (average and max) overall lower bound estimates for the total number of truck trips, travelled distance, energy consumption, greenhouse gas emissions, transport time and travel time, trucks, and truckers based in each FAF zone (when pertinent), for the combinations of assumptions in (3). You have to account for two extreme empty truck flow estimations:
a. Assuming empty trucks have to travel back from destination to origin;
b. Leveraging daily inbound and outbound imbalances at each zone so that trucks do not have to travel empty all the way from the reached destination back to the origin, but rather travel empty to a zone distinct from the reached destination zone, only to contribute to rebalancing nodal flow. This assumption has drastic implications for truckers and trucks, as they may get back to their home base only after long multi-zone journeys.
o The logic is essentially as follows. Assume the reached destination zone z for a truck has 900 inbound and 800 outbound flows. Then the assumption would be that 800 trucks would not have to travel empty and would simply pick up a shipment out of zone z the next day, or sooner if available depending on timing. There would be an expectation of 100 trucks arriving daily in zone z and having to move empty to another zone to pick up a next shipment. Ideally, each such truck would only have to move empty to a zone z′ near zone z, that zone z′ having a daily inbound flow less than its outbound flow.
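The rebalancing logic just described can be prototyped with a simple greedy heuristic in which each surplus zone sends its excess empty trucks to the nearest deficit zone. A minimal sketch follows (zone names, imbalances, and distances are invented for illustration; an optimal transportation-problem solver could be substituted for the greedy choice):

```python
# Greedy empty-truck rebalancing: zones with more inbound than outbound
# flow ship their surplus empties to the nearest zone with a deficit.
def rebalance(imbalance, dist):
    """imbalance[z] = daily inbound - outbound trucks at zone z."""
    surplus = {z: v for z, v in imbalance.items() if v > 0}
    deficit = {z: -v for z, v in imbalance.items() if v < 0}
    moves = []
    for z in sorted(surplus):
        while surplus[z] > 0 and deficit:
            zp = min(deficit, key=lambda d: dist[z, d])  # nearest deficit zone
            n = min(surplus[z], deficit[zp])
            moves.append((z, zp, n))
            surplus[z] -= n
            deficit[zp] -= n
            if deficit[zp] == 0:
                del deficit[zp]
    return moves

# Hypothetical example: Z1 has 100 excess empties; Z2 and Z3 need trucks.
moves = rebalance(
    {"Z1": 100, "Z2": -60, "Z3": -40},
    {("Z1", "Z2"): 200, ("Z1", "Z3"): 500},
)
print(moves)  # Z1 sends 60 empties to the nearer Z2, then 40 to Z3
```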
Overall, at an aggregate level, when not accounting directly for the aim of getting the trucks and truckers back to their home base, this corresponds to the well-known transportation problem minimizing total travel between nodes in the network, where each node is either a source or a demand node (here based on their daily flow imbalance). You may choose to solve it optimally using an existing solver or to generate a heuristic solution using an available heuristic or your own justified heuristic.
c. Contrast and analyze your results, aiming to provide key insights.
5. Now assuming travel along the highway network provided in the “FAF based casework Data_File” workbook:
a. Draw the simplified highway network with its nodes (intersections) and links (highway segments), depicting the length (miles) and duration (hours @ 60 mi/hr) of each link, with and without the map underlaid.
b. Using an available optimal shortest path algorithm, for each pair of zones with non-zero freight flow, compute the shortest-distance path from origin to destination through the highway network.
c. Provide drawings of representative inter-zone shortest path samples.
6. Repeat (3) now based on network-based estimates, assuming a single truck is to move each shipment from its origin to its destination, the truck (and its driver(s) for a non-autonomous truck) being dedicated for this specific O-D trip. Contrast and analyze your end-to-end network-based results as done in (3), then contrast and analyze your lower-bound results vs end-to-end network-based results.
7. Leveraging your results from (5 and 6), compute aggregate 2025 and 2030 (average and max) daily total, loaded, and empty flow estimates, assuming end-to-end network-based transportation, for:
a. Each highway network link l_ii′, between intersection nodes i and i′, differentiating flows from i to i′ and flows from i′ to i.
b.
Each highway network intersection node i, differentiating flows within the node inbounding from each link l_i′i and outbounding to each link l_ii′. Depict the flows on drawings of the network, leveraging node and link sizes and colors, as well as directional arrows and numbers, to make your results vivid. Contrast loaded-flow network renderings, empty-flow network renderings, and total-flow network renderings. Analyze your results.
8. Consider companies A, B, and C, based in Atlanta, Georgia, for which you are provided their freight flow demand for days 1 to 10. Assume that each company promises to ship its ordered products within 3 days from ordering time (so, demand in day 1 must be shipped by day 3 at the latest), and all products ordered in days 8, 9, and 10 must be shipped by day 10 at the latest. As above, assume the companies ship directly from origin to destination through a single-stop truck route. Assume also that the companies each use dedicated trucks, so they are responsible for their empty travel.
a. Provide an optimized transportation plan over the 10 days for each of the three companies and proceed to a thorough performance assessment for each company, as requested in (3).
b. Assume the three companies engage in a collaborative agreement in which they jointly optimize their transportation plans. For simplicity purposes, assume here that their target location in each FAF zone is very near to the FAF zone coordinates. Provide an optimized collaborative transportation plan over the 10 days for the group of three companies and proceed to a thorough performance assessment for the group and each company, as requested in (3). Assess the added value of the collaboration for each company.
c. Discuss the differences and convergences of perspectives and outcomes when aggregating at the FAF zone versus when addressing the cases of specific companies.
9.
Now assume a hyperconnected national freight system where:
• There are clusters of inter-regional logistic hubs located around each highway intersection node, and other ones located at specific locations to enable (1) reenergization respecting truck autonomy and (2) getting truckers back home mostly every day for quality of life and retention purposes while keeping the freight moving.
• Freight travels only:
o From its origin zone to its entry hub in the nearest hub cluster given its final destination.
o Along highway links, from hub to hub at adjacent network nodes.
o To its final destination from its exit hub in the destination-nearest hub cluster.
• Trucks only travel as follows:
o Shuttling between regional hub(s) within a FAF zone (here assumed at the FAF zone coordinates) and FAF-zone-entry-exit inter-regional hubs.
o Binodal shuttling between hubs at distinct ends of a specific highway network link, leveraging balanced flows between the two nodes of the link.
o Trinodal and quadrinodal shuttling between network-adjacent nodes to contribute to rebalancing flows between these nodes.
• Freight consolidation is enhanced by enabling:
o Consolidation of shipment flows within a FAF zone from multiple shippers heading to a shared intermediary hub (or hub cluster) toward their final destination.
o Consolidation of flows inbounding into a node from several links and heading to a shared intermediary hub (or hub cluster) toward their final destination.
o Consolidation at inter-regional hubs is performed fast, so that (1) inbound freight can be reconsolidated and ready to ship to the next hub within 1 hour and (2) inbound trucks can be back on the road for their next transport leg within 30 minutes.
a.
Leveraging the work done in (5) and the new information provided above, design an inter-regional hub cluster network capable of supporting the 2025-2030 horizon (simplified here, as normally we would extend further) given the various scenarios of vehicle energy autonomy (ICE vs EV trucks), vehicle driving autonomy (autonomous self-driving truck vs human-driven truck), and freight flow demand.
o Provide and justify the location of any added hub cluster, ideally at some existing highway exit or intersection (with other not-modeled highways).
b. For each FAF zone, develop a consolidation tree (or wider network for resilience purposes) reaching all its destination FAF zones through the designed hub cluster network. The concept can be illustrated as follows:
o Shippers from a FAF zone would consolidate their shipments to their destination zones as depicted below for a green zone having 360 truck-equivalents to 5 other zones. The flows are channeled to hub cluster nodes and links, resulting in the consolidated red inter-hub (cluster) flows.
Shipment set 1: 360 from Z1 to hub (6,2), composed of shipment sets 2 and 3
Shipment set 2: 40 from hub (6,2) to hub (4,2) to Z2
Shipment set 3: 320 from hub (6,2) to hub (6,3), composed of shipment sets 4 and 5
Shipment set 5: 80 from hub (6,3) to Z3
Shipment set 6: 80 from hub (6,3) to hub (8,3)
Shipment set 7: 160 from hub (6,3) to hub (6,4), composed of shipment sets 8 and 9
Shipment set 8: 100 from hub (6,4) to Z5
Shipment set 9: 60 from hub (6,4) to (3,4) to Z6
c. Consolidate all zonal consolidation trees into a global consolidation network, then estimate loaded freight flows along each directional link and at each hub cluster node in the 2025 and 2030 scenarios.
d.
In previous tasks, trucks were assumed to be loaded at 60% as there was no inter-shipper flow consolidation, and shippers were assumed to aim for delivering with satisfactory velocity and service level to customers, not leaving time to fill full trucks toward each direction. This is not the case here. Given the freight flows estimated in (9.c), determine the fraction of shipments departing from each node that would not lead to full truck loads along its outbound links. Estimate the average utilization of trucks associated with such fraction of shipments. Then compute estimates of the number of loaded trucks flowing through each directional highway link and hub cluster node.
e. Using the same type of methods used earlier, yet adapted to hyperconnected transportation, estimate the inter-hub empty travel on each link induced by zonal flow imbalances. Then compute estimates of the number of empty trucks traveling along each directional link. Combining these with the loaded truck travel estimates, compute the total truck estimates.
f. Estimate the frequency of truck departures from and arrivals at each hub cluster node along each of its links. Discuss the impact of such frequencies on the expected freight and truck dwell times at hubs, and on the overall freight origin-to-destination time.
10. Repeat (3) adapted for the hyperconnected freight system. Make sure to highlight results along links and nodes of the highway-based network, and for the total system. Contrast your results with those of the dedicated origin-destination direct flow system. Provide key insights.
11. In (9.b), you were asked to create a consolidation tree for each FAF zone. Using such a tree has a key impact on the freight flow and concentration.
How could you alter the design to create a consolidation network that accounts for the fact that flows may not always be directed along the shortest route from source to destination on the highway network, and that better reflects the need for resilience enhancement? Use the Atlanta FAF zone to demonstrate your alternative approach. Contrast your results with those you obtained in (9.b) and discuss the expected consequences when this is done for each FAF zone.
12. In the dedicated origin-destination direct flow system studied in (1-7), truck drivers essentially need to be based in each zone. In the hyperconnected system, truck drivers need to be based around each zonal hub cluster node and around each hub cluster node along the highway network.
a. Estimate the number of truck drivers needed in each zone by the dedicated origin-destination direct flow system.
b. Estimate the number of truck drivers needed around each zonal hub cluster node and around each hub cluster node along the highway network in the hyperconnected system.
c. Contrast and analyze your results, aiming to provide key insights.
13. Consider the case of the three companies based in Atlanta studied in (8), now leveraging the hyperconnected transportation system.
a. Use adapted versions of the approaches described and used in (9 to 11) to provide optimized transportation plans for each company, considering they are three out of many hundreds using the hyperconnected system's open-access hubs and trucks.
b. Provide a comparative assessment of the expected achieved performance for each company and over their set (as done in (8)), as contrasted with their solo and group performance estimated in (8). Analyze and provide key insights.
14.
Use the FAF zones in the state of Georgia to provide a deep, meaningful, generalized comparison of the two systems (beyond the three-company example), notably contrasting how they treat flows into, across, and out of Georgia, and providing comparative tables, diagrams, and maps to enhance your comparison.
15. Consider that the three Atlanta-based companies, once having started to leverage the hyperconnected system, realize they could potentially gain by pre-deploying their products at open-access deployment centers near the hub clusters in the network.
a. Develop a smart strategy to maximize full-truckload transports while ensuring short order-to-delivery times and low inventory across the network.
b. Assess the performance of your strategy by applying it to the 10-day case. Contrast your results with those obtained in (13).
c. Analyze the large-scale impact if most companies across all FAF zones were to utilize such a strategy. Provide key insights.
16. Synthesize your key challenges and learnings from performing this casework.
Advanced Econometrics I EMET4314/8014 Semester 1, 2025
Assignment 7 (due: Tuesday week 8, 11:00am)
Exercises
Provide transparent derivations. Justify steps that are not obvious. Use self-sufficient proofs. Make reasonable assumptions where necessary.
The linear model under endogeneity is
Y = Xβ + e
X = Zπ + v
where E(eiXi) ≠ 0 and E(eiZi) = 0. Notice dim X = N × K, dim β = K × 1, dim Z = N × L, dim π = L × K, and dim v = N × K. The source of the endogeneity is correlation between the two error terms, so write e = vρ + w where E(viwi) = 0. Notice dim ρ = K × 1 and dim w = N × 1. Combining, we obtain
Y = Xβ + vρ + w (1)
(i) You have available a random sample (Xi, Yi, vi). You are running a regression of Y on X and v. Using linear algebra, define the OLS estimator of β in equation (1). Call it β̂. (Hint: Use the partitioned regression result on the next page.)
(ii) Prove that β̂ = β + op(1).
(iii) You do NOT have available a random sample (Xi, Yi, vi). Instead, you have available a random sample (Xi, Yi, Zi). You cannot run a regression of Y on X and v, but you can instead run a regression of Y on X and v̂, where v̂ is the first-stage residual. Using v̂ in place of v in equation (1), define the OLS estimator of β using linear algebra. Call it β̃. Prove or disprove: β̃ = (X′PZX)⁻¹X′PZY.
(iv) Which estimator do you prefer: β̂ or β̃? No need to prove anything here, just give a quick intuitive statement.
Partitioned Regression and Frisch-Waugh-Lovell Theorem
Partition the linear regression model like so:
Y = Xβ + e = X1β1 + X2β2 + e
where X1 is of dimension N × K1 and X2 is of dimension N × K2 with K1 + K2 = K and X = [X1 X2]. Then how could you estimate β1? Write down the normal equations
X1′X1 β̂1 + X1′X2 β̂2 = X1′Y
X2′X1 β̂1 + X2′X2 β̂2 = X2′Y
Solving first for β̂1:
β̂1 = (X1′X1)⁻¹X1′(Y − X2β̂2)
Similarly,
β̂2 = (X2′X2)⁻¹X2′(Y − X1β̂1)
This has an interesting interpretation: the OLS estimator β̂2 results from regressing Y on X2 adjusted for X1. This adjustment is crucial; obviously it wouldn't be quite right to claim that β̂2 results from regressing Y on X2 only.
That would only be true if X1′X2 = 0, which means that the sample covariance between the two sets of regressors is zero. Now, doing the math by plugging β̂2 into the expression for β̂1, and letting P2 = X2(X2′X2)⁻¹X2′ and M2 = I − P2:
β̂1 = (X1′X1)⁻¹X1′(Y − X2(X2′X2)⁻¹X2′(Y − X1β̂1))
Multiplying both sides by X1′X1 and moving terms:
X1′X1 β̂1 − X1′P2X1 β̂1 = X1′Y − X1′P2Y
X1′M2X1 β̂1 = X1′M2Y
The end result (and also symmetrically for β̂2):
β̂1 = (X1′M2X1)⁻¹X1′M2Y
β̂2 = (X2′M1X2)⁻¹X2′M1Y
Remember that M1 and M2 are residual maker matrices: M2X1 is the residual from regressing X1 on X2, and M2Y is the residual from regressing Y on X2. At the same time, M1 and M2 are symmetric and idempotent (that is, M2′ = M2 and M2M2 = M2). There's a lot of intuition included here. This harks back all the way to Gram-Schmidt orthogonalization. To obtain β̂1, you regress a version of Y on a version of X1. These versions are M2Y and M2X1. These are the versions of Y and X1 in which the influence of X2 has been removed, or partialled out, or netted out. If X1 and X2 have zero sample covariance, then M2X1 = X1 and X1′M2Y = X1′Y, so we only need to regress Y on X1 to obtain β̂1.
CSE340 Spring 2025
Project 1: A Simple Compiler!
Due: Monday, April 14 2025 by 11:59 pm MST
1 Introduction
I will start with a high-level description of the project and its tasks, and in subsequent sections I will give a detailed description of how to achieve these tasks. The goal of this project is to implement a simple compiler for a simple programming language. To implement this simple compiler, you will write a recursive-descent parser and use some simple data structures to implement semantic checking and execute the input program. The input to your compiler has four parts:
1. The first part of the input is the TASKS section. It contains a list of one or more numbers of tasks to be executed by the compiler.
2. The second part of the input is the POLY section. It contains a list of polynomial declarations.
3. The third part of the input is the EXECUTE section. It contains a sequence of INPUT, OUTPUT and assignment statements.
4. The fourth part of the input is the INPUTS section. It contains a sequence of integers that will be used as the input to INPUT statements in the EXECUTE section.
Your compiler will parse the input and produce a syntax error message if there is a syntax error. If there is no syntax error, your compiler will check for semantic errors. If there are no syntax and no semantic errors, your compiler will perform other semantic analyses if so specified by the task numbers in the TASKS section. If required, it will also execute the EXECUTE section and produce the output that should be produced by the OUTPUT statements. The remainder of this document is organized as follows.
• The second section describes the input format.
• The third section describes the expected output when the syntax or semantics are not correct.
• The fourth section describes the output when the program syntax and semantics are correct.
• The fifth section describes the requirements for your solution.
Note: Nothing in this project is inherently hard, but it is larger than other projects that you have done in the past for other classes. The size of the project can make it feel unwieldy. To deal with the size of the project, it is important to have a good idea of what the requirements are. To do so, you should read this document a couple of times. Then, you should have an implementation plan. I make the task easier by providing an implementation guide that addresses some issues that you might encounter in implementing a solution. Once you have a good understanding and a good plan, you can start coding.
2 Input Format
2.1 Grammar and Tokens
The input of your program is specified by the following context-free grammar:
program → tasks_section poly_section execute_section inputs_section
tasks_section → TASKS num_list
num_list → NUM
num_list → NUM num_list
poly_section → POLY poly_decl_list
poly_decl_list → poly_decl
poly_decl_list → poly_decl poly_decl_list
poly_decl → poly_header EQUAL poly_body SEMICOLON
poly_header → poly_name
poly_header → poly_name LPAREN id_list RPAREN
id_list → ID
id_list → ID COMMA id_list
poly_name → ID
poly_body → term_list
term_list → term
term_list → term add_operator term_list
term → monomial_list
term → coefficient monomial_list
term → coefficient
monomial_list → monomial
monomial_list → monomial monomial_list
monomial → primary
monomial → primary exponent
primary → ID
primary → LPAREN term_list RPAREN
exponent → POWER NUM
add_operator → PLUS
add_operator → MINUS
coefficient → NUM
execute_section → EXECUTE statement_list
statement_list → statement
statement_list → statement statement_list
statement → input_statement
statement → output_statement
statement → assign_statement
input_statement → INPUT ID SEMICOLON
output_statement → OUTPUT ID SEMICOLON
assign_statement → ID EQUAL poly_evaluation SEMICOLON
poly_evaluation → poly_name LPAREN argument_list RPAREN
argument_list → argument
argument_list →
argument COMMA argument_list
argument → ID
argument → NUM
argument → poly_evaluation
inputs_section → INPUTS num_list
The code that we provided has a class LexicalAnalyzer with methods GetToken() and peek(). Also, an expect() function is provided. Your parser will use the provided functions to peek() at tokens or expect() tokens as needed. You must not change these provided functions; you just use them as provided. In fact, when you submit the code, you should not submit the files inputbuf.cc, inputbuf.h, lexer.cc or lexer.h on Gradescope; when you submit the code, the submission site will automatically provide these files, so it is important not to modify these files in your implementation. To use the provided methods, you should first instantiate a lexer object of the class LexicalAnalyzer and call the methods on this instance. You should only instantiate one lexer object. If you try to instantiate more than one, this will result in errors. The definition of the tokens is given below for completeness (you can ignore it for the most part if you want).
char = a | b | ... | z | A | B | ... | Z | 0 | 1 | ... | 9
letter = a | b | ... | z | A | B | ... | Z
pdigit = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
SEMICOLON = ;
COMMA = ,
PLUS = +
MINUS = -
POWER = ^
EQUAL = =
LPAREN = (
RPAREN = )
TASKS = (T).(A).(S).(K).(S)
POLY = (P).(O).(L).(Y)
EXECUTE = (E).(X).(E).(C).(U).(T).(E)
INPUT = (I).(N).(P).(U).(T)
OUTPUT = (O).(U).(T).(P).(U).(T)
INPUTS = (I).(N).(P).(U).(T).(S)
NUM = 0 | pdigit . digit*
ID = letter . char*
What you need to do is write a parser to parse the input according to the grammar and produce a syntax error message if there is a syntax error. Your program will also check for semantic errors and, depending on the tasks list, will execute more semantic tasks.
To achieve that, your parser will store the program in appropriate data structures that facilitate semantic analysis and allow your compiler to execute the statement list in the execute_section. For now, do not worry about how that is achieved. I will explain that in detail, partly in this document and more fully in the implementation guide document.
2.2 Examples
The following are examples of input (to your compiler) with corresponding outputs. The output will be explained in more detail in later sections. Each of these examples has task numbers 1 and 2 listed in the tasks_section. They have the following meanings:
• The number 1 listed means that your program should perform syntax and semantic checking.
• The number 2 listed means that your program should produce the output of the output statements if there are no syntax and no semantic errors.
EXAMPLE 1
TASKS
1 2
POLY
F = x^2 + 1;
G = x + 1;
EXECUTE
X = F(4);
Y = G(2);
OUTPUT X;
OUTPUT Y;
INPUTS
1 2 3 18 19
This example shows two polynomial declarations and an EXECUTE section in which the polynomials are evaluated with arguments 4 and 2 respectively. The output of the program will be
17
3
The sequence of numbers at the end (in the inputs_section) is ignored because there are no INPUT statements.
EXAMPLE 2
TASKS
1 2
POLY
F = x^2 + 1;
G = x + 1;
EXECUTE
INPUT X;
INPUT Y;
X = F(X);
Y = G(Y);
OUTPUT X;
INPUTS
1 2 3 18 19
This is similar to the previous example, but here we have two INPUT statements. The first INPUT statement reads a value for X from the sequence of numbers and X gets the value 1. The second INPUT statement reads a value for Y, which gets the value 2. Here the output will be
2
Note that the values 3, 18 and 19 are not read and do not affect the execution of the program.
EXAMPLE 3
1: TASKS
2: 1 2
3: POLY
4: F = x^2 + 1
5: G = x + 1;
6: EXECUTE
7: INPUT X;
8: INPUT Y;
9: X = F(X);
10: Y = G(Y);
11: OUTPUT X;
12: INPUTS
13: 1 2 3 18 19
Note that there are line numbers added to this example.
These line numbers are not part of the input and are added only to refer to specific lines of the program. In this example, which looks almost the same as the previous example, there is a syntax error because there is a missing semicolon on line 4. The output of the program should be
SYNTAX ERROR !!!!!&%!!
EXAMPLE 4
1: TASKS
2: 1 2
3: POLY
4: F = x^2 + 1;
5: G(X,Y) = X Y^2 + X Y;
6: EXECUTE
7: INPUT Z;
8: INPUT W;
9: X = F(Z);
10: Y = G(Z,W);
11: OUTPUT X;
12: OUTPUT Y;
13: INPUTS
14: 1 2 3 18 19
In this example, the polynomial G has two variables which are given explicitly (in the absence of explicitly named variables, the variable is lower case x by default). The output is
2
6
EXAMPLE 5
1: TASKS
2: 1 2
3: POLY
4: F = x^2 + 1;
5: G(X,Y) = X Y^2 + X Z;
6: EXECUTE
7: INPUT Z;
8: INPUT W;
9: X = F(Z);
10: Y = G(Z,W);
11: OUTPUT X;
12: OUTPUT Y;
13: INPUTS
14: 1 2 3 18 19
This example is similar to the previous one, but it has a problem. The polynomial G is declared with two variables X and Y, but its equation (called poly_body in the grammar) has Z, which is different from X and Y. The output captures this error (see below for error codes and their format):
Semantic Error Code 2: 5
3 Tasks and their priorities
The task numbers specify what your program should do with the input program. Task 1 is one of the larger tasks, but it is not graded as one big task. Task 1 has the following functionalities:
1. Syntax checking
2. Semantic error checking
The other tasks, 2, 3, 4, 5 and 6, have the following functionalities:
• Task 2 – Output: Task 2 requires your compiler to produce the output that should be produced by the output statements of the program.
• Task 3 – Variable used but not explicitly initialized: Task 3 requires your compiler to produce a warning about uninitialized variables.
A variable is uninitialized when it appears on the right-hand side of an assignment statement without having previously appeared on the left-hand side of an assignment statement or in an INPUT statement. This results in a warning message; however, it is not considered a semantic error. The execution can proceed assuming the variable is initially zero.
• Task 4 – Useless assignments: This happens when a variable value is calculated, but the variable is not used later in the right-hand side of an assignment or in an OUTPUT statement.
• Task 5 – Polynomial degree: This task requires that the degrees of all the polynomials in the POLY section are calculated and output.
Detailed descriptions of these tasks and what the output should be for each of them are given in the sections that follow. The remainder of this section explains what the output of your program should be when multiple task numbers are listed in the tasks_section.
If task 1 is listed in the tasks_section, then task 1 should be executed. Remember that task 1 performs syntax error checking and semantic error checking. If the execution of task 1 results in an error, and task 1 is listed in the tasks_section, then your program should only output the error messages (as described below) and exit. If task 1 results in an error (syntax or semantic), no other tasks will be executed even if they are listed in the tasks_section. If task 1 is listed in the tasks_section and does not result in an error message, then task 1 produces no output. In that case, the outputs of the other tasks that are listed in the tasks_section should be produced by the program. The order of these outputs should be according to the task numbers: first the output of task 2 is produced (if task 2 is listed in the tasks_section), then the output of task 3 (if task 3 is listed in the tasks_section), and so on. If task 1 is not listed in the tasks_section, task 1 still needs to be executed.
If task 1's execution results in an error, then your program should output nothing in this case. If task 1 is not listed and task 1's execution does not result in an error, then the outputs of the other tasks that are listed in the tasks_section should be produced by the program. The order of these outputs should be according to the task numbers: first the output of task 2 is produced, then the output of task 3 (if task 3 is listed in the tasks_section), and so on. You should keep in mind that tasks are not necessarily listed in order in the tasks_section and can even be repeated. For instance, we can have the following TASKS section:
TASKS
1 3 4 1 2 3
In this example, some tasks are listed more than once. Later occurrences are ignored. So, the tasks_section above is equivalent to
TASKS
1 2 3 4
In the implementation guide, I explain a simple way to read the list and sort the task numbers using a boolean array.
4 Task 1 – Syntax and Semantic Checking
For task 1, your solution should detect syntax and semantic errors in the input program as specified in this section.
4.1 Syntax Checking
If the input is not syntactically correct, your program should output
SYNTAX ERROR !!!!!&%!!
If there is a syntax error, the output of your program should exactly match the output given above. No other output should be produced in this case, and your program should exit after producing the syntax error message. The provided parser.* skeleton files already have a function that produces the message above and exits the program.
4.2 Semantic Checking
Semantic checking also checks for invalid input. Unlike syntax checking, semantic checking requires knowledge of the specific lexemes and does not simply look at the input as a sequence of tokens (token types). I start by explaining the rules for semantic checking. I also provide some examples to illustrate these rules.
• Polynomial declared more than once – Semantic Error Code 1.
If the same polynomial_name is used in two or more different polynomial_header's, then we have the error polynomial declared more than once. The output in this case should be of the form
Semantic Error Code 1: <line 1> <line 2> ... <line k>
where <line 1> through <line k> are the numbers of each of the lines in which a duplicate polynomial_name appears in a polynomial header. The numbers should be sorted from smallest to largest. For example, if the input is (recall that line numbers are not part of the input and are just for reference):
1: TASKS
2: 1 3 4
3: POLY
4: F1 =
5: x^2 + 1;
6: F2 = x^2 + 1;
7: F1 = x^2 + 1;
8: F3 = x^2 + 1;
9: G = x^2 + 1;
10: F1 = x^2 + 1;
11: G(X,Y) = X Y^2 + X Y;
12: EXECUTE
13: INPUT Z;
14: INPUT W;
15: X = F1(Z);
16: Y = G(W);
17: OUTPUT X;
18: OUTPUT Y;
19: INPUTS
20: 1 2 3 18 19
then the output should be
Semantic Error Code 1: 7 10 11
because on each of these lines the name of the polynomial in question has a duplicate declaration. Note that only the line numbers for the duplicates are listed. The line number for the first occurrence of a name is not listed.
• Invalid monomial name – Semantic Error Code 2. There are two kinds of polynomial headers. In the first kind, only the polynomial name (ID) is given and no parameter list (id_list in the header) is given. In the second kind, the header has the form polynomial_name LPAREN id_list RPAREN. In a polynomial with the first kind of header, the polynomial should be univariate (one variable) and the variable name should be lower case "x". In a polynomial with the second kind of header, the id_list is the list of variables that can appear in the polynomial body. An ID that appears in the body of a polynomial (in primary) should be equal to one of the variables of the polynomial. If that is not the case, we say that we have an invalid monomial name error and the output in this case should be of the form:
Semantic Error Code 2: <line 1> <line 2> ... <line k>
where <line 1> through <line k> are the numbers of lines in which an invalid monomial name appears, with one number printed per occurrence of an invalid monomial name. If there are multiple occurrences of an invalid monomial name on a line, the line number should be printed multiple times. The line numbers should be sorted from smallest to largest.
• Attempted evaluation of undeclared polynomial – Semantic Error Code 3. If there is no polynomial declaration whose polynomial name is the same as a polynomial name used in a polynomial evaluation, then we have an attempted evaluation of undeclared polynomial error. In this case, the output should be of the form
Semantic Error Code 3: <line 1> <line 2> ... <line k>
where <line 1> through <line k> are the numbers of each of the lines in which a polynomial_name appears in a polynomial_evaluation but for which there is no polynomial_declaration with the same name. The line numbers should be listed from smallest to largest. For example, if the input is:
1: TASKS
2: 1 3 4
3: POLY
4: F1 = x^2 + 1;
5: F2 = x^2 + 1;
6: F3 = x^2 + 1;
7: F4 = x^2 + 1;
8: G1 = x^2 + 1;
9: F5 = x^2 + 1;
10: G2(X,Y) = X Y^2 + X Y;
11: EXECUTE
12: INPUT Z;
13: INPUT W;
14: X = G(Z);
15: Y = G2(Z,W);
16: X = F(Z);
17: Y = G2(Z,W);
18: INPUTS
19: 1 2 3 18 19
then the output should be
Semantic Error Code 3: 14 16
because on line 14 there is an evaluation of polynomial G but there is no declaration for polynomial G, and on line 16 there is an evaluation of polynomial F but there is no declaration of polynomial F.
• Wrong number of arguments – Semantic Error Code 4. If the number of arguments in a polynomial evaluation is different from the number of parameters in the polynomial declaration, then we say that we have a wrong number of arguments error and the output should be of the form:
Semantic Error Code 4: <line 1> <line 2> ... <line k>
where <line 1> through <line k> are the numbers of each of the lines in which a polynomial_name appears in a polynomial_evaluation but the number of arguments in the polynomial evaluation is different from the number of parameters in the corresponding polynomial declaration. The line numbers should be listed from smallest to largest. For example, if the input is:
1: TASKS
2: 1 3 4
3: POLY
4: F1 = x^2 + 1;
5: F2 = x^2 + 1;
6: F3 = x^2 + 1;
7: F4 = x^2 + 1;
8: G1 = x^2 + 1;
9: F5 = x^2 + 1;
10: G2(X,Y) = X Y^2 + X Y;
11: EXECUTE
12: INPUT Z;
13: INPUT W;
14: X = G2(X,Y, Z);
15: Y = G2(Z,W);
16: X = F1(Z);
17: Y = F5(Z,Z);
18: Y = F5(Z,Z,W);
19: INPUTS
20: 1 2 3 18 19
then the output should be
Semantic Error Code 4: 14 17 18
You can assume that an input program will have only one kind of semantic error. So, for example, if a test case has Semantic Error Code 2, it will not have any other kind of semantic errors.
5 Task 2 – Program Output
For task 2, your program should output the results of all the polynomial evaluations in the program. In this section I give a precise definition of the meaning of the input and the output that your compiler should generate. In a separate document that I will upload a little later, I will give an implementation guide that will help you plan your solution. You do not need to wait for the implementation guide to write the parser!
5.1 Variables and Locations
The program uses names to refer to variables in the EXECUTE section. For each variable name, we associate a unique location that will hold the value of the variable. This association between a variable name and its location is assumed to be implemented with a function location that takes a string as input and returns an integer value. We assume that there is a variable mem, which is an array with each entry corresponding to one variable. All variables should be initialized to 0 (zero).
To allocate mem entries to variables, you can have a simple table or map (which I will call the location table) that associates a variable name with a location. As your parser parses the input program, if it encounters a variable name in an input_statement, it needs to determine if this name has been previously encountered or not by looking it up in the location table. If the name is a new variable name, a new location needs to be associated with it, and the mapping from the variable name to the location needs to be added to the location table. To associate a location with a variable, you can simply keep a counter that tells you how many locations have been used (associated with variable names). Initially, the counter is 0. The first variable will have location 0 associated with it (it will be stored in mem[0]), and the counter is incremented to become 1. The next variable will have location 1 associated with it (it will be stored in mem[1]), and the counter is incremented to become 2, and so on. For example, if the input program is
1: TASKS
2: 1 2
3: POLY
4: F1 = x^2 + 1;
5: F2(x,y,z) = x^2 + y + z + 1;
6: F3(y) = y^2 + 1;
7: F4(x,y) = x^2 + y^2;
8: G1 = x^2 + 1;
9: F5 = x^2 + 1;
10: G2(X,Y,Z,W) = X Y^2 + X Z + W + 1;
11: EXECUTE
12: INPUT X;
13: INPUT Z;
14: Y = F1(Z);
15: W = F2(X,Z,Z);
16: OUTPUT W;
17: OUTPUT Y;
18: INPUT X;
19: INPUT Y;
20: INPUT Z;
21: Y = F3(X);
22: W = F4(X,Y);
23: OUTPUT W;
24: OUTPUT Y;
25: INPUT X;
26: INPUT Z;
27: INPUT W;
28: W = G2(X,Z,W,
29: Z);
30: INPUTS
31: 1 2 3 18 19 22 33 12 11 16
then the locations of the variables will be
X 0
Z 1
Y 2
W 3
5.2 Statements
We explain the semantics of the four kinds of statements in the program.
5.2.1 Input statements
Input statements get their input from the sequence of inputs. We refer to the i'th value that appears in the inputs as the i'th input.
The i'th input statement in the program, of the form INPUT X, is equivalent to:
mem[location("X")] = i'th input
5.2.2 Output statements
Output statements have the form OUTPUT ID, where the lexeme of the token ID is a variable name. This is the output variable of the output statement. Output statements print the values of their output variables. If the output statement has the form OUTPUT X;, its effect is equivalent to:
cout << mem[location("X")] << endl;
A command of the form
$ ./a.out < input_data.txt > output_file.txt
will read standard input from input_data.txt and produce standard output to output_file.txt. Now that we know how to use standard IO redirection, we are ready to test the program with test cases. (Programs also have access to another standard stream, called standard error, e.g. std::cerr in C++. Any such output is still displayed on the terminal screen. It is possible to redirect standard error to a file as well, but we will not discuss that here.)
Test Cases
For a given input to your program, there is an expected output, which is the correct output that should be produced for the given input. So, a test case is represented by two files:
• test_name.txt
• test_name.txt.expected
The input is given in test_name.txt and the expected output is given in test_name.txt.expected. To test a program against a single test case, first we execute the program with the test input data:
$ ./a.out < test_name.txt > program_output.txt
With this command, the output generated by the program will be stored in program_output.txt. To see if the program generated the correct expected output, we need to compare program_output.txt and test_name.txt.expected. We do that using the diff command, which is a command to determine differences between two files:
$ diff -Bw program_output.txt test_name.txt.expected
If the two files are the same, there should be no difference between them. The options -Bw tell diff to ignore whitespace differences between the two files.
If the files are the same (ignoring whitespace differences), we should see no output from diff; otherwise, diff will produce a report showing the differences between the two files. We consider that the test passed if diff could not find any differences; otherwise, we consider that the test failed. Our grading system uses this method to test your submissions against multiple test cases. In order to avoid having to type the commands shown above for running and comparing outputs for each test case manually, we provide you with a script that automates this process. The script name is test1.sh. test1.sh will make your life easier by allowing you to test your code against multiple test cases with one command. Here is how to use test1.sh to test your program:
• Store the provided test cases zip file in the same directory as your project source files
• Open a terminal window and navigate to your project directory
• Unzip the test archive using the unzip command:
$ unzip tests.zip
This will create a directory called tests
• Store the test1.sh script in your project directory as well
• Make the script executable:
$ chmod +x test1.sh
• Compile your program. The test script assumes your executable is called a.out
• Run the script to test your code:
$ ./test1.sh
The output of the script should be self-explanatory. To test your code after you make changes, you will just perform the last two steps (compile and run test1.sh).