COURSEWORK OVERVIEW

Module: BENV0085: Engineered Environmental Elements
Coursework: Portfolio: Performance Analysis of Building Systems Components
Weighting: 100%
Coursework Issued: 6 November 2024
Submission Deadline: 11:00am, 8 January 2025
Word Limit: 3000 words
Page Limit: 20 pages
File format: Submit your report online as a single Word or PDF file. Use the provided cover page and make sure not to include your name anywhere (including in the file name) - use your candidate code instead. In addition, submit separately (as a single zip file) all models you created and key output files (as required in the brief below). In preparing your report, make sure you use the cover page provided and check your work for plagiarism. Name all your files according to the convention: [Module code]__[Your UCL candidate code]__[date]
Submission instructions: ALL students are to submit a complete electronic copy of the coursework through Moodle (using the Turnitin system). This is the copy that will be used to assess the work, so it is the FINAL copy.

Coursework Aims (Learning Outcomes)
Upon successful completion of the coursework, students should be able to:
• Explain and analyse a wide range of fundamental building systems components, their performance and inter-relationships.
• Demonstrate hands-on experience with component-level simulation tools and pre-developed libraries of building systems component models.
• Plan building systems component performance analyses, assemble appropriate component models, conduct the analysis and critically examine the model's outputs.
• Communicate project results to audiences of varied backgrounds.

Overall Brief
As part of their everyday activities, digital engineers and energy analysts have to address a variety of engineering problems concerning the design and performance of energy systems.
These problems typically require performing calculations and communicating recommendations/decisions to clients. This coursework collects a set of situational examples that you might encounter in your professional life. You need to understand the requirements (as stated in the coursework brief), develop your models and/or use property tables or software, use your knowledge and the relevant tools to perform engineering calculations, and summarise your results in a form suitable for external non-specialist audiences (clients). You will be involved in three projects:

Project 1: Analysis of a vapour compression system
A refrigeration system running on an outdated refrigerant is to be improved by replacing that refrigerant with new, environmentally friendly refrigerants and by changing the system design (replacing the initial system with a cascade system). The main goal is to evaluate the performance of the modernised systems and to compare it with that of the initial system.

Project 2: Air conditioning facility in Barcelona, Spain
An initially designed air-conditioning system has limited control over the supply air temperature. The proposed improvements include two ways of controlling the supply air temperature. The main goal is to assess the energy performance of the key system elements as well as the behaviour of the most important system parameters for varied system configurations.

Project 3: Offsetting electric energy demand using renewable energy sources
Energy demand can, in some cases, be fully or partially covered by energy generated by renewables. Bringing supply and demand closer to each other can result in a reduction of CO2 emissions. A solar photovoltaic (PV) system is one of the renewable technologies that can be utilised to achieve this.
The main goal of this task is to analyse the impact of the main PV system parameters (such as size and orientation) on the PV plant output and to evaluate the potential for fully or partially offsetting the electricity demand created by the air-conditioning system described in Project 2.

Details on what is required for each project can be found in the corresponding sections below.

Resources needed to complete this coursework
The weather files and supplementary information are available on the Moodle page of the module. The main software packages you will be using for the tasks are:
• CoolProp (using the Python binding)
• Modelica/Dymola
• Selected components from the Modelica Buildings library (version 10.0.0)
Details on where to obtain, download and install each of these tools were provided during induction week. You are welcome to use alternative tools (or even tables), but the tools above are recommended, as much of the prerequisite knowledge has been covered in the practical sessions that followed each lecture. You are also welcome to use (with appropriate referencing) external sources for additional information/data required to make a solid recommendation. If you have any questions or remarks on any of the tasks, you are encouraged to post on the Question & Answer Forum on the module Moodle page.

Project 1: Analysis of a vapour compression system
Statement
A frozen food processing factory located in Barcelona, Spain, with a peak ambient temperature of 35 °C, keeps foods stored in a cold room at −15 °C; the cooling load (refrigeration effect) at peak capacity is 20 kW. Currently, the factory employs a single-stage vapour compression system with R236fa as the refrigerant. The company has decided to upgrade the system to one with a more environmentally friendly refrigerant. A consultant (you) has been appointed to redesign the system.
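Before building property-based models, a reversed-Carnot upper bound on the COP is a useful sanity check on later CoolProp results. A minimal pure-Python sketch (not a required deliverable), using the temperature offsets given in the hints (evaporation 10 °C below the −15 °C cold room, condensation 15 °C above the 35 °C ambient):

```python
def carnot_cop(t_evap_k: float, t_cond_k: float) -> float:
    """Upper-bound (reversed Carnot) COP of a refrigeration cycle."""
    return t_evap_k / (t_cond_k - t_evap_k)

# Evaporation at -25 C, condensation at 50 C (per the hints)
t_evap = 273.15 - 25.0   # K
t_cond = 273.15 + 50.0   # K
print(f"Carnot COP bound: {carnot_cop(t_evap, t_cond):.2f}")  # ~3.31
```

The ideal-cycle COPs computed from refrigerant property data (e.g. via CoolProp's PropsSI enthalpies) must come out below this bound; a value above it signals a modelling mistake.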
The refrigerants available on the local market that cover the operating temperature range are R40 and R1233zd(E). The proposal is to evaluate two possible system upgrade scenarios:

1. Replace R236fa in a single-stage vapour compression system (Figure 1) with:
(a) R40, or
(b) R1233zd(E)

Figure 1: Single-stage vapour compression system
Figure 2: Cascade vapour compression system

2. Use a cascade system (Figure 2) with:
(a) R40 in both the lower temperature cycle and the higher temperature cycle,
(b) R1233zd(E) in both the lower temperature cycle and the higher temperature cycle,
(c) R40 in the lower temperature cycle and R1233zd(E) in the higher temperature cycle, or
(d) R1233zd(E) in the lower temperature cycle and R40 in the higher temperature cycle.

The client has appointed you to provide answers to the following:
1. Which solution from the first upgrade scenario would you propose based on the COP of the system at peak capacity? How do the proposed systems perform (in terms of COP) compared to the original R236fa system?
2. Which solution from the second upgrade scenario would you propose based on the optimal COP of the system at peak capacity? What is the optimal intermediate temperature in the proposed cascade system (Tc_low ≈ Te_high)? Plot the proposed cascade system performance (COP vs intermediate temperature). How do the proposed systems perform (in terms of COP) compared to the original R236fa system?
3. A detailed description of a cascade vapour compression system.

Hints:
• Use the ideal refrigeration cycle for this analysis.
• The temperature of the refrigerant in the evaporator (evaporation temperature) is 10 °C cooler than the cold room temperature.
• The temperature of the refrigerant in the condenser (condensation temperature) is 15 °C higher than the ambient temperature.
• Heat rejection from the lower cycle to the higher cycle takes place in an adiabatic counter-flow heat exchanger where the phase change in both streams happens at about the same temperature (Tc_low ≈ Te_high). (In practice, the working fluid of the lower cycle is at a higher temperature in the heat exchanger for effective heat transfer.)

Project 2: Air-conditioning facility
Statement
An air-conditioning facility installed in Barcelona, Spain, shown in Figure 3, is composed of an air-cooled chiller, a chilled water pump, a fan and a heat exchanger. The initial design has the following characteristics (Task2a in the Modelica .mo input file provided with the coursework brief):
• Air-cooled liquid chiller with R134a refrigerant and a nominal capacity of 743.7 kW.
• Constant-speed chilled water pump which keeps a constant chilled water volume flow rate in the chilled water loop of 115 m3/h (0.032 m3/s, or a mass flow rate of 32 kg/s).
• Constant-speed air fan which keeps a constant outdoor air volume flow rate of 48 000 m3/h (13.33 m3/s, or a mass flow rate of 16 kg/s).
• The chilled water temperature leaving the chiller (and entering the cooling coil, water side), tch_out, is controlled to a fixed setpoint of 7 °C.

The system as designed operates with a variable temperature of the air delivered to the conditioned space (and leaving the cooling coil, air side), tair_out, which fluctuates between 8 °C and 10 °C during most of the warm season. Due to persistent complaints from occupants of the conditioned space about uncomfortable conditions caused by cold air draught, a consultant (you) has been appointed to modify the system to operate with a fixed temperature of the air delivered to the conditioned space, tair_out. The desired fixed setpoint is 13 °C. You decide to analyse two possible system modification scenarios:

Figure 3: Air-conditioned system (initial design)
Figure 4: Air-conditioned system (with the cooling coil bypass)

1.
To introduce a three-port valve and a bypass in the chilled water loop (Figure 4), which are used to modulate the chilled water flow rate through the cooling coil to keep the air temperature constant at 13 °C. The amount of water flowing through the chiller (and chilled water pump) remains as initially designed (constant). The air side has no changes except the installation of an air temperature sensor in the air stream path after the cooling coil (Task2b in the Modelica .mo input file provided with the coursework brief).
2. To keep the water side intact and to control the temperature of the air delivered to the conditioned space (at 13 °C) by modulating the airflow rate via a variable speed drive (VSD) which controls the fan motor speed (Figure 5) (Task2c in the Modelica .mo input file provided with the coursework brief).

The chilled water temperature is controlled to a fixed setpoint (tch_out = 7 °C) in both scenarios. The conclusions are to be provided based on the analysis conducted for a particularly warm summer week, week 35 (the simulation interval start time is day 239 and the stop time is day 246; the external weather conditions are provided in the .mos file supplied with this coursework brief).

You are asked to prepare a report with the following information (Note: you may want to focus on a subset while providing more in-depth analyses):
• Mandatory: Choose one of the three provided systems (initial design + two system modification scenarios) and determine the heat and moisture removal rates from the air passing across the cooling coil at noon of the first day of the simulation period (timestamp: 239.5 d). Assume that the moisture in the air that condenses during the process is removed at the temperature of the air leaving the cooling coil, and assume that the air pressure is 1 atm.
Figure 5: Air-conditioned system (with the VSD)

Use Dymola to obtain the necessary air properties such as air mass flow rate, temperature, specific humidity and/or enthalpy of the air entering/leaving the cooling coil. Conduct calculations in Python using CoolProp. Sketch the process on the psychrometric diagram.
• Chiller, pump and fan energy consumption for the analysed week, with a discussion of the differences (and their causes) among the analysed scenarios.
• Chiller COP in all three scenarios (and the reason for any difference).
• Heat exchanged between the water stream and the air stream.
• The temperature of the air delivered to the conditioned space, both uncontrolled and controlled (temperature oscillations as a function of the control mechanism).
• For the second modification scenario, air flow rates compared to constant-flow operation.
• The temperature of the water leaving the cooling coil in all three scenarios.
• For the first modification scenario, water flow rates through the cooling coil compared to operation without the bypass.
Hint: Use 0.001 for the integration tolerance within the simulation setup to decrease simulation time.

Project 3: Offsetting electric energy demand using renewable energy sources
Statement
As part of the previous project, there is a discussion about offsetting electricity requirements using renewable energy sources. You are tasked with exploring the opportunity of installing a solar photovoltaic (PV) plant to support and partially offset the electricity requirements of the air-conditioning facility. Analyse the coupling of the air-conditioning facility with the PV plant with respect to the PV net surface area, the PV tilt angle, and the PV azimuth angle.
Hint: Compare the PV power output with the fan/pump/chiller power inputs.
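As a first-order sizing aid for Project 3, PV output is often estimated as P = G · A · η · PR, where G is the plane-of-array irradiance, A the net panel area, η the module efficiency and PR a performance ratio lumping wiring, temperature and inverter losses. A minimal pure-Python sketch (the efficiency and performance-ratio values here are illustrative assumptions, not coursework data):

```python
def pv_power_w(irradiance_w_m2: float, area_m2: float,
               efficiency: float = 0.20, performance_ratio: float = 0.8) -> float:
    """First-order PV output estimate: P = G * A * eta * PR."""
    return irradiance_w_m2 * area_m2 * efficiency * performance_ratio

# Example: 800 W/m2 plane-of-array irradiance on 500 m2 of panels
print(pv_power_w(800, 500))  # ~64 kW
```

Tilt and azimuth enter through G: the oriented PV components in the Modelica Buildings library derive the plane-of-array irradiance from the weather file and the surface orientation parameters, which is what makes the sensitivity study in this task possible.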
ECON 385 Intermediate Macroeconomic Theory II, Fall 2024
Midterm Exam

1. (15 points) Consider the DMP model. A weakening of labor unions leads to a reduction in the worker's bargaining power, a; at the same time, the cost of posting a vacancy, k, goes down.
(a) (5 points) Analyze the changes using the relevant diagrams, clearly labeling the axes and showing all the relevant values on the x-axis and y-axis before and after the changes in the worker's bargaining power and the cost of posting a vacancy.
(b) (10 points) Discuss the economics behind the possible effects of the changes on (i) labor market tightness; (ii) the unemployment rate; (iii) the labor force; (iv) the vacancy rate; (v) aggregate output; and (vi) the wage. (Full marks will be given only to answers that comment on the changes; simply stating the direction of a change is not enough.)

2. (20 points) Provide short answers to each question below.
(a) (5 points) Does the production function Y = K^(1/5) L^(4/5) − AKL exhibit increasing, constant, or decreasing returns to scale in K and L? Assume A is a fixed positive number. (Note: show your work; simply guessing correctly will not yield any points.)
(b) (5 points) Conjecture that technology levels are the same across countries, so that differences in GDP per capita are the result of differences in capital per capita. Let the ratio of capital per worker in the UK relative to the U.S. value equal 0.832. Assume the production function for the U.S. and the UK is Cobb-Douglas and constant-returns-to-scale in capital and labor, with the share of capital costs in total income equal to 1/3. What would be the predicted ratio of real wages in the UK relative to the U.S. value?
(c) (5 points) Assume a Solow economy with no technological progress. The production function is Cobb-Douglas. The share of capital income in total income equals 50%. Population growth equals 2% and the depreciation rate equals 3%. The savings rate in the economy is 10%.
What is the ratio of output per capita in the steady state of the economy with the savings rate of 10% relative to the value of output per capita in the golden-rule steady state?
(d) (5 points) Consider a Solow economy with population growth and technological progress. Prove that in the steady state (balanced growth path) the capital-output ratio is constant in this economy.

3. (15 points) Consider an economy experiencing a reduction in the aggregate capital stock from K0 to K1 < K0 at some point in time t0 (due to, e.g., a war). Assuming the economy, with technological progress at a rate of 3% and population growth of 3%, starts in its initial steady state, use the Solow model to explain in words what happens to the economy over time and in the very long run. Support your answer with three diagrams: (1) the Solow diagram that outlines the changes; (2) output per capita against time using a ratio scale; and (3) total output against time using a ratio scale. Assume that the growth rate of population stays constant over time at a rate of 3%.
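The steady-state algebra behind question 2(c) can be cross-checked numerically: with y = k^α and no technological progress, steady-state output per capita is y* = (s/(n+δ))^(α/(1−α)), and the golden-rule savings rate under Cobb-Douglas is s = α, so the (n+δ) terms cancel in the ratio. A short sketch (an illustration of the formula, not a model answer):

```python
def solow_y_star(s: float, n: float, delta: float, alpha: float) -> float:
    """Steady-state output per capita for y = k^alpha, no technological progress."""
    return (s / (n + delta)) ** (alpha / (1 - alpha))

alpha, n, delta = 0.5, 0.02, 0.03
ratio = solow_y_star(0.10, n, delta, alpha) / solow_y_star(alpha, n, delta, alpha)
print(ratio)  # ≈ 0.2
```

With α = 0.5 the exponent α/(1−α) equals 1, so the ratio collapses to s/s_gold = 0.10/0.50.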
Neural Network-Based High-Frequency Trading System
Detailed Implementation Plan and Technical Specification

1. Executive Summary
This project aims to develop a sophisticated high-frequency trading (HFT) system powered by neural networks, capable of processing fine-grained market data to execute rapid trading decisions. The system will leverage modern deep learning techniques while meeting the low-latency and high-reliability requirements essential for HFT operations.

2. Technical Stack and Resources

2.1 Programming Languages
- **Primary**: Python 3.11+ (for the ML pipeline and backtesting)
- **Secondary**: C++ (for low-latency components)
- **Auxiliary**: Bash/Shell (for automation and deployment scripts)

2.2 Machine Learning Frameworks
- **PyTorch**: Primary deep learning framework
- **NumPy**: Numerical computations and data manipulation
- **pandas**: Time series data handling and analysis
- **scikit-learn**: Feature preprocessing and traditional ML algorithms
- **Ray**: Distributed computing and model training

2.3 Data Processing and Storage
- **Apache Kafka**: Real-time data streaming
- **InfluxDB**: Time-series database for market data storage
- **Redis**: In-memory cache for real-time data access
- **PostgreSQL**: Persistent storage for trade records and system state

2.4 Infrastructure and Deployment
- **AWS**: Primary cloud platform
  - EC2 instances with GPU support (p3.2xlarge or similar)
  - S3 for data storage
  - CloudWatch for monitoring
- **Docker**: Containerization
- **Kubernetes**: Container orchestration
- **Jenkins**: CI/CD pipeline

2.5 Market Data Resources
- **Primary**: IEX Cloud API
- **Secondary**: Alpha Vantage API
- **Backup**: Yahoo Finance API
- **Historical Data**: Kaggle Financial Datasets

3. Potential Challenges and Mitigations

3.1 Technical Challenges
1.
**Latency Requirements**
   - *Challenge*: Achieving microsecond-level response times
   - *Mitigation*:
     - Implement critical paths in C++
     - Use FPGA acceleration for signal processing
     - Optimize network routes using co-location services
2. **Data Quality and Consistency**
   - *Challenge*: Ensuring clean, reliable market data
   - *Mitigation*:
     - Implement redundant data sources
     - Develop a robust data validation pipeline
     - Create automated anomaly detection
3. **Model Stability**
   - *Challenge*: Maintaining consistent performance across market conditions
   - *Mitigation*:
     - Implement ensemble methods
     - Use adaptive learning rates
     - Retrain models regularly

3.2 Operational Challenges
1. **Regulatory Compliance**
   - *Challenge*: Meeting SEC and exchange requirements
   - *Mitigation*:
     - Regular consultation with the legal team
     - Implement compliance checking in trading logic
     - Maintain detailed audit logs
2. **System Reliability**
   - *Challenge*: Ensuring 24/7 operation
   - *Mitigation*:
     - Implement redundant systems
     - Automated failover mechanisms
     - Comprehensive monitoring and alerting

4.
Detailed Team Assignments

4.1 Team Structure
- 2 ML Engineers (ML1, ML2) (boyangs3, zh44)
- 2 Backend Engineers (BE1, BE2) (boyangs3, qi14)
- 1 Infrastructure Engineer (IE) (boyangs3)
- 1 Data Scientist (DS) (tzhao33)

4.2 Sprint Schedule (6-week timeline)

**Weeks 1-2: Setup and Infrastructure**
*ML1*:
- Days 1-3: Set up development environment
- Days 4-7: Design initial neural network architecture
- Days 8-11: Implement basic model training pipeline
*ML2*:
- Days 1-3: Research and select appropriate ML frameworks
- Days 4-7: Design feature engineering pipeline
- Days 8-11: Implement data preprocessing modules
*BE1*:
- Days 1-3: Set up version control and CI/CD
- Days 4-7: Design database schema
- Days 8-11: Implement basic API structure
*BE2*:
- Days 1-3: Configure development environments
- Days 4-7: Set up market data feeds
- Days 8-11: Implement data collection pipeline
*IE*:
- Days 1-3: Configure AWS infrastructure
- Days 4-7: Set up Kubernetes clusters
- Days 8-11: Implement monitoring systems
*DS*:
- Days 1-3: Define data requirements
- Days 4-7: Design feature set
- Days 8-11: Create initial data analysis pipeline

**Weeks 3-4: Core Development**
*ML1*:
- Days 1-4: Implement LSTM model
- Days 5-8: Develop attention mechanism
- Days 9-12: Begin model training and validation
*ML2*:
- Days 1-4: Implement data augmentation
- Days 5-8: Develop feature selection
- Days 9-12: Create model evaluation metrics
*BE1*:
- Days 1-4: Implement order management system
- Days 5-8: Develop risk management module
- Days 9-12: Create trading execution engine
*BE2*:
- Days 1-4: Implement real-time data processing
- Days 5-8: Develop market impact analysis
- Days 9-12: Create order routing system
*IE*:
- Days 1-4: Optimize network infrastructure
- Days 5-8: Implement auto-scaling
- Days 9-12: Set up backup systems
*DS*:
- Days 1-4: Develop backtesting framework
- Days 5-8: Create performance metrics
- Days 9-12: Implement strategy validation

**Weeks 5-6: Integration and Testing**
5. Success Metrics and Deliverables

5.1 Technical Metrics
- System latency < 100 microseconds
- 99.99% uptime
- Model prediction accuracy > 60%
- Sharpe ratio > 2.0 in backtesting

5.2 Project Deliverables
- Fully functional HFT system
- Comprehensive documentation
- Test suite with > 90% coverage
- Performance analysis report
- Deployment and maintenance guides

6. Budget and Resource Requirements

6.1 Computing Resources
- 4 GPU-enabled EC2 instances
- 2 high-memory instances for data processing
- Storage: 5 TB S3, 1 TB EBS
- Network: 10 Gbps dedicated connection

6.2 Data Resources
- Market data feed subscriptions
- Historical data sets
- Development tools and licenses

6.3 Human Resources
- 4 full-time team members
- Part-time legal consultant
- DevOps support

7. Conclusion
This project represents a significant technical challenge requiring careful coordination of multiple specialized team members. Success will depend on maintaining strict timelines, clear communication, and rigorous testing throughout the development process. Regular reviews and adjustments to the plan will be necessary to ensure all objectives are met within the specified timeframe.
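To make the feature-engineering work concrete (ML2/DS, Weeks 1-2), here is a minimal sketch of the kind of transforms the pipeline would apply to a mid-price series: log returns, a trailing volatility estimate, and a z-scored deviation signal. All names are illustrative; it uses only the standard library, whereas the production version would be vectorized with NumPy/pandas and fed from the Kafka stream.

```python
import math
from statistics import fmean, pstdev

def log_returns(prices):
    """r_t = ln(p_t / p_{t-1}) for a sequence of mid-prices."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def rolling_vol(returns, window):
    """Population std-dev of returns over each trailing window."""
    return [pstdev(returns[i - window:i]) for i in range(window, len(returns) + 1)]

def zscore(x, window):
    """Deviation of the latest value from its trailing-window mean, in std units."""
    ref = x[-window:]
    sd = pstdev(ref)
    return 0.0 if sd == 0 else (x[-1] - fmean(ref)) / sd

prices = [100.0, 100.2, 100.1, 100.4, 100.3, 100.6]
rets = log_returns(prices)          # 5 returns from 6 prices
vol = rolling_vol(rets, window=3)   # 3 trailing-volatility points
signal = zscore(prices, window=4)   # latest price vs its 4-tick history
```

Keeping each feature a pure function of a price window makes the same code reusable in backtesting (DS) and in the real-time path (BE2), which helps avoid train/serve skew.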
Systems and Networks COMPSCI 4043
December 2022

1. (a) Express the following in 16-bit two's complement representation, giving your answers in hexadecimal. Show your working.
   i. 1000
   ii. -2000 [4]
(b) Recall that a sign-and-magnitude code represents integer values by using the most significant bit to indicate the sign, and the remaining bits to hold the magnitude (or size) of the number. Thus $8002 is -2, while $0002 is +2, and so on. Write a commented Sigma16 program to convert a variable, x, containing a 16-bit integer value in sign-and-magnitude form, into a two's complement value stored in another variable y. Thus $8001 (the sign-and-magnitude code for -1) should turn into $FFFF (the two's complement code for -1), and so on. Assume that x will never contain the anomalous sign-and-magnitude code for "negative zero" ($8000), so you do not need to check for this. [5]
(c) Write a Sigma16 program to read a 16-bit variable x, swap the most significant and least significant bytes, and store the result in another variable y. Thus, if x contains $1234 initially, y should be $3412, etc. [4]
(d) Write a commented Sigma16 program to process an array, X, of n 16-bit signed numbers in memory. If a number is positive, double it; if it is negative, square it (multiply it by itself). [7]

For reference, here is a summary of the instruction set of the Sigma16 CPU.
Mnemonic  Syntax      Action
lea       Rd,x[Ra]    Rd := x + Ra
load      Rd,x[Ra]    Rd := mem[x + Ra]
store     Rd,x[Ra]    mem[x + Ra] := Rd
add       Rd,Ra,Rb    Rd := Ra + Rb
sub       Rd,Ra,Rb    Rd := Ra - Rb
mul       Rd,Ra,Rb    Rd := Ra * Rb
div       Rd,Ra,Rb    Rd := Ra / Rb, R15 := Ra mod Rb
and       Rd,Ra,Rb    Rd := Ra AND Rb
inv       Rd,Ra,Rb    Rd := NOT Ra
or        Rd,Ra,Rb    Rd := Ra OR Rb
xor       Rd,Ra,Rb    Rd := Ra XOR Rb
cmplt     Rd,Ra,Rb    Rd := Ra < Rb
cmpeq     Rd,Ra,Rb    Rd := Ra = Rb
cmpgt     Rd,Ra,Rb    Rd := Ra > Rb
shiftl    Rd,Ra,Rb    Rd := Ra logically shifted left Rb places
shiftr    Rd,Ra,Rb    Rd := Ra logically shifted right Rb places
jumpf     Rd,x[Ra]    If Rd = 0 then PC := x + Ra
jumpt     Rd,x[Ra]    If Rd ≠ 0 then PC := x + Ra
jal       Rd,x[Ra]    Rd := PC, PC := x + Ra
trap      Rd,Ra,Rb    PC := interrupt handler
jump      x[Ra]       PC := x + Ra

2. (a) Taking the Sigma16 instruction STORE R4,100[R1] as an example, describe how this would appear as machine code in memory and outline the steps involved in fetching and executing it. [6]
(b) The following Sigma16 code is intended to take a 10-element array of two's complement numbers (only the first DATA element is shown) and replace every odd element with 1 and every even element with 0. However, although the code will assemble, it contains several logical errors (as opposed to syntax errors or inefficiencies, which you may ignore).
   i. Draw up a register use table for the program (suitable for inclusion as a comment).
   ii. Identify the errors and explain how you would correct them.
   iii. Write out the corrected program.

      LEA   R1,1[R0]     ; Set R1 to constant 1
      LOAD  R2,0[R0]     ; i := 0
      ADD   R5,R1,R1     ; R5 := 2
      LOAD  R3,n[R0]     ; Set R3 to n
FOR   CMPEQ R14,R3,R2    ; Is i = n?
      JUMPF R14,OUT[R0]  ; if yes, exit
      LOAD  R4,X[R2]     ; load X[i]
      DIV   R6,R4,R5     ; R6 = X[i] mod 2
      ADD   R2,R2,R1     ; i := i+1
      STORE R6,X[R2]     ; X[i] = X[i] mod 2
      JUMP  FOR[R0]      ; loop
OUT   TRAP  R0,R0,R0     ;
; Data Area
n     DATA  10
X     DATA  -8
      DATA  ...
[6]
(c) Estimate how many memory cycles the corrected program would take to run. [4]
(d) For the corrected program, estimate the advantage a system with a cache memory would gain if a primary memory cycle took 10 ns and a cache cycle 1 ns. [4]

3.
(a) Processors that support multitasking operating systems (unlike Sigma16) generally have privileged instructions in their instruction sets. Explain why the notion of "privilege" is necessary and give two examples of what such instructions might do. [5]
(b) Explain why analogue transmission of data is more vulnerable to noise than digital. If bit errors do occur in digital transmission, briefly describe one way they might be detected. [5]
(c) UDP is often used to transmit real-time audio (e.g. in VoIP applications). Explain why UDP might be considered more suitable than TCP for this purpose. [5]
(d) Suppose a TCP sender is transmitting at 1.5 Gbytes/sec (1.5 × 10^9 bytes/sec) with TCP segments that are approximately 1500 bytes long. Each segment has a 32-bit (unsigned) sequence number in its header that is incremented by 1 at the sender each time a new segment is transmitted. If the first segment sent has number 0, how long before the numbers run out? Discuss briefly what would happen then. [5]
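The arithmetic in question 3(d) reduces to dividing the 2^32 available sequence numbers by the segment rate. A quick sketch of that calculation (an illustration of the setup as stated in the question, not a model answer):

```python
RATE_BYTES_PER_S = 1.5e9      # sender throughput
SEGMENT_BYTES = 1500          # approximate segment size
SEQ_SPACE = 2 ** 32           # 32-bit unsigned sequence numbers

segments_per_s = RATE_BYTES_PER_S / SEGMENT_BYTES   # 1e6 segments/s
seconds_to_wrap = SEQ_SPACE / segments_per_s
print(f"{seconds_to_wrap:.0f} s (~{seconds_to_wrap / 60:.1f} minutes)")  # ≈ 4295 s
```

When the numbers run out the counter wraps around modulo 2^32, which is the behaviour the discussion part of the question is probing.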
Lecture 7: Advanced Data Analysis in Excel using the ToolPak
ECON10151: Computing for Social Scientists
November 10, 2024

In previous lectures, we covered essential Excel tools for organising and summarising data. We manually calculated measures like the mean, variance, and standard deviation, and used Pivot Tables for flexible data summaries. Today, we'll take this further by introducing a more efficient method: the Excel Analysis ToolPak. The ToolPak is an add-in that performs statistical calculations quickly and accurately, allowing you to run analyses with just a few clicks. This is especially valuable for larger datasets, saving time and reducing errors. The ToolPak provides tools for advanced data analysis, including statistical tests, visualisations, and regression, all within Excel.

We'll start by using the ToolPak to analyse student performance data from a sample dataset with the scores of 72 students across different subjects. The dataset includes:
• Student ID
• Gender
• Math score
• Writing score

1 Installation Guide: Setting Up the Data Analysis ToolPak
To get started, let's make sure the ToolPak is enabled in Excel. Follow the steps below for your specific operating system.

1.1 Mac
To enable the Analysis ToolPak on a Mac:
1. Open the Tools menu.
2. Select Excel Add-Ins.
3. Tick the Analysis ToolPak checkbox and click OK.
4. If the Analysis ToolPak is not listed, click Browse to find it, or select Yes if prompted to install it.
5. Once installed, the Data Analysis button will appear on the Data tab.

1.2 Windows
To enable the Analysis ToolPak on Windows:
1. Navigate to File > Options > Add-Ins.
2. In the Manage dropdown box, choose Excel Add-ins and click Go.
3. Tick the Analysis ToolPak checkbox and click OK.
4. If the Analysis ToolPak is not available, click Browse to locate it, or select Yes if prompted to install it.
5. Once installed, the Data Analysis option will appear in the Analysis group on the Data tab.
1.3 ToolPak Overview
Clicking on Data Analysis opens a dialog box with a variety of tools for performing data analysis using built-in mathematical formulas. Here are the key tools we will be using today:
• Descriptive Statistics
• Rank and Percentile
• Correlation
• Regression
These tools are crucial for identifying data patterns, summarising information, and supporting informed decision-making. If some of these terms are new, don't worry: we will cover each tool in detail with practical examples.

2 Descriptive Statistics
The Descriptive Statistics tool is one of the simplest yet most powerful options for summarising data. It provides a quick overview by producing essential summary measures, such as the mean, median, and variance. Let's use this tool to analyse the math score variable in our dataset. Once the Analysis ToolPak is enabled, follow these steps to generate descriptive statistics in Excel:

Step 1: Open the Data Analysis dialog box:
• Mac: Go to Data > Data Analysis.
• Windows: On the Data tab, in the Analysis group, click Data Analysis.
Step 2: Select Descriptive Statistics from the list and click OK.
Step 3: In the Descriptive Statistics dialog box, set the following options:
(a) Input Range: Select the range of data to analyse. For our example, choose the math scores in C1:C73.
(b) Grouped By: Select whether your data is organised by columns (default) or rows. For this dataset, keep "Columns" selected; choose "Rows" only if your data is arranged horizontally.
(c) Labels in First Row: Tick this box if your data includes column headers in the first row.
(d) Output Range: Specify where to display the results. You can place the output in a new worksheet to keep things organised, or select a specific cell, such as H1, in the current worksheet, ensuring there is enough space for the output. Note: the output requires at least 2 columns per variable, so make sure there is adequate space.
(e) Tick Summary Statistics to generate the key measures.
(f) Click OK to produce the table.
The report will include important statistics such as:
• Central Tendency: Measures that summarise the centre of a dataset, including the mean (average value), median (middle value in sorted data), and mode (most common value).
• Variability: Statistics that show how spread out the data is, including:
– Standard Deviation and Variance: Indicate the dispersion of data points around the mean.
– Minimum and Maximum: The smallest and largest values in the dataset.
– Range: The difference between the maximum and minimum, indicating the total spread.
• Sum and Count: The total of all values and the number of data points.
Note: Kurtosis and Skewness are also shown, to indicate the shape of the data distribution; the Standard Error shows how much the average value (mean) might vary if we took different samples, i.e. how precise the mean is. These details are not crucial for this course.

2.1 Analysis Limitations: Text Data
Attempting to generate a descriptive statistics table for the Gender variable will result in an error message: "Descriptive Statistics - Input range contains non-numeric data." Text data can be difficult to analyse quantitatively, so it often needs to be recoded into a numerical format:
• Binary text data (e.g., yes/no, true/false) can be converted to 0s and 1s for easier analysis.
• Ordered categories can sometimes be mapped to integers. Examples include:
– Freshman, Sophomore, Junior, Senior
– Strongly Agree, Agree, Disagree, Strongly Disagree
• Some text data may not translate meaningfully into numbers. For instance, country names cannot be easily ranked or quantified.
In our dataset, the Gender column is binary (female/male), so we can recode it as 1s and 0s using the IF function.
Recall that this function performs a conditional test, returning one value if the condition is TRUE and another if it is FALSE:

=IF(logical_test, value_if_true, value_if_false)

To create a binary numerical variable for Gender, enter the following formula in a new column (e.g., cell E2):

=IF(B2="female", 1, 0)

This assigns a value of 1 if the student is female (as indicated in B2) and 0 otherwise. Label column E as Gender Dummy. Once recoded, we can generate descriptive statistics for all three variables in the dataset by setting the Input Range in Step 3 above to C1:E73.

3 Rank and Percentile

The Rank and Percentile tool in the Analysis ToolPak helps quickly identify the rank of each value in a list and the corresponding percentile. The percentile indicates the percentage of data points that fall below a given number, showing the relative position of each data point within the dataset. To illustrate, let's calculate the rank and percentile for the writing score data:

Step 1: Open the Data Analysis dialog box:
• Mac: Go to Data > Data Analysis.
• Windows: On the Data tab, in the Analysis group, click Data Analysis.

Step 2: Select Rank and Percentile from the list and click OK.

Step 3: In the Rank and Percentile dialog box, set the following options:
(a) Input Range: Select the range for the writing scores (e.g., D1:D73).
(b) Grouped By: Ensure it is set to Columns.
(c) Labels in First Row: Tick this box since the first row contains column headers.
(d) Output Range: Choose where to display the results (e.g., O1), ensuring there is enough space for the output.
(e) Click OK to generate the table.

The output table includes four columns:
• Point: The position of each value in the original list, allowing you to match values to their original order.
• Writing Score: The original data values (e.g., writing scores), retaining the original label.
• Rank: The rank of each writing score, sorted in descending order, showing how each score compares within the dataset.
For example, the highest score will have a rank of 1, the next highest 2, and so on. This helps you quickly identify where each value stands in relation to the others. Note: Scores with the same value will share the same rank.
• Percent: The percentile rank indicates the percentage of data points that fall below each writing score, showing the relative standing of each score within the dataset. For instance:
  – If a writing score is in the 100th percentile, no other score in the dataset exceeds it; this score has the highest rank.
  – If a writing score is in the 50th percentile, 50% of the scores in the dataset are lower; this represents the median.

4 Correlation

Loosely speaking, correlation measures how strongly two variables are related, indicating whether they move together in a similar way:

• A positive correlation means that as one variable increases, the other tends to increase as well (or if one decreases, the other also decreases). For example, as study time goes up, test scores might also go up.
• A negative correlation means that as one variable increases, the other tends to decrease. For instance, as the number of hours spent watching Netflix increases, time spent studying might decrease.
• Correlation takes values between -1 and 1:
  – A correlation value close to ±1 indicates a strong linear relationship between the variables, meaning they move closely in sync, either in the same direction (positive correlation) or in opposite directions (negative correlation).
  – Values near 0 suggest little to no linear relationship between the variables.

(You will learn about correlation more formally in the Semester 2 Advanced Statistics course.)

To explore the relationship between variables, follow these steps:

Step 1: Open the Data Analysis dialog box:
• Mac: Go to Data > Data Analysis.
• Windows: On the Data tab, in the Analysis group, click Data Analysis.

Step 2: Select Correlation from the list and click OK.
Step 3: In the Correlation dialog box, set the following options:
(a) Input Range: Select the range for the data you want to analyse (e.g., C1:E73 for the scores and gender dummy).
(b) Grouped By: Ensure it is set to Columns.
(c) Labels in First Row: Tick this box if the first row contains column headers.
(d) Output Range: Choose where the results should be displayed (e.g., H22), ensuring there is enough space for the output.
(e) Click OK.

Excel will generate a table showing the correlation coefficients between the variables.

5 Regression

Regression analysis helps us identify trends and understand relationships between variables. Today, we'll use Excel to perform simple regression analysis on a dataset from Starbucks. The dataset includes annual advertising costs from 2000 to 2018 in column B (input variable, X) and sales revenues in column C (output variable, Y). Although many factors affect sales, we will focus on these two variables for simplicity.

Previously, we used scatter plots to visualise the relationship between variables by plotting data points. Today, we'll take this further by applying simple linear regression, which fits a line to the data to quantify the relationship and make predictions. Specifically, we'll model the relationship between advertising costs (X) and sales revenue (Y). Our objective is to learn how to use simple linear regression to predict sales revenue from advertising costs. In short, we want to answer: if we know the advertising cost, can we predict sales revenue, and how?

The linear regression model is expressed as:

Y = a + bX + e,

where:
• X: The input variable (advertising cost).
• Y: The output variable (sales revenue).
• a: The Y-intercept, or the estimated Y value when X is 0.
• b: The slope, which tells us how much Y is predicted to change for a one-unit change in X.
• a + bX: The equation used to predict Y based on X.
• e: The error term, or prediction error: the difference between actual and predicted Y values, also known as the residual.

The e term indicates that our predictions may not be perfect due to factors not included in the model. Usually, e is not zero because other variables, such as customer preferences or store locations, can influence sales revenue.

5.1 Creating a Scatter Plot with a Trendline in Excel

Let's revisit how to create a scatter plot and add a trendline to visualise a simple linear regression. Follow these steps:

Step 1: Select the data: Highlight the data range for the input (X) and output (Y) variables. For example, select the range B1:C20 to include both advertising costs and sales revenues.

Step 2: Insert the scatter plot:
• Go to the Insert tab at the top of the Excel window.
• In the Charts group, click the Scatter icon and select Scatter with only Markers.

Step 3: Add labels and titles:
• Click on the chart to activate the Chart Design tools.
• Click the Add Chart Element icon (the + sign) and add Axis Titles and a Chart Title.
• Label the x-axis "Advertising Costs (X)" and the y-axis "Sales Revenue (Y)".
• Edit the chart title to a descriptive name, such as "Relationship Between Advertising Costs and Sales Revenue".

Step 4: Add a trendline:
• Right-click on any data point in the scatter plot and select Add Trendline. Alternatively, click the Add Chart Element icon (the + sign) and choose Trendline from the dropdown menu.
• Choose Linear as the trendline type.
• Tick the Display Equation on chart box to show the regression equation.
• To make the trendline more visible, change the line colour to red and adjust the line style if desired.

Explanation: The trendline represents the best-fit straight line through the data points (in the sense of minimising prediction errors), illustrating the linear relationship between advertising costs and sales revenue.
The displayed equation (e.g., y = 1.1343x − 8.0544) is your regression line, where a = −8.0544 is the intercept and b = 1.1343 is the slope.

5.2 The Regression Tool in ToolPak

We can take this regression analysis further with the Regression tool in the Analysis ToolPak. This tool provides more detailed insights into the relationship between variables through key statistical outputs. To perform regression analysis in Excel:

Step 1: Open the Data Analysis dialog box:
• Mac: Go to Data > Data Analysis.
• Windows: On the Data tab, in the Analysis group, click Data Analysis.

Step 2: Select Regression from the list and click OK.

Step 3: In the Regression dialog box, set the following options:
(a) Input Y Range: Select the output variable, e.g., Sales Revenues (C1:C20).
(b) Input X Range: Select the input variable, e.g., Advertising Costs (B1:B20). (If there are multiple X variables, they should be in adjacent columns.)
(c) Tick the Labels box if your data has headers.
(d) Choose where to display the output, either in a new worksheet or in a specific cell (e.g., E1) in the current worksheet, ensuring there is enough space for the results.
(e) Optional settings:
  • Check Residuals to see the differences between predicted and actual values.
  • Check Line Fit Plots to visualise actual versus predicted values.
  • Check Residual Plots to visualise the residuals.
(f) Click OK to generate the regression analysis output.

Key insights from the regression output:
• The coefficients show the values of a (intercept) and b (slope), which define the regression line equation.
• The residual output includes the predicted Y values, calculated as a + bX for each X, and the residuals, e = Y − (a + bX), which are the differences between actual and predicted Y values.
• The line fit plot shows the actual data alongside the predicted Y values, similar to a scatter plot with a trendline. To customise the marker format, click any marker in the plot to open the Format Data Series pane.
Then click the Marker button, expand the Marker Options dropdown, and change the setting from Automatic to Built-in to modify the marker style.
• The residual plot visualises the residuals, indicating where the predicted line deviates from the actual data points. Since e = Y − (a + bX), positive residuals indicate under-prediction (the actual value lies above the line) and negative residuals indicate over-prediction.

Note: The output may also include standardised residuals and a normal probability plot. Standardised residuals scale the residuals by their mean and standard deviation to help check whether they are normally distributed. For this course, you do not need to know these two in detail.

Take-Home Exercise: Use the Regression tool to find the line that best predicts the Math score based on the Writing score and Gender from the student performance dataset. Hint: In Step 3 above, for the Input Y Range select the Math score (C1:C73), and for the Input X Range select both the Writing score and Gender Dummy (D1:E73). (You do not need to generate the line fit plot or the residual plot for this exercise.)
Department of Mathematics
Midterm #1, MATH-UA.0325, Fall 2024

Exercise 1. (10 pts) True or false; prove or find a counterexample.
a) Let {x_n}_{n=1}^∞ ⊂ ℝ be a bounded sequence. Then it is not possible to find a convergent subsequence {x_{n_k}}_{k=1}^∞ ⊂ {x_n}_{n=1}^∞.
b) Assume the real number x > −1. Then (1 + x)^n ≥ 1 + nx for n > 1.

Exercise 2. (10 pts) Answer the following questions:
a) Prove that if f maps E → F and A ⊂ E, B ⊂ E, then f(A ∪ B) = f(A) ∪ f(B).
b) Consider the sets V = {(x, y) : √(x² + y²) < 1} and W = {(x, y) : max{|x|, |y|} < 1}. Prove that V ⊂ W.

Exercise 3. (10 pts) Let x_n = ((1 + n²)/n²) cos(3nπ/2). Find lim inf_{n→∞} x_n and lim sup_{n→∞} x_n.

Exercise 4. (10 pts) Use the Cauchy criterion or the ratio test to determine whether or not the following sequences converge (justify your answer):
a) {x_n}_{n=1}^∞, where x_n = n! 3ⁿ / nⁿ.
b) {y_n}_{n=1}^∞, where y_n = sin 1 / 2 + sin 2 / 2² + · · · + sin n / 2ⁿ.

Exercise 5. (10 pts) Determine whether or not the sequence {x_n}_{n=1}^∞, where x_n = nⁿ/n!, converges by answering the following questions:
a) Is {x_n}_{n=1}^∞ monotone decreasing?
b) Is {x_n}_{n=1}^∞ bounded from below?
c) Compute lim_{n→∞} x_n / x_{n+1}.
In addition,
d) Is lim_{n→∞} √(x_n) = √(lim_{n→∞} x_n)? Justify your answer.
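As orientation for the style of argument expected in Exercise 1(b), the standard induction behind Bernoulli's inequality can be sketched as follows (an outline only, not a model solution):

```latex
% Claim: for real $x > -1$ and integer $n \ge 1$, $(1+x)^n \ge 1 + nx$.
% Base case $n = 1$: $(1+x)^1 = 1 + 1\cdot x$, so the inequality holds with equality.
% Inductive step: assume $(1+x)^n \ge 1 + nx$. Since $1 + x > 0$, multiplying both
% sides by $1+x$ preserves the inequality:
\[
(1+x)^{n+1} = (1+x)^n(1+x) \ge (1+nx)(1+x) = 1 + (n+1)x + nx^2 \ge 1 + (n+1)x,
\]
% where the last step uses $nx^2 \ge 0$.
```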
Submission details

Module title: Machine Vision
Module code: UFMFRR-15-M
Assessment title: Group Report on Apple Counting in Orchards
Assessment type: Report - Written Group Project Report
Assessment weighting: 50% of total module mark
Size or length of assessment: Maximum word count 5,000

Module learning outcomes assessed by this task:
1. Interpret the current key research issues in machine vision.
2. Identify requirements of an application task; formulate and constrain a machine vision problem.
3. Design and implement machine vision solutions to real-world problems and evaluate algorithm performance.
4. Explain, compare and contrast machine vision techniques including image acquisition, feature extraction and machine learning.

Use of Generative AI (GenAI) in assessment: You can use Generative AI in this assignment for checking spelling, grammar, etc.

Guidance on referencing (inc. AI): Please note that the aim of referencing is to demonstrate you have read and understood a range of sources to evidence your key points. You need to list the references consistently and in such a way as to ensure the reader can follow up on the sources for themselves. You must use the UWE Bristol Harvard referencing style.
Referencing - Study skills | UWE Bristol
Using generative AI at UWE Bristol - Study skills | UWE Bristol

Submission and feedback dates

Submission deadline: Before 14:00 on 7th January 2025
This assignment is eligible for the 48-hour late submission window but not the Reasonable Adjustment to deadline.

Submission format:
1. A 5,000-word group report as an MS Word document (compulsory). If you use LaTeX to prepare the report, please submit a PDF file instead. For any student completing this assignment as a group of one, the word count limit is reduced to 2,000.
2. A peer assessment form using the template "PeerAssessmentForm_24-25.xlsx" that can be found in Blackboard -> Assessments (compulsory). Not applicable to any student completing this assignment as a group of one.
3. Python scripts as .py files or .ipynb files, compressed into a single .zip file (compulsory). Do not upload trained models or datasets. This file will only be used if the marking team feel that additional verification needs to be carried out.

Marks and feedback due on: 4th February 2025
N.B. all times are 24-hour clock, current local time (at time of submission) in the UK.
Marks and feedback will be provided via: Blackboard

Completing your assessment

What am I required to do on this assessment?
Detection, counting, and localisation of fruits in orchards are important tasks in agricultural automation, which can assist with automated fruit picking. Amongst the various types of sensors employed to achieve this reliably in a real-world environment, visual sensors - primarily cameras - have been the most widely used. Despite the differing computer vision approaches utilised to analyse fruit images in the literature, a number of challenges have not been resolved to date, including varying illumination conditions, great variability in fruit/fruit tree appearance, fruit occlusions, and variable camera viewpoint.

The specific requirements are outlined below:
1. You are required to design, implement and evaluate algorithms for apple counting in an orchard environment.
2. You must use the template at the end of this document for report writing. Read carefully the marking criteria that have been embedded in the template.
3. You are not expected to carry out physical data capture experiments, but you are required to identify relevant and publicly available datasets from the internet, such as (but not limited to) the MinneApple dataset (Haeni, Roy and Isler, 2019), downloadable from this webpage. Haeni, Nicolai; Roy, Pravakar; Isler, Volkan. (2019). MinneApple: A Benchmark Dataset for Apple Detection and Segmentation. Retrieved from the Data Repository for the University of Minnesota, https://doi.org/10.13020/8ecp-3r13.
4.
You are required to propose, implement, and compare a conventional image processing based approach and a machine learning approach for apple counting (choose one approach only if you are completing this assignment as a group of one). It is not expected that both approaches will achieve outstanding performance, but you need to show that careful consideration has been given to the design of the algorithms and the interpretation of results.
5. Each group member is expected to contribute equally to the assignment, with every group member actively participating in technical development tasks (beyond literature review and report writing).
6. You must use Python for coding. You may choose to use a Python IDE on your local computer or use Google Colab. Be aware that you may need a GPU if you employ certain deep learning models.

Where should I start?
Attend the weekly Group Tutorial session and use the exercises and the provided templates to help you make progress with this assignment.

What do I need to do to pass?
Refer to the marking criteria and achieve the minimum pass mark, which is 50%.

How do I achieve high marks in this assessment?
Refer to the marking criteria.

What additional resources may help me complete this assessment?
· https://www.uwe.ac.uk/study/study-support/study-skills
· Critical thinking and writing - Reading and writing | UWE Bristol
· Writing - Reading and writing | UWE Bristol
· Writing feedback - Writing | UWE Bristol
Use relevant academic publications, GitHub repositories, and online tutorials to help you improve the breadth and depth of this report.

What do I do if I am concerned about completing this assessment?
It is recommended that you review all of the relevant materials on Blackboard. You can also speak to your module leader for advice and guidance. UWE Bristol offer a range of Assessment Support Options that you can explore through this link, and both Academic Support and Wellbeing Support are available.
For further information, please see the Student study essentials.

How do I avoid an Assessment Offence on this module?
Use the support above if you feel unable to submit your own work for this module. Understanding the University rules and requirements around assessment offences is your responsibility: https://www.uwe.ac.uk/-/media/uwe/documents/study/academic-conduct-policy-and-academic-misconduct-procedures.pdf

Marks and Feedback

Your assessment will be marked according to the following marking criteria (embedded in the template at the end of this document). You can use these to evaluate your own work before you submit.
1. In line with UWE Bristol's Assessment Content Limit Policy (formerly the Word Count Policy), the word count includes all text, including (but not limited to): the main body of text (including headings), all citations (both in and out of brackets), text boxes, tables and graphs, figures and diagrams, quotes, and lists.
2. UWE Bristol's Assessment Offences Policy requires that you submit work that is entirely your own and reflects your own learning, so it is important to:
· Ensure you reference all sources used, using the UWE Harvard system and the guidance available on UWE's Study Skills referencing pages.
· Refer to peer-reviewed primary sources, rather than using AI or secondary sources
· Avoid copying and pasting any work into this assessment, including your own previous assessments, work from other students or internet sources
· Develop your own style, arguments and wording, so avoid copying sources and changing individual words but keeping, essentially, the same sentences and/or structures from other sources
· Never give your work to others who may copy it
· If an individual assessment, develop your own work and preparation, and do not allow anyone to make amendments to your work (including proof-readers, who may highlight issues but not edit the work)

When submitting your work, you will be required to confirm that the work is your own, and text-matching software and other methods are routinely used to check submissions against other submissions to the university and internet sources. Details of what constitutes plagiarism and how to avoid it can be found on UWE's Study Skills pages about avoiding plagiarism.

Assignment Resit
Specific resit information will be sent to you at a later date. The resit report will be based on a similar but different task.

Group Report Template with Marking Criteria
Texts in blue are not included in the word count.

Please complete the form below, which serves to provide additional group-work information to the peer assessment process. For each of up to five group members, record: name (e.g., Wenhao Zhang), contribution to the project (e.g., data pre-processing, implementation of approach A), contribution to the report (e.g., Section 1, 80% of Section 5), and signature.

1. Introduction (5%)
Introduce the background of the project. Illustrate any assumptions made, for example, the lighting conditions you need to deal with in an orchard environment. Clearly show the aim and objectives of this project and discuss the challenges.

2. Related works (10%)
Conduct a short literature review on methods relevant to apple counting.
For example, if algorithms proposed in prior works on berry detection/counting are deemed applicable to apple counting, you may include a critical review of these as well. In later sections, you may use this literature review to assist with justification of your methodology as well as with discussing its capabilities and limitations. You will be assessed on the breadth and depth of the review.

3. Data acquisition and datasets (10%)
Note that you are not expected to carry out any physical data capture experiment. Instead, illustrate the types of image sensors/imaging systems that can be employed to achieve effective apple counting in a real-world application. Describe the process of data acquisition using the sensor(s) of your choice. Describe the dataset(s) you employed in this project. Discuss data quality, variability, and appropriateness for use in this project, and briefly explain how they were used in this project, with reasons.

4. Methodology (30% in total, 15% per approach; or, for any student completing this assignment as a group of one, 25% in total)
Present the approach(es) you proposed. Show technical breadth and depth. Justify the use of specific algorithms. Use flowcharts to illustrate the process if applicable. You are welcome to use any image processing/machine learning approach, however basic it may seem, as long as you can justify it well, e.g. why do you think the proposed approach can deal with the challenges identified in Sections 1 and 2. Refer to Requirement 4 for more information.
4.1 Approach A
4.2 Approach B (not applicable to a group-of-one assignment)

5. Experiment and Implementation (15%; or, for any student completing this assignment as a group of one, 20% in total)
Demonstrate that you are able to implement the proposed approaches (introduced in Section 4) using Python programming.
Describe the Python IDE/platform/hardware used, the core Python packages used, how you trained your machine learning model(s) if applicable, and the parameter tuning/optimisation of key algorithms if applicable. For example, if you used manual thresholding for binarisation, explain how you chose an appropriate threshold. If you used deep learning models, explain how you loaded your images and ground truth data, how you split the data for training, validation and testing, and justify the number of training epochs used. Note that you are not expected to describe each line of your code here. Use flowcharts, diagrams and/or pseudocode where applicable.

6. Results and Evaluation (15%)
Present results; evaluate the proposed approaches (quantitatively and qualitatively) using appropriate metrics; and interpret findings. Make sure you explain how results were obtained and what they mean. Having a method that can detect all apples in all your images does not automatically grant you high marks. Compare approach A and approach B and discuss their respective capabilities and limitations. For a group-of-one assignment, compare your approach with those in the literature. Use your results to support your statements but also explain them from a theoretical point of view.

7. Conclusions and Future works (5%)
Conclude the project. Identify challenges relevant to apple counting (as well as detection and localisation) that have not been fully resolved within the scope of this project. Propose future works to deal with these challenges, e.g. is it possible to employ 3D approaches?

The remaining 10% of the mark is allocated to report presentation, including logical structure and clarity, quality of writing, spelling, grammar, diagrams, figures and tables, clarity of expression and use of English, and accuracy, consistency and completeness of citations and references.
Assignment 5
CS-GY 6033, Fall 2024
Due Date: Dec 9th, 11:55pm on Gradescope

Question 1: Graph Traversals, Warm up

(a) 5 points. Execute DFS on the directed graph shown below, using DFS-visit, which keeps track of time stamps. Start with DFS-visit from vertex C. Upon completion, you must draw the resulting DFS tree(s) and label the edge types (as tree, forward, back, cross).

(b) 5 points. Execute the algorithm from class for finding a topological sort on the directed acyclic graph shown below. Start DFS on vertex K and then A. You must indicate the start/finish times for each vertex and draw the final topological sort with edges shown.

(c) 5 points. Execute Dijkstra's algorithm on the following directed weighted graph, using vertex A as the source. You must show the values of v.d and how they change as each edge is added.

(d) 8 points. Execute Bellman-Ford's algorithm on the following directed, weighted graph, using source vertex A. You must process the edges in lexicographic order (AB, AC, etc.). You must show the values of v.d and how they change as each edge is added. You must also show the current edges of the tree as the algorithm executes.

Question 2: Graph traversals on unweighted graphs

(a) 8 points. Let G be a connected graph which may contain cycles; however, all cycles are vertex-disjoint. This means that any two cycles in the graph do not have a vertex in common. An example of such a graph is shown below. Your job is to write a procedure called MaxCycle(v) which returns the maximum length of a cycle in G. The input parameter v is any vertex in the graph. Be sure to carefully describe any initialisation that is necessary before calling the procedure, and justify the runtime of O(V + E).

(b) 8 points. Let G be an undirected, connected graph. Update the pseudo-code for DFS-visit so that it returns the NUMBER of degree-one vertices in G. Call your algorithm CountLeaves(u), which takes as input any node u of the connected graph G.
Ensure that you do not use any new external variables. Instead, use recursion to RETURN the correct result.

(c) 8 points. A village on the island of Naxos in Greece is made up of n tiny farms. The old roads connecting the farms are so narrow that they have all been designated as one-way roads. The island residents would like to ensure that they are able to drive to and from every pair of farms in the village, using a sequence of the roads. A brilliant computer scientist has arrived on the island and modelled the island map using a directed graph G: each farm is a vertex, and each one-way road is a directed edge. She then uses the strongly connected component algorithm on G to determine the number of strongly connected components. Unfortunately, she found that there are two SCCs, meaning some farms cannot be reached from other farms! Your job: Design an algorithm that determines where to add exactly one more road so that it is possible to drive to and from every pair of farms. You do not need pseudo-code, but you must carefully describe your steps. You must determine the pair of farms that must be connected with the new road (don't concern yourself with intersections, etc.). Justify that the overall runtime is O(n²).

(d) 8 points. Let G be an undirected, connected graph with vertex set V and edge set E. The graph represents a hiking map where each vertex is a marker on the map, and each edge (u,v) is either a hiking trail between u and v or a bridge between markers u and v. Let b(u,v) be an edge attribute that is true if the edge e = (u,v) is a bridge, and false otherwise. The goal is to determine if there is a route from marker S to marker T that traverses at most one bridge. You must design an algorithm to solve this problem, which must be based on DFS, with a runtime of O(V + E). The procedure must return true if there is a route and false otherwise.

(e) 6 points. Suppose the above algorithm returns true.
Your job is to write a procedure that outputs the route from S to T. The output consists of a sequence of print statements, where you print out each of the trail markers. The markers must come out in the correct order: from S to T.

Question 3: Spanning Trees

(a) 6 points. Let G be a weighted, undirected graph on vertex set V and edge set E. Suppose we run Kruskal's algorithm and store the edges of the MST in a list of edges, T. However, once the edges have been stored in T, an error occurs, and an additional edge from E is accidentally added to the set T. This means that the edges in T no longer represent the edges of the MST. Your job is to design an algorithm that restores T so that it contains the edges of the MST. Your algorithm must run in time O(V), and therefore cannot simply re-compute the MST from scratch, which would take O(E log V). You do not need to write the pseudo-code, but you must carefully describe all aspects of your algorithm, including how you represent any input, and describe the steps of the procedure that determines which edge of T can be removed. Justify the runtime of each step.

(b) 4 points. Suppose a weighted graph G has distinct edge weights. Is there more than one possible MST? Justify your answer.

(c) 4 points. Draw a graph on 10 vertices and 15 edges where the MST produces the same tree as the SSSP from a particular vertex.

Question 4: Weighted graphs

(a) 10 points. Consider a road map which has n marked intersections and m roads. Each road is a bi-directional connection between two intersections on the map. Suppose that each road has a toll, stored in t(u,v), which represents the cost of taking the road between intersections u and v. Furthermore, a tax must be paid at certain positions on the map. This is stored in u.tax, which is a non-negative numerical value representing the tax at position u (note that the tax value may be zero). Let s be a start position on the map and t be an end position on the map. Your job:
Determine the minimum cost of travelling from position s to position t. You must write the pseudo-code for your procedure, which outputs the minimum cost route. You must also provide the pseudo-code that prints out the best route. Justify the runtime of O(m log n).

(b) 10 points. Consider a mountain biking trail map consisting of n trail markers and m trails, where each trail connects two markers and travels in only ONE direction. Each trail also has an associated distance. Exactly 5 of the trail markers contain toilets. There is also a toilet at the START trail marker S and the FINISH trail marker F. A family would like to start at marker S and ride to trail marker F. However, they would like to plan a route so that they never have to travel more than 20 miles without a bathroom break! Your job is to design an algorithm that finds the shortest overall route from S to F such that the condition is met. You must describe in detail the steps of your procedure, including what graph model you use, what attributes you use and what they are used for, and what algorithms (with input/output) you use from class. Justify the runtime of O(m log n).
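Several parts of Questions 1 and 2 build on the timestamped DFS-visit from class. For reference, here is a minimal sketch of that standard (CLRS-style) routine in Python; the adjacency list is a made-up example, not any of the graphs in the questions.

```python
# Standard DFS with discovery (d) and finish (f) time stamps.
# NOTE: the graph below is a hypothetical example, not one from the assignment.
def dfs(graph):
    color, d, f = {}, {}, {}
    time = 0

    def dfs_visit(u):
        nonlocal time
        time += 1
        d[u] = time              # discovery time
        color[u] = "gray"
        for v in graph.get(u, []):
            if color.get(v, "white") == "white":
                dfs_visit(v)     # (u, v) is a tree edge
        color[u] = "black"
        time += 1
        f[u] = time              # finish time

    for u in graph:              # restart DFS from every undiscovered vertex
        if color.get(u, "white") == "white":
            dfs_visit(u)
    return d, f

graph = {"C": ["A", "B"], "A": ["B"], "B": [], "D": ["C"]}
d, f = dfs(graph)
print("discovery:", d)
print("finish:   ", f)
```

The edge classification asked for in Question 1(a) follows from these stamps: an edge (u, v) explored when v is white is a tree edge, gray gives a back edge, and black gives a forward or cross edge depending on whether d[u] < d[v].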
BIOL5393M Research Project: module write-up guidelines

The project will be written up in the format of a research paper, which will contribute 60% of the mark for the module. The format adopted is that of Scientific Reports. It publishes a wide variety of work across the range of biosciences, so you should be able to find an example paper in the same broad area as your project to use as guidance. You may also find it helpful to look at papers from other journals that have used similar techniques and analyses. Please note that you are NOT required to format the paper to look like a published paper in Scientific Reports, but you should adhere to the guidelines below. Please use a minimum font size of Arial 11pt. The following are taken from the Scientific Reports instructions to authors: https://www.nature.com/srep/publish/guidelines

The main text should be no more than 6,000 words EXCLUDING Abstract, References, Supplementary Information and figure legends, and should contain up to 8 figures (which can be composite) and/or tables. Supplementary information files are optional.

Introduction (~1000 words)
Articulates the problem and provides essential background to the research described. Cites appropriate and recent primary literature.

Methods (~1000-1500 words)
All the methods used in the paper are described in sufficient detail to allow replication. Methods are written in a suitable journal format (not like a lab protocol). Key references are cited.

Results (~2000-2500 words)
Clear presentation of the data. Figures and tables fully described and interpreted in the text. Conclusions (even if negative) fully supported by the data. Where appropriate, suitable quantitative analysis has been carried out. Replication has been mentioned. Appropriate controls have been used.

Figures and their legends
Are the figures appropriate? Graphs plotted with appropriate axes and units? Error bars where appropriate? Are figures good quality?
Are the component parts properly labelled, and legible? Do the figure legends fully and accurately describe the figures and do the figure and legend make sense without the text? Are the figures publication quality or close to it (even if the results are not)? Discussion (~1000-1500 words) Where results are unexpected or negative some analysis of where things have gone wrong can be included but the discussion should also bring in a critical analysis of the relevant literature and whether the results agree or disagree with published work. Any novel findings should be highlighted. Where results are preliminary this should be discussed, and it should be permissible to propose how the work could be further developed. With the supervisor's agreement results and discussion can be combined. Presentation (including abstract and referencing) Abstract (max 200 words) should serve as a general introduction to the topic and a brief non-technical summary of the main results and their implications. References (Scientific Reports suggest as a guideline up to 60). For this piece of work please use the Leeds Harvard reference format. Endnote or other referencing software should be used. References should be complete, accurate, appropriate, and correctly formatted. Paper should be well structured and clearly written. Optional Supplementary data Can include useful information such as restriction maps or diagrams of constructs, raw data sets, code etc as advised by the supervisor. Allow the student to present information that are relevant but a distraction from the narrative of the paper or any further information required to fully reproduce the results. Mark Scheme used will be the Level 5 marking scheme for research project dissertations.
Global Business School for Health Assessment brief 2024. Please complete one form per assessment.
Module code and name: GBSH0011 Economic Evaluation and Health Financing
Title of assessment: Economic Evaluation and Health Financing Final Exam
What learning outcomes will be assessed (learning outcomes to be assessed highlighted in yellow):
LO1: Subject-specific knowledge:
LO1a: Critically assess economic evaluations of health and healthcare interventions
LO1b: Appraise and critique the research findings from health economic studies applied to a specific health question
LO1c: Apply the concepts of health economics to patient care
LO1d: Reach and evaluate the outcomes of an economic assessment analysis
LO2: Intellectual, academic and research skills:
LO2a: Differentiate between types of health economic evaluation models
LO2b: Evaluate the economics of health studies
Keywords: Economic evaluation, Health technology assessment, Health financing
Description of the assessment (be specific: a brief is a set of instructions given by the assessor to the learner outlining the requirements of the assessment and criteria for the particular assignment).
Format:
· This will be a 2-hour online written exam. It will include multiple choice, short answer, and mathematical calculation questions. There will be no free-form text answers.
· The exam will be conducted on Wiseflow. You will receive an email beforehand with instructions, log-in details, and the link to use to complete the exam. Please read them carefully.
· The link will only be active for the 2-hour slot designated for the exam (time TBD). You will get a few extra minutes to submit, but we strongly recommend that you finish up and submit at the end of the two hours so that there is no danger that you are unable to submit before the time is up.
· Please note that Wiseflow will lock down your computer, so you will not have access to anything else on your computer besides the exam window.
You may, however, choose to have any written notes or books you'd like on hand for reference.
Content:
· Anything covered in the video lectures and the in-class tutorials may be covered in the exam. The essential readings often clarify the lecture content, so it's a good idea to review them, but material in the essential readings that is not in the lectures/tutorials will not be covered. Optional reading content will not be covered.
· There will be no open-ended questions; only multiple choice, short answer, and mathematical calculation questions (you will not have to show your calculations, but you must be able to pick the right numerical answer based on them).
Assessment criteria (and, if relevant, the weighting given to each criterion):
· Multiple choice questions may have one or more correct answers. The number of correct answers will not be indicated.
· Each correct selection will get 1 point (or 2-3 points for more difficult questions).
· Each incorrect selection will get -0.25 points. Unselected options will get 0 points.
· This means questions with multiple correct answers will be worth more points, and questions that are difficult will be worth more points. The total number of possible points will be indicated with each question so that you know how important it is for the total grade.
· NOTE: Incorrect selections will get negative marks, whereas not selecting an answer gets zero points, so it is not advisable to guess at the answer.
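The marking rule above (+1 for a correct selection, -0.25 for an incorrect one, 0 for a blank) implies a break-even probability below which guessing has negative expected value. A small sketch of that arithmetic (the function name and the worked numbers are mine, not part of the brief):

```python
def expected_score(p_correct, reward=1.0, penalty=0.25):
    """Expected marks from selecting an option you believe is correct
    with probability p_correct; leaving the option blank scores 0."""
    return p_correct * reward - (1 - p_correct) * penalty

# Break-even: p*reward = (1-p)*penalty  =>  p = penalty/(reward+penalty) = 0.2,
# so selecting an option only pays off in expectation when you are more than
# 20% sure it is correct.
```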
ECON 385 Intermediate Macroeconomic Theory II, Fall 2024. Problem Set 2. Due by December 2. 128 points.

1. (20 points) An employee has to choose between two contracts. Assume that the net real interest rate on saving and borrowing equals r > 0. Under contract A, she has gross incomes y and y′ in the current and future periods, respectively, and has to pay taxes t and t′ in the current and future periods, respectively. Under contract B, an employer offers the employee an option to increase income next year by x·(1+r) units and reduce income this year by x units. Taxes are the same under both contracts.
(a) (10 points) Write down the current and future budget constraints and the lifetime budget constraint under the two contracts. Which contract would the employee choose and why? (Hint: you should compare lifetime wealth under the two contracts.)
(b) (10 points) Assume that preferences over current and future consumption are U(c, c′) = −(1/2)(c − c̄)² − (1/2)β(c′ − c̄)², where c̄ is the bliss consumption level and β = 1/(1+r). Find consumption in the current and future periods and saving under the two contracts. Compare consumption levels and saving under the two contracts.

2. (18 points) Assume a consumer has current-period income y = 120, future-period income y′ = 150, current and future taxes t = 60 and t′ = 50, respectively, and faces a market real interest rate of r = 0. The consumer's preferences over current and future consumption are U(c, c′) = min(c, c′). The consumer faces a credit-market imperfection in that she cannot borrow at all, that is, s ≥ 0.
(a) (6 points) Calculate her optimal c, c′, s.
(b) (6 points) Suppose that everything remains unchanged, except that now t = 40 and t′ = 70. Calculate the effects on current and future consumption and optimal saving.
(c) (6 points) Calculate the marginal propensity to consume for this consumer following the tax change, that is, the change in current consumption following the change in taxes and the change in disposable income that it entails. Define Ricardian equivalence and comment on whether it holds in this case.

3. (50 points) A consumer has quadratic preferences and cares about consumption over two periods: U(c0, c1) = −(1/2)(c0 − c̄)² − β(1/2)(c1 − c̄)². Assume that the real interest rate, r, is 1/9, and the time discount factor, β, equals 0.9.
(a) (7 points) The consumer's disposable income in period 0 equals 10, and in period 1 equals 20. There is no uncertainty. Write down the Euler equation and find the optimal consumption levels in periods 0 and 1, and the optimal savings.
(b) Assume now that period 0 income stays at 10, while period 1 income is uncertain. There are two possible states of nature that might realize in period 1: with probability π = 1/3, income will equal 0 in period 1 if state 0 occurs, whereas with probability 1 − π = 2/3, income will equal 30 in period 1 if state 1 occurs. The consumer has to make her consumption and saving decision for period 0 before uncertainty is resolved. The consumer now maximizes expected utility EU(c0, c̃1) = −(1/2)(c0 − c̄)² − πβ(1/2)(c1(0) − c̄)² − (1 − π)β(1/2)(c1(1) − c̄)², where c1(k) is consumption in period 1, state k = 0, 1.
(i) (3 points) Write down the Euler equation and find the expected value and variance of income in period 1.
(ii) (6 points) Find the optimal consumption and saving in period 0, and consumption in period 1 in both states of nature.
(iii) (1 point) Does your answer for the optimal consumption in period 0 and savings differ from the answer to (3a), and why does it or why doesn't it?
(c) Assume now that income in period 1 state 0 equals 0 with probability π = 0.99 and income in period 1 state 1 equals 2000 with probability 1 − π = 0.01.
(i) (3 points) Write down the Euler equation and find the expected value and variance of income in period 1.
(ii) (6 points) Find the optimal consumption and saving in period 0, and consumption in period 1 in both states of nature.
(iii) (1 point) Does your answer for the optimal consumption in period 0 and savings differ from the answer to (3b), and why does it or why doesn't it?
Assume now that each period's utility function is u(c) = ln(c). Continue assuming that the real interest rate, r, is 1/9, and the time discount factor, β, equals 0.90.
(d) (7 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and optimal saving in period 0 given the data in (3a).
(e) (8 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and optimal saving in period 0 given the data in (3b). Compare the optimal saving to the value you found in (3b) and argue why they are different (if different at all).
(f) (8 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and optimal saving in period 0 given the data in (3c). Compare the optimal saving to the value you found in (3c) and argue why they are different (if different at all).

4. (18 points) Suppose there is a credit market with a fraction a of good borrowers and a fraction 1 − a of bad borrowers, with the total number of borrowers equal to Nb. Banks cannot differentiate between good and bad borrowers when making loans (asymmetric information) and loan out l units of goods to each borrower, good or bad. There are Nd depositors/savers in the economy. Banks attract deposits in the amount of L from each of them and promise to pay a net real interest rate of r1 to depositors. Banks charge net interest r2 on loans. Good borrowers are identical and always repay their loans, while a debt collection agency makes bad borrowers (who, in the absence of the agency, would pay nothing) pay a fraction 0 ≤ f ≤ (1 + r2) of their loans.
The banking sector is competitive, and profit equals zero in equilibrium.
(a) (5 points) Using the bank balance sheet, find the relationship between Nd, Nb, l, and L.
(b) (10 points) Using the assumption of a competitive banking sector, find an expression for the interest rate on loans, r2, made by banks, as a function of a, f, and r1.
(c) (1 point) How will the interest rate change if the debt collection agency makes each borrower pay a higher fraction f of their loans taken?
(d) (2 points) What must f be for the interest rate on loans to equal r1?

5. (22 points) Consider the short-run model of the aggregate economy we studied in class. The aggregate demand (AD) curve Ỹt = ā − b̄m̄(πt − π̄) was derived from the following two equations:
IS curve: Ỹt = ā − b̄(Rt − r̄)
Textbook MP curve: Rt − r̄ = m̄(πt − π̄)
(a) Assume an alternative MP rule:
Alternative MP curve: Rt − r̄ = m̄(πt − π̄) + n̄Ỹt
(i) (6 points) Explain in words what this rule tries to achieve and how it compares to the standard textbook case. Derive the aggregate demand equation (AD′) under the alternative rule.
(ii) (6 points) Plot (AS), the textbook (AD) and (AD′) curves on the same graph. State which aggregate demand curve is steeper and why. (Hint: we are plotting πt against Ỹt.)
(b) Assume now there is a temporary positive inflationary shock to the economy (ō in the AS curve goes from 0 to a positive number temporarily).
(i) (4 points) Show how the economy responds over time using the AS/AD framework. (You should clearly label the axes and explain everything you want to show on your graph. You may use either AD or AD′ to avoid cluttering.)
(ii) (6 points) Show the path of real interest rates set by the central bank under the two alternative monetary policies. Which policy would result in a more prolonged adjustment of the real interest rate to its long-run value r̄? (Your answer should be reflected in your graph.)
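The derivation in 5(a)(i) is a one-line substitution of the alternative MP rule into the IS curve; a sketch of the algebra:

```latex
\begin{aligned}
\tilde{Y}_t &= \bar{a} - \bar{b}\,(R_t - \bar{r}) && \text{(IS)}\\
R_t - \bar{r} &= \bar{m}(\pi_t - \bar{\pi}) + \bar{n}\,\tilde{Y}_t && \text{(alternative MP)}\\
\Rightarrow\quad \tilde{Y}_t &= \bar{a} - \bar{b}\bar{m}(\pi_t - \bar{\pi}) - \bar{b}\bar{n}\,\tilde{Y}_t\\
\Rightarrow\quad \tilde{Y}_t &= \frac{\bar{a} - \bar{b}\bar{m}(\pi_t - \bar{\pi})}{1 + \bar{b}\bar{n}} && \text{(AD}'\text{)}
\end{aligned}
```

The factor 1 + b̄n̄ in the denominator damps the response of output to inflation, which is the geometric intuition behind part (ii)'s comparison of the two AD curves.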
E340 Environmental Economics and Finance. Benefit-Cost Analysis of Honda Accord Hybrid Sedan.
BACKGROUND
Hybrid vehicles use both a conventional powertrain and an electric motor as power sources. The electric motor is run from a battery that is charged through a generator that receives energy from regenerative braking, or from the powertrain when the vehicle is going downhill. "Plug-in hybrids" also allow for the battery to be charged independently using an electric power supply, such as an electrical socket in a homeowner's garage. Hybrid vehicles are more fuel-efficient, in terms of gasoline consumption, than conventional vehicles. Less gasoline (or diesel) is used per mile driven with hybrid vehicles compared to conventional alternatives. Beginning around 2003, many states and the U.S. federal government applied incentives to encourage individuals to purchase hybrid vehicles. The 2005 Energy Policy Act granted individuals who purchased new hybrid vehicles a tax credit, which depended on the fuel efficiency of the model. This tax credit expired December 31, 2009. In the period since, there have been calls to renew a subsidy program for hybrid vehicle purchases. Some states offer incentive programs, but the current focus in federal policy is on electric vehicles as well as "plug-in hybrids." In terms of climate policy, promoting plug-in hybrids or electric cars is a long-term strategy. These vehicles (in particular, electric-only vehicles) are not likely to reduce CO2 emissions in the near term, given that fossil fuels continue to be used to generate electric power. This case explores whether it makes economic sense to subsidize conventional hybrids, not the plug-in variety, as a transitional strategy to reduce CO2 emissions and other externalities. The issue is whether renewing subsidy programs like those from 2005 to 2009 is a wise use of public resources. To answer this question, there are two related issues to consider.
First, from society's perspective, the question is whether the additional (incremental) benefits of driving a hybrid are greater than the additional (incremental) costs, compared to the non-hybrid model. Fundamentally, the issue here is whether the value of the fuel savings from driving the hybrid is larger than the increased technology cost of hybrid vehicles, plus battery replacement costs. The value of fuel savings includes the value of conserving a scarce resource (energy), but also the value of avoiding the social costs associated with energy usage, such as CO2 emissions, local air pollution, and the risks to national security associated with importing oil. In terms of the consumer perspective, prospective hybrid buyers are aware of the fuel savings benefits that they will realize if they buy and drive hybrids. However, the consumer's perspective on fuel savings benefits will differ from the societal perspective, because fuel prices are distorted by fuel taxes. So the "price signals" consumers face about the value of fuel savings differ from the "shadow price" of fuel savings, which is the value of fuel savings from a societal perspective (more on this below). Additionally, consumers are not fully aware of the value of the reduced global warming risks, the benefits of reducing local air pollution, or of reducing national security risks. These external benefits are diffused and distributed to everyone. In sum, consumers may ignore the positive externalities associated with reducing fuel consumption, and get inaccurate price signals about the monetized value of fuel savings as seen from the larger societal perspective. The increased costs of hybrid vehicles take the form of the additional costs for the hybrid drivetrain, which adds to the conventional system: (1) an electric motor, (2) a large-capacity battery, and (3) the power electronics that link these components together.
The price consumers pay for hybrids will actually be higher than these costs, because a 7% sales tax is imposed on the purchase of hybrids. The societal analysis will "shadow price" the cost of producing Honda Accord hybrids, giving the "pure technology" cost, as the price consumers pay less the sales tax. This gives the economic cost (from a societal perspective) of the hybrid technology. The costs of replacing a hybrid battery once during the vehicle's life must also be added to the analysis. From the consumer perspective, a 7% state sales tax must be added to this cost. The net value of purchasing and driving hybrids from a societal perspective can be obtained in two equally valid ways. First, the net effect of driving a hybrid on all "stakeholders" can be summed. If the net effects are positive, the project passes the Kaldor-Hicks standard. Passing that standard is equivalent to the benefits being larger than the costs. The stakeholders in this situation can be aggregated into the following four groups: (1) the hybrid user; (2) state governments, who receive additional sales tax revenue from higher hybrid sales prices, but also lose fuel tax revenue from improved hybrid fuel economy; (3) the federal government, which loses fuel tax revenue; (4) "the public," which gains the "societal" or external benefits of hybrid use; that is, the reduced air pollution, lower carbon dioxide emissions, and the value of reduced oil imports. Summing the net effects in (1)-(4) will give the same result as directly comparing the shadow-priced value of the fuel savings (the monetized benefits from the societal perspective) against the costs. This conventional benefit-cost comparison is the second method for determining economic efficiency. The complete picture is shown in Table 1. Summing the net-stakeholder effects in the bottom row of the tableau produces the societal net benefit in the rightmost bottom cell, B1+B2-C1-C2.
This is the pre-tax value of fuel savings plus environmental benefits less hybrid technology and battery replacement costs. You can see the same result by comparing the benefits against the costs in the rightmost column of the tableau. Note that the column for the "Hybrid Buyer" shows the private perspective from the purchase of the hybrid. This column represents the financial returns and losses to purchasing and using a hybrid. Comparing this column to the rightmost column of the tableau shows how the private buyer perspective differs from the societal perspective. As mentioned above, there are two differences between these perspectives. First, there are the added benefits to society of avoided pollution and other external costs of fuel use, which the private user themselves do not experience, but society gains (B2). As noted, the private buyer might not account for this external benefit when they purchase the hybrid. Secondly, part of the hybrid buyer's financial gain in avoiding fuel expenses is not a net gain from the societal perspective. That's because the fuel tax savings of the hybrid driver (T2+T3) are lost to governments, i.e., -T2 is lost to the state government, and -T3 is lost to the federal government. In short, what is the buyer's gain is another stakeholder's loss. As "financial transfers," these effects cancel to zero from a societal perspective. Similarly, the additional financial loss to the buyer of paying a sales tax on the hybrid (-T1a) and on battery replacement (-T1b) is not a net societal loss, because another stakeholder (the state in this case) collects these sales taxes (T1a+T1b). Again, the loss to one stakeholder is counterbalanced by the gain to another, so these gains and losses net to zero from the societal perspective. A table like Table 1 with actual values will show whether B1+B2-C1-C2 in the far right-hand corner is positive or negative.
If it is negative, it does not make sense for the government to promote hybrid usage, assuming the Kaldor-Hicks standard (the Potential Pareto Criterion) is the decision-making standard. That is, if benefits are less than costs, it doesn't make sense to subsidize hybrids on efficiency grounds. If it is positive, we then have to look at the net result (bottom cell) in the "Hybrid Buyer" column. If it is privately profitable to buy a hybrid without assistance, then the argument for providing subsidies is diminished, whatever the merits of driving hybrids. Why subsidize people to buy hybrids if they are going to buy them anyway? If, however, private hybrid buyers are taking a financial loss and yet driving hybrids has positive societal benefits, then it might make sense to offer a subsidy to promote the socially desirable behavior. (This is like subsidizing people to get flu shots, given that individuals may not account for the larger social benefits of reducing flu in the population when they make a decision about getting a flu shot.) Note that such a subsidy is a pure financial transfer from the societal perspective: a gain to the hybrid driver, and a loss of equal amount to the government. So subsidizing hybrid users has no net efficiency effect from the societal perspective. But from an equity point of view, it wouldn't make sense to transfer taxpayer money to hybrid drivers unless there was some larger societal purpose in doing so. Note: As shown in Table 1 and discussed above, the computation of the buyer's financial effect should not include tax credits. It is necessary to compute the impact on the buyer without the tax credit, to see whether a tax credit is needed. The impact on state and federal fuel tax collections is also policy relevant. States and the federal government rely on fuel tax receipts to fund transportation investments. Improving fuel economy will reduce overall tax receipts, thereby reducing funds for transportation infrastructure.
"Erosion" of fuel tax revenue from greater fuel economy has worried some states enough to consider switching the tax base from fuel consumption to mileage. See the state of Oregon's experiments in this regard: http://www.terrapass.com/blog/posts/oregons-successful-mileage-tax-experiment In short, the impact of hybrid driving on the receipts of state and federal tax revenues is policy relevant.
THE ASSIGNMENT
Your tasks are as follows:
(1) Produce two Kaldor-Hicks tableaus (Tables 1 and 2) to represent the nature of the program for these specific scenarios:
- 10,000 miles of driving per year, $2.5/gallon, and $100/ton of carbon emissions
- 20,000 miles of driving per year, $4.5/gallon, and $300/ton of carbon emissions
(2) Fill in the values for a "Table 3" that shows the incremental NPV of driving a hybrid from the societal (economic) perspective as follows:

Table 3: Net Present Value of Purchasing and Using Hybrids from a Societal Perspective
                                       Pre-Tax Fuel Price Scenarios ($/gallon)
Carbon Shadow Price   Miles/year       2.136     3.136     4.136
$100/ton              10,000
                      20,000
$200/ton              10,000
                      20,000
$300/ton              10,000
                      20,000

(3) Fill in the values for a "Table 4" that shows the incremental NPV of driving a hybrid from the private buyer's point of view as follows:

Table 4: Net Present Value of Purchasing and Using Hybrids from a Private Buyer Perspective
                      After-Tax Fuel Price Scenarios ($/gallon)
Miles/year            2.50      3.50      4.50
10,000
20,000

(4) Write a memorandum of no more than 4 double-spaced pages (this page limit EXCLUDES THE TABLES YOU INCLUDE) that describes the analysis, presents the results, and then makes a recommendation on whether or not Congress should renew tax credits for conventional hybrids. Your memo should be broken down explicitly into sections with bolded headers as follows: Introduction; Analysis Method and Assumptions; Results; Policy Recommendation. See the memo for the Cincinnati Vehicle Emissions Inspections case as a model for this kind of policy/decision memo.
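The "shadow price" of a gallon of fuel saved, as the case describes it, is the pre-tax fuel price plus the monetised externalities. A minimal sketch of that per-gallon calculation follows; note that the gasoline emissions factor (roughly 8.9 kg CO2 per gallon) is my assumption, not a figure given in the case, so check it against your own data before using it in the tables.

```python
# Per-gallon societal value of fuel savings, used to fill in Tables 1-3.
KG_CO2_PER_GALLON = 8.9          # ASSUMED emissions factor for gasoline
NON_CARBON_EXTERNALITY = 0.41    # $/gal: national security + local pollution
                                 # (the fuel-related subtotal from Table 5)

def societal_value_per_gallon(pretax_price, carbon_price_per_ton):
    """Pre-tax fuel price plus monetised external costs, in $/gallon."""
    carbon_cost = carbon_price_per_ton * KG_CO2_PER_GALLON / 1000.0
    return pretax_price + NON_CARBON_EXTERNALITY + carbon_cost
```

For example, at a $2.136/gallon pre-tax price and $100/ton carbon, the sketch gives 2.136 + 0.41 + 0.89 = $3.436 per gallon saved.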
GROUND RULES AND INSTRUCTIONS
(1) You are encouraged to form working groups (of no more than 3) to collaborate on the analysis.
(2) You can write the memo individually, or as a group of no more than 3.
(3) To complete the assignment, post two things on the assignment tab in Canvas:
* your memo in Word, which should include the tables at the end of the memo under the heading "Tables" (NOT AS AN APPENDIX);
* a copy of your spreadsheet work.
(4) If you do the assignment as a group, be sure to put all group member names on the memo/spreadsheet.
(5) If you do your assignment as a group, you can post one assignment (memo/spreadsheet) per group.
(6) Do the analysis from an end-of-2020 perspective. Thus, for discounting purposes, period zero is 2020, period 1 is 2021, etc.
Comment 1: This analysis implicitly assumes that consumers are making a purchase decision about the hybrid version of the Honda Accord purely as an investment option, with this question to be answered: do the fuel bill savings justify the additional costs? In fact, many hybrid owners are likely to make the decision to drive hybrids because they want to be socially responsible citizens, e.g., to lower their carbon footprint. This group may be willing to buy hybrids even if the fuel bill savings do not cover the costs. Given this, it might be useful to think about this analysis as a way to encourage hybrid purchases beyond the group who would buy them anyway. That is, the purpose of the subsidies is to expand the purchase of hybrids to consumers who would not be considering the social benefits of driving hybrids, but would just be comparing the benefits of long-run fuel bill savings to the higher initial expense.
Comment 2: It might (or might not) make sense to subsidize hybrid vehicle fleets used as "company cars" by business firms, or to subsidize hybrid use in ride-hailing services or taxi companies.
Some of these companies may have thin profit margins, and may not be able to purchase more expensive cars when the cost is not covered by private fuel savings.

Table 1: Kaldor-Hicks Tableau (all values in present value at 7%)
                                           Hybrid Buyer                  State gov    Federal gov   Public   Net Society
Benefits
  Pre-tax value of fuel savings            B1                                                                B1
  Environmental and other external benefits                                                        B2        B2
Transfer payments
  State sales tax on vehicle purchase      -T1a                          +T1a                                0
  State sales tax on battery replacement   -T1b                          +T1b                                0
  State fuel taxes                         +T2                           -T2                                 0
  Federal fuel taxes                       +T3                                        -T3                    0
Costs
  Cost differential of hybrid technology   -C1                                                               -C1
  Battery replacement cost                 -C2                                                               -C2
Net                                        B1+T2+T3-(T1a+C1)-(T1b+C2)    T1a+T1b-T2   -T3          B2        B1+B2-C1-C2
("Public" = the public receiving the environmental and national security benefits of fuel savings.)

Note: B1+T2+T3 = consumer fuel expenditure savings, which equals avoided fuel tax payments plus net-of-tax fuel expenditures. The retail price differential of the hybrid (before sales tax) is C1. The after-sales-tax price differential is T1a+C1. So the net for consumers is the difference between avoided fuel expenses (B1+T2+T3) less the incremental price of hybrids (T1a+C1). All figures in the table should be discounted present values.

Table 2: Data on Alternatives
NOTE: For the analysis, assume (1) equal country-city driving, (2) a battery replacement cost of $14,500, and (3) the battery replaced in the 10th year for the 10,000-mile driving scenario, and the 8th year for the 20,000-mile driving scenario.

Table 3: Basic financial parameters which don't vary
Discount rate                       0.07
State sales tax on car purchases    0.07 (Indiana)
State fuel tax                      18 cents per gallon (Indiana)
Federal fuel tax                    18.4 cents per gallon

Table 4: Fuel Price Assumptions
High      $4.50 per gallon
Medium    $3.50 per gallon
Low       $2.50 per gallon

Other Basic Assumptions
● Operation and maintenance (O&M) costs: no difference between options
● Insurance costs: no difference between options.
● Average vehicle life: 15 years, no salvage value at end; no difference between options (it does not matter to the analysis whether the car is sold and resold during this period; it is simplest therefore to treat it as a single ownership over the whole period)
● Driving behavior: assume no difference between options:
  Low: 10,000 miles per year
  High: 20,000 miles per year
  Equal mix of city and highway driving

Table 5: Societal Costs of Automobile Usage, Excluding Carbon ($2021)
Fuel-related
  National security   $0.15 per gallon
  Local pollution     $0.26 per gallon**
  Subtotal            $0.41
Mileage-related***
  Congestion          $1.35 per mile
  Accidents           $0.81 per mile
  Subtotal            $2.16

**Note: this was listed as $.52 per mile. I took about half of this figure, because local auto emissions are controlled by catalytic converters, so reducing fuel consumption doesn't necessarily reduce pollution that much. And these external costs are at least partially related to miles driven, which we are assuming is the same for both vehicles.
***I have included mileage-related externalities just to show the external costs of driving beyond what the driver themselves imposes. However, YOU SHOULD NOT INCLUDE THESE IN THE ANALYSIS. REASON? WE'RE ASSUMING THAT THE MILES DRIVEN BY BOTH HYBRIDS AND NON-HYBRIDS ARE THE SAME. SO THESE EXTERNAL COSTS DON'T VARY ACROSS OPTIONS.
Source: Parry, I. W. H., Walls, M. & Harrington, W. (2007). Automobile externalities and policies. Journal of Economic Literature, XLV, 373-399.
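The assumptions above pin down the discounting structure of the societal NPV (15-year life, 7% discount rate, one battery replacement). A sketch of the calculation follows; the fuel-economy figures and the incremental technology cost are placeholders for the values in the case's Table 2, which is not reproduced here, so treat the function as a template rather than an answer.

```python
# Template for the societal NPV (B1 + B2 - C1 - C2) for one scenario.
DISCOUNT = 0.07
LIFE = 15                      # years of ownership, no salvage value
BATTERY_COST = 14_500          # replaced once (year 10 at 10k mi/yr, year 8 at 20k)

def societal_npv(miles_per_year, value_per_gallon, mpg_hybrid,
                 mpg_conventional, extra_tech_cost, battery_year):
    """Discounted value of fuel saved, less incremental technology and
    battery replacement costs; value_per_gallon is the shadow price of fuel."""
    gallons_saved = miles_per_year / mpg_conventional - miles_per_year / mpg_hybrid
    fuel_benefit = sum(gallons_saved * value_per_gallon / (1 + DISCOUNT) ** t
                       for t in range(1, LIFE + 1))
    battery_pv = BATTERY_COST / (1 + DISCOUNT) ** battery_year
    return fuel_benefit - extra_tech_cost - battery_pv

# hypothetical inputs: 48 vs 30 mpg, $4,000 incremental cost -- NOT Table 2 data
example = societal_npv(10_000, 3.436, 48, 30, 4_000, 10)
```

The private-buyer version (Table 4) follows the same shape but uses the after-tax fuel price, adds the 7% sales tax to the technology and battery costs, and excludes the externality adders.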
Econ 1150 Mini-Exam 3
1. Suppose the true population regression is Yi = β0 + β1X1i + β2X2i + β3X3i + ui. We obtain an n-sized sample {(Yi, X1i, X2i, X3i)}, where i = 1, ..., n. However, we left out X2i and X3i in our sample regression estimation. That is, we estimated Ŷi = β̂0 + β̂1X1i. Assume that E(ui | X1i, X2i, X3i) = 0.
(a) Using the formula, show that as n → ∞, β̂1 converges to the sum of the true population parameter β1 and omitted-variable bias terms coming from leaving out X2i and X3i.
(b) Suppose β1 > 0, β2 > 0, β3 < 0, and cov(X1i, X2i) > cov(X1i, X3i) > 0. Is β̂1 biased? If so, what is the direction of the bias and how do you know that?
2. We're interested in studying how the choice of college degree field is associated with earnings. We have the following variables for each college graduate i: income in dollars incwagei, work experience in years experiencei, and binary variables sciencei and engineeringi.
• sciencei = 1 if i's degree field is science and sciencei = 0 if i's degree field is not science.
• engineeringi = 1 if i's degree field is engineering and engineeringi = 0 if i's degree field is not engineering.
The estimated regressions with their associated Stata output are as follows:
(a) In Regressions A and B, how should one interpret β̂1^A and β̂1^B? Does either of them differ significantly from zero at the 5% level?
(b) In Regression C, how should one interpret β̂1^C and β̂2^C? Does either of them differ significantly from zero at the 5% level?
(c) Using the appropriate regression, predict the income for a college graduate whose degree field is neither science nor engineering but who has 10 years of work experience.
(d) By comparing the Stata output in Regressions A, B, and C, explain why it makes sense that β̂1^C > β̂1^A. (Hint: Which individuals are you comparing science graduates to in Regression A? What about in Regression C?)
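The omitted-variable bias in question 1 can be checked by simulation: the short-regression slope converges to β1 plus bias terms of the form βk·cov(X1,Xk)/var(X1). The coefficient and covariance values below are illustrative choices of mine, not the exam's answer.

```python
# Simulating omitted-variable bias: regress Y on X1 alone when the truth
# also involves X2 and X3.
import random

random.seed(0)
b0, b1, b2, b3 = 1.0, 2.0, 1.5, -0.5
n = 100_000
x1 = [random.gauss(0, 1) for _ in range(n)]
# X2 and X3 correlate positively with X1: cov(X1,X2)=0.8 > cov(X1,X3)=0.3 > 0
x2 = [0.8 * a + random.gauss(0, 1) for a in x1]
x3 = [0.3 * a + random.gauss(0, 1) for a in x1]
y = [b0 + b1 * a + b2 * b + b3 * c + random.gauss(0, 1)
     for a, b, c in zip(x1, x2, x3)]

# OLS slope of the short regression of Y on X1 alone: cov(X1,Y)/var(X1)
mx, my = sum(x1) / n, sum(y) / n
b1_hat = (sum((a - mx) * (v - my) for a, v in zip(x1, y))
          / sum((a - mx) ** 2 for a in x1))
# probability limit: b1 + b2*cov(X1,X2)/var(X1) + b3*cov(X1,X3)/var(X1)
plim = b1 + b2 * 0.8 + b3 * 0.3
```

With these numbers the positive term from X2 outweighs the negative term from X3, so the bias is upward; in general (as part (b) hints) the sign depends on the relative magnitudes of the two terms.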
EEC 210 HW 7

1. a. Use the Miller approximation to calculate the −3-dB frequency of the small-signal voltage gain of a common-source transistor whose ac schematic is shown below. Assume the dc drain current ID = 0.5 mA. Also assume that W = 100 µm, Ldrawn = 2 µm, Ld = 0.2 µm, Xd = 0, λ = 0, k′ = 60 µA/V², χ = 0, Cdb = 0, Cgb = 0, and fT = 3 GHz (at ID = 0.5 mA).
b. Calculate the nondominant pole magnitude for the circuit in (a). Compare your answer with a SPICE simulation.

2. For the circuit below, assume that VI is adjusted so that ID = 0.5 mA. Calculate the low-frequency small-signal voltage gain vo/vi, and use the zero-value time-constant method to estimate the −3-dB frequency. Use the same data as in the previous problem except:
a. Cdb ≠ 0. Calculate the zero-bias drain-bulk capacitance as Cdb0 = AD(Cj0′) + PD(Cjsw0′), where AD = (5 µm)W is the drain area and PD = W is the drain perimeter. Let Cj0′ = 0.4 fF/µm² and Cjsw0′ = 0.4 fF/µm. Use Equation (1.202) with ψ0 = 0.6 V to calculate Cdb. In case you do not have the book, Equation (1.202) shows that
b. Cox′ = 0.7 fF/µm², and fT is no longer given.

3. Consider the amplifier stage shown below. Assume IB is adjusted so that the dc output voltage VO = 0.
a. Calculate the low-frequency, small-signal transconductance vo/ii, and use the zero-value time-constant method to estimate the −3-dB frequency. Use the formula for Cdb0 given in Problem 2. For all transistors, assume Ldrawn = 2 µm, Ld = 0.2 µm, Xd = 1 µm, χ = 0, W1 = 100 µm, and W2 = W3 = 100 µm. Use Equations (1.201) and (1.202) with ψ0 = 0.6 V for the junction capacitances. In case you do not have the book, Equation (1.201) shows that
and Equation (1.202) is given in the previous problem. For M1, assume Vtp = −1 V, kp = 20 µA/V², λp = 1/50 V⁻¹, Cox′ = 0.7 fF/µm², Cj0′ = 0.2 fF/µm², and Cjsw0′ = 0.2 fF/µm. For M2 and M3, assume Vtn = 1 V, kn = 60 µA/V², λn = 1/100 V⁻¹, Cox′ = 0.7 fF/µm², Cj0′ = 0.4 fF/µm², and Cjsw0′ = 0.4 fF/µm.
b. Repeat (a) with a 20-pF capacitor connected from the drain to the gate of M1.

4. An amplifier stage is shown below. Calculate the zero-bias drain-bulk and source-bulk capacitances as Cdb0 = AD(Cj0′) + PD(Cjsw0′) and Csb0 = AS(Cj0′) + PS(Cjsw0′), where AD = AS = (5 µm)W is the drain and source area and PD = PS = W is the drain and source perimeter. Assume W = 100 µm, Ldrawn = 2 µm, Ld = 0.2 µm, Xd = 0, λ = 0, k′ = 60 µA/V², χ = 0, Vt = 1 V, Cox′ = 0.7 fF/µm², Cj0′ = 0.4 fF/µm², and Cjsw0′ = 0.4 fF/µm. Use Equations (1.201) and (1.202) with ψ0 = 0.6 V for all important junctions. (These equations are given in the previous problems.)
a. Calculate the low-frequency, small-signal voltage gain vo/vi.
b. Apply the zero-value time-constant method to the differential-mode half circuit to calculate the −3-dB frequency of the gain.
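The displayed forms of Equations (1.201) and (1.202) were lost in this copy of the handout. For readers without the book, the standard reverse-biased junction-capacitance relation that such equations conventionally express is reproduced below as an assumption (verify it against the textbook before relying on it):

```latex
C_j \;=\; \frac{C_{j0}}{\left(1 + \dfrac{V_R}{\psi_0}\right)^{1/2}}
```

Here C_{j0} is the zero-bias junction capacitance, V_R the reverse-bias voltage, and ψ_0 the built-in potential (0.6 V in these problems); the exponent 1/2 corresponds to an abrupt junction.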
BMAN 70141 Derivative Securities

Aims
This course introduces students to important financial derivatives, such as forwards, futures, and plain-vanilla and more exotic options. It equips students with essential techniques for valuing financial derivatives and hedging financial risk. The course emphasizes the general principles central to derivatives valuation, including no-arbitrage arguments and risk-neutral valuation methods, together with their implications for the pricing of financial derivatives. It also discusses some more advanced topics, such as valuing derivatives using Monte-Carlo simulations and finite difference methods; using alternatives to the Black-Scholes model, such as the constant elasticity of variance (CEV) model, the mixed jump-diffusion model, and stochastic volatility models; and calculating a financial institution's value at risk (VaR). All topics are introduced from an intuitive (not mathematically rigorous) perspective.

Learning Outcomes
On completion of this unit, successful students will:
• Be familiar with the most common derivative contracts traded in financial markets and OTC;
• Have some broad knowledge about how derivative contracts have developed over time, are quoted in the financial press, are traded in financial markets, etc.;
• Be able to understand, from an intuitive perspective, how derivative securities are valued, using replication approaches, risk-neutral valuation approaches, Monte Carlo valuation, or numerical methods (such as the finite-difference or the Longstaff-Schwartz least-squares methods);
• Be able to understand how derivative securities can be used in financial markets to either increase (speculation) or decrease (hedging) risk;
• Be able to solve standard exercises involving the calculation of derivative values/prices or the optimal number of derivative contracts used for hedging purposes;
• Be able to use Monte-Carlo simulations, the implicit and explicit finite difference method, and the
Longstaff-Schwartz approach to value more complicated (exotic) derivatives;
• Be able to use the simulation or the model-building approach to calculate value-at-risk;
• Be able to exercise a capacity for independent and self-managed learning.

Methods of feedback to students
I provide written and verbal, formative and summative, feedback on the group coursework. One revision session gives students formative feedback on how to improve their examination performance. Students who attend it enhance their ability to achieve the learning outcomes and perform well on the course.

Methods of feedback from students
University course evaluation questionnaires. Feedback from appointed student representative(s).

Employability
The skills developed during the course allow students to work in the investment and asset management industry, either in the role of developing financial instruments or of implementing risk management for these instruments. The course also teaches important skills for students interested in working in the non-financial sector, for example as a financial officer or controller. The most important of these skills is the hedging of corporate exposures through either direct or indirect hedging techniques.

Social Responsibility
The course flags up the various dangerous sides of derivative instruments (e.g., the unlimited losses that they can generate), motivating students to think carefully about when and how to use such instruments. As the course is, however, a quantitative course introducing students to the mathematical tools necessary to use and value derivatives, it cannot dwell deeply on the socially undesirable aspects of derivatives. A separate, less mathematical course would be needed to that end.

Assessment
Group project (25%)
The group project will be handed to you during the course. Each group consists of about five students, with group composition determined by you. If you are unable to find a group, I will of course help you to do so.
The data necessary to work on the group project are available from the Blackboard site. In the group project, students are asked to use multiple-step binomial trees, Monte-Carlo simulations, and the finite difference method to value complicated (exotic) derivatives. This will be fun! The submission deadline is usually the Wednesday of "teaching week 13." In this academic year, that Wednesday would be 18 December 2024. But the date is generally only confirmed by assessments at the start of the academic year, so it is currently preliminary. The project report has to be submitted in soft copy (no hard copy) via the course's Blackboard site. Feedback on the assignment will be given to students by mid-January 2025 at the latest (again TBC).

1½-hour examination (75%)
There will be an online exam with type-in-a-number, MCQ, and open-ended questions. The exam will take place late in January 2025. The exact exam date is generally released by the university after reading week, so in early December. More information to follow.

Overview of sessions
Week 1 (26 September 2024). Introduction to the Course/Forward Contracts 1. Course aim, structure, assessment, etc. 2. Forwards: Definition, payoffs, and market microstructure 3. Forwards: Determination of the arbitrage-free forward price Reading: Chapters 1-2, 5 (chapter numbers refer to the ninth edition of Hull)

Week 2 (3 October 2024). Forwards and Futures 1. Forwards: The forward price is not the forward's value: Valuation 2. Futures: Definition and comparison with forwards 3. Hedging with futures under basis risk Reading: Chapters 2, 3, and 5

Week 3 (10 October 2024). Options (Basics/Binomial Tree Valuation) 1. Options: Definition, payoffs/profits, terminology, market microstructure 2. Bounding the value of options: How and why should we be interested? 3. Option valuation using the binomial tree approach Reading: Chapters 10-11, 13

Week 4 (17 October 2024). Options (Black-Scholes Valuation) 1.
What is a stochastic process? Which ones are popular? 2. Deriving the famous Black-Scholes partial differential equation (PDE) 3. The Black and Scholes (1973) formula: a. A sketch of the derivation; b. Using the Black and Scholes (1973) model Reading: Chapters 14-15, 17

Week 5 (24 October 2024). The Greeks and Volatility Smiles 1. Setting the stage: A simple example 2. The Greeks: What are they and why are they useful? 3. Delta and Gamma hedging of a portfolio's value: Examples 4. What is implied volatility: Definition, approximation, put-call parity 5. What does implied volatility imply about the Black-Scholes model? Reading: Chapters 19-20

Reading Week (28 October - 3 November 2024)

Week 7 (7 November 2024). Basic Numerical Procedures 1. Introduction to Monte-Carlo simulation techniques 2. The explicit and implicit finite difference methods 3. Discussion of the coursework assignment (CWA) Reading: Chapter 21

Week 8 (14 November 2024). Exotic Options 1. Definition and valuation of various exotic options 2. Examples: gaps, choosers, compounds, barriers, binaries, etc. Reading: Chapter 26

Week 9 (21 November 2024). More on Models and Numerical Methods 1. More advanced models: CEV, mixed jump-diffusion, and stochastic volatility 2. More advanced binomial tree valuation approaches 3. The Longstaff-Schwartz least-squares approach Reading: Chapter 27

Week 10 (28 November 2024). Value-at-Risk 1. Value-at-Risk: Definition, why important? 2. Simulation-based approach to calculate the VaR 3. Model-building approach to calculate the VaR 4. Comparison of approaches Reading: Chapter 22

Week 11 (5 December 2024). Revision/Preparation for Exam Paper 1. Revision of course material 2. A look at a possible exam paper Reading: NONE
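The binomial-tree valuation covered in Week 3 (and used in the group project) can be sketched in a few lines. The following is an illustrative Cox-Ross-Rubinstein pricer for a European option; it is a generic sketch, not course-provided code, and all parameter values in the usage line are hypothetical:

```python
import math

def crr_price(S0, K, r, sigma, T, steps, call=True):
    """Price a European option on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    disc = math.exp(-r * dt)              # one-step discount factor
    # Terminal payoffs at each of the steps+1 final nodes
    values = []
    for j in range(steps + 1):
        ST = S0 * (u ** j) * (d ** (steps - j))
        values.append(max(ST - K, 0.0) if call else max(K - ST, 0.0))
    # Backward induction: discounted risk-neutral expectation at each node
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(crr_price(100, 100, 0.05, 0.2, 1.0, 500))  # close to the Black-Scholes value (about 10.45)
```

With many steps, the tree price converges to the Black-Scholes value; the same backward-induction loop extends naturally to American and exotic payoffs.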
Assignment 6 CS-GY 6033 INET Fall 2024
Due date: Dec 16th 2024, 11:55pm

Question 1: Complexity classes (12 points)
Short answers! Consider the following problems. For each problem, determine whether it is possible that there exists a polynomial-time algorithm for solving it. Justify your answer using what is currently known about the relevant complexity classes.
● Travelling salesman problem
● n × n chess
● The Halting Problem
● Vertex Cover
● Integer Factorization
● Given a set of n items, where each item has a specific weight, can we pack them onto K trucks, where each truck can hold at most weight B?

Question 2 (12 points)
Below is a list of runtimes for decision problems. For each runtime, determine whether the corresponding problem is in P, in EXP, in both, or in neither.
1. T(n) = (log n)^6
2. T(n) = log(n^6)
3. T(n) = (6n)^6
4. T(n) = n + 1000
5. T(n) = n^n
6. T(n) = 3^n + n^6
7. T(n) = 3n^2 + 6

Question 3 (27 points)
For each problem below, determine whether or not there is a known polynomial-time algorithm for solving it. You must justify why there is no known poly-time algorithm OR identify a poly-time procedure that solves the problem.
(a) Consider a political meeting with n participants. There are m issues to be discussed at the meeting. Each participant must list exactly two issues that interest them. The organisers would like to select at most k issues so that each person is interested in at least one of the selected issues.
(b) A graph G has n vertices and m edges. The problem is to determine if G contains a simple cycle of length at least 3.
(c) A graph G has n vertices and m edges. The problem is to determine if G contains a simple cycle of length at least k.
(d) A directed graph G contains n vertices and m edges. The problem is to determine if there is a path from vertex s to every other vertex in the graph.
(e) A directed graph G contains n vertices and m edges.
The problem is to determine if there is a path from vertex s to, and from, every other vertex in the graph.
(f) A directed graph G contains n vertices and m edges. The graph is not weighted. The problem is to determine if there is a path from vertex s to every other vertex in the graph, where the number of edges in the path must be at most k.
(g) A directed graph G contains n vertices and m edges. The problem is to determine if G is a DAG.
(h) An undirected graph has weighted edges. The problem is to determine if there is a path that starts at vertex s and travels to vertex t where the sum of the edge weights is less than k.
(j) An undirected graph has weighted edges. The problem is to determine if there is a path that starts at vertex s and visits all vertices exactly once, where the sum of the edge weights is less than k.

Question 4 (10 points)
Prove that the following problem is NP-complete using a reduction from either Vertex Cover, Independent Set, Dominating Set, or Clique. Recall the two steps that are necessary in order to show that a problem is NP-complete.
A set of n people attend a political meeting, where m issues are to be discussed. Each person attending has created a sublist of issues (selected from the main set of m issues) that they are most interested in. The organisers would like to select at most k issues so that each person is interested in at least one of the selected issues. The problem is to determine whether this is possible or not.

Question 5 (10 points)
Prove that the following problem is NP-complete using a reduction from either Vertex Cover, Independent Set, Dominating Set, Subset Sum, Hamiltonian Cycle, or Clique. Recall the two steps that are necessary in order to show that a problem is NP-complete.
A graph G consists of a set of n vertices and m edges. A specific vertex is labelled S. The problem is to determine if there is a simple path that starts at vertex S and visits all other vertices in the graph.
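As one illustration of what a polynomial-time justification can look like for the graph questions above (for example, testing acyclicity), here is a linear-time DAG check via Kahn's topological-sort algorithm. The edge-list encoding is an assumption of this sketch, not part of the assignment:

```python
from collections import deque

def is_dag(n, edges):
    """Return True iff the directed graph on vertices 0..n-1 is acyclic.

    Kahn's algorithm: repeatedly remove vertices of in-degree 0.
    Every vertex gets removed exactly when the graph has no cycle,
    so the total runs in O(n + m) time.
    """
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return seen == n  # all vertices removed iff no cycle exists
```

The O(n + m) bound is the kind of explicit poly-time argument the question asks you to supply.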
70068 Scheduling and Resource Allocation — Assessed Coursework
Working: Pairs (recommended) or Individual
Submission deadline: 27th November 2024 (19:00)
To submit: PDF + ZIP (see spec)

1 Context
The topic of this coursework arises in the context of performance management of serverless workflows. These are workflows where jobs are executions of serverless functions, i.e. lightweight functions running remotely on the cloud. Serverless workflows are increasingly used in industry to implement image, video, and data processing pipelines. The processing times for the functions considered in this coursework are obtained from actual measurements on Microsoft Azure VMs. The functions deal with image processing: each function receives an input image, passes it through a neural network filter that mixes the content of one image with the style of another, and produces a new image as output. The inputs and outputs of the functions create precedences in their executions, which can be described by means of a directed acyclic graph (DAG). The challenge is to schedule the execution of a serverless workflow DAG on a single machine as close to optimality as possible.

2 Assumptions
We make a number of assumptions similar to what was seen in the lectures:
a) A machine is a VM with a single CPU core.
b) Single-machine scheduling: only one function can run at a time.
c) Scheduling is non-preemptive.
d) Processing times should be treated as approximately deterministic. In reality, they depend on the processing sequence, since the image sizes get modified along the way, but for the sake of the scheduling model you should treat them as usual (sequence-independent).
e) There are no release times, i.e., all jobs are ready at the earliest time at which their precedences are met.
f) A filter (e.g., blur) can be invoked multiple times in the same workflow. Each invocation can be treated as a separate job with identical processing time.
2.1 What to upload
• A PDF file with the answers to the questions. Length is unrestricted, but we recommend aiming for a couple of pages of text (excluding tables, figures, etc.). Please include some text to help the markers understand how you structured your code.
• A ZIP archive with: i) your code; ii) a README file with minimal instructions to compile and use your code; iii) a printout (text file) of the execution of your code, which must print out the current solution considered at each iteration and its cost.

3 Questions
3.1 Question 1 (40%)
For a schedule S, consider an additive cost function g(S) = Σj gj(Cj) of the completion times of jobs j = 1, ..., n, and define gmax = maxj gj(Cj). The Least Cost Last (LCL) rule, also called Lawler's algorithm, solves the cost minimization problem 1|prec|gmax to optimality. LCL finds the optimal schedule in backward order, from the last processed job to the first one. At each step, we look at the set of jobs V such that their successors, if they exist, have all already been added to the schedule. Within the set V, we choose the job l incurring the minimum cost gl(Cl). Ties are broken arbitrarily. Question 1 asks you to:
1. Give a short presentation of the proof on page 68 of the PDF available at the URL https://arxiv.org/pdf/2001.06005.pdf (starting at "Let N = {1, 2,..., n} be the index set of all jobs, ...") that justifies why LCL is optimal (in the PDF notation, fj(·) corresponds to our gj(·)). Include in your answer a brief discussion of each passage in the proof and an example with a very small DAG of your choice that illustrates the application of LCL.
2. Using a programming language of your choice, implement the LCL rule for a general DAG and apply it to the workflow, processing times, and due dates given in Appendix A. In your code, use as the cost function the tardiness gj(Cj) = Tj = max(0, Cj − dj), with the due dates given in Appendix A.
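The LCL procedure described above can be sketched generically as follows. This is an illustrative outline only (the job indexing, successor map, and cost callback are assumptions of the sketch), not a substitute for the documented implementation the question asks for:

```python
def lcl_schedule(p, succ, g):
    """Lawler's Least Cost Last rule for 1|prec|gmax.

    p:    list of processing times for jobs 0..n-1
    succ: dict mapping a job to the set of its successors in the DAG
    g:    callable g(j, C) giving the cost of completing job j at time C
    Returns (schedule, gmax).  The schedule is built back-to-front:
    whichever job is placed last completes at the sum of all
    remaining processing times.
    """
    n = len(p)
    unscheduled = set(range(n))
    remaining = sum(p)   # completion time of the job placed last
    tail = []            # schedule in reverse order
    gmax = 0
    while unscheduled:
        # V: jobs whose successors (if any) are all already scheduled
        V = [j for j in unscheduled
             if all(s not in unscheduled for s in succ.get(j, ()))]
        l = min(V, key=lambda j: g(j, remaining))  # least cost goes last
        gmax = max(gmax, g(l, remaining))
        tail.append(l)
        unscheduled.remove(l)
        remaining -= p[l]
    tail.reverse()
    return tail, gmax
```

For example, with p = [2, 3, 1], precedence 0 → 2, due dates [3, 4, 6], and the tardiness cost g(j, C) = max(0, C − d[j]), the rule returns the schedule [0, 1, 2] with gmax = 1.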
Report in your PDF answer sheet the first two iterations, the final iteration, and a few selected intermediate iterations obtained during the execution of your code, showing the partial schedule S at each iteration.

3.2 Question 2 (60%)
Suppose now that we wish to solve a total tardiness problem for the same workflow studied in Question 1. The problem is now NP-hard and, since the measure Σ Tj is no longer expressible as a maximum of cost functions, the LCL rule is no longer optimal. Using a programming language of your choice, write a tabu search algorithm for the 1|prec|Σ Tj problem for the workflow given in Appendix A. In your implementation:
• Make sure that your implementation is generic, i.e., it can accept an arbitrary set of processing times, due dates, and precedences (the latter can be assumed to form a directed acyclic graph).
• Include in your implementation code to generate a valid initial solution (which may not be optimal).
• Explore the local neighbourhoods using the same rules used in the exercises of Problem Sheet 3, i.e., for a schedule 1234, if at iteration k you last considered the adjacent interchange (2, 3), at iteration k + 1 consider first the adjacent interchange (3, 4), then (1, 2), etc.
• Compared to the tabu search algorithms seen in class, you will need to introduce into the generation of the neighbourhood a strategy to account for job precedences.
To illustrate your implementation, include in your answer the following results:

3.2.1
Execution of the tabu search method using the processing times, due dates, and precedences in Appendix A. Assume a tabu list length L = 20. Obtain the tabu search schedules with K = 10, K = 100, and K = 1000 iterations. Set the tolerance to γ = 10.
Force the initial solution to be
x0 = [30, 29, 23, 10, 9, 14, 13, 12, 4, 20, 22, 3, 27, 28, 8, 7, 19, 21, 26, 18, 25, 17, 15, 6, 24, 16, 5, 11, 2, 1, 31]
Include in the PDF the first few iterations and notable intermediate solutions (i.e., those where a new optimum is found) as the search progresses. Use a level of detail similar to what was seen in the tabu search example in the lecture notes.

3.2.2
In this part, you are free to vary the values of γ and L as you wish. Include in your answer the best schedule xTS you find using the tabu search algorithm and its total tardiness. Discuss your findings, commenting on the effects that you observed by varying the parameters γ and L.
Suggestion: as you develop your code, you may consider applying it to some of the exercises solved in Tutorial Problem Sheet 3 to verify its correctness. You are not asked to document this debugging phase in the coursework answer.

A Workflow
The figure below shows the considered image processing workflow, consisting of 31 nodes. Node 31 is the exit node. Although its size may seem large at first, this is in reality of moderate size for typical workflows used in industry.

A.1 Adjacency matrix of the directed acyclic graph
Element G(i,j) = 1 if and only if there exists an edge from node i to node j.
MATLAB-like format (indices start at 1):
Python-like format (indices start at 0):

A.2 Nodes
The following table gives the node numerical index used in the previous adjacency matrix declarations and the corresponding filter type. For example, nodes 3 and 12 are both of emboss type, so they should be treated as having the same processing time.
Processing times:
p = [3, 10, 2, 2, 5, 2, 14, 5, 6, 5, 5, 2, 3, 3, 5, 6, 6, 6, 2, 3, 2, 3, 14, 5, 18, 10, 2, 3, 6, 2, 10]
Due dates:
d = [172, 82, 18, 61, 93, 71, 217, 295, 290, 287, 253, 307, 279, 73, 355, 34, 233, 77, 88, 122, 71, 181, 340, 141, 209, 217, 256, 144, 307, 329, 269]
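To make the adjacent-interchange neighbourhood and tabu rules of Question 2 concrete, here is a minimal tabu-search sketch for 1|prec|Σ Tj. The acceptance rule, the data encoding, and the resume-scan strategy are illustrative assumptions of this sketch; a submitted implementation should follow the exact rules from Problem Sheet 3 and the lecture notes:

```python
from collections import deque

def total_tardiness(seq, p, d):
    """Sum of Tj = max(0, Cj - dj) over a job sequence."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

def precedence_ok(seq, succ):
    """Check that every job appears before all of its successors."""
    pos = {j: i for i, j in enumerate(seq)}
    return all(pos[j] < pos[s] for j in succ for s in succ[j])

def tabu_search(seq0, p, d, succ, L=20, K=100, gamma=10):
    """Tabu search with adjacent interchanges; L, K, gamma as in the brief."""
    seq = list(seq0)
    best, best_cost = list(seq), total_tardiness(seq, p, d)
    cur_cost = best_cost
    tabu = deque(maxlen=L)  # recently swapped pairs
    start = 0               # resume scanning pairs after the last accepted swap
    n = len(seq)
    for _ in range(K):
        moved = False
        for k in range(n - 1):
            i = (start + k) % (n - 1)
            a, b = seq[i], seq[i + 1]
            cand = seq[:i] + [b, a] + seq[i + 2:]
            if not precedence_ok(cand, succ):  # skip infeasible interchanges
                continue
            cost = total_tardiness(cand, p, d)
            pair = frozenset((a, b))
            # accept non-tabu moves within tolerance, or any new global best
            if (pair not in tabu and cur_cost - cost > -gamma) or cost < best_cost:
                seq, cur_cost = cand, cost
                tabu.append(pair)
                start = i + 1
                moved = True
                if cost < best_cost:
                    best, best_cost = list(cand), cost
                break
        if not moved:
            break  # no admissible move in the whole neighbourhood
    return best, best_cost
```

On a tiny precedence-free instance (p = [3, 2, 1], d = [6, 2, 1], starting from [0, 1, 2]), the sketch reaches the schedule [2, 1, 0] with total tardiness 1.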
Department of Mathematics
Midterm #2, MATH-UA.0325 - Fall 2024

Exercise 1. (6 pts) True or false. Justify your answer.
a) The function g(x) = sin x is uniformly continuous on R.
b) Let f(x) be a continuous function on [0, 1]; then
c)

Exercise 2. (10 pts) Compute the following limits:
a)
b)

Exercise 3. (4 pts) Find the antiderivative of the function, if any.

Exercise 4. (10 pts) Find the Taylor series centered at x0 and find the interval on which the expansion is valid.
a) where x0 = 1.
b) where x0 = 0.

Exercise 5. (20 pts) Determine whether or not the sequence converges by answering the following questions:
a) Is the function f(x) obtained from the n-th term of the sequence continuous?
b) Is the function f(x) positive?
c) Is the function f(x) decreasing?
d) Does the improper integral ∫ f(x) dx converge? Justify your answer.