Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] Stock Return Regression (Python)

The questions below should be carried out in Python or R. You'll need to use the attached Excel file for question III. Submissions are evaluated on correctness, analytical thinking, documentation, structure and clarity of the code/model, and the style of the output and data visualization. Please state all the assumptions that you'd like us to consider.

I. Stock Return Regression

Load info.txt and performance.csv into Python or R. The file performance.csv has columns ID, Date, and Performance, which are monthly returns for each instrument identified by ID and each Date since January 1, 1990. The file info.txt has static information about each instrument identified by ID, including Name, Currency, and (Geographic) Focus.

a). Join the input files performance.csv and info.txt, and produce a data frame with Date, Index1, Index2, Index3 and Stock A. The values are monthly returns.
b). Split the dataset into training and testing sets with an 80/20 split. Using the training dataset, regress the Stock A returns against all 3 indices together, and interpret the results. Please set a seed to make the outputs reproducible.
c). What might be the potential problems with this regression, and how would you identify them?
d). Re-run the regression with a Lasso model. Interpret the new coefficients, and explain why they differ from the linear-regression results.
e). Using the fitted models from b) and d), predict Stock A returns on the testing dataset, and plot the fitted values against the actual values. Which model provides a better fit, and what metric do you use to measure fit?

II. Optimal Investing

a). Download stock price data for 10+ equities from free online sources.
b). Use these stocks to compute a mean-variance efficient frontier. While mean-variance optimization is built into a lot of software packages, using a generic optimization package with the correct objective function and constraints is preferred here.
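As a sketch of the generic-optimizer approach asked for in part II b): for a fully invested portfolio with no short-sale constraint, minimizing w'Σw subject to w'μ = target and w'1 = 1 has only equality constraints, so the KKT conditions reduce to one linear system. The inputs below are made-up illustrative numbers, not data from the assignment; a real submission would use downloaded returns (and could equally use a package such as scipy.optimize).

```python
import numpy as np

def frontier_weights(mu, Sigma, target):
    """Minimum-variance weights for: min w'Σw  s.t.  w'μ = target, w'1 = 1.

    With equality constraints only, the KKT conditions are linear:
        2Σw + λμ + γ1 = 0,   w'μ = target,   w'1 = 1.
    """
    n = len(mu)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2.0 * Sigma       # stationarity block
    A[:n, n] = mu                 # multiplier on the return constraint
    A[:n, n + 1] = 1.0            # multiplier on the budget constraint
    A[n, :n] = mu                 # w'μ = target
    A[n + 1, :n] = 1.0            # w'1 = 1
    b = np.zeros(n + 2)
    b[n] = target
    b[n + 1] = 1.0
    return np.linalg.solve(A, b)[:n]

# Illustrative (made-up) annualized inputs for 3 assets.
mu = np.array([0.06, 0.08, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

# Sweep target returns to trace out the frontier (return, volatility) pairs.
targets = np.linspace(0.06, 0.10, 21)
frontier = [(t, float(np.sqrt(w @ Sigma @ w)))
            for t in targets
            for w in [frontier_weights(mu, Sigma, t)]]
```

Plotting volatility against target return for these pairs gives the (unconstrained) efficient frontier; adding short-sale constraints, as in part d), makes the problem a quadratic program rather than a linear solve.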
How did you compute the expected return for each stock? The covariance matrix? What start/end dates did you use for the return series? What frequency are the returns? Visualize the covariance matrix. Report all the expected returns and covariances as annualized quantities.
c). Explain what the efficient frontier means. Which portfolio would you invest in, and why?
d). Add short-sale constraints and plot the constrained efficient frontier together with the one from b). Explain any differences you see.
e). If you wanted to invest in 200 equities, explain any difficulties in computing the covariance matrix and how these can be overcome.

III. Case Study - See attached Excel

This is an open-ended case study. We ask that you make assumptions where you see fit and specify all assumptions made in the sheet. Don't stress it - we know you may not be an expert in private markets, so it's ok if it isn't perfect. We kindly ask that you just do what you can to the best of your ability.

IV. Correlated Contribution Times

Say you invest in a PE fund that will invest your commitment equally in 10 companies. Ignore management fees. Let τ1, ..., τ10 denote the times when each company is purchased.

a). Assume τi ~ iid Exp(λ) for λ = 1/2.5. Simulate the contributions for one fund. Plot the cumulative contributions. What is the interpretation of λ?
b). Now simulate 100 funds and compute the average cumulative contribution curve. Plot it. What is this curve?
c). In practice, contribution times may be dependent. If I find one excellent company to buy, it's more likely that I will soon find another excellent company to purchase. Find a model that will allow you to simulate dependent contribution times. Describe the model and the intuition behind the dependence. How did you set the parameters? Simulate one fund and plot it.
d). Simulate 100 funds and compute the average cumulative contribution curve. How does it differ from b)?
e). For the models in a) and c), compute the maximum contribution in any one calendar year across many simulations. Plot the distribution of the maximum. LPs need to hold cash in order to meet these contributions. Under which model does the client need to hold less cash?
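Parts IV a)-b) can be sketched in a few lines of NumPy (λ = 1/2.5 is from the prompt; the seed and grid are my own choices, and plotting is left out). Each of the 10 purchase times contributes 1/10 of the commitment, so the cumulative contribution at time t is just the fraction of purchase times at or before t.

```python
import numpy as np

rng = np.random.default_rng(0)   # seed for reproducibility

LAM = 1 / 2.5      # rate λ; mean wait per company is 1/λ = 2.5 years
N_COMPANIES = 10

def one_fund(rng):
    """Purchase times for one fund: 10 iid Exp(λ) draws."""
    return np.sort(rng.exponential(scale=1 / LAM, size=N_COMPANIES))

def cumulative_contribution(times, grid):
    """Fraction of the commitment called by each time point on `grid`."""
    return (times[None, :] <= grid[:, None]).mean(axis=1)

grid = np.linspace(0, 15, 301)

# IV a): one fund's cumulative contribution curve (a 10-step staircase).
single = cumulative_contribution(one_fund(rng), grid)

# IV b): average over 100 funds. This average approaches the Exp(λ) CDF
# 1 - exp(-λt), i.e. the *expected* contribution schedule.
avg = np.mean([cumulative_contribution(one_fund(rng), grid)
               for _ in range(100)], axis=0)
```

For part c), one natural direction is to make the times dependent, e.g. via a shared random rate across a fund or a self-exciting (Hawkes-style) arrival process, so that one purchase raises the short-run likelihood of another.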

$25.00

[SOLVED] CSSE2310/CSSE7231 Semester 2 2022 Assignment 4

CSSE2310/CSSE7231 — Semester 2, 2022 Assignment 4 (version 1.1)
Marks: 75 (for CSSE2310), 85 (for CSSE7231)
Weighting: 15%
Due: 6:00pm Friday 28 October, 2022

Specification changes since version 1.0 are shown in red and are summarised at the end of the document.

Introduction

The goal of this assignment is to further develop your C programming skills, and to demonstrate your understanding of networking and multithreaded programming. You are to create two programs which together implement a distributed communication architecture known as publish/subscribe. One program – psserver – is a network server which accepts connections from clients (including psclient, which you will implement). Clients connect, and tell the server that they wish to subscribe to notifications on certain topics. Clients can also publish values (strings) on certain topics. The server will relay a message to all subscribers of a particular topic. Communication between psclient and psserver is over TCP using a newline-terminated text command protocol. Advanced functionality such as connection limiting, signal handling and statistics reporting is also required for full marks. CSSE7231 students shall also implement a simple HTTP interface to psserver. The assignment will also test your ability to code to a particular programming style guide, to write a library to a provided API, and to use a revision control system appropriately.

Student Conduct

This is an individual assignment. You should feel free to discuss general aspects of C programming and the assignment specification with fellow students, including on the discussion forum. In general, questions like “How should the program behave if ⟨this happens⟩?” would be safe, if they are seeking clarification on the specification. You must not actively help (or seek help from) other students or other people with the actual design, structure and/or coding of your assignment solution.
It is cheating to look at another student's assignment code and it is cheating to allow your code to be seen or shared in printed or electronic form by others. All submitted code will be subject to automated checks for plagiarism and collusion. If we detect plagiarism or collusion, formal misconduct actions will be initiated against you, and those you cheated with. That's right: if you share your code with a friend, even inadvertently, then both of you are in trouble. Do not post your code to a public place such as the course discussion forum or a public code repository, and do not allow others to access your computer – you must keep your code secure. You must follow the code referencing rules for all code committed to your SVN repository (not just the version that you submit). Uploading or otherwise providing the assignment specification or part of it to a third party, including online tutorial and contract cheating websites, is considered misconduct. The university is aware of these sites and many cooperate with us in misconduct investigations. The course coordinator reserves the right to conduct interviews with students about their submissions, for the purposes of establishing genuine authorship. If you write your own code, you have nothing to fear from this process. If you are not able to adequately explain your code or the design of your solution, and/or are not able to make simple modifications to it as requested at the interview, then your assignment mark will be scaled down based on the level of understanding you are able to demonstrate. In short - don't risk it! If you're having trouble, seek help early from a member of the teaching staff. Don't be tempted to copy another student's code or to use an online cheating service.
You should read and understand the statements on student misconduct in the course profile and on the school website: https://www.itee.uq.edu.au/itee-student-misconduct-including-plagiarism

The publish/subscribe communication model

From Wikipedia: “In software architecture, publish-subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to receivers, called subscribers. Instead, senders categorise published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.” In this way, the participants in the communication are very loosely coupled - when publishing a message on a given topic there is no knowledge or requirement that any other participant is listening for that topic. When correctly implemented, publish/subscribe can be very efficient for scaling to large numbers of participants - take a look at the Wikipedia page for more details if you are interested, but this is not required to complete this assignment.

Specification – psclient

The psclient program provides a commandline interface that allows you to participate in the publish/subscribe system as a client: connecting to the server, naming the client, and subscribing to and publishing topics. psclient will also output notifications when topics to which the client is subscribed are published. To fully implement this functionality, psclient will either require two threads (easiest) or, if you wish, you may use select(). Choose whichever you prefer, as long as you achieve the required functionality.

Command Line Arguments

Your psclient program is to accept command line arguments as follows:
./psclient portnum name [topic] ...
• The mandatory portnum argument indicates which localhost port psserver is listening on – either numerical or the name of a service.
• The mandatory name argument specifies the name to be associated with this client, e.g. barney.
• Any topic arguments, if provided, are to be treated as strings which are topics to which psclient should immediately subscribe after connecting to the server, e.g. news, weather.
• If no topic arguments are provided then psclient will connect to the server without subscribing to any topics.

psclient behaviour

If insufficient command line arguments are provided then psclient should emit the following message (terminated by a newline) to stderr and exit with status 1:
Usage: psclient portnum name [topic] ...
If the correct number of arguments is provided, then further errors are checked for in the order below.

Restrictions on the name

The name argument must not contain any spaces, colons, or newlines, and must not be an empty string. If either of these conditions is not met then psclient shall emit the following message (terminated by a newline) to stderr and exit with status 2:
psclient: invalid name

Topic argument(s)

The topic arguments are optional. Topic arguments must not contain any spaces, colons, or newlines, and must not be empty strings. If either of these conditions is not met for any of the topics specified on the command line then psclient shall emit the following message (terminated by a newline) to stderr and exit with status 2:
psclient: invalid topic
Duplicated topic strings are permitted – your program does not need to check for these but should instead just attempt to subscribe multiple times. (A correctly implemented server will ignore requests to subscribe to a topic that is already subscribed to.)
Connection error

If psclient is unable to connect to the server on the specified port (or service name) of localhost, it shall emit the following message (terminated by a newline) to stderr and exit with status 3:
psclient: unable to connect to port N
where N should be replaced by the argument given on the command line. (This may be a non-numerical string.)

psclient runtime behaviour

Assuming that the commandline arguments are without errors, psclient is to perform the following actions, in this order, immediately upon starting:
• Connect to the server on the specified port number (or service name) – see above for how to handle a connection error.
• Provide the client's name to the server using the ‘name’ command (see the Communication protocol section for details of the name command).
• Send subscription requests to the server for any topics listed on the command line, in the order that they were specified (see the Communication protocol section for details of the subscribe command).
From this point, psclient shall simply output to its stdout any lines received over the network connection from the server, without any error checking or processing. A correctly implemented psserver will only ever send publication notices; however, your psclient does not need to check for this. Specifically – psclient is not responsible for any logic processing or error checking on messages received from the server. For example, if a faulty server keeps sending psclient publication messages on a topic to which psclient never subscribed, or from which it has unsubscribed, psclient does not have to detect this. It shall simply and blindly output those messages to stdout. If the network connection to the server is closed (e.g. psclient detects EOF on the socket), then psclient shall emit the following message to stderr and terminate with exit status 4:
psclient: server connection terminated
Simultaneously and asynchronously, psclient shall be reading newline-terminated lines from stdin, and sending those lines unmodified to the server (effectively, this interface requires the user to type in command strings that are sent to the server – this simplifies the implementation of psclient dramatically). The three valid message types are:
• pub topic value – publish value under topic. Note that value may contain spaces and colons.
• sub topic – subscribe this client to topic.
• unsub topic – unsubscribe this client from future publications of topic.
Note that psclient is not required to perform any error checking on the input lines – simply send them unmodified to the server. In this sense, psclient is a lot like netcat. If psclient detects EOF on stdin or some other communication error, it shall terminate with exit status 0. Your psclient program is not to register any signal handlers nor attempt to mask or block any signals.

psclient example usage

Subscribing to a topic from the commandline, then sending an invalid publish (missing value), and then immediately publishing on that same topic and receiving the publication notice back from the server (assuming psserver is listening on port 49152). Lines in bold face are typed interactively on the console; they are not part of the output of the program:
$ ./psclient 49152 fred topic1
pub topic1
:invalid
pub topic1 value 1
fred:topic1:value 1
Subscribing to a topic interactively, publishing on it (and receiving the publication notice back from the server), unsubscribing, then publishing again:
$ ./psclient 49152 fred
sub topic1
pub topic1 value1
fred:topic1:value1
unsub topic1
pub topic1 value1
Here you can see that the publishing of ‘topic1’ after subscribing causes the client to receive the publication notice as expected.
Then the client unsubscribes, and no notification is subsequently received despite the republication of the same topic. Subscribing to several topics on the command line, and having these published by other clients at some point after connecting:
$ ./psclient 49152 fred topic1 topic2
...
barney:topic1:value 1
...
wilma:topic2:value:2
barney:topic2:some Value
...
unsub topic1
wilma:topic2:a String Here
...
Note that after sending the ‘unsub’ command, and assuming a correctly functioning server, we do not expect this client to receive any further publication messages regarding ‘topic1’. Messages on other subscribed topics will still be received.

Specification – psserver

psserver is a networked publish/subscribe server, allowing clients to connect, name themselves, subscribe to and unsubscribe from topics, and publish messages setting values for topics. All communication between clients and the server is over TCP using a simple command protocol that will be described in a later section.

Command Line Arguments

Your psserver program is to accept command line arguments as follows:
./psserver connections [portnum]
In other words, your program should accept one mandatory argument (connections), and one optional argument which is the port number to listen on for connections from clients. The connections argument indicates the maximum number of simultaneous client connections to be permitted. If this is zero, then there is no limit to how many clients may connect (other than operating system limits, which we will not test). The portnum argument, if specified, indicates which localhost port psserver is to listen on. If the port number is absent or zero, then psserver is to use an ephemeral port. Important: Even if you do not implement the connection limiting functionality, your program must correctly handle command lines which include that argument (after which it can ignore any provided value – you will simply not receive any marks for that feature).
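The assignment itself must be written in C, but the newline-terminated text protocol is easy to prototype first. Below is a hypothetical sketch, in Python for brevity, of server-side parsing of one protocol line; the function name and the exact error handling are my own choices, not part of the specification, which only describes the pub/sub/unsub/name commands and the no-spaces/no-colons argument rules.

```python
def parse_command(line):
    """Split one protocol line into a command tuple, or None if malformed.

    Per the spec sketch: 'pub' takes a topic plus a value (the value may
    contain spaces and colons); 'name', 'sub' and 'unsub' take a single
    non-empty argument containing no spaces, colons or newlines.
    """
    line = line.rstrip("\n")
    if line.startswith("pub "):
        rest = line[4:]
        topic, _, value = rest.partition(" ")
        if not topic or ":" in topic or not value:
            return None                     # missing value or bad topic
        return ("pub", topic, value)
    cmd, _, arg = line.partition(" ")
    if cmd in ("name", "sub", "unsub"):
        if not arg or " " in arg or ":" in arg:
            return None                     # empty or illegal argument
        return (cmd, arg)
    return None                             # unknown command
```

In the C programs, the same logic would sit behind fgets()/strtok-style line handling on the connected socket, with one client-handling thread per connection on the server side.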

$25.00

[SOLVED] COMP3314 Machine Learning Quiz 1

COMP3314 Machine Learning Quiz 1
19 Oct. 2020

Q1 (5 marks): Consider the perceptron algorithm and the following sequence of samples. Write down all steps of the perceptron algorithm using the given sequence of training data. Let all weights be initialized with 1 and let the bias be 0.

Q2.1 (1 mark): Consider data that is linearly separable. Explain what that means.

Q2.2 (1 mark): What is the impact of the parameter C in SVM?

Q2.3 (2 marks): Consider soft margin SVM on a binary classification problem. Let the data be linearly separable. If we used a relatively small value of C, could it hurt the training accuracy? Explain.

Q2.4 (3 marks): Consider, again, soft margin SVM on a binary classification problem. This time let's use a relatively large value for C on the data below. Copy the figure to your handwritten notes and sketch the decision boundary that you think we would get with SVM. Provide a short explanation.

Q2.5 (3 marks): Again, copy the figure above to your handwritten notes and this time indicate which points are special points. A special point is a point that you could remove from the training set such that a retrained SVM would get a different decision boundary compared to the full training set.

Q3 (5 marks): We will use the dataset below to learn a decision tree which predicts if students pass COMP3314 (Yes or No), based on their cumulative GPA (High, Medium, or Low) and whether they studied (True, False). Draw the full decision tree that would be learned for this dataset using the entropy. Show all calculations and write down the entropy values (use log2) for all nodes in the tree.
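The quiz's actual sample sequence and figures are not reproduced in this listing, so as an illustration of the Q1 procedure only: a perceptron with all weights initialized to 1 and bias 0, run on made-up data, assuming the classic update rule w ← w + y·x, b ← b + y on each misclassified sample (with labels y in {-1, +1}).

```python
def perceptron(samples, epochs=10):
    """Perceptron with weights initialized to 1 and bias 0 (as in Q1).

    samples: list of (x, y) pairs, x a feature tuple, y in {-1, +1}.
    Cycles through the data in the given order; stops early once an
    entire pass makes no updates (converged on separable data).
    """
    n = len(samples[0][0])
    w, b = [1.0] * n, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:                 # misclassified: update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:
            break
    return w, b

# Made-up linearly separable data (NOT the quiz's sequence).
data = [((1.0, 0.0), 1), ((2.0, 1.0), 1), ((0.0, 1.0), -1), ((-1.0, 2.0), -1)]
w, b = perceptron(data)
```

For the written quiz answer, the same loop would be traced by hand: record w, b and the activation at every sample, noting which steps trigger an update.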

$25.00

[SOLVED] Programming Assignment 2 (Java)

Programming Assignment 2

Objective of this assignment:
• Develop and implement, in your preferred language, a simple application using UDP and TCP sockets. The application using UDP sockets must be developed for this Programming Assignment 2. The application using TCP sockets will be due for Programming Assignment 3. Ensure the language is already available on the Tux machines; it is your responsibility to check. 10 bonus points if the client and server are developed in different languages.

What you need to do:
1. Implement a simple UDP Client-Server application (Programming Assignment 2)
2. Implement a simple TCP Client-Server application (Programming Assignment 3)

Objective: The objective is to implement a client-server application using a safe method: start from simple working code for the client and the server. You must slowly and carefully bend (modify), little by little, the client and server alternately until you achieve your ultimate goal. You must bend and expand each piece alternately the way a blacksmith forges iron. From time to time, save your working client and server so that you can roll back to the latest working code in case of problems. Failing to follow this incremental approach may result in a ball of wax that is impossible to debug if your program does not behave or work as expected. If you plan to use Java for this programming assignment, you are advised to start from the Friend client and server application to implement the calculator server. You will first implement the calculator server using UDP (Programming Assignment 2), and then TCP (Programming Assignment 3). If using a language other than Java, you are on your own. Ensure that your preferred language is already available on the Tux machines; it is your responsibility to check.

Part A: Datagram socket programming (Programming Assignment 2)

The objective is to design a Calculating Server (CS).
This calculating server performs bitwise boolean and arithmetic computations requested by a client on 16-bit signed integers. Your server must offer the following operations: 1) addition (+), 2) subtraction (-), 3) bitwise OR (|), 4) bitwise AND (&), 5) division, and 6) multiplication.

A client request has the following format:

Field:        TML | Operand 1 | Op Code | Operand 2 | Request ID
Size (bytes):  1  |     2     |    1    |     2     |     2

Where
1) TML is the Total Message Length (in bytes), including TML itself. It is an integer representing the total number of bytes in the message.
2) Request ID is generated by the client to differentiate requests. You may use a variable randomly initialized and incremented each time a request is sent.
3) Op Code is a number specifying the desired operation, following this table:

Operation: +  -  |  &  /  *
OpCode:    0  1  2  3  4  5

4) Operand 1 is the first (or only) operand for all operations.
5) Operand 2 is the second operand.

Operands are sent in network byte order (i.e., big endian). Hint: create a class object Request like "Friend", but with the information needed for a request.

Below are two examples of requests.

Request 1: suppose the client requests the operation 240 + 4 (this is the 5th request):
0x08 0x00 0xF0 0x00 0x00 0x04 0x00 0x05

Request 2: suppose the client requests the operation 240 - 160 (this is the 9th request):
0x08 0x00 0xF0 0x01 0x00 0xA0 0x00 0x09

The server will respond with a message in this format:

Field:        TML | Result | Error Code | Request ID
Size (bytes):  1  |   4    |     1      |     1

Where
1) TML is the Total Message Length (in bytes), including TML itself. It is an integer representing the total number of bytes in the message.
2) Request ID is the number that was sent as Request ID in the request sent by the client.
3) Error Code is 0 if the request was valid, and 127 if the request was invalid (length not matching TML).
4) Result is the result of the requested operation.

In response to Request 1 below,
0x08 0x00 0xF0 0x00 0x00 0x04 0x00 0x05
the server will send back:
0x07 0x00 0x00 0x00 0xF4 0x00 0x05
In response to Request 2,
0x08 0x00 0xF0 0x01 0x00 0xA0 0x00 0x09
the server would send back:
0x07 0x00 0x00 0x00 0x50 0x00 0x09

a) Repetitive Server: Write a datagram Calculating Server (ServerUDP.xxx) in your preferred language. This server must respond to requests as described above. The server must bind to port (10010+TID) and could run on any machine on the Internet. TID is your Canvas team #. The server must accept a command line of the form:
prog ServerUDP portnumber
where prog is the executable and portnumber is the port the server binds to. For example, if your Team ID (GID) is Team 13, then your server must bind to port # 10023. Whenever the server gets a request, it must:
i. print the request one byte at a time in hexadecimal (for debugging and grading purposes)
ii. print out the request in a manner convenient for a typical Facebook user: the request ID and the request (operands and required operation)

b) Write a datagram client (ClientUDP.xxx) in your preferred language that:
i. Accepts a command line of the form:
prog ClientUDP servername PortNumber
where prog is the executable, servername is the server name, and PortNumber is the port number of the server. Your program must prompt the user for an Opcode, Operand1 and Operand2, where OpCode is the opcode of the requested operation (see the opcode table) and Operand1 and Operand2 are the operands. For each entry from the user, your program must perform the following operations:
ii. form a message as described above
iii. send the message to the server and wait for a response
iv. print the message one byte at a time in hexadecimal (for debugging and grading purposes)
v. print out the response of the server in a manner convenient for a typical Facebook user: the request ID and the response
vi. print out the round trip time (time between the transmission of the request and the reception of the response)
vii. prompt the user for a new request

Part B: TCP socket programming (Due for Programming Assignment 3)

Repeat Part A using TCP sockets to produce (ServerTCP.xxx, ClientTCP.xxx).

How to get started (if using Java)?
1) Download all files (UDP sockets) needed to run the "Friend" application used in Module 2 to illustrate how any class object can be exchanged: Friend.java, FriendBinConst.java, FriendDecoder.java, FriendDecoderBin.java, SendUDP.java, and RecvUDP.java.
2) Compile these files and execute the UDP server and client. Make sure they work.
3) Create a new folder called Request and duplicate inside it ALL files related to the Friend class object.
4) Inside the folder Request, change ALL occurrences of "Friend" to "Request", including the file names.
5) Adapt each file to your calculator application. Replace the fields used by Friend with the fields used by a request.
6) Aim to have the client send one request and have the server understand it (just like what we did with a Friend object).
7) When your server receives and prints out a request correctly, you then need to send back a response...
8) Create a class object Response....

Report
• Write a report. The report must include screenshots of the client and the server. We must see on the screenshot of the client four successful requests for the operations - (subtraction), | (or), & (and), and * (multiplication). To receive any credit, the screenshots must clearly show the Tux machine, the username of one of the classmates, and the date. To get the date, just run the command date before executing your client or server. Each missing screenshot will result in a 25-point penalty.
Your screenshot should have the information on this template:
• If your program does not work, explain the obstacles encountered.

What you need to turn in:
• Electronic copy of all your source programs (submit them on Canvas separately).
• In addition, put all the source programs in a folder named with your concatenated last name and first name. Zip the folder and submit the zipped folder TOO. The grader should see on Canvas all your source programs separately AND a zip folder containing all the source programs needed to compile and execute your program.
• Electronic copy of the report (including your answers), standalone. Submit the file as a Microsoft Word or PDF file.

Grading
1) The UDP/TCP client is worth 40% if it works well: it communicates with YOUR server. Furthermore, screenshots of your client and server running on Tux machines must be provided. The absence of screenshots, or screenshots on machines other than the Tux machines, will incur a 15% penalty.
2) The UDP/TCP client is worth 10% extra if it works well with a working server from any of your classmates.
3) The UDP/TCP server is worth 40% if it works well: it communicates with YOUR client. Furthermore, screenshots of your client and server running on Tux machines must be provided. The absence of screenshots, or screenshots on machines other than the Tux machines, will incur a 15% penalty.
4) The UDP/TCP server is worth 10% extra if it works well with a working client from any of your classmates.
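The fixed-width, big-endian byte layouts in Part A map directly onto packed binary fields, whatever language the assignment is done in. As a quick sanity check of the framing (shown in Python; the helper names are mine, and the spec's own examples are used as expected values):

```python
import struct

# Request: TML(1, unsigned) | Operand1(2, signed) | OpCode(1) | Operand2(2, signed) | RequestID(2, unsigned)
REQ_FMT = ">BhBhH"     # '>' = big endian (network byte order), no padding
# Response: TML(1) | Result(4, signed) | ErrorCode(1) | RequestID(1)
RESP_FMT = ">BiBB"

def pack_request(op1, opcode, op2, request_id):
    tml = struct.calcsize(REQ_FMT)              # 8 bytes, counting TML itself
    return struct.pack(REQ_FMT, tml, op1, opcode, op2, request_id)

def unpack_request(data):
    tml, op1, opcode, op2, rid = struct.unpack(REQ_FMT, data)
    assert tml == len(data)                     # length must match TML
    return op1, opcode, op2, rid

def pack_response(result, error, request_id):
    tml = struct.calcsize(RESP_FMT)             # 7 bytes
    return struct.pack(RESP_FMT, tml, result, error, request_id)

# Request 1 from the spec: 240 + 4, 5th request.
req1 = pack_request(240, 0, 4, 5)
# Request 2 from the spec: 240 - 160, 9th request.
req2 = pack_request(240, 1, 160, 9)
# Response to Request 1: result 244, no error, request ID 5.
resp1 = pack_response(240 + 4, 0, 5)
```

Printing each byte of req1 in hex reproduces the spec's example 0x08 0x00 0xF0 0x00 0x00 0x04 0x00 0x05, which is exactly the per-byte debug output the server is required to print.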

$25.00

[SOLVED] Fall 2025/26 ISOM5260 Project Description

Fall 2025/26—ISOM5260: Project Description

Note: This project is recommended for students who have taken Database Management courses in their Undergraduate/Postgraduate studies or have practical experience working on databases.

Due Date: Oct 18, 2025 (before 9am)

You are required to build a simple information system with a graphical user interface and a database, and also to compile managerial reports from the database using SQL. You may choose to build with any database product, e.g., Microsoft Access, Oracle, MySQL, etc. The steps are explained briefly below:

1. Identify the functional requirements of the “Coffee Ordering System”
The system should involve 5-7 entities in the E-R diagram or at least 10 tables in the relational database. The following data has to be captured by the system: members' personal data and preferences, the drinks information to be chosen by members and walk-in customers, order quantity and status, etc. You should also document the scope and business values of the system; other requirements can be documented if necessary.

2. Conceptual Data Modeling
After identifying entities and attributes, make use of an E-R diagram to show the relationships among entities. Note that the E-R diagram should show all entities, attributes, primary keys, and maximum and minimum cardinalities. You can make use of the online drawing tool draw.io for drawing the E-R model. You may also consider using Computer Aided Software Engineering (CASE) tools, for instance, Data Modeler, to build the E-R model. A list of business rules and assumptions that must be enforced in the database is required to be documented in detail.

3. Create a logical database design and physical database development
From the E-R model you created, develop a Relational Schema of the database. As a logical database design, show all functional dependencies in the database. Normalization is to be performed in the logical database design, if necessary.
The logical database design should be normalized into third normal form (3NF). You should also document any further assumptions, constraints, and business rules. 4.    Input records to each of the tables via user interface There are no absolute requirements on how much data should be in tables of the database, however, you should input adequate records to facilitate queries and reports generation. Input the records into the database using INSERT INTO statements of SQL. 5.   Setup queries using SQL Create  10-15  sets  SQL  statements  in  which  you  feel  users  may  find  useful  in  their  operations. Document the SQL statements and state the purposes. Besides basic SQL statements, you should also write up advanced SQL statements like processing multiple tables. Aggregation functions may  be applicable in dealing with numbers in tables. Note: The above description is the basic requirement of the project. You can provide more deliverables if you wish. Basic Deliverables -      A softcopy of documentation showing the following: 1.    Project Initiation Document •    Business Values of the system     Briefly mention the reasons for supporting the development of this new database system  and  to  determine  what   benefits  the   system  will   bring   users,  or  other stakeholders. •    Scope of the system     Briefly describe the basic requirements that the system will perform.    State any other requirements if necessary. 2.    Design Specification •    Conceptual Data Model - E-R diagram and Business Rules     Present the finalized E-R diagram and provide necessary explanations of the model, such as why specific entity class, relationships and attributes are modeled in your model. State all the business rules, constraints and assumptions you made that must be enforced in the database system. •    Logical Data Model - Relational Schema    You should map the  E-R diagram into  relations,  normalize the  relations  into 3NF. 
Show all the functional dependencies for every relation and state which normal form each relation is in.
• Data Dictionary: Explain all relations including their attributes, primary keys and foreign keys. Show the description of all tables in the database.
3. Configuration Specification
• SQL Statements configurations and specifications: Document all SQL statements (10-15 sets) you used in the database, explain the SQL statements and describe how these statements are useful to users.
4. A brief conclusion about any thoughts you have on the project, such as suggestions and comments for further development.
- A working database system
Note: If you choose the Oracle database as your platform for the substitute project, a new Oracle database account, password and connection string will be provided. You do not need to build the substitute project using your OWN Oracle database.
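Steps 4 and 5 above (populating tables with INSERT INTO statements, then writing multi-table queries with aggregation functions) can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 driver; the table and column names (member, drink, drink_order) are hypothetical placeholders, not a prescribed schema for the Coffee Ordering System.

```python
import sqlite3

# Hypothetical 3NF fragment of a Coffee Ordering System schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE member (
    member_id  INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    preference TEXT
);
CREATE TABLE drink (
    drink_id   INTEGER PRIMARY KEY,
    drink_name TEXT NOT NULL,
    price      REAL NOT NULL
);
CREATE TABLE drink_order (
    order_id  INTEGER PRIMARY KEY,
    member_id INTEGER REFERENCES member(member_id),
    drink_id  INTEGER REFERENCES drink(drink_id),
    quantity  INTEGER NOT NULL,
    status    TEXT NOT NULL DEFAULT 'pending'
);
""")

# Step 4: populate the tables via INSERT INTO statements.
cur.execute("INSERT INTO member VALUES (1, 'Alice', 'oat milk')")
cur.execute("INSERT INTO drink VALUES (1, 'Latte', 4.5), (2, 'Espresso', 3.0)")
cur.execute("INSERT INTO drink_order (member_id, drink_id, quantity, status) "
            "VALUES (1, 1, 2, 'done'), (1, 2, 1, 'pending')")

# Step 5: an advanced query -- multiple tables joined, with an aggregation.
cur.execute("""
    SELECT m.name, SUM(o.quantity * d.price) AS total_spent
    FROM drink_order o
    JOIN member m ON m.member_id = o.member_id
    JOIN drink  d ON d.drink_id  = o.drink_id
    GROUP BY m.name
""")
rows = cur.fetchall()
print(rows)  # one row per member with their total spend
```

On Access/Oracle/MySQL the same SQL would run directly in the product's query tool; sqlite3 is used here only so the sketch is self-contained.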


[SOLVED] CSYS5040 Criticality in Dynamical Systems Assignment 2

CSYS5040 Criticality in Dynamical Systems Assignment 2
Due Date: This assignment is due in TurnItIn by Sunday at the end of week 7. This assignment is worth 25% of your final mark. You must do all of your working in a Mathematica notebook that I can run (no pdfs of Mathematica notebooks). The * for some questions indicates the relative difficulty of the question. This is an individual assessment; your answers must reflect your own work. Marks will be based on the correctness of each answer, the effort put into exploring each question, and the originality of the examples you choose to look at. You are strongly encouraged to read beyond the class material to get a higher grade.
Question 1 (7.5%): The dynamics of a stochastic differential equation
a. Choose constant values for the parameters (i.e. μ, x₀ and E) of a linear stochastic differential equation of the form dx = μ dt + E dw, where E is the strength of the stochastic (noise) term and x₀ is the initial starting point, and then plot the time series of the solution, making sure μ is not equal to zero (it can be positive or negative though). Choose some numerical value b (for ‘boundary’) that has the same sign as μ and run some simulations of your solution. Does your solution ever cross the boundary b, and if so, at what value of t does it cross? Without simulating your solution, how can you know when to expect the solution to cross the boundary?
b. * Repeat part a, except that the stochastic differential equation is now non-linear, i.e. dx = (αx + μ) dt + E dw, or uses even higher-order terms in x, e.g. x² or x³, such as dx = (βx² + αx + μ) dt + E dw. You will score higher the more sophisticated your model is in this part, and consequently for part 1c below. Do not implement the same equations here that are needed for Question 2a.
c. Write a single paragraph on one application of the methods you used in parts a and b.
For example you can look up: “drift diffusion” and “neural network”, “two alternative forced choice task” (e.g. here: https://tinyurl.com/yygku3oa), “Ornstein-Uhlenbeck process”, or see here: https://en.wikipedia.org/wiki/Ornstein–Uhlenbeck_process
Question 2 (7.5%): Plotting non-linear functions for a non-linear map
a. From the article we looked at in Week 4, I want you to implement one of the non-linear stochastic neural models. Do not implement the same equations you used for 1b. For this first step all you have to do is replicate the work I showed you in class, but using either 1 or 2 (depending on which model you chose to implement) stochastic non-linear equations that you will find in the articles listed below, where a decision is reached once a “decision variable” crosses a boundary threshold. This question is not intended to be difficult to understand, but it can still be tricky to implement; just modify the code I’ve already given you in class to reflect the non-linear model you’re implementing. For the different equations for each system see pages 705 to 707 of the article here: https://sites.engineering.ucsb.edu/~moehlis/moehlis_papers/psych.pdf Or look at the different equations listed on the Wikipedia page under “Other Models” here: https://en.wikipedia.org/wiki/Two-alternative_forced_choice
b. * Find an article (using Google Scholar etc.) that has used the model you implemented in part 2a, and illustrate some aspect of the results from that article using the model you’ve just implemented.
c. Discuss the impact of the model you’ve used in the context of the article you’ve found; e.g. you might discuss why this stochastic model was used rather than some other model, or what the parameters mean in a practical setting, or what new interpretation the model has provided in the area of study, etc.
Question 3 (5% + 5%): Parameters in non-linear dynamical systems
a.
** Based on your answer to Question 1 for the non-linear system, write a Mathematica function that shows what happens when a parameter value, e.g. α, changes and the system switches from one equilibrium state to another. See the Mathematica notebook from Week 1; at the end of the notebook there is a stochastic diffusion process with more than one stationary state. We didn’t discuss this model, but I want you to base your answer on the ideas outlined there. Note that the stationary state the system was attracted to (settled in) depended on the initial starting point x₀. For this question, using a system with more than one stationary state, I want you to let the system find its stationary state (i.e. it is in equilibrium) and then change the parameter values so that the system switches from one equilibrium point to another by going through a tipping point. The minimum you need to do to pass this question is to plot the time series of the system passing through this tipping point.
b. ** Process Design and Methodology
You are given an unknown univariate time series that is suspected to contain both deterministic non-linear dynamics and stochastic components. Design a comprehensive analytical framework that would allow you to:
1. Identify the presence and nature of any non-linear dynamics
2. Distinguish between the deterministic part and stochastic noise
3. Detect potential bifurcation points or regime changes
4. Characterize the underlying dynamical structure
Your answer should outline the sequential steps you would take, justify the choice of each analytical method, and explain what each step would reveal about the system. Consider methods such as (but not limited to): phase space reconstruction, correlation dimension analysis, Lyapunov exponent estimation, recurrence analysis, and tests for nonlinearity vs. stochastic processes.
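The boundary-crossing experiment in Question 1a can be prototyped outside Mathematica as well; below is a minimal Euler-Maruyama sketch in Python (an assumption of this illustration, since the assignment itself must be submitted as a Mathematica notebook). All parameter values are arbitrary placeholders. It also illustrates the "without simulating" part of the question: for Brownian motion with positive drift μ toward a boundary b > x₀, the mean first-passage time is (b − x₀)/μ.

```python
import math
import random

def simulate_first_crossing(mu, x0, E, b, dt=0.01, t_max=200.0, seed=0):
    """Euler-Maruyama simulation of dx = mu dt + E dw; returns the first
    time t at which the path reaches the boundary b, or None if it never
    does within t_max."""
    rng = random.Random(seed)
    direction = 1 if mu > 0 else -1   # b has the same sign as mu
    x, t = x0, 0.0
    while t < t_max:
        x += mu * dt + E * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if (x - b) * direction >= 0:  # crossed the boundary
            return t
    return None

# Placeholder parameters: drift 0.5, start at 0, modest noise, boundary 5.
mu, x0, E, b = 0.5, 0.0, 0.3, 5.0
times = [simulate_first_crossing(mu, x0, E, b, seed=s) for s in range(20)]
mean_t = sum(times) / len(times)
print(mean_t)  # individual runs scatter around the drift-only estimate (b - x0)/mu = 10
```

The same logic translates almost line for line into a Mathematica `While` loop or `RandomFunction[ItoProcess[...]]` call, which is what the assignment actually asks for.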


[SOLVED] INFO20003 Semester 2 2025 Assignment 2

INFO20003 Semester 2, 2025 Assignment 2: SQL
Due: Week 8 - Friday 19th September 2025, 6:00PM Melbourne Time.
Case: The Best Movie Recommendations Ever! (BMRE!) Inspired by the Netflix Challenge x Community Ratings x Streaming Platforms
Introduction
In the 2000s, Netflix had a contest (‘The Netflix Prize’) to find the best ratings prediction algorithm to improve recommendations for its users. This was one of the early releases of a large real-world dataset for data scientists and machine learning/AI researchers. Since then, many other movie-related datasets have been released, both for researchers and for everyday movie enthusiasts (like us)! Inspired by the Netflix Prize, we present – “The Best Movie Recommendations Ever!” – a new challenge. Assume you are part of a team of movie fans and AI experts looking to make the best recommendation algorithm. Because you are the database expert in the team, you are placed in charge of the database management and querying. IMPORTANT: You do not have to do any AI/ML stuff for this assignment, just the SQL. For the purposes of anonymity of Internet users, and respecting the intellectual property of real-world movie platforms, note that the data given for this assignment is synthetic.
Description
The following description explains what data is made available to you. Be careful, as many tables share identical attribute names, to simulate real-world datasets. Assume that there are no discrepancies between the different tables (e.g., Netflix year of release is the same as IMDB’s year of release – see below). Netflix records, for each movie, a unique Netflix Movie ID (e.g., 80234304), movie title, and year of release. Each movie is given ratings, where a rating record consists of the numeric rating (0-5 inclusive, integers), the timestamp of the rating, and a unique anonymous user ID. For privacy, no other user data is given.
Now, in this new challenge, we also obtain data from the comprehensive Internet Movie Database (IMDB), with a record for each movie. Each IMDB movie record has an IMDB Movie ID (e.g., tt12584954), the movie title, year of release, average IMDB rating (decimal from 0.0 to 10.0 inclusive), number of people who have rated, and classification (e.g., PG-13, or R). Each movie record is associated with a lead director and up to five main actor(s)/actress(es). Finally, each movie record has at least one genre (e.g., only Comedy, or Comedy + Drama, etc.), and one language (e.g., ‘EN’ for English, using the 2-letter ISO code for languages). IMDB also uses some data from MetaCritic, which is a collection of critics’ scores for each movie. MetaCritic data records comprise the IMDB Movie ID, Source, and Score (0-100 inclusive, integer); a movie can have any number of scores (e.g., one from a newspaper, the other from a film blogger). Another reputable data source is RottenTomatoes, a movie review database. Each RottenTomatoes movie review record has a unique RottenTomatoes ID (e.g., ‘the_lord_of_the_rings_the_fellowship_of_the_ring’), a ‘Tomatometer’ critics score (0-100 inclusive, integer), and a ‘Popcornmeter’ audience score (0-100 inclusive, integer). Finally, we also draw upon ‘tag’ data (like hashtags on social media) from the MovieLens recommendation service (inspired by Harper & Konstan, 2015). Anonymised users contribute tags to movies, in the form of records containing the MovieLens User ID, MovieLens Movie ID, the tag, and a timestamp. Luckily, your teammates in this challenge have also supplied mappings in the following tables: imdb_to_netflix, imdb_to_rottentomatoes, and imdb_to_movielens (all self-explanatory) to link movie records across all data sources/platforms. Note that a movie *may or may not* have corresponding IDs in all tables (e.g., a new movie in cinemas may only have an IMDB record and no other corresponding records).
The Data Model
The Data Model from MySQL Workbench is provided in Figure 1. FIGURE 1. DATA MODEL FOR BMRE!
Assignment 2 Setup
Please pay special attention to the penalties listed []. A dataset is provided which you can use when developing your solutions. To set up the dataset, download the file BMRE.sql from the Assignment link on Canvas and run it in Workbench. This script creates the database tables and populates them with data. The sample dataset provided is a basic, synthetic extract and not necessarily the ‘full data’. You may find that you need to add some more sample data in Workbench to fully test edge cases for queries. Note that this dataset is provided for you to experiment with, but it is NOT the same dataset as what your queries will be tested against (the schema will stay the same, but the data itself may be different). This means that when designing your queries, you must consider edge cases even if they are not represented in this particular data set. The script is designed to run against your account on the Engineering IT server (info20003db.eng.unimelb.edu.au). If you want to install the schema on your own MySQL Server installation, uncomment the lines at the beginning of the script. WARNING: Do NOT disable only_full_group_by mode when completing this assignment. This mode is the default and is turned on in all default installs of MySQL Workbench, and we’ve added a line to the top of BMRE.sql to turn it on every time you run the script in case you disable it! You can check whether it is turned on using the command `SELECT @@sql_mode;`. The command should return a string containing ONLY_FULL_GROUP_BY or ANSI. When testing, our test server WILL have this mode turned on, and if your query fails due to this, you will lose marks.
The SQL Tasks
Please pay special attention to the penalties listed []. In this section are listed 10 questions for you to answer. Write one (single) SQL statement per question. Each statement must end with a semicolon (;).
Subqueries and nesting are allowed within a single SQL statement – however, you may be penalised for writing overly complicated SQL statements. WARNING: DO NOT USE VIEWS (or ‘WITH’ statements/common table expressions) OR VARIABLES to answer questions. Penalties apply.
The Questions
1. List all IMDB movies which contain no MetaCritic reviews. Your query should return results of the form (IMDBMovieID, MovieTitle). (1 mark)
2. Find the Netflix movie with the most recent review. Assume there are no ties: only one is the most recent. Your query should return one row (NetflixMovieID, MovieTitle, TimeOfMostRecentRating). (1 mark)
3. List all movies rated by MetaCritic source ‘The Washington Post’ that have at least 5 Netflix user ratings. Your query should return results of the form (IMDBMovieID, NetflixRatingCount). (1 mark)
4. Find the genre whose movies have the highest average Tomatometer score. If there are ties, then you must return all genres with the highest average. The average score must be rounded to 1 decimal. Your query should return results of the form (genre, TomatometerAvgScore), with one row per genre in case of a tie. (2 marks)
5. List all MetaCritic scores and sources for films that feature an actor with more than 2 words in their full name (e.g., “James Earl Jones”, but not “Anya Taylor-Joy”). Do not duplicate results if multiple such actors acted in the same film. Your query should return results of the form (Score, Source, IMDBTitle). (2 marks)
6. Find which year has the highest number of ‘PG’-rated movies that have at least one MovieLens tag of ‘action_thriller’. If there are ties, then you must return all results. Your query should return results of the form (Year, MovieCount), with one row per Year in case of a tie. (2 marks)
7. Find the total number of movies that are in IMDB but not in Netflix (defined as X). Similarly, find the total number of movies that are in IMDB but not in MovieLens (defined as Y).
Your query should return one row: (X, Y). (2 marks)
8. We’ll refer to a MetaCritic review source that has reviewed at least one movie in each language that currently exists in the IMDB movies list as a ‘global-reviewsource’. For each global-reviewsource, evaluate how their review count, average score, and score standard deviation (hint: use the STDDEV operation) vary based on the language of the movie the source reviewed. Average and StdDev must be rounded to 1 decimal place. Your query should return (globalReviewSource, language, countReviewsForLanguage, avgScoreForLanguage, popStdDevScoreForLanguage). (3 marks)
As an example, consider the following dataset:

imdb_movie:
  id  language
  1   'EN'
  2   'ES'
  3   'EN'

metacritic_review:
  source   imdb_movie_id  score
  NYT      1              3
  NYT      2              5
  Variety  1              2
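The anti-join pattern that Question 1 asks for (movies with no MetaCritic reviews) can be tried out on a toy version of the example tables shown under Question 8. The sketch below uses Python's sqlite3 so it is self-contained; on the actual assignment you would run the bare SQL against the BMRE schema in MySQL Workbench, and the movie titles here are invented.

```python
import sqlite3

# Toy tables mirroring the example data under Question 8; names are
# illustrative, not the real BMRE schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE imdb_movie (id INTEGER PRIMARY KEY, title TEXT, language TEXT);
CREATE TABLE metacritic_review (source TEXT, imdb_movie_id INTEGER, score INTEGER);
INSERT INTO imdb_movie VALUES (1, 'Movie One', 'EN'),
                              (2, 'Movie Two', 'ES'),
                              (3, 'Movie Three', 'EN');
INSERT INTO metacritic_review VALUES ('NYT', 1, 3), ('NYT', 2, 5), ('Variety', 1, 2);
""")

# Question 1 pattern: an anti-join via NOT EXISTS -- movies that have
# no corresponding row in metacritic_review.
cur.execute("""
    SELECT m.id, m.title
    FROM imdb_movie m
    WHERE NOT EXISTS (SELECT 1
                      FROM metacritic_review r
                      WHERE r.imdb_movie_id = m.id)
""")
no_reviews = cur.fetchall()
print(no_reviews)  # only the unreviewed movie remains
```

A `LEFT JOIN ... WHERE r.imdb_movie_id IS NULL` is an equivalent single-statement formulation, and both respect the no-views/no-CTEs rule.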


[SOLVED] Assignment 3 Re Design Prototype

Assignment 3: Resolved Design Prototype
Overview
Using Unreal Engine, students will produce a resolved prototype of their game and project review. This includes using and applying interaction and design terminology, considering user testing and affordances, and applying techniques of design. Students will present their prototypes as a quick 5-10 minute video in the week 12 seminar. Building on the work you have produced for assignment 2 and the tools and techniques you learnt in assignment 1, it is now time to build your game in Unreal Engine. This scene will be constructed in 3rd person or 1st person mode.
PART A: Work in progress
Students should document in their journal the progress of their assignment 3, including the following:
• 1x playtest review based on self-use (completed in seminar in weeks 10 or 11)
• 1x user test review (completed in seminar week 10 or 11)
• Any modifications to assets/scene design with annotations (can be a screenshot with notes) - (different from assignment 2)
PART B: IN-SEMINAR 5-10 MINUTE VIDEO PRESENTATION & REVIEW
You are to step through your assignment review in a pre-prepared 5-10 minute video presentation.
1. VIDEO PRESENTATION
• Ensure you address each of the questions below using both audio and visual prompts.
• You should step through your game as a play-test while talking through the following points:
What is happening in the main scene and how does it work?
• Describe and step through the game scene including any interactive components.
1. Menu, HUD, Pause menu, Quit menu
2. Game elements: winning condition, losing condition, resources, challenge level
What is the hardest part of building the scene in Unreal Engine?
• Explain what components you found difficult to work through in building your game.
• Did you need to adjust your expectations due to this?
What are the most interesting parts of the scene?
• What components in your game are intended to draw interest from the player, and how?
According to the users’ feedback:
• What components in building the game did you find most interesting when building/designing, and why?
What are the key design elements/principles you have applied within your scene?
• What 2D/3D/interactive/game design elements and principles have been included to aid in the design of the scene?
• How have these changed based on playtesting throughout the prototype build?
How have you redefined the scope of your design?
• What aspects from assignment 2 were left out or significantly changed throughout the building of your game, and why?


[SOLVED] BAFI1045 Equity Investment and Portfolio Management Assessment 2

BAFI1045 – Equity Investment and Portfolio Management
Assessment 2 – Company Valuation Assignment
Assessment Task: 2 – Company Valuation Report
Company: Singapore Airlines
Marks/Weighting: 50 marks, accounting for 50% of the total grade for this course
Submission Date: Sunday, 21 September 2025, 5:00 pm Singapore Time
Word Limit: Maximum 5,000 words (excluding ToC, Appendix, Executive Summary and References). Content in excess of the word limit will not be read or marked.
Submission: The assignment will be submitted via Canvas, Turnitin
Rubric: A marking rubric is provided on Canvas.
Format: You must upload your file in PDF format
The assessment is submitted as a group assignment with a minimum of 3 and a maximum of 4 students per group. You are required to analyse a listed company and prepare an investment recommendation report. The report provides an assessment of the company’s current position and future prospects, incorporating the use of various valuation techniques to arrive at estimates of the intrinsic value of the company’s shares. Your report should make a case for the company’s shares to be rated in one of the following ways:
Sell: The shares should be sold, as a materially negative return is anticipated in the next six to 12 months.
Hold: The shares will have neither a materially positive return nor a materially negative return in the next six to 12 months.
Buy: The shares should be bought, as a materially positive return is expected in the next six to 12 months.
Your report should fulfil the following minimum requirements.
Company Analysis
Provide an overview of the company’s history, operations and any structural changes it has undergone since it began. This is to understand how the company got to where it is today and what may occur in the future. Discuss any market-relevant news or events that have happened to the company, particularly in the last 12 to 24 months.
ESG Factors Discuss the  ESG factors  relevant to the  company,  including  analysis  of the  company’s contribution  to  the  conservation  of  the   natural  world,   consideration  of   people  and relationships and its internal governance standards. Since sustainability is an important aspect of a firm’s operations, any relevant initiatives undertaken by the company should be discussed in detail. Industry Analysis Analyse  the  structure  of  the  industry  in  which  the  firm  operates  and  whether  it  is domestically focused or has a global nature. Identify the industry’s major companies, their locations,  and  how  they  compete  for  supremacy  within  the  industry.  Consider  global macroeconomic and microeconomic variables (economic, social and political) which may affect the fortunes of the industry in which the target company operates. •   Are there any geopolitical factors that may affect the supply of or demand for this industry? •    If so, how do they affect the industry and the company you are analysing? Evaluate the relative historical financial performance of the company among its peers • identify the firm’s TWO major listed competitors and discuss why they have been selected as peers for comparison • identify and explain the relevance of three financial ratios of your choice (not to include ROE, Net Profit Margin, Total Asset Turnover or Financial Leverage) for the company and its peers over a historical period of five financial years. 
• in choosing these ratios, you should consider various aspects of the firm’s financial standing and select ratios that describe the firm’s debt servicing ability, profitability and asset efficiency
• explain the performance of the company compared to its peers using this analysis
• analyse and explain the reasons for changes in these ratios over the past five years compared to the average of the past five years
• do not simply describe the changes in the ratios; look for reasons why they have fluctuated over the analysis period, and consider if these factors may recur in future
Estimate the ROE of the company and two major competitors for the most recent five years using the DuPont ROE approach.
• DuPont Analysis should be done using the 3-step procedure
• 3 steps: Net Profit Margin, Total Asset Turnover and Financial Leverage
• show your own calculations for each component over the previous five years for the company and its two selected competitors
• analyse the company’s and your selected peer companies’ ROEs over the five years
• compare the DuPont ROE of the company with its two peer group companies
• show and describe how the three components of ROE have changed over the analysis period and find reasons for these changes for each of the companies
• analyse and comment on the reasons for the change in ROE for the firm and its competitors with reference to the difference in the three components over five years
• relevant charts/graphs should be used to illustrate these figures
Analyse the company’s/industry’s current issues and explain the effect of these issues on the company’s future earnings
a) At the Macroeconomic Level
• general factors that apply to the industry and the company (GDP, employment, interest rates, regulation, global factors, supply, demand, inflation, etc.)
b) At the Microeconomic Level
• the company- and industry-specific factors (operations, financials, objectives, competition, etc.)
c) As a Porter analysis
• analyse the company’s position in its industry using Porter’s Five Forces
• the macro and micro analyses should be supported with graphs, tables and figures for both the recent past and forecasts, as relevant to your analysis
Intrinsic Value Estimation
Start your valuation analysis with the estimation of the required return using CAPM. You will require three inputs to calculate the CAPM required return. The CAPM required return will be used as the discount rate in your valuation models.
1. A Calculation of the Company’s 5-Year Daily Beta
• use the daily closing price data for the company and the market index to calculate daily holding period yields for the most recent five years. Using this data, you can estimate the raw beta by using regression analysis in Excel.
• attach your ANOVA table output in the report
• adjust the Raw Beta using the formula: Adjusted Beta = (0.67 x Raw Beta) + 0.33
2. The Risk-Free Rate of Return
• use the 10-year government bond yield as a proxy for the risk-free rate
3. The Market Return
• use an estimate of the broad market return
Estimate the intrinsic value of the company’s shares using the Dividend Discount Model
• you must use a Multi-Stage DDM. Follow the methodology discussed in the Equity Valuation slides
• calculate the growth rate for Period 1 using the Retention Ratio and ROE formula, using data for the past five years
• if you believe this is not appropriate, please use your own estimated growth rate.
• estimate dividends for a total of six future years, then apply the constant growth formula to find the terminal value in Year 6
• calculate the present value of each future year’s dividend and the terminal value, then add them to calculate the intrinsic value of the company
• provide justification if you use a different growth rate than the one calculated for Period 1
• explain your growth rate assumptions using the discussion in the macro and micro analysis
• estimate the terminal growth rate using a proxy that represents the long-term growth rate and calculate the terminal value. Explain why you chose this rate for terminal growth
Estimate the intrinsic value of the company’s shares using the Free Cash Flow to Equity (FCFE) model
• you must use a Multi-Stage FCFE model to calculate the intrinsic value of the stock
• source the components for FCFE from the company’s financial statements using Workspace
• calculate the FCFE per share over the past six years. The average growth in FCFE per share for the last six years will be the growth rate for Period 1
• estimate the growth of FCFE for Period 2 using your macro and micro analysis
• estimate FCFE per share for a total of six future years, then apply the constant growth formula to find the terminal value in Year 6
• calculate the present value of each future year’s FCFE and the terminal value, then add them to calculate the intrinsic value of the company
• provide justification if you use a different growth rate than the one calculated for Period 1
• provide justification for your growth rate assumptions
• estimate the terminal growth rate using a proxy that represents the long-term growth rate and calculate the terminal value. Explain why you chose this rate for terminal growth
• are your estimated growth rates and years the same as those used for your DDM model, or different? Why?
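The chain from raw beta through CAPM to a multi-stage DDM value can be sketched numerically. The Blume adjustment and the six-year-plus-terminal-value structure follow the brief; every input number below (raw beta, risk-free rate, market return, current dividend, growth rates) is a made-up placeholder, not an estimate for Singapore Airlines.

```python
def adjusted_beta(raw_beta):
    # Blume adjustment given in the brief: Adjusted = 0.67 x Raw + 0.33.
    return 0.67 * raw_beta + 0.33

def capm_required_return(rf, beta, market_return):
    # CAPM: required return = rf + beta * (market risk premium).
    return rf + beta * (market_return - rf)

def multistage_ddm(d0, g1, g_term, r, n_years=6):
    """Grow the current dividend d0 at g1 for n_years, attach a Gordon
    constant-growth terminal value in the final year, and discount
    everything back to today at r."""
    value = 0.0
    d = d0
    for year in range(1, n_years + 1):
        d *= (1 + g1)
        value += d / (1 + r) ** year
    terminal = d * (1 + g_term) / (r - g_term)   # terminal value at year n
    value += terminal / (1 + r) ** n_years
    return value

beta = adjusted_beta(1.20)                        # placeholder raw beta
r = capm_required_return(rf=0.03,                 # placeholder 10y bond yield
                         beta=beta,
                         market_return=0.08)      # placeholder market return
iv = multistage_ddm(d0=0.30, g1=0.06, g_term=0.02, r=r)
print(round(beta, 3), round(r, 4), round(iv, 2))
```

The FCFE model reuses the same discounting skeleton with FCFE per share in place of dividends, which is why the brief asks whether your growth assumptions differ between the two.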
Apply Relative Valuation techniques to ascertain the valuation of the firm
• compare Price-to-Book Value and Price-to-Earnings ratios for the company and its peers over the preceding five years
• note any changes in the ratios and the reasons for these changes
• determine the relative valuation of the firm using these multiples (do not attempt to calculate the share price)
• analyse and comment on the relative valuation of the firm in comparison to its peers
• are the company’s shares overvalued or undervalued according to this methodology?
Using relevant charts, evaluate the company’s share price performance over the last five years
• compare the relative performance of the company to the Index
• compare the relative performance of the company to its peers
• common-base charts from Workspace must be used for all multiple data series charts to give the best view of the return relationships between the stocks being compared
• comment on these charts, giving reasons for any significant changes you have identified
Perform a technical analysis of share price movements over the last five years
• use 50-day vs 200-day simple moving average lines and volume analysis to identify the most recent Buy and/or Sell signals where these SMA lines intersect
• on a separate chart, draw support and resistance lines to indicate price trends and channels in the most recent 12 months using the charting tools in Workspace
• show and comment on these analyses with reference to charts sourced from Workspace
• use volume analysis to confirm your price signals
• label all important chart points clearly using the Workspace chart tools
Evaluate your findings
• Why do the intrinsic values you have calculated differ from the current/recent share price?
• How does this difference inform your investment recommendation?
• What is your investment decision based on your evaluation?
• Is your recommendation to Buy, Sell or Hold shares in this company?
•    Is this conclusion different from the signal obtained from the technical analysis? Why? •    Does your qualitative analysis agree with your quantitative analysis? If not, why not? Your investment recommendation •    Your analysis should logically support your recommendation •    use graphs and data of both historical and forecast data to support your analysis •    it’s important to link your conclusion to the various assessments and calculations you have made in the individual parts of the report •    note that not all your analyses will unequivocally support your conclusion; this is typical in real-world financial analysis General guidance for students Important points regarding Valuation Models •    explain any assumptions you have made in implementing your models. •    where appropriate, explain how you arrived at the variables you are using. For example, it is not sufficient to state that you are assuming a 2% growth rate. You will be expected to justify your 2% growth rate. • it’s not enough to simply describe the financial ratios. You must find reasons why they are changing, especially if there are significant changes year to year. This will require in- depth research. •    you  must use  LSEG Workspace and IBISWorld as major data sources. These can be supplemented with data from the companies’ annual reports and other sources you have found. •    while using LSEG Workspace 1.  use the web version, not the Windows software version 2.  use domestic currency for financial analysis 3.  set LSEG Financials as the fundamental source of data in Workspace Executive Summary An executive summary is often written for leaders in a business or organisation, such as CEOs, department heads, or supervisors, so they can quickly access critical information to inform their decision-making. An executive summary should summarise the key points of the report. 
It should restate the report’s purpose, highlight its major points, and describe any key results, conclusions, or recommendations from the report. It should include enough information so that the reader can understand what is discussed in the full report without having to read it.
References and Citations
Use proper citations and references, and include a list of the references you use in your report. Failure to do so will result in a lower grade. RMIT provides a website that explains the use of the Harvard reference system. Please consult it here: https://www.lib.rmit.edu.au/easy-cite/
Presentation of Report
The submission should be presented in the form of a stock analyst’s investment report. It should include an Executive Summary that outlines the main findings at the beginning. The remainder can be structured in line with the above points. Attach details of your work and calculations, as well as any other relevant information, as an appendix. Relate all the information in your analysis to your investment recommendation.
• build a case for your recommendation using your findings from each of the points above
• your report should look professional, with charts and diagrams as required to illustrate your points
• enhance your arguments with relevant charts and numbers from various sources
• charts copied from Workspace should be easily readable, meaning that the scale, data points and annotations should be clear and not blurred or distorted
• a penalty will be applied for charts that are not readable
• do not upload a separate Excel file. Don’t include all the data for the beta calculation; only the ANOVA table from Excel should be in the report
• do not attach information you have used in compiling the report (annual reports, newspaper articles, etc.)
• include the report’s word count on the front page.
The markers won't read or mark any part of the report after 5,000 words.
Some useful resources for this assignment include Reilly, Frank K., Keith C. Brown and Sanford Leeds, Investment Analysis and Portfolio Management (11th Edition), Thomson South-Western, 2019. You should also conduct your own analysis using the companies' websites, annual reports, LSEG Workspace, IBISWorld, and any other relevant sources for your report. The more resources you use for your research, the better your analysis will be.
Assignment submission procedure
All assignments must be submitted online through the course Canvas Turnitin for a plagiarism check, accompanied by an assignment cover sheet. If your similarity score is greater than 20%, you must edit and resubmit, as your report contains too much unoriginal material.


[SOLVED] Market View Report

Stage 1: Market View Report (Individual Assignment)
Weight: 20%
Assignment due date: Week 10 - Friday, 19 Sep 2025 by 7 pm
Length: 700 words
Feedback mode: Feedback will be provided by the local lecturer in class.
Task:
•    Conduct independent research in order to form a view regarding current and future market conditions, and report your findings explaining your market view. Do you believe that exchange rates will go up or down in the next 3 to 6 months? Explain the thinking behind your conclusion.
Students must choose from the following currencies:
1.  Australian Dollar (AUD)
2.  British Pound (GBP)
3.  Canadian Dollar (CAD)
4.  Euro (EUR)
5.  Japanese Yen (JPY)
6.  New Zealand Dollar (NZD)
7.  Swiss Franc (CHF)
8.  US Dollar (USD)
***Note: Each member must exclusively choose a different currency pair***
Market View Guidelines: You can choose any combination of these currencies, so pairs such as AUD/USD, AUD/JPY or JPY/GBP are all acceptable. You are required to analyse what will happen to these exchange rates in the next 3-6 months. Based on the theory that you have learnt in class from Topic 5 (exchange rate determination), you are required to analyse these exchange rates based on the economic indicators of the respective countries. The indicators you learn in this subject include relative interest rates, relative inflation rates, relative growth rates, government intervention, and exchange rate expectations. You may also use other factors such as the global health crisis (COVID-19 pandemic), the geo-political climate, and the latest news that may affect the exchange rate. It is a good idea to use at least 3 of the economic indicators that you have learnt in class, and one other factor if you want your market view to be strong.
For news and market data, you can also use professional magazines, newspapers (see RMIT library e-subscriptions) and financial institutions' websites (e.g. IMF, World Bank, OECD databases and the respective country's central bank website). Based on your research, you must individually develop a market view and submit it on Canvas. Note that when it says relative, you are required to compare factors relatively; e.g., if you are looking at the AUD/USD and you want to analyse interest rates, you must compare Australia's interest rate with the US interest rate. In your analysis, if you have looked at a particular indicator, you must write why you believe that this indicator will cause a currency to appreciate or depreciate against another. For example, if the interest rate in the US is higher than in Australia, the interest rate may increase in the US, and you believe this will cause the AUD to depreciate against the USD, you need to explain why this causes depreciation. In addition to examining various macroeconomic indicators (often referred to as fundamental analysis), for the theory discussion you are expected to cite some academic references from academic journal articles, which can be accessed via the RMIT library website. Once you have completed your analysis, you must state clearly what you believe will happen to the currency pair of your choice. For example, if analysing AUD/USD and you expect the exchange rate to go down, you should state that the AUD would be expected to depreciate against the USD. Keep in mind that currency appreciation or depreciation happens against another currency, so statements like "the AUD will appreciate" and "the AUD/USD will appreciate" do not make sense.
Note that only a qualitative forecast is required (i.e., currency X appreciates/depreciates against currency Y); a quantitative forecast goes beyond the scope of this subject. You are expected to source your information and relevant statistics from reputable sources. DO NOT use generic sources such as Wikipedia; otherwise this will attract penalties. These sources get their information from official sources, so you should be able to get the data from those official sources directly. For example, if you are after the cash rate of Australia, you can easily source that from the Reserve Bank of Australia. Any information included from other sources must be appropriately referenced using Harvard referencing style. For information regarding Harvard referencing style, please refer to http://www.lib.rmit.edu.au/easy-cite/ . Very important: you should conduct thorough research and discuss your market view prior to developing your trading strategies as a group in the next stage. One of your individual market views will form your group's overall market view (stage 2). Hence, although you are developing your individual view, you are strongly encouraged to discuss your views with your members and to prepare and form the trading strategies as a group. You will then be required to write up your Forex report as a group in the next stage.


[SOLVED] PUBPOL 5310 Applied Multivariate Statistics PROBLEM SET 2 Fall 2025

PUBPOL 5310 Applied Multivariate Statistics PROBLEM SET 2 Fall 2025
Due 11:55pm, Monday, September 22, via Canvas
•    Work in teams of 2-3 (or solo, if you strongly prefer). You can choose your own teams. Clearly indicate all the members of the team at the beginning of the problem set. Turn in one problem set per team (not one per person).
•    Keep answers as brief as possible, and include key Stata output (charts and descriptive statistics) with your answers. Be sure to label your charts and output clearly, and to indicate which question each chart is intended to answer.
•    Turn in the main problem set as one file only, not several documents. PDF preferred. Clearly label the problem set file (example: "PS 1 – PADM 5310 – Olivero Miller.pdf")
•    NEW: However, also turn in related .log and .do files (if relevant), as separate uploads.
•    Include relevant Stata commands and output (such as tables or "summarize" output) in your answers, so that we know what commands led to what results
•    When cutting and pasting Stata results into your Word document, use "Courier" or "Courier New" or other fonts that preserve the neat formatting in Stata
1. Bivariate regression with cross-sectional data.
For this question, use the data on hourly wage and education of US residents in 2022 (for those individuals working "full time and full year"), given in the data set CPS-ASEC-2024_fall25.dta on the course website (in the "Data" folder).
a.   Look at the variables. Use "summarize" for the variables wage, age, education, male and female, and "tabulate" for education. Do the summary statistics look consistent with your expectations? Explain.
b.   Run a linear regression of wage on years of education using the "regress" command.
c.   Interpret the slope coefficient – what does it tell us in words? Is this reasonable?
d.   Interpret the intercept coefficient. What does it tell us in words? Is this reasonable?
e.   
Based on the regression output, what is the predicted wage for someone with 12 years of education? Show your work.
2. Predicting after a regression
a.   Immediately following the regression in the previous problem, generate a new variable that is "predicted wages". You can do this in Stata with "predict wage_hat". (If you want to give your new variable a different name than wage_hat, that is fine too.) Next, generate a new variable that is the residual from the regression. You can do this in Stata with "predict wage_residual, resid". (If you want to give your new variable a different name than wage_residual, that is fine too.)
b.   Find someone in the dataset with 12 years of schooling. Confirm that their predicted wage is the same as your answer to the question above.
c.   Confirm for a few sample cases that the residual is indeed equal to the actual value minus the predicted value. (Hint: try "list wage_per_hour wage_hat wage_residual in 1/10" to get a listing of these variables for the first 10 observations in the dataset. You only need to show your calculation for one observation, but confirm for yourself that this is what is going on.)
d.   Graphically show what is going on with the predicted values. You can do this with a command such as:
i.   graph twoway (line wage_hat years_education)
ii.  or alternatively: graph twoway (line wage_hat years_education) (scatter wage_per_hour years_education)
e.   Do the predicted values make sense? Do they look like a "best fit line"? It seems like for low-educated individuals, the predictions are systematically strange. Why do you think this is happening?
3. Creating conditional averages as a way to clarify data presentation.
For this question, you will need the Stata data set CPS-ASEC-2024_fall25.dta, available for download in the "Data" folder on our course website. Let's start with a graphical representation of the relationship between years of schooling ("years_education") and wage ("wage_per_hour").
a.   Create a scatter plot with wage on the Y axis, and years of schooling on the x-axis. This will look a little weird! Why do you think the graph has these vertical lines?
b.   It's even worse than it looks. Most of the data are "smooshed together" down in the lower range of the wage y-axis. You can get a better sense of this by making the "marker size" smaller. Add the option ", msize(tiny)" to your command, and show the resulting graph. This helps, but it's still not clear.
c.   Before proceeding further, use Stata's "help" command to look up the following commands: preserve, restore, collapse. We will use collapse to compute "conditional averages". But this will alter the data in Stata's memory, and we will later want to return to the main data. The commands "preserve" and "restore" will help us with that part.
d.   Use the "collapse" command to compute average wages for each value of years of schooling. Hint: this will require use of the ", by(years_education)" option.
e.   Using the "list" command, confirm that your new dataset now has only one observation per year of schooling. Create a scatter plot on this transformed data. Is the relationship in this graph more clear, or less clear, than in your answers to (a) and (b) above?
f.   Next, let's return to our main data, and add a twist. [You can use the "restore" command, if you previously preserved the data.] Now (after "preserve"-ing again) collapse your data to "years of education by female" cells. You should end up with a dataset with 28 observations: one for each year of schooling for men (female == 0) and for women (female == 1). Create a scatter plot with two different colors, one for men and one for women. You can do this with a command like: graph twoway (scatter wage year if female == 1) (scatter wage year if female == 0) What do we learn from this graph about how wages vary across education and gender? How does the gender gap in wages change across different levels of education?
g.   
What is the magnitude (in $/hour) of the gender wage gap for those with 12 years of schooling? For those with 16 years of schooling?
h.   Now restore the main data set, and then repeat part (d), except instead of looking at "years of schooling", collapse to "age by gender" cells. Plot out the life-cycle pattern of wages for men and women. Is the gap consistent over all ages, or does it grow at certain parts of the life-cycle?
4. IPUMS Account creation and exploration
The most natural place to go for data for your independent research project is the IPUMS website. For data related to demographics and labor market outcomes in the US economy, the best data is the CPS. This is the dataset that the government collects to calculate the monthly unemployment rate, and to measure trends in poverty, etc. You can access the raw data at https://cps.ipums.org/cps/. This problem has two parts.
NOTE: Please include an answer for (a) and (b) for each member of the team, not just one answer per team.
a – Each member of your team should request an account to use IPUMS-CPS. For this part of the problem set, just verify that this has taken place. (It's okay if the account has not yet been approved by the submission deadline.) Let us know in the problem set with a screen shot or something similar that this has taken place for each member of the team.
b – Each member of your team should browse the variable listings, and daydream/brainstorm what variables you would each like to explore. In particular, I'd like you to think of 1-4 "outcome variables" that you would like to predict, and 2-6 "predictor" variables that you would like to use to predict them. You can search for variables (and see for what years/months those variables are available) at https://cps.ipums.org/cps-action/variables/group .
For this part of the problem, for each member of the team clearly indicate the "outcome" and "predictor" variables you are interested in, and provide a very brief motivation for why you think it would be interesting to examine relationships between these variables. Also, indicate what time periods and/or other sample restrictions you would like to examine.
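For anyone who later wants to sanity-check the Stata `collapse` results outside Stata, the conditional-average step in problem 3 is simply a groupby-mean. A minimal pandas sketch on toy data (the column names follow the problem set; the wage numbers are made up for illustration):

```python
import pandas as pd

# Toy stand-in for the CPS extract; the real data comes from
# CPS-ASEC-2024_fall25.dta. Column names follow the problem set.
df = pd.DataFrame({
    "years_education": [12, 12, 16, 16, 12, 16],
    "female":          [0,  1,  0,  1,  0,  1],
    "wage_per_hour":   [20.0, 18.0, 30.0, 27.0, 22.0, 33.0],
})

# Stata's `collapse (mean) wage_per_hour, by(years_education female)`
# corresponds to a groupby-mean: one row per (education, gender) cell.
cells = (df.groupby(["years_education", "female"], as_index=False)
           ["wage_per_hour"].mean())
```

On this toy input, `cells` has one row per cell (four rows), and the gender gap at each education level is the difference between the two rows sharing that `years_education` value.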


[SOLVED] Math 447 Course Syllabus

Course Syllabus Course Overview Careful development of elementary real analysis for those who intend to take graduate courses in Mathematics. We begin with a construction of the real number system and then move on to properties of real-valued functions. Topics include the completeness property of the real number system; basic topological properties of n-dimensional space and general metric spaces; convergence of numerical sequences and series of functions; properties of continuous functions; and basic theorems concerning differentiation and Riemann integration. Credit is not given for both MATH 447 and either MATH 424 or MATH 444. General Information This is a 3 credit hour course. The course is 16 weeks long and consists of 16 Units. You should dedicate approximately 10 hours per week to work on the course itself, but actual time commitments will vary depending on your input, needs, and personal study habits. It is recommended that you log on to the course website and check your email frequently for updates, news and announcements. Required and Recommended Texts Required: • Kenneth Ross. (2013). Elementary Analysis: The Theory of Calculus (2nd Edition). Springer. Recommended: • R. Creighton Buck. Advanced Calculus (3rd Edition) o A very good book at roughly the same level as our course with different areas of focus. This is a good source of examples and exercises and contains explanations that are not in our textbook or the lectures. • Walter Rudin. Principles of Mathematical Analysis. o A more advanced book that covers the same material. It is a (the) classic textbook on this subject. Course Components This course will consist of the following components: Pre-Assignment The pre-assignment should be completed first. It is used to ensure that you understand the policies, expectations and resources provided in the course. The pre-assignment is worth the same value as each of the other homework assignments.
Units Each unit begins with an overview and the learning goals you are expected to achieve. These goals should guide your study through the unit. Every unit consists of a homework assignment, lectures, readings and additional exercises to support these goals. They are designed with the same structure and components unless otherwise specified. The module activities are explained in greater detail below. Homework Assignments Each unit contains a homework assignment consisting of several exercises. After clicking the assignment you will see assignment instructions and a link to a PDF file containing these exercises. View the PDF file and complete the exercises. When you are finished, scan or take a picture of your work and submit the file via the assignment link in Canvas.


[SOLVED] Math 447 Real Variables Processing

Math 447: Real Variables (3 credits) Course Description: Careful development of elementary real analysis for those who intend to take graduate courses in Mathematics. Topics include the completeness property of the real number system; basic topological properties of n-dimensional space; convergence of numerical sequences and series of functions; properties of continuous functions; and basic theorems concerning differentiation and Riemann integration. Credit is not given for both MATH 447 and either MATH 424 or MATH 444. Prerequisite: MATH 241 or equivalent; junior standing; MATH 347 or MATH 348. Course Objectives: Introduction to real analysis is a gateway. The idea is to find a balance between rigorous proofs and real understanding. This principle is the core of mathematics at all levels. Be prepared to learn to write proofs. Be prepared to accept a slightly abstract but clarifying approach to well-known, and not-so-well-known, topics related to calculus.
Course Content:
1.   Real Numbers: natural numbers; Abelian groups; Grothendieck's construction; integers; fields; rational numbers; ordered fields; completeness; Peano's axioms; uncountability of the real numbers
2.   Sequences: limits; monotone sequences; subsequences; Bolzano-Weierstrass; limsup and liminf; application to continuous functions
3.   Metric Spaces: metric spaces; Cauchy sequences; completeness; sequential compactness and total boundedness; open, closed and compact sets; application to Heine-Borel and continuity of inverses; connected sets; intermediate value theorem
4.   Spaces of Continuous Functions: uniform continuity; C(K) is a complete metric space; Dini's theorem and applications; interchanging differentiation and limit
5.   Differentiation: Rolle's lemma and the mean value theorem; differentiation of power series
6.   Integration: definition; interchanging limits; fundamental theorem and application to power series
Format:
●   This is an online course featuring video lectures from the UIUC Spring 2018 course taught by Professor Marius Junge.
●   Text: Kenneth Ross. (2013). Elementary Analysis: The Theory of Calculus (2nd Edition). Springer. ●   Students must be able to view assignments online, write out solutions, then scan or take a photo of their written work and upload it to Moodle. ● This course requires multiple proctored exams.


[SOLVED] MEC5897 Lean Manufacturing Assignment 1

MEC5897 Lean Manufacturing Assignment 1 – Production Flow Analysis
AIMS
This assignment aims to give you a chance to practice the skills you have learned in this unit to analyse a real production system. The specific goals of this assignment are:
1 Be able to use Value Stream Mapping (VSM) to analyse a production system
2 Be able to identify the bottleneck and calculate the utilisation of different manufacturing resources under maximum TH conditions
3 Build a simulation model to simulate a real production process with variability
4 Be able to analyse the simulated data and compare it with the theoretical calculations learned in class
5 Identify the waste and 3Ms and come up with potential solutions
BACKGROUND
Fastcar is a foundry focused on making an engine block for the automobile industry. Its customer is a car engine factory that assembles the engine block with other components into an engine.
Figure 1: Engine block of a car
In Fastcar, the engine block is made by the die casting process with Al alloy. Figure 2 shows the die casting process. The molten Al alloy is poured into the casting machine and pumped into the die. In engine block manufacturing, the die is made of metal and can be reused, while the core is made of sand. A sand core can only be used for a single engine block.
Figure 2: Die casting process
The video in this link shows how die casting is used to make an engine block: https://youtu.be/N2hYTdrzujI
Fastcar receives an order from its customer every two weeks. The order is sent to Fastcar's marketing office via its online ordering system. The marketing office uploads this information to the internal ERP system. The production planning office is then notified and contacts the purchasing department to order raw materials (Al alloy) and prepare the consumables. The production planning office also informs the workshop management team of the production plan.
Workshop management coordinates each workstation to make enough engine blocks to meet the customer orders. The engine blocks are fabricated and then stored in the FG inventory. After all the units in the current order have been fabricated, Fastcar ships the engine blocks to its customer by truck. Fastcar works five days per week and 8 hours per day. The workshop of Fastcar uses the following processes/resources:
•    Step 1: Make sand core
•    Step 2: Die casting
•    Step 3: Clean and inspection
•    Step 4: Machining
The detailed time and specifications for each resource are provided in a data file.
TASKS
Task 1 (5 marks): Based on the given data collected from the production line, calculate the maximum TH (also known as process capacity). During the calculation, do not consider machine failures, and use the mean process time. Also, assume the batch size in the core-making step is one. Then calculate the WIP (excluding materials in the raw material inventory and FG inventory), resource capacity, utilisation and flow time of this production line under maximum TH (no buffer inventory is needed before each manufacturing resource; also do not consider the time spent in the raw material inventory and FG inventory). Calculate resource capacity and utilisation for each step.
Task 2 (5 marks): Build a simulation model to simulate the real production scenario. In this simulation model, do not consider variability in the process or in the number of units that arrive in each order, but you should find the optimal batch size to maximise the production TH while keeping inventory as low as possible. For each resource, use the mean process time and do not consider machine breakdown.
Based on the simulation model you should calculate:
1 Mean TH and real TH for the production line
2 Utilisation of each manufacturing resource
3 Average WIP (including raw material inventory and FGI)
4 Average lead time for each unit, and average lead time for the order (the time between order arrival and order shipment to the customer)
You should draw plots to show how these values change with respect to time.
Task 3 (5 marks): Build a new model that considers the variability in your production line. Based on this model, calculate the following items:
1 Mean TH of the production line
2 Real-time TH of the production line (draw a plot)
3 Utilisation of each manufacturing resource
4 Average WIP in the system (including raw material inventory and FGI)
5 Average lead time for each unit, and average lead time for each order
You should draw plots to show how these values change with respect to time.
Task 4 (5 marks): Discuss how variability from manufacturing will affect the production line. Base the discussion on comparing the results from the simulation model with variability to the model without variability. (Use the simulation data to support your claim.)
Task 5 (5 marks): Build a VSM for this production process, and identify the wastes and 3Ms in this production line. Come up with a solution to deal with the identified waste. Build a VSM for the future state. Use the simulation model to prove that your solution is feasible. (You can make reasonable assumptions. For example, moving workers from clean and inspection to sand core making could decrease the core-making processing time by 2 min and increase the clean and inspection processing time by 2 min.)
DELIVERABLES
•    Deliverable 1: Assignment report (upload on Moodle). It should contain the following key points:
o Summarise the analysis or calculation results (Tasks 1, 2 and 3)
o Necessary explanation of the calculation steps (Task 1)
o Screenshots of simulation models (Tasks 2, 3 and 5)
o Plots should be clear, with axis labels and units (all tasks)
o Necessary explanation of the simulation models, including how to measure the key performance parameters such as TH (Tasks 2, 3)
o Screenshots of the calculation results from the simulation model (Task 2)
o Summarise the results in a table and do a comparison (Task 4)
o Value Stream Mapping (Task 5)
o Propose your improvement plan and validate it with the simulation tool (Task 5)
•    Deliverable 2: Simulation models (Tasks 2, 3 and 5). Save them separately and compress them into a zip folder. Upload them on Moodle.
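As a starting point for Task 1, the capacity and utilisation arithmetic can be sketched in Python. The process times and machine counts below are placeholders only; substitute the values from the assignment's data file:

```python
# Hypothetical mean process times (min/unit) and machine counts per step;
# the real numbers come from the assignment's data file.
steps = {
    "Core making":     {"t": 6.0, "m": 2},
    "Die casting":     {"t": 4.0, "m": 1},
    "Clean & inspect": {"t": 3.0, "m": 1},
    "Machining":       {"t": 5.0, "m": 2},
}

# Capacity of each resource (units/min) = machines / mean process time.
capacity = {s: v["m"] / v["t"] for s, v in steps.items()}

# Maximum TH (process capacity) is set by the bottleneck: the lowest capacity.
bottleneck = min(capacity, key=capacity.get)
max_th = capacity[bottleneck]

# Utilisation at maximum TH = TH / capacity; the bottleneck runs at 100%.
utilisation = {s: max_th / c for s, c in capacity.items()}

# With no buffers, flow time is the sum of mean process times, and
# Little's law gives WIP = TH x flow time.
flow_time = sum(v["t"] for v in steps.values())
wip = max_th * flow_time
```

With these placeholder numbers, die casting is the bottleneck, so its utilisation is 1.0 and every other station is partially idle; the same calculation applies unchanged once the real process times are plugged in.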


[SOLVED] FIT1033 Foundations of 3D Assignment 2

FIT1033 Foundations of 3D S2 2025 Assignment 2: Crafting a 3D Diorama [20%] Brief This assignment involves the design and creation of a 3D scene, or ‘diorama’, that features stylised models and textures. Your diorama will be confined within the limits of a set stage provided on Moodle. The inspiration for the 3D models that you will make in Maya may draw from real or fictional references, and your textures will be created using Substance Painter. All materials included in Adobe Substance 3D Painter can be used as a starting point for your texturing. Please note: any textures, 3D models, and materials downloaded from the internet are not permitted and will be considered as evidence of plagiarism. Once the models that make up your diorama are modelled and textured, you will export your creation to a supplied Unity project and add dynamic lighting. Your environment is primarily architectural, and should avoid plants and other natural elements. Environment Design A diorama is a three-dimensional scene that captures a moment in time. Your selection of visual references will guide your concept and ideally, your diorama should tell something of a story. It should include a range of objects that add visual interest and hint at something about the kind of characters that might inhabit it, without actually showing them. We strongly encourage students to consider something more imaginative than simply making a model of their own bedroom. Your scene must contain 1 major set piece, as a focus for your composition and narrative. It will also contain a minimum of 3 minor ‘props’ as “set dressing” which can be used multiple times throughout the scene. You are welcome to make additional props as you see fit. The stage we provide for your scene can be edited in minor ways, either to allow for dramatic lighting, or changes in composition; however, the general scale and shape of the scene stage must not be changed. 
A human-sized scale reference has been provided to assist you in setting the scale of your scene. Your stage must be textured in such a way as to complement the visual style and composition of the scene. Careful consideration of both material design and lighting will be important criteria in this assignment. Your prior conceptual and visual research will be key in crafting an original creation. Deliverables Submissions for this assignment should contain the following (three items): (a) Maya scene files and exported textures This should contain your final Maya file and 3 incremental saves showing your working history. You are also required to submit all exported textures created for this assignment. Note: You do not need to submit your Substance Painter project as these files can easily exceed 100MB. However, you must make these files available upon request by your tutor. (b) A Unity3D Build (Mac or PC) and the Unity Project Import your design into the Unity scene provided and export your scene as an interactive build. Please ask your tutor which build they would prefer. You will also need to submit your Unity project so your tutor can assess considerations such as lighting and materials, as well as in case your build fails to compile correctly. (c) Documentation as a PDF document Your documentation will contain research, references, screenshots, analysis, and written descriptions of your working process. You can use the screen recordings of your modelling process and include selected screenshots from these videos in your documentation. Further information on what your documentation should contain can be found in the Documentation Guide uploaded on the Assessment page, and examples will be provided by your tutor. Note: You must screen record your modelling process. Your screen recorded videos do not need to be submitted with your assignment, but must be made available in the event that your tutors request them when marking your work.
These videos can be deleted once your assignment has been marked and feedback has been returned to you. Submission is via Moodle. ALL submitted items must be named in a clear and logical way and compressed into a single .zip file, which should be named with the assessment number, and your name. The maximum total file-size for this submission is 500MB. Assessment Criteria Your submission will be graded on the following criteria: Quality of 3D scene, modelling, and geometry [6 marks] ● Complete scene modelling of architecture, infrastructure, and associated set dressing ● Clean and Efficient Geometry ● Use of modelling tools & techniques Quality of UV mapping & materials [5 marks] ● Application of UV mapping techniques and texturing tools ● Appropriate use and consideration of 3D materials ● Application of PBR map types (Albedo colour, normal, metal, roughness, emission) Unity Presentation [3 marks] ● Application of real-time lighting techniques ● Visual impact of your scene’s composition in your final Unity Build Written and Visual documentation [6 marks] ● Discussion of inspiration, research, design, and planning ● Coverage of 3D process (including modelling, material design and application, lighting, and Unity) ● Use of images/screenshots and other visual resources to illustrate workflow ● Explanation and reflection of decision-making process Suggested Timeline Below is a recommended timeline and workflow for this assignment, note that this only shows rough production stages, and your documentation should be written continually throughout this process. If you find yourself running behind on any of these steps, this is not the end of the world, however you may find that you need to either adjust the scope of your assignment or discuss your assignment with your tutor. Late Penalties Any submission received after the due date without a prior arranged extension will receive a 5% reduction to their available mark per day late, for a maximum of seven days. 
Submissions received more than 7 days after the due date - without a prior arranged extension - will receive a mark of 0 and no feedback will be provided. SUBMISSION DUE: Monday Week 9, 11:55 PM


[SOLVED] Cse 575 classification using neural networks and deep learning project

Purpose
In this project, you are required to understand the whole process of assembling the different layers (convolutional layer, fully-connected layer, pooling layer, activation layer, loss function) of a simple Convolutional Neural Network (CNN) for a visual classification task. You also need to write your own evaluation code to evaluate the trained CNN and obtain the training and testing results. The project is worth 10 points in total.
Objectives
Learners will be able to:
● Understand the process of assembling the different layers of a CNN.
● Implement and evaluate a CNN for image classification tasks.
● Modify hyper-parameters and observe the effects on training and testing errors.
Technology Requirements
Algorithm:
● Convolutional Neural Network
Resources:
● MNIST dataset
Language:
● Python
Project Description
In this project, we will revisit the Handwritten Digits Recognition task from Project 1, using a convolutional neural network. The basic dataset is the same MNIST dataset from Project 1, but you may choose to use only a subset for training and testing if speed with the entire dataset becomes a bottleneck. For example, you may use only 6000 samples for training (each digit with 600 samples) and 1000 samples for testing (each digit with 100 samples). The basic requirement of this project is to experiment with a convolutional neural network with the following parameter settings:
1. The input size is the size of the image (28×28).
2. The first hidden layer is a convolutional layer with 6 feature maps. The convolution kernels are 3×3 in size. Use stride 1 for convolution.
3. The convolutional layer is followed by a max pooling layer. The pooling is 2×2 with stride 1.
4. After max pooling, the layer is connected to the next convolutional layer, with 16 feature maps. The convolution kernels are 3×3 in size. Use stride 1 for convolution.
5. The second convolutional layer is followed by a max pooling layer. The pooling is 2×2 with stride 1.
6.
After max pooling, the layer is fully connected to the next hidden layer with 120 nodes and relu as the activation function. 7. The fully connected layer is followed by another fully connected layer with 84 nodes and relu as the activation function, then connected to a softmax layer with 10 output nodes (corresponding to the 10 classes). We will train such a network with the training set and then test it on the testing set. You are required to plot the training error and the testing error as a function of the learning epochs. You are also required to change some of the hyper-parameters (the kernel size, the number of feature maps, etc), and then repeat the experiment and plot training and testing errors under the new setting. These are the minimum requirements. Additional requirements may be added (like experimenting with different kernel sizes, number of feature maps, ways of doing pooling, or even introducing drop-out in training, etc.). 2 Directions Accessing Ed Lessons You will complete and submit your work through Ed Lessons. Follow the directions to correctly access the provided workspace: 1. Go to the Canvas Assignment, “Submission: Classification Using Neural Networks and Deep Learning Project”. 2. Click the “Load Submission…in new window” button. 3. Once in Ed Lesson, select the assignment titled “Classification Using Neural Networks and Deep Learning Project”. 4. Select a code challenge to work on: a. To start the baseline code, click on the “Analysis of Baseline Code” b. To start the lab, click on the “Lab: Classification Using Neural Networks and Deep Learning Project” c. To start the result submission, click on the “Result Submission: Classification Using Neural Networks and Deep Learning Project” 5. When ready, start working in the notebook for the respective code challenge: a. For the baseline code, the notebook is titled “baseline.ipynb” b. For the lab, the notebook is titled “project3.ipynb” c. 
For the result submission, the notebook is titled “project3_submission.ipynb” Baseline Code The baseline code provides a basic understanding about the different layers of Convolutional Neural Network (CNN) using the keras library. Required Tasks 1. Run the baseline code and report the accuracy. 2. Change the kernel size to 5*5, redo the experiment, plot the learning errors along with the epoch, and report the testing error and accuracy on the test set. 3 3. Change the number of the feature maps in the first and second convolutional layers, redo the experiment, plot the learning errors along with the epoch, and report the testing error and accuracy on the test set. 4. Submit a brief report summarizing the above results in your submission space. Note: You can change the kernel size up to 5*5 and number of feature maps up to 32 for both the layers. Report Submission Draft a report that explains how changing kernel size impacted the results. The report must contain: ● Your full name and student ID number on the first page in the upper left corner ● The accuracy value from the default baseline code ● The accuracy value after modifying the kernel size and feature maps in the first and second convolutional layers ● Plot the learning errors along with the epoch ● A brief overview on how the changes impacted the results The report must also follow the required format: ● A maximum font size of 12pt ● A maximum length of two (2) pages (8×11 or A4 paper). ● Saved as a PDF (.pdf) file type Lab The layer definitions have been given in the code and please follow the steps to understand the principles of different layers. The dataset you will utilize for the classification task is a subset from the MNIST dataset. The demo code will randomly select four different categories and 500 training and 100 testing samples for each category. Therefore, the total size of the training and testing samples is 2000 and 400 respectively. 
The subset training and testing samples will be shuffled before being provided to you, so you do not need to shuffle the data during training.

Required Tasks:
1. Evaluation Code: The function name, function inputs, and the use of the functions have been given in the code. You are required to write the remaining part to make the function work properly and obtain the accuracy and loss for both the training and testing samples.
2. You are required to train the CNN with a fixed epoch number and initialization of parameters. The total epoch number should be 10 and the learning rate should be 0.001. The batch sizes for the training and testing process are set to 100 and 1 respectively. The number of feature maps in the convolutional layer should be 6 and the size of the filters is set to 5×5. The size of the pooling layer is 2×2 and the ReLU activation function is the default. The number of neurons in the first fully-connected layer is set to 32. A cross-entropy loss with softmax activation is used to train the CNN. All of these parameters are set to their default values in the code.
3. You are encouraged to change the above-mentioned parameters to gain a better understanding of how a CNN works for the visual classification task. However, please reset the parameters to their default values to obtain results for the submission. All submitted results should be based on the default values; you will lose points if your results are not based on the default parameter values.
4. Plot the following graphs:
   a. Training and Testing Accuracy vs. epochs
   b. Training and Testing Loss vs. epochs
You are advised to use the built-in Jupyter Notebook to implement your algorithm. You take responsibility for any errors caused by the use of any other programming environment.
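The default settings listed in Task 2 can be collected in one place for reference; a sketch with illustrative key names (the starter code's actual variable names may differ):

```python
# Default Lab hyper-parameters from the task description.
# Key names are illustrative, not taken from the starter code.
LAB_DEFAULTS = {
    "epochs": 10,
    "learning_rate": 0.001,
    "train_batch_size": 100,
    "test_batch_size": 1,
    "conv_feature_maps": 6,
    "kernel_size": (5, 5),
    "pool_size": (2, 2),
    "activation": "relu",
    "fc1_neurons": 32,
    "loss": "softmax_cross_entropy",
}
```

Keeping the defaults in one structure like this makes it easy to experiment (Task 3) and then restore every value before producing the submission results.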
Note: The loss value should be divided by the number of training/testing samples to normalize it, so that the number of samples does not affect the loss value.

Lab Submission

You must complete the tasks mentioned and plot the required graphs in the designated code challenge workspace. Every student will get their own specific training and testing subset samples from the code. Please train and test the CNN with your own specific training and testing samples. Additional requirements are:
● You must write the evaluation code yourself.
● You must submit the results in the “Result Submission” code challenge in order to receive credit for your work. You will not get any points for the project by simply programming in the “Lab” workspace.
● All submitted results should be based on the default settings.

Result Submission

You must complete the “Lab” portion of the project in order to complete this part. From the “Lab”, you will get your specific training and testing subset samples. You need to submit the four values in the code space provided for each value:
● Final training accuracy
● Training loss
● Testing accuracy
● Testing loss after 10 epochs

Submission Directions for Project Deliverables

What to Submit: You must submit each deliverable for the project through Ed Lessons in its designated code challenge workspace:
1. Baseline Code Submission: A brief report summarizing the results of the baseline code.
2. Lab Submission: The completed evaluate function with the final 4 training and testing values, plus the training/testing accuracy vs. epochs and training/testing loss vs. epochs plots.
3. Result Submission: The four values, i.e. final training accuracy, training loss, testing accuracy, and testing loss after 10 epochs, in the code space provided for each value.

Notebook Submission

To receive credit for the course, you must complete and submit your work in each code challenge’s notebook provided in the project’s Ed Lesson:
1. Follow the directions provided for each code challenge.
2. When you are ready to submit your completed work, click on either “Save and Mark Complete” (Analysis of Baseline Code and Lab) or “Test” (Result Submission) at the bottom right of the screen.
3. You will know you have successfully completed the assignment when feedback appears for each test case with a score.
4. If needed: to resubmit the assignment in Ed Lesson
   a. Edit your work in the notebook
   b. Run the code cells again
   c. Click “Save and Mark Complete” or “Test” at the bottom of the screen
Your submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from Ed Lesson into your Canvas grade.

Baseline Code Report Submission

Your report will be manually graded by the course team. You must submit your report in the designated code challenge workspace titled “Analysis of Baseline Code”.
1. Click the Plus (+) icon in the upper left corner of the notebook workspace (second icon from the left)
2. Select “Upload”
3. Locate and select your report submission from your device (PDF file only)
4. Your file will appear in a left-pane menu that appears next to the notebook workspace
5. Click “Submit” in the upper right corner to submit your completed project
6. If needed: to resubmit the report in Ed Lesson
   a. Click the “Toggle Files” icon in the upper left corner of the notebook (first icon from the left)
   b. Locate and right-click on your previous report submission file
   c. Click “Delete” to remove it from your attempt and then repeat the upload directions from Step 2
Your latest report submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from Ed Lesson into your Canvas grade.

Lab Submission

This lab will be manually graded by the course team. You must complete the tasks outlined in the “Lab” code challenge.
Then, you will need to submit the results from the lab into the “Result Submission” code challenge. Reminder: You will not get points for the project by simply programming in the lab. You must submit the results in the “Result Submission” code challenge in order to receive credit for your work.

Result Submission

The Result Submission will be auto-graded. You must submit the four values, obtained from the “Lab” submission, in the code space provided for each value:
● Training accuracy
● Training loss
● Testing accuracy
● Testing loss after 10 epochs
When ready to submit:
1. In order for your answers to be correctly registered in the system, you must place the code for your answers in the cell indicated for each question.
   a. You should submit the assignment with the output of the code in the cell’s display area. The display area should contain only your answer to the question, with no extraneous information, or else the answer may not be picked up correctly.
   b. Each cell that is going to be graded has a set of comment lines (ex: ### TEST FUNCTION: test_question1) at the beginning of the cell. This line is extremely important and must not be modified or removed.
2. After completing the notebook, run each code cell individually or click “Run All” at the top to print the outputs.
3. Click on “Test” at the bottom right of the screen.
4. You will know you have successfully completed the assignment when feedback appears for each test case with a score.
5. If needed: to resubmit the assignment in Ed Lesson
   a. Edit your work in the notebook
   b. Run the code cells again
   c. Click “Test” at the bottom of the screen
Your submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from Ed Lesson into your Canvas grade.

Evaluation

The assignment will be evaluated in Ed Lessons and the grades will be automatically applied to the gradebook.
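The network specified in the Project Description above can be sketched with the keras library (which the baseline code uses). This is an illustrative sketch under the stated settings, not the graded baseline:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Layers follow the listed settings: 28x28 input; conv with 6 maps, 3x3
# kernels, stride 1; max pool 2x2, stride 1; conv with 16 maps, 3x3, stride 1;
# max pool 2x2, stride 1; dense 120 (ReLU); dense 84 (ReLU); softmax over
# the 10 digit classes.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, (3, 3), strides=1, activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2), strides=1),
    layers.Conv2D(16, (3, 3), strides=1, activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2), strides=1),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The baseline's hyper-parameter tasks (kernel size up to 5×5, feature maps up to 32) then amount to editing only the two Conv2D lines before retraining.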

$25.00 View

[SOLVED] Cse 575 naive bayes classifier project

Purpose

In this project, we will systematically implement and examine the three major categories of Machine Learning techniques of this course: supervised learning, unsupervised learning, and deep learning.

Objectives

Learners will be able to:
● Understand and implement supervised learning, unsupervised learning, and deep learning techniques in the context of density estimation and classification.
● Extract relevant features from the training dataset and estimate the parameters for a 2-D normal distribution for each digit.
● Utilize the estimated distributions to perform Naïve Bayes classification on the testing dataset.
● Implement the fundamental learning algorithm Naïve Bayes.
● Report the classification accuracy for digits “0” and “1” in the testing set.

Technology Requirements

The specific algorithmic tasks you need to perform for this part of the project are:
1. Extract the features and then estimate the parameters of the 2-D normal distribution for each digit, using the training data. Note: You will have two distributions, one for each digit.
2. Use the estimated distributions to do Naïve Bayes classification on the testing data. Report the classification accuracy for both “0” and “1” in the testing set.
Algorithms:
● MLE Density Estimation, Naïve Bayes classification
Resources:
● You may go to the original MNIST dataset (available here http://yann.lecun.com/exdb/mnist/) or you can download the dataset file from the PDF (MNIST DATABASE.pdf — located in the Project Overview page of the course) to extract the images for digit 0 and digit 1 to form the dataset for this project.
Workspace:
● Any Python programming environment
● Ed Lesson
Software:
● Python environment
Language(s):
● Python

Project Description

This project involves implementing supervised, unsupervised, and deep learning techniques for density estimation and classification. The project focuses on a subset of the MNIST dataset containing images of digits “0” and “1”.
The project involves four tasks: feature extraction, parameter calculation, implementation of the Naïve Bayes classifiers, and prediction of labels for the test data using the classifiers, followed by calculating the accuracy of the predictions. This project will be submitted and graded through Ed Lessons. Please follow the links in your course to access Ed Lessons and complete this project.

Directions

Accessing Ed Lessons

You will complete and submit your work through Ed Lessons. Follow the directions to correctly access the provided workspace:
1. Go to the Canvas Assignment, “Submission: Density Estimation and Classification Project”.
2. Click the “Load Submission…in new window” button.
3. Once in Ed Lesson, select the assignment titled “Density Estimation and Classification Project”.
4. In the code challenge, first review the directions and resources provided in the description.
5. When ready, start working in the notebook titled “project1.ipynb”.

Preparation

Access the link to your workspace through your Canvas course. You will be in the ‘Project1’ Jupyter notebook through Ed Lesson. As you run the code, you will load the trainset and testset for digit0 and digit1 respectively (please read the code and you will understand). Both the trainset and testset are sub-datasets of the MNIST dataset. The MNIST dataset contains 70,000 images of handwritten digits, divided into 60,000 training images and 10,000 testing images. We use only a part of the images for digit “0” and digit “1” in this question. Therefore, we have the following statistics for the given dataset:
● Number of samples in the training set: “0”: 5000; “1”: 5000.
● Number of samples in the testing set: “0”: 980; “1”: 1135.
We assume that the prior probabilities are the same (P(Y=0) = P(Y=1) = 0.5), although you may have noticed that these two digits have different numbers of samples in the testing set.
In the existing code, myID is a 4-digit string; please change this string to the last 4 digits of your own student ID. train0 is your trainset for digit0; train1 is your trainset for digit1; test0 is your testset for digit0; and test1 is your testset for digit1. They are all NumPy arrays. You can also convert them into Python arrays if you like. Other than the string named ‘myID’, please do not change any existing code; just write your own logic alongside the existing code.

You may go to the original MNIST dataset (available here http://yann.lecun.com/exdb/mnist/) to extract the images for digit 0 and digit 1 to form the dataset for this project. To ease your effort, we have also extracted the necessary images and stored them in “.mat” files. You may use the following piece of code to read the dataset:
● import scipy.io
● Numpyfile = scipy.io.loadmat('matlabfile.mat')
Files for you to download: “CSE 575_Project 1 Mat Files” (attached in the “Project Overview and Resources” page in the course). The Ed notebook files and data can be downloaded by selecting the “Toggle Files” icon in the workspace (first option in the right corner).

Programming

For your own code logic, you have 4 tasks to do:

Task 1: You first need to extract features from your original trainset in order to convert the original data arrays to 2-dimensional data points. You are required to extract the following two features for each image:
● Feature1: The average brightness of each image (the mean of all pixel brightness values within the whole image array)
● Feature2: The standard deviation of the brightness of each image (the standard deviation of all pixel brightness values within the whole image array)
We assume that these two features are independent and that each image is drawn from a normal distribution.

Task 2: You need to calculate all the parameters for the two-class Naïve Bayes classifiers, based upon the 2-D data points you generated in Task 1 (in total, you should have 8 parameters).
● (No.1) Mean of feature1 for digit0
● (No.2) Variance of feature1 for digit0
● (No.3) Mean of feature2 for digit0
● (No.4) Variance of feature2 for digit0
● (No.5) Mean of feature1 for digit1
● (No.6) Variance of feature1 for digit1
● (No.7) Mean of feature2 for digit1
● (No.8) Variance of feature2 for digit1

Task 3: Using the NB classifiers’ parameters from Task 2, implement the classifiers’ calculation formulas according to their mathematical expressions. Then use your implemented classifiers to classify/predict all the unknown labels of newly arriving data points (your test data points, converted from your original testsets for both digit0 and digit1). Thus, in this task you need to work with the testsets for digit0 and digit1 (the two NumPy arrays test0 and test1 mentioned above) and predict all of their labels. Note: Remember to first convert your two original test data arrays (test0 and test1) into 2-D data points in exactly the same way as in Task 1.

Task 4: In Task 3 you predicted the labels for all the test data; now you need to calculate the accuracy of your predictions on the testsets for digit0 and digit1 respectively.

Preparing the Deliverables

Results Submission & Output: Submitting your work through Ed Lessons will create your results submission. As the result from your Notebook of Project 1, you should have your ASUId (string), 8 components for the computed parameters, and 2 components for accuracy. These 11 components should form a list in the following order:
[‘ASUId’, Mean_of_feature1_for_digit0, Variance_of_feature1_for_digit0, Mean_of_feature2_for_digit0, Variance_of_feature2_for_digit0, Mean_of_feature1_for_digit1, Variance_of_feature1_for_digit1, Mean_of_feature2_for_digit1, Variance_of_feature2_for_digit1, Accuracy_for_digit0testset, Accuracy_for_digit1testset]

Report Submission

Draft a report to go with your Results Submission.
The report must contain:
● Your full name and student ID number on the first page in the upper left corner
● A detailed description of your observations and analysis of the project
The report must also follow the required format:
● A maximum font size of 12pt
● A maximum length of two (2) pages (8×11 or A4 paper)
● Saved as a PDF (.pdf) file type

Submission Directions for Project Deliverables

Result Submission

This assignment will be auto-graded. You must complete and submit your work through Ed Lesson’s code challenges to receive credit for the course:
1. In order for your answers to be correctly registered in the system, you must place the code for your answers in the cell indicated for each question.
   a. You should submit the assignment with the output of the code in the cell’s display area. The display area should contain only your answer to the question, with no extraneous information, or else the answer may not be picked up correctly.
   b. Each cell that is going to be graded has a set of comment lines (ex: ### TEST FUNCTION: test_question1) at the beginning of the cell. This line is extremely important and must not be modified or removed.
2. After completing the notebook, run each code cell individually or click “Run All” at the top to print the outputs.
3. When you are ready to submit your completed work, click on “Test” at the bottom right of the screen.
4. You will know you have successfully completed the assignment when feedback appears for each test case with a score.
5. If needed: to resubmit the assignment in Ed Lesson
   a. Edit your work in the notebook
   b. Run the code cells again
   c. Click “Test” at the bottom of the screen
Your submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from Ed Lesson into your Canvas grade.

Report Submission

Your report will be manually graded by the course team.
You must submit your report in the designated code challenge workspace where you also submitted your results submission.
1. Click the Plus (+) icon in the upper left corner of the notebook workspace (second icon from the left)
2. Select “Upload”
3. Locate and select your report submission from your device (PDF file only)
4. Your file will appear in a left-pane menu that appears next to the notebook workspace
5. Click “Submit” in the upper right corner to submit your completed project
6. If needed: to resubmit the report in Ed Lesson
   a. Click the “Toggle Files” icon in the upper left corner of the notebook (first icon from the left)
   b. Locate and right-click on your previous report submission file
   c. Click “Delete” to remove it from your attempt and then repeat the upload directions from Step 2
Your latest report submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from Ed Lesson into your Canvas grade.

Evaluation

For both components combined, there are one hundred (100) points available for this project.

Result Submissions

The results submission is worth 60 points. The results are auto-graded and will be evaluated on:
● 10 points – Mean and variance of feature1 for digit0
● 10 points – Mean and variance of feature2 for digit0
● 10 points – Mean and variance of feature1 for digit1
● 10 points – Mean and variance of feature2 for digit1
● 10 points – Predicting new labels for digit0testset and calculating the accuracy
● 10 points – Predicting new labels for digit1testset and calculating the accuracy
Note: The acceptable range for parameters is [x−0.2, x+0.2]; the acceptable range for accuracy is [x−0.005, x+0.005]. If one of your float-number answers falls into its corresponding range, that answer will be graded as correct; otherwise it will not.

Report Submission

The report submission is worth forty (40) points.
The report will be manually graded using a rubric and evaluated on:
● 10 points – Analysis is present
● 20 points – Correct solution with no errors and documentation
● 10 points – Successful run with no errors
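Tasks 1–3 above can be sketched end to end with NumPy: extract the two brightness features, estimate the per-digit mean/variance parameters, and classify by comparing Gaussian likelihoods under the independence and equal-prior assumptions (equal priors cancel out of the comparison). Function names and the synthetic data are illustrative, not from the starter code:

```python
import numpy as np

def extract_features(images):
    """Task 1: map images of shape (n, 28, 28) to n 2-D points:
    (average brightness, standard deviation of brightness)."""
    flat = images.reshape(len(images), -1)
    return np.column_stack([flat.mean(axis=1), flat.std(axis=1)])

def estimate_params(points):
    """Task 2: per-feature mean and variance for one digit's 2-D points."""
    return points.mean(axis=0), points.var(axis=0)

def log_likelihood(point, mean, var):
    """Sum of log Gaussian densities over the two (independent) features."""
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (point - mean) ** 2 / (2 * var))

def classify(point, params0, params1):
    """Task 3: with equal priors, pick the digit with the larger likelihood."""
    ll0 = log_likelihood(point, *params0)
    ll1 = log_likelihood(point, *params1)
    return 0 if ll0 >= ll1 else 1

# Tiny synthetic check: "digit0" images are bright, "digit1" images are dark.
rng = np.random.default_rng(0)
bright = rng.uniform(0.6, 1.0, size=(50, 28, 28))
dark = rng.uniform(0.0, 0.4, size=(50, 28, 28))
params0 = estimate_params(extract_features(bright))
params1 = estimate_params(extract_features(dark))
pred = classify(extract_features(rng.uniform(0.6, 1.0, size=(1, 28, 28)))[0],
                params0, params1)
```

Working in log-likelihoods rather than products of densities avoids numerical underflow; Task 4's accuracy is then just the fraction of test points whose predicted label matches the known digit.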

$25.00 View

[SOLVED] Cse 598: gans for mnist dataset

Purpose

The purpose of this project is to explore and implement a Generative Adversarial Network (GAN), a popular generative neural network widely applied in computer vision tasks. You will implement a GAN on MNIST data and generate images that resemble the digits from the MNIST dataset.

Objectives

Learners will be able to:
● Acquire an understanding of the structure and training dynamics of GANs.
● Understand the analogy of a Discriminator and a Generator.

Technology Requirements

● GPU environment
● Jupyter Notebooks
● Python3 (Python 3.8 and above)
● PyTorch
● Torchvision
● Numpy
● Matplotlib

Directions

Accessing ZyLabs

You will complete and submit your work through zyBooks’s zyLabs. Follow the directions to correctly access the provided workspace:
1. Go to the Canvas project, “Submission: GANs for MNIST Dataset project”.
2. Click the “Load Submission…in new window” button.
3. Once in zyLabs, click the green button in the Jupyter Notebook to get started.
4. Review the directions and resources provided in the description.
5. When ready, review the provided code and develop your work where instructed.

Project Directions

The GAN works by training a pair of networks, a Generator and a Discriminator, with competing loss terms. As an analogy, we can think of one network as an art forger and the other as an art expert. In GAN literature, the Generator is the art forger and the Discriminator is the art expert. The Generator is trained to produce fake images (forgeries) to deceive the art expert (the Discriminator). The Discriminator, which receives both real images and fake images, tries to distinguish between them and identify the fakes. The Generator uses the feedback from the Discriminator to improve its generation. Both models are trained simultaneously and are always in competition with each other. This competition between the Generator and the Discriminator drives them to continuously improve their respective models.
The model converges when the Generator produces fake images that are indistinguishable from the real images. In this setup, the Generator does not have access to the real images, whereas the Discriminator has access to both the real and the generated fake images.

Let us define a Discriminator D, which takes an image as input and produces a number (0/1) as output, and a Generator G, which takes random noise as input and outputs a fake image. In practice, G and D are trained alternately: for a fixed Generator G, the Discriminator D is trained to classify the training data as real (output a value close to 1) or fake (output a value close to 0). Subsequently, we freeze the Discriminator and train the Generator G to produce a fake image that yields a value close to 1 (real) when passed through the Discriminator D. Thus, if the Generator is perfectly trained, the Discriminator D will be maximally confused by the images generated by G and predict 0.5 for all inputs.

To implement a GAN, we require 5 components:
● A real dataset (the real distribution)
● Low-dimensional random noise that is input to the Generator to produce fake images
● A Generator that generates fake images
● A Discriminator that acts as an expert to distinguish real and fake images
● A training loop where the competition occurs and the models better themselves

Generator Architecture

Define a Generator with the following architecture.
● Linear layer (noise_dim -> 256)
● LeakyReLU (works well for Generators; we will use negative_slope=2)
● Linear layer (256 -> 512)
● LeakyReLU
● Linear layer (512 -> 1024)
● LeakyReLU
● Linear layer (1024 -> 784) (784 is the MNIST image size, 28×28)
● TanH (to scale the generated images to [-1, 1], the same as the real images)
● LeakyReLU: https://pytorch.org/docs/stable/nn.html#leakyrelu
● Fully connected layer: https://pytorch.org/docs/stable/nn.html#linear
● TanH activation: https://pytorch.org/docs/stable/nn.html#tanh

Discriminator Architecture

Define a Discriminator with the following architecture.
● Linear layer (input_size -> 512)
● LeakyReLU with negative_slope = 0.2
● Linear layer (512 -> 256)
● LeakyReLU with negative_slope = 0.2
● Linear layer (256 -> 1)

Binary Cross Entropy Loss

You will need to use the binary cross entropy loss function to train the GAN. The loss function includes a sigmoid activation followed by the logistic loss. This allows us to distinguish between real and fake images.
● Binary cross entropy loss with logits: https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss

Discriminator Loss

Define the objective function for the Discriminator. It takes as input the logits (outputs of the Discriminator) and the labels (real or fake). It uses BCEWithLogitsLoss() to compute the classification loss.

Generator Loss

Define the objective function for the Generator. It takes as input the logits (outputs of the Discriminator) for the fake images it has generated and the labels (real). It uses BCEWithLogitsLoss() to compute the classification loss. The Generator expects the logits for the fake images it has generated to be close to 1 (real); if that is not the case, the Generator corrects itself using the loss.

GAN Training

Define optimizers for training the Generator and the Discriminator. Feel free to adjust the optimizer settings.
● Adam optimizer: https://pytorch.org/docs/stable/optim.html#torch.optim.Adam
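The two architectures and the loss definitions above can be sketched in PyTorch. This is an illustrative sketch, not the graded solution; noise_dim=100 is an assumed value (the text leaves it open), and the negative_slope values follow the text as written:

```python
import torch
import torch.nn as nn

NOISE_DIM = 100       # assumed noise size; the spec leaves noise_dim open
IMG_SIZE = 28 * 28    # 784, a flattened MNIST image

# Generator: noise -> 256 -> 512 -> 1024 -> 784, TanH output in [-1, 1].
# negative_slope=2 follows the spec above (0.2 is the more common choice).
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.LeakyReLU(negative_slope=2),
    nn.Linear(256, 512), nn.LeakyReLU(negative_slope=2),
    nn.Linear(512, 1024), nn.LeakyReLU(negative_slope=2),
    nn.Linear(1024, IMG_SIZE), nn.Tanh(),
)

# Discriminator: image -> 512 -> 256 -> 1. The output is a raw logit;
# BCEWithLogitsLoss applies the sigmoid internally, so no final activation.
D = nn.Sequential(
    nn.Linear(IMG_SIZE, 512), nn.LeakyReLU(negative_slope=0.2),
    nn.Linear(512, 256), nn.LeakyReLU(negative_slope=0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()

def d_loss(real_logits, fake_logits):
    """Discriminator objective: real images -> 1, fake images -> 0."""
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))

def g_loss(fake_logits):
    """Generator objective: its fakes should be scored as real (1)."""
    return bce(fake_logits, torch.ones_like(fake_logits))

fake = G(torch.randn(8, NOISE_DIM))   # a batch of 8 fake images
logits = D(fake)                      # one logit per fake image
```

Note that the TanH output keeps fakes in the same [-1, 1] range as the real images, and omitting the Discriminator's final sigmoid is what makes BCEWithLogitsLoss the right loss to pair with it.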
Discriminator Optimization (D-Step)
● Clear the Discriminator optimizer gradients.
● Estimate real image logits with the Discriminator.
● Generate fake images using the Generator and detach them to prevent Generator gradient computation.
● Estimate fake image logits with the Discriminator.
● Calculate the Discriminator loss using the DLoss function.
● Backpropagate through the graph to compute gradients.
● Update the Discriminator parameters.

Generator Optimization (G-Step)
● Clear the Generator gradients.
● Generate fake images with the Generator.
● Estimate fake image logits with the Discriminator.
● Calculate the Generator loss using the GLoss function.
● Backpropagate through the graph to compute gradients.
● Update the Generator parameters.

Submission Directions for Project Deliverables

You must complete and submit your work through zyBooks’s zyLabs to receive credit for the project:
1. To get started, use the provided Jupyter Notebook in your workspace.
2. All necessary datasets are already loaded into the workspace.
3. Execute your code by clicking the “Run” button in the top menu bar.
4. When you are ready to submit your completed work, click on “Submit for grading” located at the bottom left of the notebook.
5. You will know you have completed the project when feedback appears below the notebook.
6. If needed: to resubmit the project in zyLabs
   a. Edit your work in the provided workspace.
   b. Run your code again.
   c. Click “Submit for grading” again at the bottom of the screen.
Your submission will be reviewed by the course team and then, after the due date has passed, your score will be populated from zyBooks into your course grade.

Evaluation

This project has both auto-graded and manually-graded test cases. Each test case has points assigned to it. Please review the notebook to see the points assigned for each test case. A percentage score will be passed to Canvas based on your score. There are a total of five (5) test cases.
● Four (4) of the five (5) test cases are auto-graded.
● The last test case will be manually graded.
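The D-Step and G-Step described above can be sketched as one alternating training iteration in PyTorch. The small stand-in networks, learning rate, and random "real" batch here are for brevity only; the actual project uses the architectures, loss functions, and data described earlier:

```python
import torch
import torch.nn as nn

# Stand-in modules so the loop is self-contained and runnable.
G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 1))
bce = nn.BCEWithLogitsLoss()
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)  # lr is illustrative
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
real_images = torch.rand(64, 784) * 2 - 1          # stand-in batch in [-1, 1]

# D-Step
d_opt.zero_grad()                              # clear Discriminator gradients
real_logits = D(real_images)                   # estimate real image logits
fakes = G(torch.randn(64, 100)).detach()       # detach: no Generator gradients
fake_logits = D(fakes)                         # estimate fake image logits
dl = (bce(real_logits, torch.ones_like(real_logits))      # DLoss: real -> 1,
      + bce(fake_logits, torch.zeros_like(fake_logits)))  #        fake -> 0
dl.backward()                                  # backpropagate
d_opt.step()                                   # update Discriminator parameters

# G-Step
g_opt.zero_grad()                              # clear Generator gradients
fakes = G(torch.randn(64, 100))                # fresh fakes, gradients kept
fake_logits = D(fakes)                         # estimate fake image logits
gl = bce(fake_logits, torch.ones_like(fake_logits))  # GLoss: fakes labeled real
gl.backward()                                  # backpropagate
g_opt.step()                                   # update Generator parameters
```

The `.detach()` in the D-Step is what keeps the Discriminator update from flowing gradients into the Generator; the G-Step regenerates the fakes without detaching so the Generator can learn from the Discriminator's feedback.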

$25.00 View