In lib280-asn6 you are provided with a fully functional 2-3 tree class called TwoThreeTree280. Recall that 2-3 trees are keyed dictionaries; as such, the TwoThreeTree280 class implements the KeyedBasicDict280 interface. This interface adds the methods obtain(k), delete(k), has(k), and set(x) (replace the item whose key matches the key of x with the item x). Presently, TwoThreeTree280 does not implement KeyedDict280, which adds additional operations, including all of the methods in KeyedLinearIterator280 which, in turn, includes all of the public operations on a cursor. Note that KeyedDict280 is the same interface that is implemented by KeyedChainedHashTable280, so you should be somewhat familiar with it from the previous assignment.

The task for this question is to extend TwoThreeTree280 to a class called IterableTwoThreeTree280 which allows linear iteration over the keyed data items stored in the 2-3 tree in ascending key order. We will achieve this by adding additional references to leaf nodes so that the leaf nodes form a bi-linked list. Note that adding this feature to a 2-3 tree results in exactly a B+ tree of order 3 (see textbook Section 17.1). We aren't going to call it a B+ tree class, though, because we are implementing specifically a B+ tree of order 3; higher-order B+ trees will not be supported.

Figure 1 in the Appendix shows the differences between a 2-3 tree (without iteration) and a B+ tree of order 3 containing the same elements, with the linking of the leaf nodes to support iteration. The algorithms for insertion and deletion are the same in both kinds of tree, except that in the case of the B+ tree, references to/from the predecessor and successor leaf nodes in key order have to be adjusted to maintain the bi-linked list of leaf nodes. The full class hierarchy of IterableTwoThreeTree280 is shown in Figure 2 of the Appendix.
The hierarchy of tree node classes is shown in Figure ?? of the Appendix. To implement IterableTwoThreeTree280, the following tasks must be carried out:

1. Make an extension of LeafTwoThreeNode280 that adds references to its predecessor and successor leaf nodes. This has already been done for you in the class LinkedLeafTwoThreeNode280.

2. Override the TwoThreeTree280::createNewLeafNode() method by adding a new protected method in IterableTwoThreeTree280 so that it returns a new LinkedLeafTwoThreeNode280 object instead of a TwoThreeNode280 object. This has already been done for you.

3. In IterableTwoThreeTree280, override the insert and delete methods of TwoThreeTree280 with modified versions that correctly maintain the additional predecessor and successor references in the LinkedLeafTwoThreeNode280. Each leaf node should always point to the leaf node immediately to the left of it (the predecessor) and to the right of it (the successor), even if they are not siblings in the tree. Of course, the leaf node with the smallest key has no predecessor and the leaf node with the largest key has no successor. In IterableTwoThreeTree280, the insert and delete methods from TwoThreeTree280 have already been copied, and TODO comments have been inserted indicating where you need to add additional code to maintain the additional leaf node references. The comments also provide a few hints. You should not have to modify any of the existing code for insert or delete; just add new code to deal with the linking and unlinking of leaf nodes from their successors and predecessors. Maintaining these links is very similar to inserting and removing nodes in the middle of a doubly-linked list.

4. Implement the additional methods required by KeyedDict280 (and, by extension, KeyedLinearIterator280). Some of these have been done for you; others have not. TODO comments in IterableTwoThreeTree280 indicate which methods you need to implement, and maybe even a hint or two.
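Since maintaining the leaf links in task 3 works just like splicing nodes into and out of a doubly-linked list, the idea can be sketched with a minimal stand-alone node class. The class and field names below are illustrative only; the real LinkedLeafTwoThreeNode280 API in lib280 may differ:

```java
// Minimal sketch of the leaf-linking logic, modelled as a plain
// doubly-linked node. Names are illustrative, not the lib280 API.
public class LeafLinkDemo {
    static class Leaf {
        int key;
        Leaf prev, next;   // predecessor / successor leaf references
        Leaf(int key) { this.key = key; }
    }

    // Link newLeaf in between pred and pred.next (the insertion case).
    static void linkAfter(Leaf pred, Leaf newLeaf) {
        newLeaf.next = pred.next;
        newLeaf.prev = pred;
        if (pred.next != null) pred.next.prev = newLeaf;
        pred.next = newLeaf;
    }

    // Unlink a leaf from the chain (the deletion case).
    static void unlink(Leaf l) {
        if (l.prev != null) l.prev.next = l.next;
        if (l.next != null) l.next.prev = l.prev;
        l.prev = l.next = null;
    }

    public static void main(String[] args) {
        Leaf a = new Leaf(1), c = new Leaf(3);
        a.next = c; c.prev = a;          // chain: 1 <-> 3
        Leaf b = new Leaf(2);
        linkAfter(a, b);                 // chain: 1 <-> 2 <-> 3
        StringBuilder sb = new StringBuilder();
        for (Leaf cur = a; cur != null; cur = cur.next) sb.append(cur.key);
        System.out.println(sb);          // prints 123
        unlink(b);                       // chain: 1 <-> 3 again
    }
}
```

The leaf with the smallest key simply keeps a null predecessor reference, and the leaf with the largest key a null successor reference, matching the boundary cases described above.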
In this class, the linear iterator interface allows positioning of the cursor along the leaf level of the tree. The cursor can never be positioned at an internal node.

5. In the main() function, write a regression test to test the methods required by KeyedDict280 (and, by extension, KeyedLinearIterator280). You do not need to explicitly test the insertion and deletion methods, since testing of the methods from KeyedLinearIterator280 will reveal any problems with the new leaf node linkages. This is because you will need to insert and delete items to create test cases for those methods in KeyedLinearIterator280. You must test all of the methods listed in the interfaces that are coloured blue in Figure 2 of the Appendix. Use instances of the local class called Loot (which has been defined in the main() method) as the data items to insert into the tree for testing. This class implements the type of item depicted in Figure 1 in the Appendix, consisting of the name of a magic item from a fantasy game and its value in gold pieces. The item keys are the item names (strings).

Hint: The toStringByLevel() method you've been given prints not only the 2-3 tree's structure, but also displays the current linear ordering of the nodes that results from following the successor links in the leaf nodes, beginning with the leftmost leaf node. This may be helpful for debugging step 3.

For this question you will be implementing a k-D tree. We begin by introducing some algorithms that you will need. Then we will present what you must do.

As we saw in class, in order to build a k-D tree we need to be able to find the median of a set of elements efficiently. The "j-th smallest element" algorithm will do this for us. If we have an array of n elements, then finding the n/2-smallest element is the same as finding the median. Below is a version of the j-th smallest element algorithm that operates on a subarray of an array specified by offsets left and right (inclusive).
It places at offset j (where left ≤ j ≤ right) the element that belongs at offset j if the subarray were sorted. Moreover, all of the elements in the subarray smaller than the element belonging at offset j are placed between offsets left and j − 1, and all of the elements in the subarray larger than that element are placed between offsets j + 1 and right, but there is no guarantee on the ordering of any of these elements! The only element guaranteed to be in its sorted position is the one that belongs at offset j. Thus, if we want to find the median element of a subarray of the array list bounded by offsets left and right, we can call:

jSmallest(list, left, right, (left+right)/2)

The offset (left + right)/2 (integer division!) is always the element in the middle of the subarray between offsets left and right because the average of two numbers is always equal to the number halfway in between them. The j-th smallest algorithm is presented in its entirety below.

✞ ☎
Algorithm jSmallest(list, left, right, j)

list  – array of comparable elements
left  – offset of start of subarray for which we want the median element
right – offset of end of subarray for which we want the median element
j     – we want to find the element that belongs at array index j

To find the median of the subarray between array indices 'left' and
'right', pass in j = (left + right)/2.

Precondition: left ≤ j ≤ right

if right > left:
    // Partition the subarray; pivotIndex is the offset at which the
    // pivot element ended up.
    pivotIndex = partition(list, left, right)

    // If the pivot ended up to the right of index j, the j-th smallest
    // element must be in the left part of the subarray.
    if j < pivotIndex:
        jSmallest(list, left, pivotIndex - 1, j)
    // If the pivot ended up to the left of index j, the j-th smallest
    // element must be in the right part of the subarray.
    else if j > pivotIndex:
        jSmallest(list, pivotIndex + 1, right, j)
    // Otherwise, the pivot ended up at list[j], and the pivot *is* the
    // j-th smallest element and we're done.
✝ ✆

Notice that there is nothing returned by jSmallest; rather, it is the postcondition that is important.
The postcondition is simply that the element of the subarray specified by left and right that belongs at index j if the subarray were sorted is placed at index j, that the elements between left and j − 1 are smaller than the j-th smallest element, and that the elements between j + 1 and right are larger than the j-th smallest element. There are no guarantees on the ordering of the elements within these parts of the subarray except that they are smaller and larger than the element at index j, respectively. This means that if you invoke this algorithm with j = (right + left)/2, then you will end up with the median element in the median position of the subarray, all smaller elements to its left (though unordered) and all larger elements to its right (though unordered), which is just what you need to implement the tree-building algorithm!

NOTE: for this algorithm to work on arrays of NDPoint280 objects you will need an additional parameter d that specifies which dimension (coordinate) of the points is to be used to compare points.

An advantage of making this algorithm operate on subarrays is that you can use it to build the k-D tree without using any additional storage: your input is just one array of NDPoint280 objects, and you can do all the work without any additional arrays by working with the correct subarrays.

You may have noticed that jSmallest uses the partition algorithm to partition the elements of the subarray using a pivot. The pseudocode for the partition algorithm used by the jSmallest algorithm is given below. Note that in your implementation, you will, again, need to add a parameter d to denote which dimension of the n-dimensional points should be used for comparison of NDPoint280 objects.

✞ ☎
// Partition a subarray using its last element as a pivot.
Algorithm partition(list, left, right)

list  – array of comparable elements
left  – lower limit on subarray to be partitioned
right – upper limit on subarray to be partitioned

Precondition: all elements in 'list' are unique (things get messy otherwise!)

Postcondition: all elements smaller than the pivot appear in the leftmost
part of the subarray, then the pivot element, followed by the elements
larger than the pivot. There is no guarantee about the ordering of the
elements before and after the pivot.

Returns the offset at which the pivot element ended up.

pivot = list[right]
swapOffset = left
for i = left to right - 1:
    if list[i] < pivot:
        swap list[i] and list[swapOffset]
        swapOffset = swapOffset + 1
swap list[right] and list[swapOffset]
return swapOffset
✝ ✆

Using these algorithms, your k-D tree implementation must be able to:

• Build a k-D tree from an array of NDPoint280 objects (for any dimension k > 0).
• Perform a range search: given a pair of points (a1, a2, . . . , ak) and (b1, b2, . . . , bk) with ai ≤ bi for each i, report every point (p1, p2, . . . , pk) in the tree such that ai ≤ pi ≤ bi for every i.
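As a concrete illustration of the two algorithms above, here is a sketch of partition and jSmallest in Java, operating on k-dimensional points represented as plain double[] arrays. The real assignment uses NDPoint280 objects; the double[][] representation and the class name here are stand-ins:

```java
// Sketch of partition and jSmallest for k-dimensional points stored as
// double[] coordinate arrays. The dimension parameter d selects which
// coordinate is used for comparisons, as described in the text.
public class JSmallestDemo {

    // Partition list[left..right] around list[right], comparing dimension d.
    // Returns the offset at which the pivot ended up.
    static int partition(double[][] list, int left, int right, int d) {
        double[] pivot = list[right];
        int swapOffset = left;
        for (int i = left; i < right; i++) {
            if (list[i][d] < pivot[d]) {
                double[] t = list[i]; list[i] = list[swapOffset]; list[swapOffset] = t;
                swapOffset++;
            }
        }
        double[] t = list[right]; list[right] = list[swapOffset]; list[swapOffset] = t;
        return swapOffset;
    }

    // Place the element that belongs at offset j (in sorted order by
    // dimension d) at offset j; smaller elements end up to its left,
    // larger ones to its right, both unordered.
    static void jSmallest(double[][] list, int left, int right, int j, int d) {
        if (right > left) {
            int pivotIndex = partition(list, left, right, d);
            if (j < pivotIndex)      jSmallest(list, left, pivotIndex - 1, j, d);
            else if (j > pivotIndex) jSmallest(list, pivotIndex + 1, right, j, d);
            // Otherwise the pivot is at offset j and we're done.
        }
    }

    public static void main(String[] args) {
        double[][] pts = { {5, 1}, {2, 9}, {8, 4}, {1, 7}, {9, 3} };
        int mid = (0 + (pts.length - 1)) / 2;       // median position
        jSmallest(pts, 0, pts.length - 1, mid, 0);  // median by dimension 0
        System.out.println(pts[mid][0]);            // prints 5.0
    }
}
```

Nothing is returned by jSmallest here either; the caller relies on the postcondition that the median now sits at offset mid, with smaller x-coordinates to its left and larger ones to its right.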
We have seen how we can extend the functionality of a data structure that we already have implemented by creating a subclass using the extends keyword in Java. This allows us to take an existing data structure class and add new functionality or modify existing ADT functionality (e.g. extension of ArrayedTree280 to ArrayedHeap280), or to create a specialization of a more generic ADT (e.g. extension of LinkedSimpleTree280 to ExpressionTree).

Another ADT programming paradigm is restriction. Restriction is used to make an existing ADT appear to be another ADT that has less functionality. Consider the following example. Suppose we need a Stack ADT, but our data structure library, whatever it is, doesn't have one. Further suppose that our data structure library does have a list ADT which allows insertions and deletions at either end of the list. Such a list has more than the necessary functionality of a stack. A stack is really just a list where we can only add (push) and remove (pop) items at one end. We could just use the list itself as a stack, trusting ourselves to only add and remove at one end. But this does not eliminate the possibility of the "stack" being used in ways that a stack should not be used. Perhaps another programmer comes along, doesn't realize that you were using a list to get the behaviour of a stack, and does something very un-stack-like to the list without realizing it!

What is really needed in this situation is an ADT that has less functionality than the one we already have. We can't get that by extending the existing list ADT, but we can use the idea of restriction to make the list appear to the outside world as nothing more than a stack! The solution is to restrict the list class by writing a stack class which includes a list as an instance variable, and methods that consist of only the interface for a stack.
In other words, the list is the internal data structure for the stack ADT, but the public operations for the ADT consist of only those for a stack. These methods then "translate" the stack operations into the corresponding list operations, thus resulting in a new class that looks exactly like a stack to the outside world, but implements the stack using a list.[1] If you take a look at the Stack280 class in lib280, you'll see that this is precisely what Stack280 does. Its methods translate stack operations into operations on an internal list. Implementing this restriction is far less work than writing a new linked or array-based stack class from scratch. You will practice the concept of restriction in Question 1, below.

[1] Some might call this a "wrapper" class because all it does is "wrap" one ADT in a more restrictive interface, and passes all the work on to the inner ADT.

In this assignment you will work with an implementation of a chained hash table. lib280-asn5 contains the class KeyedChainedHashTable280. You will use this class to solve a problem. KeyedChainedHashTable280 is an example of a keyed dictionary where each item is associated with a unique key. A keyed hash table computes the hash of an item from only the item's key, even though the item itself might contain other data. Additional background and details about chained hash tables are covered in Session 11 and Tutorial 5.

Recall from assignment 2 that a priority queue is a queue in which there is a numeric priority associated with every item in the queue. The item at the head of a priority queue is always the item with the highest priority. If we think about it, a heap actually has some of the functionality of the priority queue ADT we specified in assignment 2.
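To make the restriction idea from the preceding paragraphs concrete, here is a minimal sketch of a stack implemented by restricting a list. This only illustrates the pattern; the real Stack280 in lib280 restricts a lib280 list class, and its method names may differ:

```java
import java.util.LinkedList;

// Restriction sketch: the stack *has* a list (it does not extend one),
// and exposes only stack operations. The hidden list does all the work.
public class ListStack<I> {
    private final LinkedList<I> list = new LinkedList<>(); // internal ADT

    public void insert(I x)  { list.addFirst(x); }        // push
    public I item()          { return list.getFirst(); }  // inspect top
    public void deleteItem() { list.removeFirst(); }      // pop
    public boolean isEmpty() { return list.isEmpty(); }

    public static void main(String[] args) {
        ListStack<Integer> s = new ListStack<>();
        s.insert(1);
        s.insert(2);
        System.out.println(s.item());  // prints 2 (last item pushed)
        s.deleteItem();
        System.out.println(s.item());  // prints 1
    }
}
```

Because the list is a private instance variable, no caller can reach its un-stack-like operations (such as inserting at the far end); only the stack interface is visible to the outside world.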
We can insert items into a heap, which are kept organized by their size (equivalent to enqueue!); a heap always "dispenses" the largest item (equivalent to getting the item at the front of the priority queue); and it allows us to delete the largest item (equivalent to a dequeue!). The problem we would like to solve is to implement the priority queue ADT specification from assignment 2 by re-using as much of our existing code as we can. You can find the priority queue specification in the appendix of this document.

In Assignment 3 we wrote a class ArrayedHeap280 (the instructor's ArrayedHeap280 is included in lib280-asn5), which has some of the functionality we need for a priority queue. But we cannot just extend it because:

1. it has methods that a priority queue ADT doesn't have, like itemExists;
2. it has methods that have the right functionality, but the wrong name, like deleteItem, which, for our priority queue, is called deleteMax; and
3. it doesn't have an iterator, which we need in order to determine the item with the smallest priority (we need to be able to inspect all of the items in the heap to find the smallest one without moving the internal cursor away from the root of the heap).

The solution will be as follows:

(a) Write a class ArrayedBinaryTreeIterator280 which extends ArrayedBinaryTreePosition280 and implements the LinearIterator280 interface. This will be an iterator for the ArrayedBinaryTree280 class. ArrayedBinaryTreeIterator280 should be fairly easy to implement, since you can pretty much copy all of the methods required by the LinearIterator280 interface from ArrayedBinaryTreeWithCursors280, with perhaps some small modification. A mostly incomplete ArrayedBinaryTreeIterator280.java file has been provided in the tree package of lib280-asn5 to start you off.
(b) Extend ArrayedHeap280 to a class IterableArrayedHeap280 and write the following methods in IterableArrayedHeap280:

• Add a deleteAtPosition method to delete a specific item in the heap (which need not be the root). The item to be deleted should be specified by passing a reference to an ArrayedBinaryTreeIterator280 object. The algorithm for this is just a slight modification of the normal heap deletion algorithm: swap the item at the end of the array with the item to be deleted, then swap the moved item with its larger child until it is larger than both of its children. This method should be very similar to deleteItem from ArrayedHeap280.

• Add a method to IterableArrayedHeap280 called iterator which returns a new ArrayedBinaryTreeIterator280 object for the tree.

A mostly incomplete IterableArrayedHeap280.java file has been provided in the tree package of lib280-asn5 to start you off.

(c) Write a class PriorityQueue280 which is a restriction (as defined in Section 1.1) of IterableArrayedHeap280, and which implements the priority queue ADT specification given in the Appendix of this document. This means that PriorityQueue280 should have an IterableArrayedHeap280 as an instance variable, and it is in this heap that the queue items are stored. In this way we can hide the functionality of IterableArrayedHeap280 that we don't want exposed, as well as add the functionality that it lacks.

Some of the priority queue methods, like isFull and deleteMax, require behaviour identical to existing methods in IterableArrayedHeap280 and can be written as a single call to a method of the IterableArrayedHeap280 instance variable. Other methods, like minItem and deleteAllMax, require functionality that doesn't exist in IterableArrayedHeap280, which will be up to you to implement. This will still be done by calling methods of the internal heap, but a single call won't be enough; for example, you may need to iterate over the heap using an iterator.
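For instance, minItem can be implemented by scanning every stored item with an iterator, leaving the heap itself untouched. In this self-contained sketch a plain Iterable stands in for the heap's item storage; the real solution would walk an ArrayedBinaryTreeIterator280 using the lib280 cursor methods instead:

```java
import java.util.ArrayList;
import java.util.Arrays;

// Sketch for minItem: visit every stored item via iteration without
// disturbing the structure. Purely illustrative; not the lib280 API.
public class MinItemDemo {
    static <I extends Comparable<I>> I minItem(Iterable<I> heapItems) {
        I min = null;
        for (I x : heapItems)                 // examine each item once
            if (min == null || x.compareTo(min) < 0) min = x;
        return min;                           // smallest-priority item
    }

    public static void main(String[] args) {
        // Level-order contents of a max-heap: the root 9 is the maximum,
        // but the minimum (2) can sit anywhere in the bottom levels.
        ArrayList<Integer> heap = new ArrayList<>(Arrays.asList(9, 7, 8, 3, 2));
        System.out.println(minItem(heap));    // prints 2
    }
}
```

The scan is O(n), which is unavoidable here: a max-heap gives no shortcut to its minimum, since the smallest item may be in any leaf.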
I have provided you with a mostly incomplete PriorityQueue280.java file in the dispenser package of lib280-asn5 which contains a regression test for your convenience (you do not have to write your own).

(d) Comment all of the methods in all classes that you wrote by adding a javadoc comment header, and inline comments where appropriate. Since items in the IterableArrayedHeap280 must implement Comparable, your priority queue may assume that the compareTo method of the items compares items based on their priority.

Almost every modern video game in the roleplaying genre provides the player with a quest log, which is essentially a list of tasks that the player's character has to perform to advance the story or to obtain rewards.[2] In this question we are going to create a data structure for a quest log based on a chained hash table. For our purposes, a quest log entry will consist of the following pieces of information:

• Name of the quest.
• Name of the area of the world in which the quest takes place.
• Recommended minimum character level that should attempt the quest.
• Recommended maximum character level that should attempt the quest.

For the purposes of identifying quests, quest log entries are keyed on the name of the quest. A class called QuestLogEntry that can hold these pieces of information is provided. Notice how the key() method of the QuestLogEntry class returns the name of the quest.

For this question, you are provided with a complete IntelliJ module called QuestLog-Template in which you will modify one of the classes. It includes the QuestLogEntry class and some .csv (comma-separated value) files which will be used as input data. The QuestLog-Template module requires access to the lib280-asn5 project; set this up as a module dependency as per the self-guided tutorial on the class website for setting up an IntelliJ module to use lib280. You've already done this sort of thing previously with other assignments.
The entirety of your work will be to finish implementing methods in the QuestLog class provided in the QuestLog-Template project, and to write a couple of interesting tests. Note that the QuestLog class is a specialized extension of KeyedChainedHashTable280, so it is a chained hash table. Here is a list of what you have to do:

(a) Complete the implementation of the QuestLog.keys() method. This method must return an array of the keys (i.e. the quest names) of each QuestLogEntry instance in the hash table. The keys may appear in the returned array in any order.

(b) Complete the implementation of the QuestLog.toString() method. This method should return a printable string consisting of the complete contents of all of the QuestLogEntry objects in the hash table, in alphabetical order by quest name, one per line. Here is an example string returned by toString() from a quest log containing four entries:

✞ ☎
Defeat Goliad: Candy Kingdom, Level Range: 20-25
Locate the Lich's Lair: Costal Wasteland, Level Range: 35-40
Make an Amazing Sandwich: Finn's Treehouse, Level Range: 1-5
Win Wizard Battle: Wizard Battle Arena, Level Range: 2-4
✝ ✆

Remember that the hash table makes no promises whatsoever about the ordering of the quest log entries in the chains of the hash table. Hint: the keys() method from part (a) will be handy for this method, as will knowing that Arrays.sort() can sort the elements of an array.

(c) Complete the implementation of the QuestLog.obtainWithCount() method. This method takes a quest name as input and returns a Pair280 object (found in lib280.base) which must contain the QuestLogEntry object from the quest log which matches the given quest name (if it exists) and the number of QuestLogEntry objects that were examined while searching for the desired one. The latter number must be present whether the quest name was found in the quest log or not.

[2] Back in the '80s and early '90s, games didn't have quest logs.
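The hint in part (b) amounts to: collect the keys, sort them, then look up each entry in key order. A stand-alone sketch of that pattern, using a plain HashMap in place of the QuestLog hash table and made-up entry strings:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch of the sorted-toString pattern from part (b): gather the keys,
// sort them, then emit entries in key order. A HashMap stands in for
// the QuestLog chained hash table; the entry format is illustrative.
public class SortedLogDemo {
    static String toSortedString(Map<String, String> log) {
        String[] keys = log.keySet().toArray(new String[0]);
        Arrays.sort(keys);                       // alphabetical key order
        StringBuilder sb = new StringBuilder();
        for (String k : keys)
            sb.append(k).append(": ").append(log.get(k)).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> log = new HashMap<>();
        log.put("Win Wizard Battle", "Wizard Battle Arena, Level Range: 2-4");
        log.put("Defeat Goliad", "Candy Kingdom, Level Range: 20-25");
        // Entries come out alphabetically even though the hash table
        // stores them in no particular order.
        System.out.print(toSortedString(log));
    }
}
```

The same idea carries over directly: call keys(), sort the result with Arrays.sort(), then obtain each entry from the hash table in sorted key order while building the output string.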
If you wanted to remember what you were supposed to do, or something that a character in the game said, you wrote it down with an ancient mystical device called a pencil. And there was no Internet with detailed wikis for every game to get hints from if you got stuck!

Hints: A Pair280 object has two generic type parameters, the first of which specifies the type of the first element of the pair, and the second of which specifies the type of the second element in the pair. For example, if I wanted a pair consisting of an Integer and a Float I might write:

✞ ☎
Pair280<Integer, Float> p = new Pair280<Integer, Float>(5, 42.0f);
✝ ✆

The components of the pair can be accessed using the firstItem() and secondItem() methods:

✞ ☎
System.out.println(p.firstItem());  // prints the integer 5
System.out.println(p.secondItem()); // prints the floating point number 42.0
✝ ✆

(d) Now take a look at the main() program in QuestLog.java. As given, it already does the following things:

1. Creates a new, empty QuestLog instance called hashQuestLog.
2. Creates a new, empty OrderedSimpleTree280 instance (a binary search tree) called treeQuestLog that can hold items of type QuestLogEntry.
3. Opens and reads a .csv file containing the data for a number of QuestLogEntry objects, creating a QuestLogEntry instance for each quest and adding each such instance to both the hashQuestLog and treeQuestLog data structures.
4. Prints out the complete contents of hashQuestLog and treeQuestLog using their respective toString() methods. You'll know your QuestLog.toString() method from part (b) is working when its output matches that of treeQuestLog.toStringInorder().

At the end of main() are two TODO markers. For the first one, you need to write code that calls hashQuestLog.obtainWithCount() for each quest in the hashed quest log and determines the average number of QuestLogEntry objects that were examined over all such calls.
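Computing that average is a simple accumulate-and-divide loop. To keep the sketch self-contained, the per-query examination counts (the second component of each Pair280 returned by obtainWithCount()) are substituted by a plain int[]:

```java
// Self-contained sketch of the averaging step for the first TODO:
// given the number of items examined by each obtainWithCount() call,
// compute the average number of items examined per query.
public class AvgExaminedDemo {
    static double averageExamined(int[] countsPerQuery) {
        double total = 0;
        for (int c : countsPerQuery) total += c;   // sum the counts
        return total / countsPerQuery.length;      // divide by #queries
    }

    public static void main(String[] args) {
        // e.g. four queries examined 1, 1, 2, and 1 entries respectively
        System.out.println(averageExamined(new int[]{1, 1, 2, 1})); // 1.25
    }
}
```

Note the double accumulator: summing into an int and dividing with integer division would silently truncate the average.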
Finally, for the second TODO marker, you have to do the same thing for treeQuestLog. You can do this by calling the searchCount() method of treeQuestLog for each quest stored in the log. Note that searchCount() requires that you pass in the actual QuestLogEntry object that you are looking for rather than just the quest name (you can obtain these from the hashed quest log). searchCount() returns the number of items that were examined while trying to position the tree's cursor at the given QuestLogEntry object. Once you have computed the average number of QuestLogEntry objects examined for each of the two data structures, print out the results. Something like this will do:

✞ ☎
Avg. # of items examined per query in the hashed quest log with 4 entries: 1.25
Avg. # of items examined per query in the tree quest log with 4 entries: 2.0
✝ ✆

(e) Run your completed main() program for each of the .csv files provided in the project (just change the filename in quotes that is passed to FileReader). Each .csv file contains in its filename the number of quest entries in the file. For each .csv file, record the reported average number of items examined per query in each data structure. In a text file called a4q2.txt (or other acceptable file format) answer the following questions:

1. List the reported averages that you recorded for each .csv input file in a table that looks something like this (filling in the rest of the table, of course):

Filename           Avg. Queries for hashQuestLog   Avg. Queries for treeQuestLog
quests4.csv        1.25                            2.0
quests16.csv
quests250.csv
quests1000.csv
quests100000.csv

2. If you had to choose a simple function (i.e. from the list of functions used in big-O notation) to characterize the behaviour of the average number of items examined per query for the hashed quest log as the number of quests (n) in the log increases, what would it be?

3. If you had to choose a simple function (i.e.
from the list of functions used in big-O notation) to characterize the behaviour of the average number of items examined per query for the tree quest log as the number of quests (n) in the log increases, what would it be?

4. If your primary use of the quest log was to display all of the quests in the log in alphabetical order, would you prefer the hashed quest log or the tree quest log? Why?

5. If your primary use of the quest log was to periodically look up the details of specific quests in no particular order, would you prefer the hashed quest log or the tree quest log? Why?

3 Files Provided

lib280-asn5: A copy of lib280 which includes: solutions to assignment 3; ArrayedBinaryTreeIterator280.java, a mostly incomplete implementation of a linear iterator for arrayed binary trees; IterableArrayedHeap280.java, a mostly incomplete implementation of a heap which provides an iterator and allows any item to be deleted; and PriorityQueue280.java, a mostly incomplete implementation of our priority queue ADT (in the lib280 dispenser package).

QuestLog-Template.zip: A complete IntelliJ module containing everything you need for Question 2.

4 What to Hand In

You must submit a .ZIP file containing the following files:

ArrayedBinaryTreeIterator280.java: Your completed implementation of a linear iterator for arrayed binary trees.
IterableArrayedHeap280.java: Your completed implementation of a heap which provides an iterator and allows any item to be deleted.
PriorityQueue280.java: Your completed implementation of the priority queue ADT.
QuestLog.java: Your completed quest log based on a hash table (parts (a), (b), and (c) of Q2), and completed additions to main() (part (d) of Q2).
a4q2.txt: Your answers to the questions posed in part (e) of Q2. If you prefer, this file may be a MS Word file or a PDF file.

5 Grading Rubric

The grading rubric can be found on Canvas.
Appendix – Priority Queue ADT Specification

This is the Priority Queue ADT specification from Assignment 2, but with the frequency operation omitted. You need to implement only the operations shown here.

Name: PriorityQueue

Sets:
Q  : set of priority queues containing elements from G.
G  : set of items that can be in a priority queue.
B  : {true, false}
N  : set of positive integers.
N0 : set of non-negative integers.

Signatures:
newPriorityQueue: N → Q
Q.insert(g): G ̸→ Q
Q.isFull: → B
Q.isEmpty: → B
Q.count: → N0
Q.maxItem: ̸→ G
Q.minItem: ̸→ G
Q.deleteMax: ̸→ Q
Q.deleteMin: ̸→ Q
Q.deleteAllMax: ̸→ Q

Preconditions: For all q ∈ Q, g ∈ G,
q.insert(g): queue is not full
q.maxItem: queue is not empty
q.minItem: queue is not empty
q.deleteMax: queue is not empty
q.deleteMin: queue is not empty
q.deleteAllMax: queue is not empty
(Operations without preconditions are omitted.)

Semantics: For all q ∈ Q, g ∈ G, n ∈ N,
newPriorityQueue(n): create a new queue with capacity n.
q.insert(g): insert item g into q in priority order, with the highest number being the highest priority.
q.isFull: return true if q is full, false otherwise.
q.isEmpty: return true if q is empty, false otherwise.
q.count: obtain the number of items in q.
q.maxItem: return the largest (highest priority) item in q.
q.minItem: return the smallest (lowest priority) item in q.
q.deleteMax: remove the largest (highest priority) item in q from q.
q.deleteMin: remove the smallest (lowest priority) item in q from q.
q.deleteAllMax: all occurrences of the highest priority item are deleted from q.
A heap is a binary tree which has the following heap property: the item stored at a node must be at least as large as any of its descendants (if it has any). In a heap, when an item is removed, it is always the largest item (the one stored at the root) that gets removed. Also, the only item that is allowed to be inspected is the top of the heap, in much the same way that the only item of a stack that may be inspected is the top element.

Stacks, queues, and heaps are all examples of collections of data items that we call dispensers. You can put stuff into a dispenser, but the user doesn't get to specify where; the collection decides according to some rule(s). Likewise, you can take something out of a dispenser, but the dispenser decides what item you get. Dispensers maintain a current item using an internal cursor, but the dispenser always decides what is the current item, and thus the item that will next be dispensed when a user asks to remove or inspect the current item. Dispensers do not have public methods to control the cursor position because the user is not supposed to control this; it's up to the dispenser. In a stack, the "current" item is always the item at the top of the stack. In a queue it is the item at the front of the queue. In a heap it is the item at the root of the heap.

In question 1 you will implement a heap by writing a class called ArrayedHeap280 that extends the abstract class ArrayedBinaryTree280 and implements the Dispenser280 interface. Here are brief pseudocode sketches of the insert and deleteItem algorithms:

✞ ☎
Algorithm insert(H, e)
Inserts the element e into the heap H.

Insert e into H normally, as in ArrayedBinaryTreeWithCursors280
// (put it in the left-most open position at the bottom level of the tree)
while e is larger than its parent and is not at the root:
    swap e with its parent
✝ ✆

✞ ☎
Algorithm deleteItem(H)
Removes the largest element from the heap H.
// Since the largest element in a heap is always at the root...
Remove the root from H normally, as in ArrayedBinaryTreeWithCursors280
// (copy the right-most element in the bottom level, e, into the root,
//  remove the original copy of e.)
while e is smaller than its largest child:
    swap e with its largest child
✝ ✆

For AVL trees, we need to be able to identify critical nodes to determine if a rotation is required. After each recursive call to insert(), we need to check whether the current node is critical (the restoreAVLProperty algorithm). This means we need to know the node's imbalance, which means we need to know the heights of its subtrees. If we compute the subtree heights with the recursive post-order traversal we saw in class, we are in trouble, because this algorithm costs O(n) time, where n is the number of nodes in the tree. Since insertion requires O(log n) checks for critical nodes, computing imbalance in this way makes insertion O(n log n) in the worst case. To avoid this cost, in each node of the AVL tree we have to store the heights of both of its subtrees, and update these heights locally with each insertion and rotation. The insertion algorithm from the AVL tree slides becomes:

✞ ☎
// Recursively insert data into the tree rooted at R
Algorithm insert(data, R)
data is the element to be inserted
R is the root of the tree in which to insert 'data'

// This algorithm would only be called after making sure the tree
// was non-empty. Insertion into an empty tree is a special case.

if data < R.item:
    insert data into the left subtree of R
    update the stored height of R's left subtree
else:
    insert data into the right subtree of R
    update the stored height of R's right subtree
restoreAVLProperty(R)
✝ ✆
Question 2 is about a bounded binary tree implementation. You should remember binary trees from CMPT 145 (or a similar course) – they are trees in which each node has at most two children. What you probably didn’t know is that binary trees can be stored using an array, rather than a linked structure. In such an array, the contents of the root node are stored in offset 1 of the array (offset 0 is unused). The contents of the children of the node whose contents are stored at offset i are stored at offsets 2i and 2i + 1, respectively. Thus, the left child of the root is at offset 2 × 1 = 2, the right child of the root is at offset 2 × 1 + 1 = 3, the left child of the left child of the root is at offset 2 × 2 = 4, and so on. The parent of the node whose contents are at offset i is at offset i/2 (integer division). Thus, the parent of the node at offset 7 is at offset 3.

Example 1: Here is the array representation of a tree storing the elements 1 through 10, in no particular order:

  offset:   0  1  2  3  4  5   6  7  8  9  10
  contents: -  7  1  4  3  9  10  2  8  5  6

This array represents the tree whose root contains 7, whose root’s children contain 1 and 4, and so on down the levels.

Note that we do not allow any unused entries in the array between used ones. All the data items in the array are stored contiguously. This means that we can represent only a particular subset of binary trees with this representation. Namely, it is the set of trees where all levels except possibly the last level are complete (full) and the nodes in the bottom level are all as far to the left as possible. You might be thinking that this is too restrictive and not very useful because we can’t represent all binary trees with this data structure. However, as we will see on future assignments, this array-based tree data structure is highly useful and efficient as a basis for implementing certain other important data structures.
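The offset arithmetic described above amounts to three one-line helpers. The sketch below is illustrative only (the class and method names are not part of lib280):

```java
// Offset arithmetic for an array-based binary tree with the root at offset 1.
public class TreeOffsets {
    // Children of the node stored at offset i:
    static int leftChild(int i)  { return 2 * i; }
    static int rightChild(int i) { return 2 * i + 1; }

    // Parent of the node stored at offset i; integer division drops the
    // remainder, so both children map back to the same parent.
    static int parent(int i)     { return i / 2; }
}
```

For instance, parent(7) is 7/2 = 3 under integer division, matching the example in the text.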
Also note that if we read off the items from left to right in each level of the tree, starting from the top level, we get the items in the same order as they appear in the array.

An m-ary tree is one in which a node may have up to m children. Your lib280-asn3 library has a class called BasicMAryTree280 which implements an m-ary tree. It has some similarities with LinkedSimpleTree280 in that, like LinkedSimpleTree280, you have to build up larger trees from smaller trees, rather than inserting individual elements, because m-ary trees have no defined structure in general and thus there is no obvious algorithm for automatically deciding where a new element should go. You will use this class in Question 3. More details and some examples on how to use this class are/were provided in Tutorial 3.

A priority queue is a queue where a numeric priority is associated with each element. Access to elements that have been inserted into the queue is limited to inspection and removal of the elements with the smallest and largest priority only. A priority queue may have multiple items that are of equal priority. Give the ADT specification for a bounded priority queue using the specification method described in Topic 7 of the lecture notes. By “bounded”, it is meant that the priority queue has a maximum capacity specified when it is created, and it can never contain more than that number of items.
Your specification must specify the following operations:

newPriorityQueue: make a new queue
insert: insert an element with a certain priority
isEmpty: test if the queue is empty
isFull: test if the queue is full
maxItem: obtain the item in the queue with the highest priority
minItem: obtain the item in the queue with the lowest priority
deleteMax: remove from the queue the item with the highest priority
deleteAllMax: remove from the queue all items that are tied for the highest priority
deleteMin: remove from the queue the item with the lowest priority
frequency: obtain the number of times a certain item occurs in the queue (with any priority)

Your task is to write a Java class called ArrayedBinaryTreeWithCursors280 which extends the abstract class ArrayedBinaryTree280 (provided in the lib280-asn3.tree package as part of lib280-asn3). Tutorial 3 also has more about array-based trees.

Your Tasks

Some of the work of implementing ArrayedBinaryTreeWithCursors280 has already been done, but there is more to do. Firstly, there are several methods in ArrayedBinaryTreeWithCursors280 which are defined but not implemented. You must implement these methods. These methods can be easily identified by their “TODO” comments. Secondly, since ArrayedBinaryTreeWithCursors280 implements some other interfaces, there are several methods required by these interfaces that also need to be implemented but have not yet been finished. The method headers for these methods have already been generated but the method bodies are empty. The definitions of these methods in their respective interfaces document what these methods are supposed to do. These methods can be identified by their “TODO” comments as well as the @Override directive above the method headers. Many of these methods are defined by LinearIterator280, which is inherited through Dict280. These are the same linear iterator methods that you wrote for LinkedList280 on assignment 1.
The rest of these methods are defined by Cursor280. There is already a regression test included in ArrayedBinaryTreeWithCursors280. Your completed implementation of the arrayed binary tree must pass the given regression test. If all the regression tests are successful, the only output should be: Regression test complete. You may not modify any of the existing code in the provided ArrayedBinaryTreeWithCursors280.java file (including the regression test) but you can add to it. You may also not modify any other files within lib280-asn3.

Implementation Notes

• You don’t need to declare an array instance variable in ArrayedBinaryTreeWithCursors280 to hold the data items. There is already one inherited from ArrayedBinaryTree280. You should look at the other inherited instance variables too!
• One of your first tasks will be to start implementing the insert method and decide where the new element should be inserted. If you think about it, there’s really only one place it can go…
• The algorithm for deleting an element is to replace the element to be deleted by the right-most element in the bottom level of the tree, then delete the right-most element in the bottom level of the tree.
• Not sure how a linear iterator works on a tree? If you think about it, there is only one reasonable way to define a linear ordering on the elements of an array-based binary tree.
• Reminder: the items array (defined in the abstract class ArrayedBinaryTree280) represents the nodes of the tree. You are storing the contents of nodes in the array. There is no node class. It is very important that the contents of the root are stored in offset 1 and we don’t use offset 0 of the array, otherwise the given formulae for finding the child or parent of a node at offset i will not work correctly.

In video games, especially those in the role-playing genre, it is common that characters in the game are advanced in power through the use of a skill tree.
Generally, a skill tree defines the prerequisites for the various skills that your character in the game might acquire. For example, in a hypothetical game, if the Shield Bash, Defensive Stance, and Shield Ally skills all require that your character first have the skill Shield Proficiency, then this might be represented by the following skill tree:

Shield Proficiency, Cost: 1
  Shield Bash, Cost: 2
  Defensive Stance, Cost: 1
  Shield Ally, Cost: 3

More formally, a skill in the skill tree can only be gained if the character first gains all of the skills which are ancestors of that skill in the tree.¹ Your task in this question is to write a class called SkillTree which extends BasicMAryTree280 (an m-ary tree of Skill objects; a complete Skill.java is provided). A template for the SkillTree class is provided. It contains a constructor and a couple of useful methods. You will add additional methods to this class in the following steps, which you should complete in order:

(a) Write a main() method in the SkillTree class in which you construct your own skill tree for your own hypothetical video game. Your tree must contain at least 10 skills. However, for the sanity of everyone involved, try to keep it under 15 skills. Be creative! There is no reason why any two students should hand in exactly the same (or even very similar) skill trees, nor should you just duplicate the skill tree shown in the sample output. Print your tree to the console using the toStringByLevel() method inherited from BasicMAryTree280.

(b) Write a method in the SkillTree class called skillDependencies which takes a skill name as input and returns an instance of LinkedList280 which contains all of the skills which are prerequisites for obtaining the input skill (including the input skill itself!). A RuntimeException should be thrown if the tree does not contain the given skill.
A good implementation approach for this method is to use a recursive traversal of the tree to find the named skill, and then add skills to the output list as the recursion unwinds. Tutorial 3 includes some discussion of recursive traversal of m-ary trees. Add to your main() program a few tests of this method, and print out the list that is returned (you can use the list’s toString() method for this). Be sure to test the case where the named skill does not exist in the tree.

(c) Write a method in the SkillTree class called skillTotalCost which takes a skill name as input and returns the total number of skill points that a player must invest to obtain the given skill. If the named skill is not in the skill tree, then the skillTotalCost method should throw a RuntimeException. Hint: this method is quite easy to implement if you make use of the previously implemented skillDependencies method. For example, in the above skill tree, if a character wants the Shield Ally skill they would need to spend 1 skill point to get Shield Proficiency, and then spend 3 skill points to get Shield Ally, for an overall investment of 1 + 3 = 4 points; so, for the above tree, skillTotalCost("Shield Ally") should return 4. Note that the Skill object contains the cost of the skill. Add to your main() program a few tests of skillTotalCost, and print out the total costs returned. Be sure to test the case where the named skill does not exist in the tree.

(d) Run your main() program. Cut and paste the console output to a text file and submit it with your assignment. See the sample output below.

¹ In the video game world, the term “skill tree” sometimes refers to things that actually aren’t trees; a noteworthy example is the skill tree in the ARPG Path of Exile, which, if you click the link, you can see is clearly not a tree, even though they call it that. Here in question 3, we used the term “skill trees” to mean skill trees that are, in fact, actual trees.
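The skillDependencies/skillTotalCost approach described above can be sketched in a simplified, self-contained form. This sketch substitutes plain java.util collections for lib280's BasicMAryTree280 and LinkedList280, so all names and signatures here are illustrative assumptions, not the required solution:

```java
import java.util.*;

// Simplified stand-in for the SkillTree logic described above.
public class SkillCostSketch {
    static class Skill {
        final String name; final int cost; final List<Skill> children;
        Skill(String name, int cost, Skill... children) {
            this.name = name; this.cost = cost; this.children = Arrays.asList(children);
        }
    }

    // cf. skillDependencies: recurse down to find the named skill, then add
    // each ancestor to the front of the list as the recursion unwinds.
    static List<Skill> dependencies(Skill root, String name) {
        if (root.name.equals(name)) {
            List<Skill> out = new LinkedList<>();
            out.add(root);
            return out;
        }
        for (Skill child : root.children) {
            List<Skill> sub = dependencies(child, name);
            if (sub != null) { sub.add(0, root); return sub; }
        }
        return null;   // named skill not in this subtree
    }

    // cf. skillTotalCost: sum the costs of the skill and all its prerequisites.
    static int totalCost(Skill root, String name) {
        List<Skill> deps = dependencies(root, name);
        if (deps == null) throw new RuntimeException(name + " not found.");
        int total = 0;
        for (Skill s : deps) total += s.cost;
        return total;
    }
}
```

On the Shield Proficiency example tree from earlier, this sketch's totalCost for "Shield Ally" comes out to 1 + 3 = 4, matching the hint.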
Sample Output

Here is an example of what the output of your program might look like. Remember, you are expected to be creative in designing your skill tree, and your submission should not attempt to duplicate what you see here aside from the general formatting (the formatting can be the same, but the data should be different!). Note that the formatting of the output of the skill tree contents is done by the toStringByLevel() method of BasicMAryTree280.

My Skill Tree:
1: Slash, Cost: 1
2: Mighty Blow, Cost: 2
2: Shield Bash, Cost: 1
3: Shield Charge, Cost: 2
3: Parry, Cost: 2
4: Shield Wall, Cost: 4
4: -
4: -
4: -
3: -
3: -
2: Cleave, Cost: 2
3: Whirlwind, Cost: 3
4: Berzerk, Cost: 5
4: -
4: -
4: -
3: -
3: -
3: -
2: Mobility, Cost: 1

Dependencies for Shield Wall:
Slash, Cost: 1, Shield Bash, Cost: 1, Parry, Cost: 2, Shield Wall, Cost: 4,
Dependencies for Mobility:
Slash, Cost: 1, Mobility, Cost: 1,
Dependencies for Slash:
Slash, Cost: 1,
Dependencies for FakeSkill:
FakeSkill not found.

To get Whirlwind you must invest 6 points.
To get Mighty Blow you must invest 3 points.
To get Slash you must invest 1 points.
FakeSkill not found.

3 Files Provided

lib280-asn3: A copy of lib280 which includes the ArrayedBinaryTree280 class needed for Question 1, a partially complete ArrayedBinaryTreeWithCursors280 for Question 1, and the BasicMAryTree280 class which is needed for Question 2.
Skill.java: A complete implementation of the Skill class needed for Question 2.
SkillTree.java: A template for your implementation of the SkillTree class in Question 2.

You must submit a .ZIP file containing the following files:

Assignment2.doc/docx/rtf/pdf/txt – your answer to Question 1. Acceptable file formats are Word (.doc or .docx), PDF (.pdf), rich text (.rtf), or plain text (.txt).
Digital images of handwritten pages are also acceptable, provided that they are clearly legible and that they are in JPEG (.jpg or .jpeg) or PNG (.png) format. Other image formats are not accepted and will receive a grade of zero.
ArrayedBinaryTreeWithCursors280.java – Your completed class for Question 2.
SkillTree.java – Your completed implementation of the skill tree and associated tests.
q3-output.txt – The console output from your SkillTree::main() test program for Question 3.
For this assignment you’ll be working with linked list classes from the data structure library lib280. lib280 is a library of data structures that we will build up over the duration of the course. We will start with a version of lib280 that has very few data structures in it and add more with each assignment. Each assignment will come with a new version of lib280 which contains the correct implementations of ADTs that were the subject of the previous assignment.

For this assignment the first thing you’ll need to do is to obtain a copy of lib280-asn1. It is provided along with this assignment description on the class webpage. Download the lib280-asn1.zip file and expand its contents somewhere in your filesystem. The class website provides a self-guided tutorial that explains how to import lib280 into an IntelliJ project once you have downloaded it; it is located on the “Modules” page under “Week 2” and is called “Self-Guided Tutorial: Setting up lib280 in IntelliJ”. First complete part 1 of the tutorial to create an empty IntelliJ project. Then complete part 2 of the tutorial to import lib280-asn1 into your project. For question 1 of this assignment, complete part 3 of the tutorial to create a module for your question 1 solution that can access the classes from lib280-asn1. For questions 2 and 3 you don’t need to complete part 3 again because you’ll just be working within the lib280-asn1 module.

The lib280-asn1 module contains several packages. The classes of interest to us for this assignment are in the lib280.list package. Find the lib280-asn1 module in your “Project” tab, normally located on the left side of the IntelliJ window. Expand it by clicking the little triangle beside it. This should reveal a folder called “src”. Expand that as well. Now you will see a list of java packages that contain the various classes in the lib280-asn1 library. For this assignment, the classes we are interested in are in the lib280.list package, so click the triangle to expand it.
You should now see classes like LinkedList280 and BilinkedList280. The UML diagram below shows the class hierarchy you’ll be working with in this assignment. It may look a bit daunting at first, but you’ll soon see it’s not that complicated. There are four pairs of classes/interfaces (surrounded by light blue boxes¹). In each pair, there is one class for a singly-linked list and one for a doubly-linked list. The class/interface of each pair that pertains to doubly-linked lists extends the class/interface related to singly-linked lists.

¹ The light blue boxes in the UML diagram are only to show the pairs of classes that serve the same roles for singly-/doubly-linked lists and do not represent any actual grouping within lib280. All of the pictured classes are in the same package within lib280.

[UML diagram: list classes LinkedList280/BilinkedList280, iterator/cursor interfaces LinearIterator280/BilinearIterator280, iterator classes LinkedIterator280/BilinkedIterator280, and list node classes LinkedNode280/BilinkedNode280.]

LinkedNode280: The node class used for a singly-linked list.
BilinkedNode280: An extension of LinkedNode280 that adds the “previous node” reference required for nodes in a doubly-linked list.
LinearIterator280: An interface that defines the methods that must be supported by cursors and iterators that can step forwards over a linear structure, such as goFirst(), goForth(), after(), etc.
BilinearIterator280: An interface that extends LinearIterator280 by adding methods that allow stepping backwards, such as goBack() and goLast().
LinkedIterator280: An implementation of LinearIterator280 which is an iterator object for a singly-linked list. It is used by the LinkedList280 class to provide iterators.
BilinkedIterator280: An implementation of BilinearIterator280, and an extension of LinkedIterator280, which is an iterator object for a doubly-linked list. It is used by the BilinkedList280 class to provide iterators.
LinkedList280: A singly-linked list class.
It provides a cursor by implementing the LinearIterator280 interface. The nodes of the list are LinkedNode280 objects, and it can provide iterators of type LinkedIterator280.
BilinkedList280: A doubly-linked list class. It provides a cursor that can move both forwards and backwards by implementing the BilinearIterator280 interface. The nodes of the list are BilinkedNode280 objects, and it can provide iterators of type BilinkedIterator280.

Take a moment to familiarize yourself with these classes and their methods, particularly the LinkedList280 and LinkedIterator280 classes, as you will be working on coding extensions of these classes.

This section describes a bit more about how iterators work. Iterators provide the same functionality as a container ADT that has a cursor, but they are separate objects from the container. This allows us to record a cursor position that is different and independent from the position recorded by the container’s internal cursor. The list objects LinkedList280 and BilinkedList280 both have methods called iterator. The iterator method in the LinkedList280 class returns a new cursor position encapsulated in an instance of the LinkedIterator280 class. This instance will have references directly to the nodes of the LinkedList280 instance that created it. In essence, the LinkedIterator280 contains its own copies of the position and prevPosition fields that appear in LinkedList280 – i.e., another cursor that is external to the list! This cursor can be manipulated in exactly the same way as the internal cursor of the list. If you compare the methods in LinkedIterator280 to the methods of the same name in LinkedList280, you’ll see that they are almost identical. Thus, each time we want a new cursor that is independent of the list’s internal cursor, we can call the iterator method and get a new one. This adds additional flexibility.
If we can get away with just using the list’s internal cursor for our purposes, then we can do so, but we have the option to create more cursors in the form of iterators should we so desire.

Tractor Jack is a notorious pirate captain who sails the Saskatchewan River plundering farms for wheat, barley, and all the other grains. You may remember him from his exploits in CMPT 141 or CMPT 214. Jack wants to simulate the loading of cargo onto his ships so he can track how much of one type of grain is on a given ship, and to make sure that his ships are not overloaded.

In this problem you will be given a list-of-lists data structure. It will be a list of Ship objects, each of which contains a list of Sack objects. Each Sack object represents one sack of a particular type of grain. The type of grain in a sack is represented by a value of type Grain, where Grain is an enumeration. In this question we use a data type in Java called an enumeration to represent the type of grain in a sack. Enumerations define a fixed set of named constant values. The grains Jack most commonly plunders are wheat, barley, oats and rye, so he wants to count the amount of those four grains separately. Any other types of grain he wants to count together. We can use an enumeration to define five constants to denote what type of grain is in a sack:

enum Grain { WHEAT, BARLEY, OATS, RYE, OTHER }

This declaration defines a data type called Grain and five values which we can assign to variables of type Grain. You can find it at the top of Sack.java. Now we can write in Java:

Grain g = Grain.WHEAT; // Assign value WHEAT to the variable g

You’ll need to use one of the values from the Grain enumeration in task 3, below. You are provided with three Java files:

Sack.java: Contains the class Sack which is an object that represents a sack of grain. A sack of grain has a grain type, and a weight (in pounds).
This object is complete and you will not need to edit this file, but you should familiarize yourself with its data and methods. This file also contains the definition of the enumerated type Grain.

Ship.java: Contains the class Ship which represents a ship in Jack’s fleet. Each ship has a name, a capacity (weight in pounds), and contains a list of Sack objects which represent the ship’s cargo. This class contains two unfinished methods that you will write (see below). Familiarize yourself with the other methods and instance variables of this class.

CargoSimulator.java: This file contains the CargoSimulator class. This class generates the data you’ll be working with for this question. Its constructor generates a list of ships and fills them with sacks of grain. You’ll be writing some code in the main() method of this class (see below).

(This question was inspired by this song (click to link). Arrrr!)

1. In Ship.java, complete the isOverloaded method. This method must return the boolean value true if the ship is overloaded, and false otherwise. The ship is overloaded if the total weight of all sacks of grain in its cargo exceeds the ship’s capacity.
2. In Ship.java, complete the sacksOfGrainType method. This method must return the number of sacks of grain of the grain type indicated by its parameter that are in the ship’s cargo. That is, if the parameter type is Grain.WHEAT and the ship’s cargo contains 42 sacks of wheat, the method should return 42.
3. In the main() method of CargoSimulator.java, there is an instance of a CargoSimulator object called sim. As described above, this object contains a list of ships, and each ship contains its list of cargo. At the location indicated, print out how many sacks of wheat each ship in sim is carrying.
4. In the main() method of CargoSimulator.java, print out a message for each ship in the CargoSimulator instance sim that is overloaded. If a ship is not overloaded, print nothing.
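Tasks 1 and 2 above amount to a sum and a filtered count over the cargo list. The sketch below uses plain java.util types in place of the provided Ship/Sack classes, so the names and signatures are illustrative assumptions rather than the assignment's actual API:

```java
import java.util.*;

// Simplified stand-in for the Ship/Sack logic in tasks 1 and 2.
public class CargoSketch {
    enum Grain { WHEAT, BARLEY, OATS, RYE, OTHER }

    static class Sack {
        final Grain type; final int weight;   // weight in pounds
        Sack(Grain type, int weight) { this.type = type; this.weight = weight; }
    }

    // Task 1: a ship is overloaded if its total cargo weight exceeds capacity.
    static boolean isOverloaded(List<Sack> cargo, int capacity) {
        int total = 0;
        for (Sack s : cargo) total += s.weight;
        return total > capacity;
    }

    // Task 2: count the sacks whose grain type matches the parameter.
    static int sacksOfGrainType(List<Sack> cargo, Grain type) {
        int count = 0;
        for (Sack s : cargo) if (s.type == type) count++;
        return count;
    }
}
```

Tasks 3 and 4 then just loop over the ships in sim and call these two methods on each one.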
Upon inspection of the constructor for CargoSimulator, it may appear that the data is being randomly generated. The data is “randomly” generated, but a fixed random seed is used to ensure that the same random instance of the data is generated every time the program is run. Thus, you should expect that the data will always be the same, and that you will get the same answer every time. That said, it is possible that different computers and/or operating systems will generate different data. However, on the same machine within the same operating system, the same data will be generated every time. Here is what the output might look like. This demonstrates the form of the output only, and not the expected numbers. The exact numbers may depend on the random number generator for your operating system.

The Icebreaker is carrying 37 sacks of wheat.
The Salty Farmer is carrying 46 sacks of wheat.
The Bunnyhug is carrying 40 sacks of wheat.
The Blackstrap is carrying 37 sacks of wheat.
The Prairie Onion is carrying 40 sacks of wheat.
The Icebreaker is overloaded!
The Salty Farmer is overloaded!
The Blackstrap is overloaded!

For each of the following functions, give the tightest upper bound chosen from among the usual simple functions listed in Section 3.5 of the course readings. Answers should be expressed in big-O notation.

(a) f1(n) = n log_2 n + n^4 log_280 n + 2^n/42
(b) f3(n) = 0.4n^4 + n^2 log n^2 + log_2(2^n)
(c) f2(n) = 4n^0.7 + 29n log_2 n + 280

Suppose the exact time required for an algorithm A in both the best and worst cases is given by the function

TA(n) = (1/280)n^2 + 42 log n + 12n^3 + 280√n

(a) (2 points) For each of the following statements, indicate whether the statement is true or false.
1. Algorithm A is O(log n)
2. Algorithm A is O(n^2)
3. Algorithm A is O(n^3)
4. Algorithm A is O(2^n)

(b) (1 point) What is the time complexity of algorithm A in big-Θ notation?

If possible, simplify the following expressions.
Hint: See slide 11 of topic 4 of the lecture slides!

(a) O(n^2) + O(log n) + O(n log n)
(b) O(2^n) · O(n^2)
(c) 42O(n log n) + 18O(n^3)
(d) O(n^2 log_2 n^2) + O(m) (yes, that’s an ‘m’, not a typo; note that m is independent of n)

Consider the following pseudocode:

Algorithm roundRobinTournament(a)
  This algorithm generates the list of matches that must be played in a
  round-robin pirate-dueling tournament (a tournament where each pirate
  duels each other pirate exactly once).
  a is an array of strings containing names of pirates in the tournament

  n = a.length
  for i = 0 to n-1
    for j = i+1 to n-1
      print a[i] + " duels " + a[j] + ", Yarrr!"

(a) (6 points) Use the statement counting approach to determine the exact number of statements that are executed by this pseudocode as a function of n. Show all of your calculations.
(b) (1 point) Express the answer you obtained in part (a) in big-Θ notation.

Using the active operation approach, determine the time complexity of the pseudocode in question 5. Show all your work and express your final answer in big-Θ notation.

Your Tasks

The BilinkedList280 and BilinkedIterator280 classes in lib280-asn1 are incomplete. There are missing method bodies in each class. Each missing method body is tagged with a // TODO comment. Write code to implement each of these unfinished methods.

Implementation Notes

The javadoc headers for each method explain what each method is supposed to do³. Many of the methods you must implement override methods of the LinkedList280 superclass. Add your code right into the existing files within the lib280-asn1 module. When implementing the methods, consider carefully any special cases that might require you to update the cursor position, or to ensure that it remains in a valid state. You are not permitted to modify any existing code in the .java files given. You may only fill in the missing method bodies.

Your Tasks

Write a regression test for the BilinkedList280 class.
You only need to test the methods that you had to write in question 7. You may generate test cases using white-box, black-box, or a combination of both methods. Comment your regression test code. Each test case should be clearly identifiable from the comments, and the comments should indicate which method(s) you are testing, the purpose of the test, and the condition(s) under which you are testing it/them.

Implementation Notes

Again, write the code for this question within the existing BilinkedList280.java within the lib280-asn1 project. A function header for the regression test (main() function) has already been provided. Marks for this question are earned for generating and coding good tests, not for whether or not the methods being tested actually work. This means that you can still get full marks on this question even if the methods you were supposed to code in Question 7 don’t work.

³ The javadoc comments in these files are also good examples of how we will expect you to document methods that you write yourself in future assignments.

Files Provided

lib280-asn1: A copy of lib280.
Sack.java: Object representing a sack of grain for question 1.
Ship.java: Object representing one of Tractor Jack’s pirate ships for question 1.
CargoSimulator.java: Object for simulating the loading of cargo onto Tractor Jack’s ships for question 1.

You must submit:

Ship.java: Your completed Ship object for question 1.
CargoSimulator.java: Your completed CargoSimulator object for question 1.
assignment1.doc/docx/rtf/pdf/txt – your answers to questions 2 to 6. Acceptable file formats are Word (.doc or .docx), PDF (.pdf), rich text (.rtf), or plain text (.txt). Digital images of handwritten pages are also acceptable, provided that they are clearly legible and that they are in JPEG (.jpg or .jpeg) or PNG (.png) format. Other image formats are not accepted and will receive a grade of zero.
BilinkedList280.java: Your completed doubly-linked list class from question 7 and its regression test that you wrote for question 8.
BilinkedIterator280.java: Your completed iterator class from question 7.

Grading Rubric

The grading rubric can be found on Canvas.
WEB504 Introduction to Web Development

Part 1: Database Selection and Setup

Task 1: Research and document the core concepts of Google Firebase, focusing on its NoSQL structure and benefits.

Core concepts of Google Firebase

Google Firebase is a powerful backend-as-a-service platform that offers a suite of tools and services to help developers build, scale, and maintain web and mobile applications. It provides developers with easy-to-use features like real-time databases, authentication, hosting, storage, and machine learning capabilities (What Is Google Firebase? Everything You Need to Know in 2023, n.d.).

Core Services

Real-time data: With Firebase's real-time databases, data is updated instantly, ensuring a seamless user experience.
Databases: Firebase offers two cloud-hosted databases, Cloud Firestore and Realtime Database, for data storage and synchronization.
Authentication: Firebase Authentication provides easy-to-use UI libraries, backends, and SDKs for user authentication, supporting various providers like Google, Facebook, and Twitter.
Hosting: Firebase Hosting offers scalable hosting solutions for web applications and microservices.
Cloud Storage: This service allows developers to store and manage application resources and user-generated content securely. (What Is Google Firebase? Everything You Need to Know in 2023, n.d.)

NoSQL structure and benefits

Firebase's NoSQL database is a document-based database, rather than a traditional table structure. This makes it very suitable for storing unstructured or semi-structured data, and it is highly scalable. Firebase Firestore is Google’s serverless NoSQL database, which makes storing and retrieving data simple with minimal configuration. NoSQL databases have the allure of being flexible, schemaless, and familiar to work with. Since most NoSQL databases are document-oriented, the learning curve is quite low when you know JSON and objects in most programming languages (Bitton, 2024).

Benefits:
1. Realtime: Instead of typical HTTP requests, the Firebase Realtime Database uses data synchronization – every time data changes, any connected device receives that update within milliseconds. This provides collaborative and immersive experiences without thinking about networking code.

2. High scalability: Firebase automatically scales with your application, making it easy to accommodate user growth. Firebase's NoSQL databases, including Realtime Database and Firestore, can scale horizontally. Efficient read and write performance can be maintained even when the amount of data grows or the number of concurrent users increases. Instead of scaling up with bigger servers, NoSQL databases can scale out by adding commodity hardware. This gives them the ability to support increased traffic and meet demand with zero downtime. By scaling out, NoSQL databases can become larger and more powerful, which is why they have become the preferred option for evolving data sets (Why Do Developers Prefer NoSQL Databases?, 2020).

3. Flexibility: With SQL databases, data is stored in a much more rigid, predefined structure. But with NoSQL, data can be stored in a more free-form fashion without those rigid schemas. This design enables innovation and rapid application development. Developers can focus on creating systems to better serve their customers without worrying about schemas. NoSQL databases can easily handle any data format, such as structured, semi-structured, and unstructured data in a single data store (Why Do Developers Prefer NoSQL Databases?, 2020).

Task 2: Justify the selection of Firebase for the project, explaining how it supports the web solution's goals.

Reasons to choose Firebase

1. Real-time requirements: Firebase's real-time database function is suitable for scenarios that require data interaction. For example, the user comment function on your personal website can be realized through Firebase to display the comments in real time.
Firebase can provide this function through the real-time database.
2. User authentication: Firebase's user authentication module makes it easy to implement functions such as user registration and login. It provides multiple authentication methods, such as email and mobile phone number. This ensures data security.
3. Convenience: Firebase provides a Local Emulator Suite for integrating and testing various features without incurring additional costs (What Is Google Firebase? Everything You Need to Know in 2023, n.d.).
4. Compatibility: Firebase is highly compatible and can be easily used with technologies such as HTML, CSS, and JavaScript. This reduces barriers, which makes it a good fit for my web solution.
Task 3: Set up Firebase in your project and provide detailed documentation of the integration process.
Task 4: Provide code snippets and screenshots demonstrating the Firebase connection and successful integration.
Create a Firebase project: Log in to the Firebase console and click the "Add Project" button. Enter a project name and click "Create Project" according to your needs.
Install Firebase SDK: In the terminal, install the Firebase SDK via npm.
Firebase configuration: Created the firebaseconfig.js file to store configuration information.
Part 2: Database Integration and Real-Time Data Operations
Task 1
GR5242 HW01 Problem 1: Basics Instructions: This problem is an individual assignment -- you are to complete this problem on your own, without conferring with your classmates. You should submit a completed and published notebook to Courseworks; no other files will be accepted. Description: The goal of this problem is to get you familiar with neural network training from end to end. Our main tool is torch, especially torch.nn and torch.optim, which help us with model building and automatic differentiation / backpropagation. There are 4 questions in this notebook, including 3 coding questions and 1 text question. Each coding question expects 1~3 lines of code, and the text question expects just 1 sentence of explanation. In [ ]: # PyTorch imports: # # torch is the base package, nn gives nice classes for Neural Networks, # F contains our ReLU function, optim gives our SG method, # DataLoader allows us to do batches efficiently, # and torchvision is for downloading MNIST data directly from PyTorch import torch from torch import nn from torch.nn import functional as F import torch.optim as optim from torch.utils.data import DataLoader import torchvision.transforms as transforms import torchvision.datasets as datasets # Helper libraries import numpy as np import matplotlib.pyplot as plt print(torch.__version__) Dataset We will be working on the MNIST dataset, which contains images of handwritten digits 0-9 and corresponding labels. We have it set up to download the data directly from the torch library. In [ ]: # First, we will define a way of transforming the dataset automatically # upon downloading from pytorch # first convert an image to a tensor and then scale its values to be between -1 and 1 transform
= transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)),]) # Next, we fetch the data mnist_train = datasets.MNIST(root='./data', train=True, download=True, transform=transform) mnist_test = datasets.MNIST(root='./data', train=False, download=True, transform=transform) # and define our DataLoaders train_loader = DataLoader(mnist_train, batch_size=32, shuffle=True) test_loader = DataLoader(mnist_test, batch_size=32, shuffle=True) Each image is represented as a 28x28 matrix of pixel values, and each label is the corresponding digit. Let's show an image of a random one! Try running the below cell a few times to see different examples and how the DataLoaders will be shuffling batches. Note: Why is this random, when there is no random code in the next cell? The randomness comes from shuffle=True in the train_loader ! In [ ]: inputs, classes = next(iter(train_loader)) plt.imshow(inputs[23].squeeze()) plt.title('Training label: '+str(classes[23].item())) plt.show() Let's now show 25 of them in black and white: In [ ]: plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(inputs[i].squeeze(), cmap=plt.cm.binary) plt.xlabel(classes[i].item()) plt.show() By printing out the shapes, we see there are 60,000 training data and 10,000 test data. Each image is represented as a 28x28 matrix of pixel values, and each label is the corresponding digit. 
In [ ]: # For training data train_data_example, train_label_example = mnist_train[0] print("Shape of a single training image:", train_data_example.shape) # For test data test_data_example, test_label_example = mnist_test[0] print("Shape of a single test image:", test_data_example.shape) # The total number of images in each dataset print("Total number of training images:", len(mnist_train)) print("Total number of test images:", len(mnist_test)) Recap of classification task In a classification task with K classes, suppose the predicted logits for an image are s1, ..., sK. The predicted probabilities are then pk = exp(sk) / (exp(s1) + ... + exp(sK)), for k = 1, ..., K. The CrossEntropy (CE) loss is defined as CE = -(t1 log p1 + ... + tK log pK), where tk = 1 if the image belongs to the k'th class and tk = 0 otherwise. Model Now, we will build a model to predict the logits of images for the classification task. Question 1: Building the Model In the following, we will write a class for a basic one-hidden-layer, ReLU, feedforward network. There are a few components to a model in Pytorch, and we will break them down step by step. First, we need to define the class. As with any class definition, we start with an __init__ method. Since Pytorch provides us with many useful features within the torch.nn.Module class, we will use inheritance to pass these down to our Net class. This involves putting nn.Module inside the parenthesis in the class definition, and a super().__init__() call in the __init__() method. Within the initialization, we then define two layers: one hidden layer with 128 neurons, and one output layer with 10 class logits. The hidden layer should take an input of size 28 x 28 and give an output of size 128, while the output layer takes input of size 128 and gives output of size 10. It is suggested to use the nn.Linear() object to accomplish this, which applies a transformation z = xW^T + b. Next, we define a special method called forward(), which defines how data propagate through the model.
This method will be called either by model.forward(x) or by model(x), and is where Pytorch looks for the information for its automatic derivative computation capabilities. In the forward method, we will first reshape our image img using img.view(). Then, we will apply the hidden layer (the one we defined) and the ReLU function F.relu. Finally, we apply the output layer and return our output. Importantly, do not apply SoftMax to the output just yet. We will handle that part later. In [ ]: class Net(nn.Module): def __init__(self): super(Net, self).__init__() ### YOUR CODE HERE ### # define hidden layer and output layer below: ###################### def forward(self, img): x = img.view(-1, 28*28) # reshape the image to be a single row # pass x through both layers, with ReLU in between ### YOUR CODE HERE ### ###################### return x model = Net() Question 2: Defining the Loss and Optimizer When training a torch model, typically you need to specify the following two items: optimizer: specifies a way to apply the gradient descent update of model parameters. We will use the optim.Adam optimizer with a learning rate of 0.001 in this example. loss_fn: the objective function to minimize over. In a classification task, the cross-entropy loss is used. Please fill in the optimizer with an appropriate learning rate lr, and choose an appropriate number of epochs (number of passes through the data) in the following code. Note: remember that the neural network outputs the logits instead of the class probabilities (why? answer the question below), and make sure to specify this in the loss function. In [ ]: loss_fn = nn.CrossEntropyLoss() ### YOUR CODE HERE ### ###################### Question 3: The neural network specified above does not output class probabilities, because the last layer of the neural network is a linear layer which outputs values ranging over (-∞, ∞).
Your choice of loss function above should take care of that, but what mathematical function maps these logit values to class probabilities? # YOUR ANSWER HERE # Training Now let's train the model for your chosen number of epochs. By the end of the training, you should expect an accuracy above 0.98. In each step, we need to: 1.) grab x and y from the batch (note that each batch is a tuple of x and y) 2.) zero the optimizer's gradients 3.) make a prediction y_pred 4.) call the loss_fn between y and y_pred 5.) backpropagate 6.) make the appropriate step calculated by the optimizer In [ ]: epochs = 10 for epoch in range(epochs): losses = [] accuracies = [] for batch in train_loader: correct, total = 0, 0 x_batch, y_batch = batch optimizer.zero_grad() ### YOUR CODE HERE ### ###################### for index, output in enumerate(y_logit): y_pred = torch.argmax(output) if y_pred == y_batch[index]: correct += 1 total += 1 ### YOUR CODE HERE ### ###################### loss.backward() optimizer.step() losses.append(loss.item()) accuracies.append(correct/total) avg_loss = np.mean(np.array(losses)) avg_accuracy = np.mean(np.array(accuracies)) print('epoch ' + str(epoch+1) + ' average loss: ', avg_loss, '-- average accuracy: ', avg_accuracy) Test Evaluation Finally, we evaluate our model on the test set. You can expect the test accuracy to be slightly lower than the training accuracy. In [ ]: with torch.no_grad(): correct = 0 total = 0 for batch in test_loader: x_batch, y_batch = batch y_logit = model(x_batch) for index, output in enumerate(y_logit): y_pred = torch.argmax(output) if y_pred == y_batch[index]: correct += 1 total += 1 print('testing accuracy:', correct/total) Make Prediction Question 4: Fill in the following code block to estimate class probabilities and make predictions on test images. The results should be stored in class_probabilities and predicted_labels. Compare to the true labels, stored in true_labels, by computing the accuracy.
It should be the same as above. (Hint: you can use much of the same structure from the cell above. You can use F.softmax to calculate probabilities from the logits, and store the results however you please.) In [ ]: ### YOUR CODE HERE ### ######################## print('accuracy verification: ', sum(true_labels==predicted_labels)/len(true_labels))
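For reference, here is a minimal sketch of one possible way the pieces above fit together (model, loss, optimizer, and softmax prediction). The class name SketchNet and the dummy batch are illustrative assumptions, not the required solution:

```python
import torch
from torch import nn
from torch.nn import functional as F
import torch.optim as optim

# One-hidden-layer ReLU network: 28*28 inputs -> 128 hidden -> 10 logits.
class SketchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(28 * 28, 128)   # hidden layer
        self.output = nn.Linear(128, 10)        # class logits

    def forward(self, img):
        x = img.view(-1, 28 * 28)    # flatten each image to a row vector
        x = F.relu(self.hidden(x))   # hidden layer followed by ReLU
        return self.output(x)        # raw logits (no softmax here)

model = SketchNet()
loss_fn = nn.CrossEntropyLoss()                       # applies log-softmax internally
optimizer = optim.Adam(model.parameters(), lr=0.001)  # learning rate from the text

# Forward pass on a dummy batch; softmax turns logits into probabilities.
dummy = torch.zeros(4, 1, 28, 28)
logits = model(dummy)
probs = F.softmax(logits, dim=1)
preds = torch.argmax(probs, dim=1)
```

Note that CrossEntropyLoss expects logits, which is why the forward pass deliberately omits the softmax; softmax is only applied afterwards when probabilities are needed.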
CMPE1250 - ICA #8, Serial Communication Interface (SCI) Build a new library (compilation unit) according to the header "sci.h" provided. You are required to provide an implementation of the basic functions described in the header file. This assignment verifies the functions in the library and introduces you to SCI operation and interfacing with a terminal. Part 1 Locate and run a terminal. To begin with, use the 19200 baud communication rate and default settings for the rest (8 data bits, no parity, 1 stop bit, no flow control). Write a program that repeatedly receives a character from the terminal (i.e. the computer keyboard) and echoes it back to the terminal (i.e. the computer monitor). This will involve the use of sci0_rxByte() (non-blocking) and sci0_txByte(). If you are using Tera Term, go to Setup->Terminal and make sure the Local echo function is enabled. Can you run your code and explain what is happening and why? Try these other baud rates as an exercise: 38400, 115200. Can they all work at 8[MHz] bus speed? Explain why or why not. If one of them cannot run at 8[MHz], make sure you set your bus speed to 20[MHz]. Part 2 For this part, keep the baud rate at 19,200. Add code that will perform the following: - Pressing the LEFT switch will turn the RED LED ON and send the message: "LEFT pressed\r\n" to the sci0. Releasing the LEFT switch will turn the RED LED OFF and send the message: "LEFT released\r\n" to the sci0. - Pressing the CENTER switch will turn the YELLOW LED ON and send the message: "CENTER pressed\r\n" to the sci0. Releasing the CENTER switch will turn the YELLOW LED OFF and send the message: "CENTER released\r\n" to the sci0. - Pressing the RIGHT switch will turn the GREEN LED ON and send the message: "RIGHT pressed\r\n" to the sci0. Releasing the RIGHT switch will turn the GREEN LED OFF and send the message: "RIGHT released\r\n" to the sci0.
Please note that for this part to work properly you need to track the "state" of the switch, so it performs the operation only once per press and once per release. Why do we add "\r\n" at the end of the message? What happens if we do not add those characters, or if we add only one of them? Experiment and explain your conclusions.
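The press/release state tracking described above is essentially edge detection: remember the previous switch level and act only when it changes. A language-agnostic sketch in Python (illustrative only; the assignment itself is written in C against sci.h, and the names here are made up):

```python
# Edge-detection sketch: fire an action once per press and once per release.
def make_edge_detector(on_press, on_release):
    state = {"pressed": False}  # remembered switch state

    def update(raw_level):
        # raw_level is True while the switch is held down.
        if raw_level and not state["pressed"]:
            state["pressed"] = True
            on_press()            # rising edge: fires exactly once
        elif not raw_level and state["pressed"]:
            state["pressed"] = False
            on_release()          # falling edge: fires exactly once

    return update

events = []
left = make_edge_detector(lambda: events.append("LEFT pressed\r\n"),
                          lambda: events.append("LEFT released\r\n"))

# Simulate polling the switch: held for three samples, then released.
for level in [False, True, True, True, False, False]:
    left(level)
# events now holds exactly one "pressed" and one "released" message
```

The same pattern maps directly onto a C main loop: read the port, compare against the previous sample, and transmit over SCI only on a change.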
PPSY1PAC Self-Reflection Video Assignment Assignment 1: Self-Reflection Video (12% of final grade) Due date: See Moodle > PPSY1PAC > Assessments Relevant SILOs: 1. Apply an understanding of socio-cultural perspectives of psychology to human behaviour and experiences. 3. Demonstrate sensitivity and knowledge of diversity in cultural beliefs, practices, and communication styles. 4. Critically reflect on psychological assessment tools within a socio-cultural context. 5. Apply ethical guidelines governing appropriate academic conduct. Rationale: The sense of self is the foundation of individual psychology—how we function and relate to other people in social situations and subjectively experience life events. Psychologists have studied much about people’s self-concepts in different sociocultural contexts. They found self-concepts are both stable and flexible—shifting from one social context to another. Different methods are available to measure self-concepts. In PPSY1PAC, we will use the Twenty Statement Test (TST; Kuhn & McPartland, 1954) to measure your independent and interdependent (relational and collective) self-orientations. As discussed in Week 2’s lecture, researchers have found implications for strong self-orientation of a particular type. Also, cultural and gender differences have been found in one’s most prominent self-orientation (e.g., Kashima et al., 1995; Markus & Kitayama, 1991). This exercise will increase your awareness of the psychological implications of self-orientation that shape your experiences, including your social relationships and personal goals. Tasks: You will be required to submit a self-reflection video (5 minutes max). The Twenty Statement Test (TST) that you will complete in the first tutorial (Week 1) and subsequent discussions of the relevant materials in lectures and tutorials in Weeks 1-3 will facilitate your self-analysis and development of a video. 
Your presentation must use only one PowerPoint slide, be presented in a professional manner, and be recorded with Zoom. You may use a second PowerPoint slide for your reference list. General guidelines: • The presentation will be assessed on both content (80%) and manner (20%). The expectation is a “professional presentation”—how you may present yourself at a job interview. Give some thoughts on the impression of you that you want to achieve. • The content should include an answer to each question provided in the guideline, which will be discussed in tutorials (Weeks 1-3). The suggested readings listed below will provide you some empirical evidence to support your claims in the video. • You are encouraged to reflect on yourself in some depth and develop a self-narrative. Your confidentiality will be protected. Regardless, do not feel pressured to share aspects of the self you feel unprepared to share or deal with. It is important that you feel safe in making your video and submitting this as an assignment. This assignment likely gives you a novel opportunity to think about yourself from a new angle. • The length of your presentation is recommended to be 4-5 minutes (5 minutes max). Any content over 5 minutes will not be marked. Write your pitch before you start recording by using the Self-Reflection Video Worksheet on page 4. • Although your video is pre-recorded, your presentation will be evaluated as if it were presented live in the classroom. Your marker will not pause or rewind the video. Make sure you present at an appropriate pace to effectively communicate the content. • Start your Zoom presentation with a video camera. Introduce yourself and the aim of this assignment. Then, share your single PowerPoint slide during the whole presentation. How you use this slide is up to you. • If you refer to a publication in your presentation, you are expected to include a reference list on an extra slide at the end (formatted according to APA 7th style). 
Avoid using another author’s words exactly (i.e., a direct quote); use your own expression (i.e., paraphrasing) instead. Reference the source (both in-text citation and in the reference list) if you mention any idea(s) that are not your own. Refer to the APA style guide on referencing. • The Marking Rubric can be found on pages 5-7. • Submit your video early to allow enough time to resolve any technical issues. Make sure you check your video after submission as what you submit is what we mark. The coordination team is not responsible for any wrong file uploaded by the student. Suggested readings: Kashima, E. S., Hardie, E. A., Wakimoto, R., & Kashima, Y. (2011). Culture- and gender-specific implications of relational and collective contexts on spontaneous self-descriptions. Journal of Cross-Cultural Psychology, 42(5), 740-758. https://doi.org/10.1177/0022022110362754 (Read introduction and the description of the TST) Hardie, E., Kashima, E. S., & Pridmore, P. (2005). The influence of relational, individual and collective self-aspects on stress, uplifts and health. Self and Identity, 4(1), 1-24. http://dx.doi.org/10.1080/13576500444000146 (Read introduction) Cross, S. E., Hardin, E. E., & Gercek-Swing, B. (2011). The what, how, why, and where of self-construal. Personality and Social Psychology Review, 15(2), 142-179. https://doi.org/10.1177/1088868310373752 Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98(2), 224-253. https://doi.org/10.1037/0033-295X.98.2.224 Submission: Your Zoom video (.MP4 file) will need to be uploaded to the Moodle Assignment 1 Submission Link. Feedback: Written feedback is normally available within 3 weeks of the due date. General information: • If you have any questions about the assignment, please post them to the Discussion Forum on Moodle. For specific questions, please ask your teacher during class, consultation time, or via email.
• Please refer to the Subject Learning Guide or the Assessment tile on the subject Moodle for: • Academic integrity • Extension request and special consideration • Penalties for late submission • Refer to https://www.latrobecollegeaustralia.edu.au/about/policies-procedures-forms/ for specific policy.
BUMAN201A Business Maths and Statistics Semester 2, 2024 Assessment 4: Case Study Business report/Case study You have been provided data on the salaries of a random sample of staff at a large organisation. Complaints have been made that there is unfair pay for the staff who work in section A of the company. The random sample contains salary information, the section they work in (A or B), whether they have been promoted recently and what grade they received in the recent performance review (A, B or C). You are to perform an investigation into this claim of unfair pay using techniques that have been learned in this course. This includes hypothesis testing and linear models (i.e. everything from weeks 7 to 11 can be used). No outside methods should be used. Data location: All data is provided on the Moodle in an Excel spreadsheet titled "Data Assessment 4". There are 2 parts to this assessment. 1. Compute the average salary of your sample for sections A and B. You should see there is a difference. Use a two-sample Welch test or a linear model with a factor as a predictor to show that this difference is statistically significant. Whichever method you use is up to you, however all assumptions and the full hypothesis test must be shown. Note, your test should show that there is a difference. If you believe there is no difference you have made a mistake. 2. Now that you know that there is a difference, if the only significant reason for this difference is the section the staff work in, then that would be an indication of unfair pay. However, there may be a reason that explains this difference that would not be considered unfair. Using hypothesis tests or linear models or whatever else you like from the course, show either that there is no other explanation and therefore the pay is unfair between the sections, or what you believe to be the factors that led to the pay discrepancy.
Tips for part 2: An example of what you may want to do is to consider whether there is a difference in pay between people with different performance levels, and whether this difference is statistically significant with a hypothesis test. Or check whether there is a difference in promotion rates between sections A and B and whether this is statistically significant, and so on. There are multiple ways you can perform your analysis, so there is not one correct method. You will be awarded partial marks for your attempt even if your final conclusion is not correct, so long as your procedure is partially correct and valid and your understanding of the analysis you provided is good. Important information about your report. Structure Your report must be structured as a business report. This means it needs references, a conclusion, an executive summary, and the main body. However, as this is a mathematical business report, you need to include your mathematics in the main body as it is part of your argument. Your mathematics must be typeset using the equation editor or similar. You do not need to show all the detail of your working; you only need to show the important calculations, such as hypothesis statements, test statistics, standard error, degrees of freedom, p-values and so on. That is, only the main computations relevant to your argument (your hypothesis tests) go in the body. Any working out not relevant to your argument should be placed in an appendix to keep the main body readable, such as data means, variances, raw data tables, etc. Graphs and tables are important to use where possible as they help readability for the average person. This is a report to be read by business managers, not by mathematicians. The key part of your writing is to explain the meaning and interpretation of the analysis you have shown. Excel You do not need to submit your Excel working, however you should place a screenshot of some of your Excel in an appendix if you wish to refer to it in your report.
You will only submit your report as a Word doc or PDF. Use of AI The use of AI to help you with understanding is of course proper usage of AI. However, the use of procedurally generated text in the assessment is not. You will be heavily penalised for use of AI-generated text in your assessment.
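As an illustration of the two-sample Welch test mentioned in part 1, here is a sketch of the test statistic and the Welch–Satterthwaite degrees of freedom computed by hand in Python. The salary samples are made up for demonstration; your actual analysis would use the data from the spreadsheet:

```python
import math

# Hypothetical salary samples in thousands; NOT the assignment data.
section_a = [52.0, 55.0, 51.0, 58.0, 54.0, 53.0]
section_b = [61.0, 64.0, 60.0, 66.0, 63.0, 65.0]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # Unbiased sample variance (divide by n - 1).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(x, y):
    """Welch's two-sample t statistic and Welch-Satterthwaite df."""
    n1, n2 = len(x), len(y)
    v1, v2 = sample_var(x) / n1, sample_var(y) / n2
    t = (mean(x) - mean(y)) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

t_stat, df = welch_t(section_a, section_b)
# A large |t| relative to the t distribution with df degrees of
# freedom gives a small p-value, i.e. a significant difference.
```

Unlike the pooled-variance t-test, Welch's version does not assume the two sections have equal variances, which is why it is a common default for comparing group means.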
GR5242 HW01 Problem 2: Dropout as a form of regularization Instructions: This problem is an individual assignment -- you are to complete this problem on your own, without conferring with your classmates. You should submit a completed and published notebook to Courseworks; no other files will be accepted. Description: In this exercise we will try to understand the regularizing effects of the Dropout method, which was initially introduced in the deep learning context to mitigate overfitting, though we intend to study its behavior as a regularizer in a rather simpler setting. Regression Indeed, linear models correspond to one-layer neural networks with linear activation. Denote fβ(x) = xᵀβ to represent the output of such a network. Given n samples (xi, yi) we want to regress the response onto the observed covariates using the following MSE loss: L(β) = (1/n) Σi (yi − fβ(xi))². In the current atmosphere of deep learning practice, it is rather popular to have moderately large networks in order to learn a task (we will see more on this later in the course). This corresponds to having a large number of features in our setting, which allows more flexibility in our linear model. However, in these cases where the model can be too complicated, one can use explicit regularization to penalize complex models. One way to do so is ridge regression: Lridge(β) = (1/n) Σi (yi − fβ(xi))² + λ‖β‖². Question 1: Show that and Dropout We now present the connection between the dropout method and ridge regression (outlined in more detail in Wager et al.) To recap, dropout randomly drops units along with their input/output connections. We now want to apply this method to our simple setting. Let us define the indicator random variable Iij to be whether the j'th neuron is present or not in predicting the response of the i'th sample. More explicitly, the output of the network for the i'th sample becomes Σj Iij xij βj, where the Iij are drawn independently of the training dataset. Note that E[Iij] = 1, thus the output of the network is fβ(xi) on average.
Question 2: Write down explicitly the loss function after using the dropout, as a function of β and the indicators I, denoted by L(β, I). It can be shown that SGD + Dropout is in some sense equivalent to minimizing the loss function L(β, I) on average. Related to this point, the following problem justifies why dropout can be thought of as a form of regularization. Question 3: Suppose the feature matrix has standardized features (the norm of each column is one). Show that the solution to the problem min over β of E[L(β, I)] corresponds to a ridge regression with an appropriate penalty, where the expectation is over the randomness of the indicator random variables. Hint: You can assume that taking the derivative can pass through the expectation.
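The unbiasedness claim E[Iij] = 1 (and hence that the dropout output equals fβ(xi) on average) can be checked numerically. Below is a sketch using the usual inverted-dropout scaling, where each Iij is 0 with probability δ and 1/(1−δ) otherwise; the dimensions and drop probability are illustrative values, not part of the assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

p, delta = 8, 0.5          # number of features, drop probability (illustrative)
beta = rng.normal(size=p)  # fixed linear model coefficients
x = rng.normal(size=p)     # one sample's features

f_clean = x @ beta  # network output without dropout: f_beta(x) = x^T beta

# Draw many dropout masks: each entry is 0 with prob delta,
# else 1/(1 - delta), so that E[I_ij] = 1.
n_draws = 200_000
masks = rng.binomial(1, 1 - delta, size=(n_draws, p)) / (1 - delta)

# Dropout output for each mask: sum_j I_j * x_j * beta_j
f_dropout = (masks * x) @ beta

# Averaged over masks, the dropout output matches the clean output.
print(f_dropout.mean(), f_clean)
```

The mean over masks converges to f_clean, while the mask-to-mask variance is what Question 3 connects to the ridge penalty.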
GGR 315 Environmental Remote Sensing, 2024 Assignment 4: Radiometric Correction and Remote Sensing Applications Questions (100 marks, 10% of the final grade) 1. (15 marks) In ArcGIS, use the customized minimum/maximum linear stretch to enhance the image Toronto_2011.tiff. Provide images before (5 marks) and after (5 marks) the customized linear stretch (you can take screenshots). Discuss your understanding of the linear stretch in displaying the surface features (5 marks). 2. (50 marks, 10 for each) Conversion from DN to reflectance. The reflectance of ground objects at the surface level, ρ, can be derived using the following equation:

ρ = π (Ltot − Lp) / (T × E)    (1)

where E is solar irradiance on the object and is expressed as E0*cosθs, where θs is the solar zenith angle at the time of image acquisition; T is the transmissivity of the atmosphere; Lp is the path radiance (radiance due to atmosphere); Ltot is the radiance measured by a sensor. Ltot can be converted from the DN (digital number) of each band of a TM image using the equation:

Ltot = A0 + A1*DN    (2)

Images Toronto_2011_band3.tiff and Toronto_2011_band4.tiff have the DN values of bands 3 and 4 of a TM image. Given that A0, A1, T, Lp and E are constants for the whole image (see the following table), please convert the DN values of bands 3 and 4 into reflectance using the Raster Calculator in ArcGIS.

Band  A0    A1        T     E0 (W/(m²·μm))  Lp   θs (degree)
3     -4.5  0.639608  0.78  1060.0          2.0  41.4
4     -4.5  0.635294  0.80  714.0           1.5  41.4
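The Raster Calculator expression amounts to applying equations (1) and (2) cell by cell. As a sanity check (not a replacement for the ArcGIS workflow), here is a sketch in Python/numpy using the band-3 constants from the table above, applied to a few toy DN values:

```python
import math
import numpy as np

# Band 3 constants from the table above.
A0, A1 = -4.5, 0.639608        # DN -> radiance calibration coefficients
T = 0.78                       # atmospheric transmissivity
Lp = 2.0                       # path radiance
E0 = 1060.0                    # solar irradiance, W/(m^2 * um)
theta_s = math.radians(41.4)   # solar zenith angle

def dn_to_reflectance(dn):
    """Eq. (2): DN -> at-sensor radiance; then Eq. (1): radiance -> reflectance."""
    L_tot = A0 + A1 * dn              # equation (2)
    E = E0 * math.cos(theta_s)        # irradiance on the object
    return math.pi * (L_tot - Lp) / (T * E)   # equation (1)

dn = np.array([[30.0, 60.0], [90.0, 120.0]])  # toy DN values, not the real image
rho = dn_to_reflectance(dn)
```

Because every term except DN is a per-band constant, the same formula can be typed directly into the Raster Calculator with the band raster in place of dn.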
CAES1000 Core University English Writing Task 3 Report – Topic and Question Task 3: Writing a Well-structured and Well-argued Report (Assessed – Writing 35%; Annotations 5%) This task aims to provide an opportunity for you to apply the academic writing skills learnt in the course. These skills include: (i) expressing a clearly argued and critical stance and (ii) using the ideas from quality sources to support your stance through citation and referencing. Submission Deadline: 4 December 2024 (Wednesday 5:00 pm) (Upload your work to Turnitin on the Central Course Moodle) Submission Guidelines: • When you prepare for the assignment, you should read the Policy on Academic Integrity, Plagiarism, and GenAI use on the Central Course Moodle (click here). You may also refer to GenAI Module Unit 1 on the Avoiding Plagiarism Moodle (Part II). • You must submit a soft copy to Turnitin on the Central Course Moodle by 5:00 pm on the due date of the assignment. Submission on Turnitin by the deadline will be treated as the final version. The annotations should be shown properly on Turnitin and the uploaded file. Your teacher may require a hard copy of your submission. Please check with your class teacher. • You should leave enough time for plagiarism check and all kinds of technical problems or human errors (if any). Technical problems or human errors leading to any submission issues (e.g., late submission, unreadable text, wrong submission, or non-submission) cannot be used as a reason for penalty exemption. Any file (e.g., wrong submission, unreadable text, or incomplete submission) submitted by the deadline will be treated as the final version. It is your responsibility to check your submission very carefully. • You must NOT submit screenshots or image files of your writing. Penalty will be applied to these formats. • You can resubmit multiple times before the deadline for plagiarism check. After the deadline, you are only allowed to submit ONCE.
If you cannot submit your file, you need to contact your class teacher as soon as possible on the due date. • Following CAES rules, assignments which are handed in up to four days late without any medical certificate/legitimate reason will have one full letter grade deducted each day (e.g., a B- becomes a C- after one day late). If the assignment is submitted four days after the deadline without a medical certificate/a legitimate reason, it will be treated as a non-submission (N = 0 mark). It is up to the programme coordinator to decide whether such students should be given feedback on this assignment. • Students who do not submit an assignment at all or miss an assessment without a medical certificate should be given an N (= 0 mark). • If students are sick and unable to hand in an assignment, they must contact their teacher immediately before, NOT after, the deadline. Extension request AFTER the deadline shall NOT be entertained. No work after the deadline will be accepted without a legitimate reason. • You are given sufficient time to complete this assignment. Please manage your time well and check your submission file very carefully. Instructions: 1. You should write 1000-1200 words for this assignment (including all in-text citations). Anything beyond 1200 words will not be read. Write the number of words for your report at the end of the text. This does not include the words in the reference list and the words in annotations. 2. You should cite and reference the reading texts given to you (4 in total) in your report. You must also use TWO quality sources of your own choice to support your stance. One subgrade in C2 will be deducted if you do NOT use two quality sources of your own choice. Your reference list should include a maximum of 6 entries only. All extra or additional entries will be ignored. 3. You are NOT supposed to cite/quote any non-English materials in this assignment. CAES1000 is an English language course. Only English materials should be used. 
4. Include a reference list at the end of your writing which conforms to the CUE APA Citation and Referencing Style Guide (7th ed.) (click here to download the guide; also available on the Central Course Moodle, under the ‘Writing Assessments’ section). Whenever you have any doubts about citation and referencing, this style guide should serve as your first and major reference point. 5. Complete the Turnitin Independent Learning Task (a video on how to check for plagiarism) on the Central Course Moodle using the report you have just written. Analyse the Turnitin report and keep doing the task (e.g., effective paraphrasing and proper citation) until the document is plagiarism-free. 6. You should not solely rely on GenAI tools. You should critically evaluate the GenAI output, develop your ideas, and use your own words to express them. You should keep a proper record of your reading materials, notes, drafts and so on so that you have evidence to show how you arrive at the final submission. 7. Once the text is plagiarism-free, write 8-12 annotations on your text using ‘insert comments’. These annotations should highlight where you have applied your learning from this course. Each annotation must relate to a different feature of academic writing. You have to provide clear and enough details concerning what skills you have applied and the reasons for doing so in each annotation. Any one-word answers or very short phrases will not be sufficient. 8. Upload your work to Turnitin on the Central Course Moodle before the deadline. Teachers will only mark the submission on Turnitin by the deadline. All other channels or forms of submission (e.g., email submission / submission to an online drive) to your class teacher will not be accepted and will be treated as a non-submission (N = 0 mark). Assessment Criteria: You will be assessed on the quality of your report which is worth 35% of your final grade.
The assessment criteria are on the Central Course Moodle, under the ‘Writing Assessments’ section. You will also be assessed on the quality of your annotations. You must tell us in your annotations WHAT academic writing skills you have applied and WHY you have applied them. The skills may include the use of citation and referencing, cohesive devices, corpus for vocabulary, Generative AI, etc. This will be worth 5% of your final grade. The assessment criteria for annotations are as follows:
Satisfactory (5%): Your annotations show a good understanding of the academic writing skills being practised in the course. You have provided clear and sufficient details for each annotation.
Unsatisfactory: Only some of your annotations show a good understanding of the academic writing skills being practised in the course, or there are fewer than 8 annotations. Most of your annotations are unclear and without enough details, or you copied your annotations from another student.
Non-submission (0%): You did not annotate your text.
Task 3R: Below is an example of a paragraph / section with annotations: Older Adults
Cloud architecture for holiday search CP2422 case study group presentation This assignment is a group project in which you get to help a company transition to the cloud. Legacy systems need adapting to cloud paradigms, security needs to be considered, and a disaster recovery strategy is needed. You will get the opportunity to design an architecture, estimate its costs, ensure compliance and try to anticipate/mitigate potential problems. The work will be submitted as a group presentation, with Q&A, in the subject’s final tutorial session. There will also be two tutorial sessions allocated for students to work on the assignment, although additional time is expected to be put in outside of these. Subject Learning Outcomes “SLO2: Discuss and apply industry knowledge and best practices into specific case studies” is the main focus of this assignment, although it may also contribute to other SLOs. Prerequisites You will be using free online tools, teamwork and your own research to complete this assignment, so there are no technical prerequisites. You won’t need to deploy any cloud resources to complete this assignment. Groups will be assigned by the lecturer. Structure The assignment is presented as a case study that groups of up to five students will work together on. The case study describes a holiday booking company, its existing IT architecture and its challenges, along with the desired outcomes it has from making the transition to cloud. You are then expected to work on four things: 1. Re-defining their architecture for cloud and estimating costs. 2. Addressing security concerns using cloud tools and best practices. 3. Ensuring compliance with prevailing regulations regarding personal data and payment processing. 4. Considering resilience through disaster recovery and other availability-preserving measures. 
Your work will be documented in a slideshow presentation, which you then present to the class and lecturer, then you will be questioned on it and expected to defend your decisions. Submission It’s mandatory for all group members to be in attendance for their group presentation Q&A, although who and how many members actually present in the recording is at the group’s discretion. Pre-recorded presentations are required and must be submitted as an mp4 file, along with a copy of your slide deck in PDF format, via the assignment section on LearnJCU before the presentation session. DO NOT ZIP THESE FILES. Each group will have 15 minutes for their presentation. There is no slide/page limit on the presentation, but only content that is successfully covered in the presentation will be considered for marking, so be mindful of the time limit. A five-minute question and answer (Q&A) session follows the presentation, in which the lecturer will ask questions and the group must answer in defense of their work. Marking Marks will be allocated based on the grading rubric, holistically, meaning all parts are taken into consideration together when determining the mark. The defense of the submission, through the group’s answers to questions from the lecturer, will also contribute to the final mark. It is expected that each group member plays an equal role in the work. While people’s roles and contributions may be different, the effort should be similar. Marks will not be peer assessed or adjusted per individual, unless any formal concerns are raised about uneven contributions to the work. Ethics Please remember not to copy directly from other groups, past or present. If you use external sources, you must indicate clearly what they are and where you have used them. While good research will help you achieve a good mark, it is essential that you document your sources properly. 
See the student handbook for more details of JCU's ethics guidelines. Be particularly mindful of JCU’s guidelines on the use of Artificial Intelligence. The specifics of assignments are updated from term-to-term. Be advised that if you include content from an earlier version of the assignment that is no longer relevant, it will be considered academic misconduct and your team risks receiving zero marks or further proceedings. Support If you are having problems completing the assignment, there are various ways to get help: • Work closely with your group and share your problems with each other. • Ask in a lecture or tutorial session. • Send the lecturer an e-mail or a message via LearnJCU. • Schedule a consultation with the lecturer. • Use the conversations feature which is enabled for this assignment. People who ask more questions tend to achieve higher marks, so don't be afraid to use any and all of the above options! Case study Situation • You are a consultant for book.lah, a Singapore holiday booking site that helps users find rooms in hotels around the world. • You are tasked with helping the organization modernize its ICT infrastructure. • Somehow, book.lah has managed for years with simple co-located infrastructure that connects to various hotel chains in order to find rooms and prices for customers. • Different chains have different systems for handling availability and pricing, so book.lah maintains multiple ways of interfacing with these external systems. • Book.lah stores its customer data as well, allowing customers to set up searches and alerts, and to quickly book and pay for rooms. • The booked hotel later settles the bill with book.lah, so book.lah is an intermediary handler of the money. Legacy Architecture A single diagram is provided to explain the architecture of the current setup (Figure 1). 
The key details are: • The book.lah service is a monolith, meaning a single web-application handles everything, from collecting availability data, to servicing customer search requests, through to handling customer and later hotelier payments. • Two servers are co-located in a datacenter, with one acting as a stand-by and backup. If the active server encounters an issue, a load balancer directs traffic to the stand-by, which is then promoted to the active role. The failed server must be restored and re-synced with the new active server before it can enter its new role as stand-by. Figure 1: The architecture of the current co-located book.lah system On occasion, book.lah has encountered problems with this setup, and as post-COVID travel demand has surged, they have become concerned with the sustainability of their business, as well as their competitiveness compared to other, larger services providing similar features. Some of these problems include: • Integrating with new hotel groups requires adding new capabilities to the monolith, which is slow to do, expensive to test, and disruptive to deploy as it requires restarting the whole application. It is essentially infeasible to cater to individual independent hotels unless they already use a well-defined interface that is already implemented. • While one active server is generally adequate for site performance, there is a noticeable drop when scheduled activities happen, such as payment processing to hotels and periodic updates to room availability and prices. This can adversely affect end-user experience. • The reliability of the servers has been good, but they are approaching 5 years old and so more likely to fail, and are near the limit of traffic that they can handle. • In a single incident, the active server encountered an error during a software update, leading to the standby server taking over. 
However, it took several days to restore the failed server into a useful state, leaving book.lah at significant risk of a second issue taking them offline. • The Chief Information Officer has expressed concern at the lack of separation between customer data and hotelier data, as well as the lack of a proper backup plan, which currently just assumes that data is stored in duplicate on both servers. At the behest of the CIO, and with the blessing of the CEO, you have been engaged to see if the impending replacement of the ageing servers can be used as an opportunity to embrace the cloud, and realise additional benefits that may make the business more competitive and resilient. Instructions There are four equal parts to this work. Your group will collect a lot of information and ideas, but must refine this down into concise, well-visualised slides, while still referencing your sources. Part 1: Adoption journey [25%] Choose a cloud adoption framework [1] to follow and then explain how it can help the company adopt the cloud. Explain in a single slide how the key pillars/principles/etc of the chosen framework can be applied to the company. Part 2: Service selection [25%] Identify the cloud components needed to implement book.lah’s system in the cloud. • First, be general. Specify the type of service (compute, database, security, etc) and the service model that is used for it (IaaS, PaaS, SaaS). • Then, provide a table of equivalent products for three or more cloud providers, for each of the services you have decided to include in the new implementation. One of the providers should be the creator of the adoption framework you followed. • Finally, using the same provider who produced your adoption framework, draw a cloud architecture diagram using diagrams.net (or another of your preference) that includes all of these services. Highlight security features and design choices that support any of the three aspects of CIA. Explain how the architecture differs from Figure 1. 
• Provide sizing/quantity specifications for the services, and cost projections using your cloud provider’s cost estimation tool. Part 3: Compliance [25%] Based on the architecture from part two, visualise where the boundaries of responsibility are between book.lah and their cloud service provider, for each of the services in your proposed architecture. Show how the CSP can comply with any relevant regulations, such as PDPA regarding customer data and payment processing, referencing appropriate vendor and regulator documentation. Highlight areas that book.lah must continue to adhere to themselves. Part 4: Disaster recovery [25%] Supported by your architecture diagram and any information available from the CSP, explain how your design is resilient to failure. It is recommended to examine each individual component of the architecture and ask, “what happens if this piece fails?”, then see if the CSP provides a solution or if you must deal with it yourself. Secondly, define a step-by-step process for dealing with a catastrophe in the cloud. Choose one of: • A prolonged outage of the cloud provider in the company’s home region of Singapore. • A successful ransomware attack on one or more critical applications’ data sets. • A configuration update resulting in a critical database going offline. Briefly explain how you would respond to it, plausible RTOs/RPOs (Recovery Time/Point Objectives), and what architectural choices you made that may help make this response easier. Notes • Use the IEEE style of referencing [2]. • Include relevant references in footnotes on each slide, as well as a complete references list as a final slide. • Remember that you have 15 minutes for your video plus 5 minutes of questions. Be sure to rehearse before your recording, and be prepared to answer questions. • Avoid walls of text in your presentation. The assignment instructions have asked for diagrams and tables in places. 
There are lots of other places where graphs, figures and other visuals will be very useful too. Be creative! • For best marks, distribute work fairly, consult with each other, ask the lecturer questions, and consult the marking rubric (available in LearnJCU). • The three marking criteria will be evaluated in turn for each of the parts of the assignment. Note that this includes the slide content, your presentation performance, and any Q&A. • For the presentation video, remember: less is more. Do not cram content in or artificially speed up the video. It will make your presentation less comprehensible and harm your overall mark. References [1] V. Shreenivos and S. Kerrison, Lectures on Cloud and Data Center Security: Cloud Adoption Journey, James Cook University, 2021. [2] James Cook University, “IEEE Style Guide,” [Online]. Available: https://libguides.jcu.edu.au/IEEE. [Accessed July 2023].
Quantitative Analysis of Hydrological Processes Coursework Assessment Changes in land use and climate require regulators and river basin managers to develop experimental observational networks that are capable of monitoring the variability in meteorological and hydrological processes controlling the runoff generation processes relevant for water resources management. The design of monitoring networks is always a balance between scientific rationale, technological capabilities and financial possibilities (or restrictions). This practical assessment requires the development of an integrated hydro-meteorological and water quality monitoring network for the River Eden (Cumbria), which covers: • Precipitation • Evapotranspiration • Soil moisture • Stream discharge and water levels in the river and groundwater levels • Nutrient concentrations in surface waters and groundwater Catchment of the River Eden, Cumbria (figure: ALFA project website) Your task is to develop a catchment-wide monitoring system that enables a qualified assessment of the spatial patterns and temporal dynamics of the most important factors controlling the catchment water balance. The design of the monitoring for the different variables is to be coordinated so that measurements can ideally inform each other. The general design of this network, including a description of the rationale and alignment of the monitoring of all different hydro-meteorological parameters, should be described at the beginning of your report. The first part (within 250 words) should explain how the monitoring of hydro-meteorological and water quality parameters fits in with the overall network design. In the second part of the report (within 750 words), the specific design of the respective monitoring sub-systems shall be described and discussed with respect to efficiency and expected outcomes. 
If you are intending to use existing data from the existing synoptic networks, please justify their use and associated efforts. Use the lecture notes and information from the suggested reading to define the set of parameters that need to be monitored (including time intervals and spatial resolution) and describe the appropriate methodologies you have decided on, but also discuss the advantages/benefits of alternative methods/approaches. Make sure you take into consideration the expected variability of the catchment processes and how this could interfere with patterns and dynamics of the processes you want to monitor. The report should be max 1000 words, appropriately illustrated with graphs, charts, and tables. Although group discussion is allowed, this course project has to be completed individually. SOME FINAL ADVICE: The Experimental Hydrology WIKI (http://www.experimental-hydrology.net) represents a fantastic resource full of practical information on sensing technologies from a wide range of manufacturers and expert opinions on their capabilities and limitations.
CSCI 1230 Project 5: Lights, Camera 1. Introduction Figure 1: Realtime Pipeline In the Ray assignments, you implemented a ray tracer that projects a 3-dimensional scene onto a 2-dimensional plane. Ray tracing, as you have probably experienced, can be very slow. In this project you will begin building a realtime renderer using trimeshes, VAOs, and shaders. 2. Requirements 2.1. Parsing the scene Similarly to Ray, you will use the same scene parser from Lab 5: Parsing to read in scenefiles. You are expected to call your scene parser to get your metadata and set up the scene as you see fit when new scenes are loaded in. Note that you will no longer be using .ini files and will rather just use .json files directly! Refer to section 3.1 for more information on how to work with the parser and deal with scene changes in the codebase. 2.2. Shape tessellation In Lab 8: Trimeshes, you should have implemented tessellation for two shapes: cube and sphere. In this project, you are expected to also include tessellation for cone and cylinder. The descriptions of these shapes remain the same as in Project 3: Intersect: Cube: A unit cube (i.e. sides of length 1) centered at the origin. Sphere: A sphere centered at the origin with radius 0.5. Cone: A cone centered at the origin with height 1 whose bottom cap has radius 0.5. Cylinder: A cylinder centered at the origin with height 1 whose top and bottom caps have radius 0.5. Figure 3: Tessellated Cone Parameters Figure 4: Tessellated Cone with Parameters (1, 3). Note the cone tip normals are not perpendicular to the face. Parameter 1 controls tessellation along the latitude direction while parameter 2 controls tessellation along the longitude direction. Be especially careful when calculating normals for the tip of the cone: make sure they are always in line with the corresponding normal on the implicit cone, but NOT any face normal, which would result in flat shading, nor pointing straight up, which can be the edge case for your ray tracer. 
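To make the cone tip normal concrete, here is a small sketch (not part of the assignment stencil; the function name is hypothetical) that computes the unit side normal of the implicit cone described above (height 1, base radius 0.5, centered at the origin) at a given angle around the y-axis. The profile line from the base edge to the apex gives an outward normal whose y-component is radius/height = 0.5 before normalization; the tip vertex reuses this normal for its face's angle rather than a face normal or a straight-up vector.

```python
import math

def cone_side_normal(theta):
    """Unit-length side normal of the implicit cone (height 1, base radius
    0.5, centered at the origin) at angle theta around the y-axis.
    The 2D profile runs from (0.5, -0.5) to the apex (0, 0.5); its outward
    perpendicular is (1, 0.5), so the 3D normal before normalization is
    (cos(theta), 0.5, sin(theta))."""
    x, y, z = math.cos(theta), 0.5, math.sin(theta)
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)
```

In a trimesh, each tip vertex would use this normal evaluated at the angle of its triangle's base edge, which keeps the shading smooth around the tip instead of flat or degenerate.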
Correct Cone Tip Normal Diagram We recommend working on these shapes in your lab 8 stencil prior to porting them into Project 5: Lights, Camera. This will allow you to use the visualizer to debug positions and normals. While the specifics of your tessellation code are up to you, you are expected to design your program in an extensible, object-oriented way. This means minimal code duplication and no 400-line branch structures (such as if... else... statements). You will lose points if you do not follow these guidelines. Your shapes should never disappear when the tessellation parameters are too low! Be sure to set minimum tessellation parameters appropriate to each shape accordingly. 2.3. Camera Your camera for the raytracer only needed to produce a view matrix. For Project 5: Lights, Camera, you must produce both view and projection matrices given the scene file's camera parameters. The projection matrix is needed to convert from camera space to clip space for OpenGL to render the scene correctly. To implement your projection matrix, you may not use glm::perspective. Keep in mind you are able to edit the near & far plane distances in real-time. These are seen in the parameters settings.nearPlane and settings.farPlane. You will be expanding on your camera's functionality in the next project, so be sure to keep this in mind when implementing your camera. 2.4. Data Handling Welcome to the meat and potatoes of this project! You will use everything you have learned from lecture and labs to use the OpenGL pipeline to manipulate and keep track of scene data. You will take your parsed scene metadata and use it to construct all necessary VAO/VBO objects. Then you will use these in the main render loop of paintGL to finally render the scene, while integrating materials, global data, and light data as uniforms. As far as the design goes, some questions you might want to ask are: How will I represent my shape data in OpenGL? 
What do my VAOs and VBOs need to be able to do / How can I generalize what I did in lab 9? How will I use my parsed RenderData to draw my scene in paintGL? How many VBOs/VAOs will I need in each scene? Is it dependent on the scene? Please do not include more VBOs and VAOs than necessary! For example, two separate VBOs should not store the same data. Points will be deducted for excess memory usage. In general, filling realtime.cpp with all of your gl_____ calls is likely bad code design and will make debugging MUCH MORE DIFFICULT! 2.5. Shaders For this project, your shader program should have the following features: Support for directional lights Ambient, diffuse, and specular intensity computation Final color computation integrating both object and light color Support for up to 8 simultaneous lights (see subsection below) Think about how all of your scene data will integrate with your shaders! Which parts can you do in the Fragment shader, and which parts can you do in the Vertex shader? As such, it is important to have completed Lab 10: Shaders before attempting this part of the project. 2.5.1. Arrays and Structs in GLSL In Lab 10: Shaders, you learned about various uniforms to use CPU data in a shader. When dealing with multiple identical objects, the common approach is to immediately think about arrays. In GLSL, an array of vec3s looks like this: uniform vec3 myVectors[8]; Notice how this array is of fixed size 8, specified explicitly in code. This is because GLSL does not support dynamically sized arrays! And it's also why we require you to support an explicit number of lights. You can access element i in this array as follows: vec3 myithElement = myVectors[i]; The next question you may have is how to actually pass data into a uniform array. 
For example, to fill in the jth element of the array with the vector (x, y, z), you would write the following: GLint loc = glGetUniformLocation(shaderHandle, ("myVectors[" + std::to_string(j) + "]").c_str()); glUniform3f(loc, x, y, z); If you wish to get fancy, you can try using structs as well. They have to be first defined in the shader and then declared as a uniform in the following manner: struct AwesomeStruct { int favoriteNumber; vec3 favoriteColor; }; uniform AwesomeStruct myStruct; Accessing the member favoriteColor from the uniform is done as such: vec3 coolestColor = myStruct.favoriteColor; To set the color data to a vector (r, g, b), you would write the following: GLint loc = glGetUniformLocation(shaderHandle, "myStruct.favoriteColor"); glUniform3f(loc, r, g, b); If you wish to read more about uniforms in GLSL, check out this link! 2.5.2. Special tip about GLSL In GLSL, pow(x, y) is undefined for x < 0, or if x = 0 and y ≤ 0, so be careful of these cases as there may be times where shininess = 0! Check the official documentation to learn more. 2.6. Results Here are some sample images of what your realtime renderer should be capable of by the end of this assignment. Figure 5: phong_total.json Figure 6: recursiveCones4.json with far plane distance of 100 Figure 7: recursiveCones4.json with far plane distance of 20 Figure 8: recursiveCones4.json with far plane distance of 15 Figure 9: recursive_sphere_7.json (in real time!) with tessellation parameters of (12, 12) 3. Stencil Code You may notice the stencil code provided is minimal; that is by design. To complete this assignment, you will need to have a good understanding of the OpenGL pipeline. We have provided for you the following files which you will interact with: Realtime: A file containing the initialization of an OpenGL context as well as functions that are automatically called on certain events. These include: initializeGL, paintGL, and resizeGL. 
And you have already written the following: A scene parser (lab 5) A basic camera class (Ray projects) Cube and sphere classes (lab 8) Working with GLEW and OpenGL in Qt: If you are working in a file and need to use OpenGL functions, make sure to use #include <GL/glew.h>! Also, Qt Creator gives you access to the OpenGL context when calls stem from any of: Realtime::initializeGL(), Realtime::paintGL(), and Realtime::resizeGL(). If you wish to make OpenGL calls stemming from outside these functions (for example Realtime::sceneChanged()), you must call makeCurrent() first. 3.1. Loading Scenes As stated before, you will need to handle the loading of scenes using your scene parser from lab 5. We have provided for you a helper function in realtime.cpp for you to use for this purpose titled sceneChanged(). This function will be called whenever the "Upload Scene File" button is pressed and a .json file is selected. To get access to the current scenefile, you can use the settings object's sceneFilePath parameter. Important: We will not be working with .ini files in this project! Given the real-time nature of this project, settings and parameters will be controlled by interactive UI buttons and sliders instead. 3.2. Realtime::initializeGL(), Realtime::paintGL(), & Realtime::resizeGL() These functions are the "core" of a rendering system. They are overridden from the parent QOpenGLWidget class, if you are interested. initializeGL() is called once near the start of the program, after the constructor of Realtime has been called. It is also where you should set up any OpenGL-related information you may need, after the GLEW initialization calls as commented. Note that you cannot use any OpenGL-related functions in the constructor of this class as they are only available once GLEW has been initialized. paintGL() is called whenever the widget needs to be redrawn. You won't have to worry about this for this project, but keep this behavior in mind 
when you add interactivity in Project 6: Action!. resizeGL() is called whenever the window is resized. You will need to use the input width and height to correctly update your camera. 3.3. Realtime::finish() In OpenGL, we often use calls of the form glGen______. Just as with using the keyword new, we must delete this generated memory as well. This function, finish(), will be called just before the program exits, so be sure to use it to your advantage to avoid memory leaks! 3.4. Realtime::settingsChanged() In general, this function will be called any time a parameter of the settings is changed (via interacting with the left GUI bar) other than settings.sceneFilePath. For this project, the settings you will have to worry about are: settings.nearPlane: Should control your camera's near clipping plane. settings.farPlane: Should control your camera's far clipping plane. settings.shapeParameter1: Should control tessellation parameter 1 as described in section 2.2 above. settings.shapeParameter2: Should control tessellation parameter 2 as described in section 2.2 above. 3.5. Realtime::____event() Functions These functions are all for handling interactivity which you will implement in Project 6: Action!. So do not worry about these for now! 4. Scenes To assist with creating and modifying scene files, we have made a web viewer called Scenes. From this site, you are able to upload scenefiles or start from a template, modify properties, then download the scene JSON to render with your raytracer. We hope that this is a fun and helpful tool as you implement the rest of the projects in the course, which all use this scenefile format! For more information, here is our published documentation for the JSON scenefile format and a tutorial for using Scenes. 5. TA Demos Demos of the TA solution are available in this Google Drive folder titled projects_lightscamera_min. macOS Warning: "____ cannot be opened because the developer cannot be verified." 
Submission Your repo should include a submission template file in Markdown format with the filename submission-lights-camera.md. We provide the exact scenefiles you should use to generate the outputs. You should also list some basic information about your design choices, the names of students you collaborated with, any known bugs, and the extra credit you've implemented. For extra credit, please describe what you've done and point out the related part of your code. We have provided for you 4 different booleans in Settings for you to use for extra credit: extraCredit1, extraCredit2, extraCredit3, and extraCredit4. These are activated by their respective GUI checkboxes. If you implement any extra features that require a GUI "Extra Credit #" checkbox to be activated, please also document this accordingly so that the TAs won't miss anything when grading your assignment.
A Reference Guide to Data Visualization in Excel Box Plots and Scatter Plots: Analyzing Cereal Data Box Plot A box plot (or box-and-whisker plot) is a visual tool that shows the distribution of data across different quartiles, highlighting the median, spread, and outliers. It is especially helpful in identifying patterns, spread, and any unusual data points. Using a Box Plot with Cereal Data: • Example Variable: Sugar Content (grams) • Purpose: The box plot can show the distribution of sugar content across different types of cereals. • Interpretation: o Median: The line within the box shows the median sugar content of the cereals. o Interquartile Range (IQR): The box itself represents the middle 50% of the data, from the first quartile (Q1) to the third quartile (Q3). o Whiskers: These lines extend from the box to show the data range, excluding outliers. o Outliers: Points outside the whiskers indicate cereals with unusually high or low sugar content compared to the rest of the dataset. Using a box plot for sugar content can quickly reveal which cereals are high in sugar and if there are any brands that stand out with extreme values. Scatter Plot A scatter plot displays individual data points on an X-Y axis to show relationships between two variables. It is useful for identifying trends, clusters, and correlations between variables. Using a Scatter Plot with Cereal Data: • Example Variables: Sugar Content (X-axis) vs. Calories (Y-axis) • Purpose: To explore if there’s a correlation between sugar content and calorie count in cereals. • Interpretation: o Trend: Observe whether the points trend upwards or downwards. An upward trend would suggest that cereals with higher sugar content also tend to have higher calories. A downward trend or no clear trend might suggest there isn’t a strong relationship between sugar content and calorie count. o Clusters: Groups of points may indicate certain types of cereals (e.g., children’s vs. adult cereals). 
o Outliers: Points that fall far from the trend line could indicate cereals that are exceptions to the general pattern (e.g., low-calorie, high-sugar cereals). A scatter plot allows you to visually analyze if sugar content correlates with calories, helping to identify trends and potential health impacts. Creating a Box Plot in Excel 1. Prepare Data: Make sure your data for sugar content is in a single column, with the header in the first cell (e.g., Column A with "Sugar Content" as the header). 2. Select Data: Highlight the column with the sugar content data. 3. Insert Box Plot: o Go to the Insert tab on the Excel ribbon. o In the Charts group, click on Insert Statistic Chart. o Select Box and Whisker from the dropdown menu. 4. Customize the Box Plot (Optional): o Click on the chart to bring up the Chart Design and Format tabs. o Use Chart Design to add chart titles, labels, and customize the look of your box plot. o Add a title such as "Box Plot of Sugar Content in Cereals" to make the chart clear. Creating a Scatter Plot in Excel 1. Prepare Data: Ensure you have two columns with headers, such as Column A for "Sugar Content" and Column B for "Calories". 2. Select Data: o Highlight both columns, including headers. 3. Insert Scatter Plot: o Go to the Insert tab. o In the Charts group, click on Insert Scatter (X, Y) or Bubble Chart. o Choose Scatter from the options (the first icon). 4. Customize the Scatter Plot: o Click on the chart to bring up Chart Design and Format tabs. o Add titles and labels to help interpret the chart (e.g., "Scatter Plot of Sugar Content vs. Calories in Cereals"). o You can also add a Trendline by right-clicking on any data point, selecting Add Trendline, and choosing the best fit line option.
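The quantities behind both charts can also be computed directly, which is a useful sanity check on what Excel draws. Below is a minimal Python sketch using the standard library; the sugar and calorie values are hypothetical illustrative numbers, not real cereal data.

```python
import statistics

# Hypothetical sugar (grams) and calorie values for eight cereals;
# illustrative numbers only, not real nutrition data.
sugar    = [1, 3, 6, 9, 11, 12, 14, 20]
calories = [90, 100, 110, 120, 130, 140, 150, 200]

# Box-plot ingredients: quartiles, IQR, and the common 1.5*IQR outlier fences.
q1, median, q3 = statistics.quantiles(sugar, n=4, method="inclusive")
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in sugar if x < lower_fence or x > upper_fence]

# Scatter-plot ingredient: Pearson correlation between sugar and calories.
def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(sugar, calories)  # a value near +1 indicates an upward trend
```

A correlation near +1 corresponds to the upward trend described above, while points outside the fences are the outliers a box plot would draw beyond its whiskers.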
Mobile Technologies Coursework 2
Wireless Sensor Network Technology in Smart Farming

Background
This coursework builds on the skills and knowledge acquired in Weeks 1 to 10. Students will apply the principles of wireless communications, network infrastructure, IoT, and mobile communication architectures to a practical scenario. The coursework aligns with topics covered in the module, such as mobile technologies, networking, connectivity, radio propagation, multiple access schemes, and the evolution of wireless standards from 2G to 5G. Considerations for sensor networks, as explored in the material on wireless sensor networks and IoT applications, are also a significant component of this coursework.

Scenario
In the role of an IoT engineer, you are required to design a robust and efficient wireless sensor network for a vast smart farm measuring approximately 1000 m × 3000 m. The network's design must facilitate the collection of environmental data from numerous sensors strategically placed throughout the farm to ensure the health of crops and the efficiency of farm operations. A minimum of 200 environmental sensors, each with a 50 kbps data rate, are needed to achieve comprehensive coverage. Although a Wi-Fi network exists, its reach is inconsistent over the large area, and 5G coverage is available only at the farm's periphery. The network should operate autonomously for at least one year, and data must be accessible in the cloud.

Learning Outcome
To decide on and discuss the selection of an appropriate wireless technology for the smart farm's sensor network, considering the size of the farm, signal strength variability, and the partial availability of 5G coverage.

Tasks:
(a) Technology Evaluation and Selection
• Evaluate various wireless technologies covered in the course, considering the specific challenges of the smart farm's size and remote location.
• Discuss each technology's suitability for sensor data rates, energy consumption, and long-term autonomous operation.
• Decide on the most appropriate technology for this scenario, with a comprehensive justification referencing course material.

(b) Network Design and Justification
• Propose a detailed network design using the selected technology, accounting for the extensive area and uneven terrain of the farm.
• Justify the design based on connectivity principles, data transmission requirements, and environmental sensor distribution.
• Address the potential connectivity issues due to the varying Wi-Fi signal strength and limited 5G access.

(c) Critical Analysis and Implementation
• Conduct a critical analysis of the proposed technology, considering the scalability, reliability, and maintenance of the network.
• Outline a clear implementation strategy that reflects the course's teachings, including deployment phases, necessary infrastructure, and testing protocols.
• Develop a plan for integrating the sensor data with cloud services to ensure data availability and the possibility of real-time analytics.

Submission Guidelines
• The coursework report should not exceed 5 pages and must include diagrams, schematics, and tables where appropriate.
• References to the instructional material, concepts, and specific technologies discussed during the course are expected.
• The report must be formatted according to the provided template and submitted via the designated online portal by the deadline.
• Deadline: Monday, 27th May 2024

Marking Rubric
Technical Accuracy (30%): Essential, as students must demonstrate their grasp of the wireless technologies, network infrastructure, and IoT architectures covered in Weeks 1 to 10. The application of theoretical knowledge to a real-world scenario is central to this module, and the grading bands offer clear differentiation according to the students' level of understanding.
Innovation and Creativity (20%): The scenario demands that students design a network for a large smart farm, presenting unique challenges. This criterion encourages them to develop creative and innovative solutions that contribute functional enhancements.

Clarity and Organisation (20%): The clarity with which students communicate their ideas and the organisation of their report are crucial. The structure must be logical, aiding the clear presentation of their network design and rationale.

Implementation Feasibility (15%): The students' proposals must be pragmatic, reflecting the real-world constraints of the smart farm environment. This element of the rubric underscores the necessity for plans that are not only theoretical but also actionable within the context of a large and remote farm.

Evaluation Depth (15%): A critical skill for engineers is the ability to evaluate their work thoroughly and devise a comprehensive performance assessment plan. This section of the rubric stresses the importance of self-evaluation and the measurement of the network's effectiveness in practical terms.
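Before committing to a technology, the brief's headline numbers are worth sanity-checking with back-of-the-envelope arithmetic. In the sketch below, the farm dimensions, sensor count, and per-sensor data rate come directly from the scenario; the average current draw is a purely illustrative assumption (real figures depend on the chosen radio and duty cycle).

```python
# Sanity check on the scenario's headline numbers.
AREA_M2     = 1000 * 3000   # farm area: 1000 m x 3000 m (from the brief)
NUM_SENSORS = 200           # minimum sensor count (from the brief)
RATE_KBPS   = 50            # per-sensor data rate (from the brief)

# Total offered load if all sensors transmit at full rate simultaneously.
aggregate_kbps = NUM_SENSORS * RATE_KBPS

# Coverage area per sensor, and the equivalent square-grid spacing.
area_per_node = AREA_M2 / NUM_SENSORS
grid_spacing  = area_per_node ** 0.5

# One-year autonomy: battery capacity needed at an ASSUMED average draw.
AVG_DRAW_MA    = 5          # illustrative figure only, not from the brief
HOURS_PER_YEAR = 24 * 365
required_mah   = AVG_DRAW_MA * HOURS_PER_YEAR

print(f"aggregate load : {aggregate_kbps / 1000} Mbps")
print(f"area per node  : {area_per_node} m^2 (~{grid_spacing:.0f} m grid spacing)")
print(f"battery needed : {required_mah} mAh at {AVG_DRAW_MA} mA average")
```

Numbers like these feed directly into Tasks (a) and (b): the aggregate load and node spacing constrain which multiple-access schemes and link ranges are plausible, and the one-year energy budget shows why duty-cycling and low-power radio design dominate the technology choice.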