        COMP52715 Deep Learning for Computer Vision & Robotics (Epiphany Term, 202**4)
        Summative Coursework - 3D PacMan
Coursework Credit - 15 Credits
Estimated Hours of Work - 48 Hours
Submission Method - via Ultra
        Release On: February 16 2024 (2pm UK Time)
        Due On: March 15 2024 (2pm UK Time)
        – All rights reserved. Do NOT Distribute. –
          Compiled on November 16, 2023 by Dr. Jingjing Deng

1 Coursework Specification
1. This coursework constitutes **% of your final mark for this module and comprises two mandatory tasks: Python programming and report writing. You must upload your work to Ultra before the deadline specified on the cover page.
2. The other 10% will be assessed separately, based on seminar participation. There are 3 seminar sessions in total, and marks are awarded as follows: (A) participating in none = 0%, (B) participating in 1 session = 2%, (C) participating in 2 sessions = 5%, (D) participating in all sessions = 10%.
3. This coursework is to be completed by students working individually. You should NOT ask your peers, lecturer, or lab tutors for help with the coursework. You will be assessed on your code and report submissions. You must comply with the University rules regarding plagiarism and collusion. Using external code without proper referencing is also considered a breach of academic integrity.
4. Code Submission: The code must be written in Jupyter Notebook with appropriate comments. For constructing deep neural network models, use the PyTorch1 library only. Zip the Jupyter Notebook source files (*.ipynb), your dataset (if any is new), pretrained models (*.pth), and a README.txt (code instructions) into one single archive. Do NOT include the original “PacMan Helper.py”, “PacMan Helper Demo.ipynb”, “PacMan Skeleton.ipynb”, “TrainingImages.zip”, “cloudPositions.npy” and “cloudColors.npy” files. Submit a single Zip file to the GradeScope - Code entry on Ultra.
5. Report Submission: The report must NOT exceed 5 pages (including figures, tables, references and supplementary materials) in a single-column format. The minimum font size is 11pt (use Arial, Calibri, or Times New Roman only). Submit a single PDF file to the GradeScope - Report entry on Ultra.
6. Academic Misconduct is a major offence which will be dealt with in accordance with the University’s General Regulation IV – Discipline. Please ensure you have read and understood the University’s regulations on plagiarism and other assessment irregularities as noted in the Learning and Teaching Handbook: 6.2.4: Academic Misconduct2.
                    Figure 1: The mysterious PhD Lab.
         1 https://pytorch.org/
        2 https://durhamuniversity.sharepoint.com/teams/LTH/SitePages/6.2.4.aspx

        2 Task Description (**% in total)
        2.1 Task 1 - Python Programming (40% subtotal)
In this coursework, you are given a set of 3D point-clouds with appearance features (i.e. RGB values). These point-clouds were collected using a Kinect system in a mysterious PhD Lab (see Figure 1). Several virtual objects are also positioned among those point clouds. Your task is to write a Python program that can automatically detect those objects in an image and use them as anchors to collect the objects and navigate through the 3D scene. If you land close enough to an object, it will be automatically captured and removed from the scene. A set of example images containing those virtual objects is provided. These example images are used to train a classifier (basic solution) and an object detector (advanced solution) using deep learning approaches in order to locate the targets. You are required to attempt both the basic and advanced solutions. “PacMan Helper.py” provides some basic functions to help you complete the task. “PacMan Helper Demo.ipynb” demonstrates how to use these functions to obtain a 2D image by projecting the 3D point-clouds onto the camera image-plane, how to re-position and rotate the camera, etc. All the code and data are available on Ultra. You are encouraged to read the given source code, particularly “PacMan Skeleton.ipynb”.
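The projection that “PacMan Helper Demo.ipynb” demonstrates can be pictured as a standard pinhole-camera projection of the coloured point-cloud. The sketch below is illustrative only: it assumes “cloudPositions.npy” and “cloudColors.npy” load as (N, 3) arrays, and the intrinsics, camera pose, and rasterisation are placeholder choices, not the values or interface used by “PacMan Helper.py”.

    import numpy as np

    # Assumed shapes: positions (N, 3) in world coordinates, colours (N, 3) RGB in [0, 1].
    points = np.load("cloudPositions.npy")
    colors = np.load("cloudColors.npy")

    # Hypothetical intrinsics (focal length f, image size W x H) and pose (R, t).
    f, W, H = 500.0, 640, 480
    K = np.array([[f, 0.0, W / 2],
                  [0.0, f, H / 2],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)

    cam = (R @ points.T).T + t               # world frame -> camera frame
    in_front = cam[:, 2] > 0                 # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective divide -> pixel coordinates

    # Naive rasterisation (no depth test) just to show the geometry involved.
    image = np.zeros((H, W, 3))
    u = np.clip(uv[:, 0].astype(int), 0, W - 1)
    v = np.clip(uv[:, 1].astype(int), 0, H - 1)
    image[v, u] = colors[in_front]

In practice you should rely on the provided helper functions for rendering; the sketch only shows the geometry they wrap.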
Detection Solution using Basic Binary Classifier (10%). Implement a deep neural network model that can classify an image patch into two categories: target object and background. You can use the given images to train your neural network. It can then be used in a sliding-window fashion to detect the target object in a given image.
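As a rough starting point, a small convolutional classifier in PyTorch is enough for the patch-level decision. The patch size (64x64), architecture and stride below are illustrative assumptions, not a required design.

    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        """Minimal patch classifier sketch: background vs target, 64x64 RGB input."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            )
            self.head = nn.Linear(64 * 8 * 8, 2)   # two logits: background, target

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def sliding_window_scores(model, image, patch=64, stride=32):
        """Score every window of an HxWx3 float tensor; returns (top, left, p_target)."""
        model.eval()
        results = []
        with torch.no_grad():
            for top in range(0, image.shape[0] - patch + 1, stride):
                for left in range(0, image.shape[1] - patch + 1, stride):
                    crop = image[top:top + patch, left:left + patch]        # HxWx3
                    logits = model(crop.permute(2, 0, 1).unsqueeze(0))      # 1x3xHxW
                    p_target = torch.softmax(logits, dim=1)[0, 1].item()
                    results.append((top, left, p_target))
        return results

Windows whose target probability exceeds a threshold can then be merged (e.g. non-maximum suppression over overlapping windows) to localise the object.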
Detection Solution using Advanced Object Detector (10%). Implement a deep neural network model that can detect the target object in an image. You may manually or automatically create your own dataset for training the detector. The detector will predict bounding boxes containing the object in a given image.
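One possible route for the advanced solution is to fine-tune an off-the-shelf detector from torchvision on a small dataset you annotate yourself. The snippet below sketches the standard torchvision fine-tuning recipe, assuming a single foreground class; any other PyTorch-based detector would serve equally well.

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    def build_detector(num_classes=2):
        """Faster R-CNN fine-tuning sketch; class 0 is background, class 1 the target."""
        # weights="DEFAULT" needs torchvision >= 0.13; older releases use pretrained=True.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    # Training: model(images, targets) returns a dict of losses, where each target
    # dict holds "boxes" (Nx4, xyxy pixels) and "labels" (N,) for one image.
    # Inference: model.eval(); model(images) returns per-image dicts with
    # "boxes", "labels" and "scores".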
Navigation and Collection Task Completion (10%). There are 11 target objects in the scene. Use the trained models to perform scene navigation and object collection. If you land close enough to an object, it will be automatically captured and removed from the scene. You may compare the performance of both models.
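A simple greedy strategy is often enough for the collection loop: render the current view, run the detector, and steer towards the most confident detection. The callables below are hypothetical placeholders: `detect` wraps one of the models above, and the other three stand in for whatever camera and scene functions “PacMan Helper.py” actually provides, so treat this only as a sketch of the control flow.

    def collect_targets(detect, get_camera_image, move_camera_towards,
                        count_remaining_targets, max_steps=500):
        """Greedy collection loop: look, detect, steer towards the best detection."""
        for _ in range(max_steps):
            if count_remaining_targets() == 0:
                break                               # all 11 objects collected
            image = get_camera_image()              # render the current camera view
            detections = detect(image)              # [{"box": (x1, y1, x2, y2), "score": p}, ...]
            if detections:
                best = max(detections, key=lambda d: d["score"])
                move_camera_towards(best["box"])    # landing close enough auto-collects
            else:
                move_camera_towards(None)           # nothing detected: rotate / explore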
Visualisation, Coding Style, and Readability (10%). Visualise the data and your experimental results wherever appropriate. The code should be well structured, with sufficient comments on the essential parts so that the implementation of your experiments is easy to read and understand. Check the “Google Python Style Guide”3 for guidance.
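For visualisation, a short matplotlib helper that overlays predicted boxes and scores on the rendered view is useful in both the notebook and the report. The detection format used below (a list of dicts with "box" in xyxy pixel coordinates and "score") is an assumption carried over from the sketches above.

    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    def show_detections(image, detections, title="Detections"):
        """Draw predicted boxes and confidence scores on an HxWx3 image array."""
        fig, ax = plt.subplots(figsize=(6, 5))
        ax.imshow(image)
        for det in detections:
            x1, y1, x2, y2 = det["box"]
            ax.add_patch(patches.Rectangle((x1, y1), x2 - x1, y2 - y1,
                                           fill=False, edgecolor="red", linewidth=1.5))
            ax.text(x1, y1 - 2, f'{det["score"]:.2f}', color="red", fontsize=8)
        ax.set_title(title)
        ax.axis("off")
        plt.show()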
        2.2 Task 2 - Report Writing (50% subtotal)
You will also write a report (maximum five pages) on your work, which you will submit to Ultra alongside your code. The report must follow the structure below:
Introduction and Method (10%). Introduce the task and contextualise the given problem. Include a few references to previously published work in the field, demonstrating an awareness of the relevant research. Describe the model(s) and approaches you used to undertake the task. Any decisions on hyper-parameters must be stated here, including the motivation for your choices where applicable. If the basis of your decision is experimentation with a number of parameters, then state this.
Results and Discussion (10%). Describe, compare and contrast the results you obtained with your model(s). Any relationships in the data should be outlined and pointed out here. Only the most important conclusions should be mentioned in the text; use tables and figures to support the section so that you do not have to describe every result fully. Describe the outcome of the experiments and the conclusions that you can draw from these results.
Robot Design (20%). Consider designing an autonomous robot to undertake the given task in the real scene. Discuss the foreseen challenges and propose your design, including the robot's mechanical configuration, the hardware and algorithms for robot sensing and control, and system efficiency, etc. Provide appropriate justifications for your design choices with evidence from the existing literature. You may use simulators such as “CoppeliaSim Edu” or “Gazebo” for visualising your design.
        3 https://google.github.io/styleguide/pyguide.html
         
Format, Writing Style, and Presentation (10%). Language usage and report format should be of a professional standard and meet academic writing criteria, with the explanation divided appropriately according to the structure described above. Tables, figures, and references should be included and cited where appropriate. A guide to citation style can be found in the library guide4.
        3 Learning Outcome
        The following materials from lectures and lab practicals are closely relevant to this task:
        1. Basic Deep Neural Networks - Image Classification.
        2. Generic Visual Perception - Object Detection.
        3. Deep Learning for Robotics Sensing and Controlling - Consideration for Robotic System Design.
        The following key learning outcomes are assessed:
        1. A critical understanding of the contemporary deep machine learning topics presented, and how these are applicable to relevant industrial problems and have future potential for emerging needs in both a research and industrial setting.
        2. An advanced knowledge of the principles and practice of analysing relevant robotics and computer vision deep machine learning based algorithms for problem suitability.
3. Written communication, problem solving and analysis, computational thinking, and advanced programming skills.
        The rubric and feedback sheet are attached at the end of this document.
4 https://libguides.durham.ac.uk/research_skills/managing_info/plagiarism
