        Final Project
        CS-UY 4563 - Introduction to Machine Learning
        Overview
        • Partner with one student and select a machine learning problem of your choice.
        • Apply the machine learning techniques you’ve learned during the course to
        your chosen problem.
        • Present your project to the class at the semester’s end.
        Submission Requirements on Gradescope
        Submit the following on Gradescope by the evening before the first presentation (exact
        date to be announced):
        • Presentation slides.
        • Project write-up (PDF format).
        • Project code as a Jupyter Notebook. If necessary, a GitHub link is acceptable.
        • If using a custom dataset, upload it to Gradescope (or provide a GitHub link, if
        necessary).
        Project Guidelines
        Write-Up Requirements
        Your project write-up should include the following:
        1. Introduction: Describe your data set and the problem you aim to solve.
        2. Perform some unsupervised analysis (a minimal sketch of one possible workflow
        follows this item):
        • Explore patterns or structure in the data using clustering and dimensionality
        reduction (e.g., PCA).
        • Visualize the training data: [1]
        – Plot individual features to understand their distribution (e.g., histograms
        or density plots).
        – Plot individual features and their relationship with the target variable.
        – Create a correlation matrix to analyze relationships between features.
        • Discuss any interesting structure present in the data. If you don’t find any
        interesting structure, describe what you tried.
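
        A minimal sketch of one way to do this exploration in Python with pandas, matplotlib,
        and scikit-learn. The file name, target column, and feature name are placeholders for
        your own dataset.

            # Sketch: unsupervised exploration of the training data only.
            import pandas as pd
            import matplotlib.pyplot as plt
            from sklearn.cluster import KMeans
            from sklearn.decomposition import PCA
            from sklearn.preprocessing import StandardScaler

            train = pd.read_csv("train.csv")            # placeholder file name
            X = train.drop(columns=["target"])          # placeholder target column
            y = train["target"]

            # Per-feature distributions.
            X.hist(bins=30, figsize=(12, 8))
            plt.tight_layout()
            plt.show()

            # One feature against the target (placeholder feature name).
            plt.scatter(X["feature_1"], y, s=10)
            plt.xlabel("feature_1"); plt.ylabel("target")
            plt.show()

            # Correlation matrix between features.
            corr = X.corr()
            plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
            plt.colorbar()
            plt.xticks(range(len(corr)), corr.columns, rotation=90)
            plt.yticks(range(len(corr)), corr.columns)
            plt.show()

            # Cluster, then visualize the clusters on the first two principal components.
            X_scaled = StandardScaler().fit_transform(X)
            labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
            X_pca = PCA(n_components=2).fit_transform(X_scaled)
            plt.scatter(X_pca[:, 0], X_pca[:, 1], c=labels, s=10)
            plt.xlabel("PC 1"); plt.ylabel("PC 2")
            plt.show()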
        3. Supervised analysis: Train at least three distinct learning models [2] discussed
        in class (such as Linear Regression, Logistic Regression, SVM, Neural Networks,
        CNN). [3]
        For implementation, you may:
        • Use your own implementation from homework or developed independently.
        • Use libraries such as Keras, scikit-learn, or TensorFlow.
        For each model, [4] you must:
        • Try different feature transformations. You should have at least three transformations.
        For example, try polynomial features, PCA, or a radial-basis function kernel.
        For neural networks, different architectures (e.g., networks with varying
        numbers of layers) can also be considered forms of feature transformations, as
        they learn complex representations of the input data.
        • Use different regularization techniques. You should have at least 6 different
        regularization values per model. (A sketch of one way to organize this sweep follows
        the notes below.)
        Notes:
        [1] Do not look at the validation or test data.
        [2] You can turn a regression task into a classification task by binning, or, for the
        same dataset, select a different feature as the target for your model. Or you can use SVR.
        [3] If you wish to use a model not discussed in class, you must discuss it with me first,
        or you will not receive any points for that model.
        [4] Even if you get a very high accuracy, perform these transformations to see what happens.
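
        As one possible way to organize the required sweep (not the only way), the sketch below
        varies the feature transformation and the regularization strength for a single model
        (logistic regression) using scikit-learn pipelines. The transformations, the six C
        values, and the X_train / y_train / X_val / y_val splits are placeholders for your own
        choices.

            # Sketch: sweep feature transformations and regularization strengths
            # for one supervised model (logistic regression here).
            from sklearn.decomposition import PCA
            from sklearn.linear_model import LogisticRegression
            from sklearn.metrics import accuracy_score
            from sklearn.pipeline import make_pipeline
            from sklearn.preprocessing import PolynomialFeatures, StandardScaler

            transforms = {
                "identity": [StandardScaler()],
                "poly2":    [PolynomialFeatures(degree=2), StandardScaler()],
                "pca5":     [StandardScaler(), PCA(n_components=5)],
            }
            C_values = [0.001, 0.01, 0.1, 1, 10, 100]   # 6 regularization settings

            results = []
            for name, steps in transforms.items():
                for C in C_values:
                    model = make_pipeline(*steps, LogisticRegression(C=C, max_iter=5000))
                    model.fit(X_train, y_train)         # X_train, y_train: your training split
                    results.append({
                        "transform": name,
                        "C": C,
                        "train_acc": accuracy_score(y_train, model.predict(X_train)),
                        "val_acc":   accuracy_score(y_val, model.predict(X_val)),
                    })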
        4. Table of Results:
        • Provide a table with training accuracy and validation metrics for every model.
        Include results for the different parameter settings (e.g., different regularization
        values).
        – For classification, include metrics such as precision/recall.
        – For regression models, report metrics like MSE and R². For example, suppose
        you’re using Ridge Regression and manipulating the value of λ. In that
        case, your table should contain the training and validation metrics for
        every λ value you used.
        • Plot and analyze how performance metrics (like accuracy, precision, recall, MSE)
        change with different feature transformations and hyperparameters (e.g., regularization
        settings, learning rate). A sketch of one way to build this table and plot follows
        this item.
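
        Assuming a results list like the one collected in the earlier sweep sketch, the table
        and plot could be produced along these lines:

            # Sketch: summarize the sweep as a table and a plot.
            import pandas as pd
            import matplotlib.pyplot as plt

            df = pd.DataFrame(results)
            print(df.pivot(index="C", columns="transform", values="val_acc"))

            for name, group in df.groupby("transform"):
                plt.plot(group["C"], group["val_acc"], marker="o", label=name)
            plt.xscale("log")
            plt.xlabel("Regularization strength C")
            plt.ylabel("Validation accuracy")
            plt.legend()
            plt.show()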
        5. Analytical Discussion:
        • Analyze the experimental results and explain key findings. Provide a chart of
        your key findings.
        • Highlight the impact of feature transformations, regularization, and other
        hyperparameters on the model’s performance. Refer to the graphs provided in earlier
        sections to support your analysis. Focus on interpreting:
        – Whether the models overfit or underfit the data.
        – How bias and variance affect performance, and which parameter choices
        helped achieve better generalization.
        Presentation Guidelines
        • You and your partner will give a six-minute presentation to the class.
        • Presentations will be held during the last 2 or 3 class periods and during the final
        exam period for this class. You will be assigned a day for your presentation. If we
        run out of time the day you are to present your project, you will present the next
        day reserved for presentations.
        • Attendance during all presentations is required. A part of your project grade
        will be based on your attendance for everyone else’s presentation.
        Important Notes on Academic Integrity
        • Your submission will undergo plagiarism checks.
        • If we suspect you of cheating, you will receive 0 for your final project grade. See the
        syllabus for additional penalties that may be applied.
        Dataset Resources
        Below are some resources where you can search for datasets. As a rough guideline, your
        dataset should have at least 200 training examples and at least 10 features. You
        are free to use these resources, look elsewhere, or create your own dataset.
        • https://www.kaggle.com/competitions
        • https://www.openml.org/
        • https://paperswithcode.com/datasets
        • https://registry.opendata.aws/
        • https://dataportals.org/
        • https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research
        • https://www.reddit.com/r/datasets/
        • https://www.quora.com/Where-can-I-find-large-datasets-open-to-the-public
        Modifications
        • If you have a project idea that doesn’t satisfy all the requirements mentioned above,
        please inform me, and we can discuss its viability as your final project.
        • If you use techniques not covered in class, you must demonstrate your understanding
        of these ideas.
        Brightspace Submission Guidelines
        • Dataset and Partner: Submit the link to your chosen dataset and your partner’s
        name by October 30th.
        • Final Submissions: Upload your presentation slides, project write-up, and code to
        Gradescope by the evening before the first scheduled presentation. The exact date
        will be announced once the total number of projects is confirmed. (I expect the due
        date to be December 4th or December 9th.)
        Potential Challenges and Resources
        As you work with your dataset, you may encounter specific challenges that require additional
         techniques or tools. Below are some topics and resources that might be useful.
        Please explore these topics further through online research.
        • Feature Reduction: Consider using PCA (which will be covered in class). PCA is
        especially useful when working with SVMs, as they can be slow with high-dimensional
        data.
        If you choose to use SelectKBest from scikit-learn, you must understand why it works
        before you use it.
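
        A minimal sketch of this idea, reducing dimensionality before an SVM; the number of
        components and the SVM hyperparameters are illustrative, and X_train / y_train /
        X_val / y_val are your own splits.

            # Sketch: PCA before an SVM to keep training time manageable.
            from sklearn.decomposition import PCA
            from sklearn.pipeline import make_pipeline
            from sklearn.preprocessing import StandardScaler
            from sklearn.svm import SVC

            svm_pca = make_pipeline(StandardScaler(), PCA(n_components=20),
                                    SVC(kernel="rbf", C=1.0))
            svm_pca.fit(X_train, y_train)
            print(svm_pca.score(X_val, y_val))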
        • Creating Synthetic Examples: When using SMOTE or other methods to generate
        synthetic data, ensure that only real data is used in the validation and test sets.
        – If using synthetic data, make sure your validation set and test set mirror the true
        class proportions from the original dataset. A balanced test set for naturally unbalanced
        data can give a misleading impression of your model’s real-world performance.
        For more details, see: Handling Imbalanced Classes
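
        If you go this route, here is a minimal sketch with the imbalanced-learn package
        (imblearn), assuming a feature matrix X and labels y; the split sizes are illustrative.

            # Sketch: oversample the training split only; the validation split keeps
            # the original (real) class proportions.
            from imblearn.over_sampling import SMOTE
            from sklearn.model_selection import train_test_split

            X_train, X_val, y_train, y_val = train_test_split(
                X, y, test_size=0.2, stratify=y, random_state=0)
            X_train_res, y_train_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
            # Fit models on (X_train_res, y_train_res); evaluate on (X_val, y_val) only.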
        • Working with Time Series Data: For insights on working with time series data,
        visit: NIST Handbook on Time Series
        • Handling Missing Feature Values:
        – See Lecture 16 at Stanford STATS 306B
        – Techniques to Handle Missing Data Values
        – How to Handle Missing Data in Python
        – Statistical Imputation for Missing Data
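
        One common approach (a suggestion, not a requirement) is simple statistical imputation
        with scikit-learn, fit on the training split only; X_train and X_val are placeholders.

            # Sketch: impute missing values using statistics from the training split,
            # then apply the same fitted imputer to the validation split.
            from sklearn.impute import SimpleImputer

            imputer = SimpleImputer(strategy="median")      # or "mean", "most_frequent"
            X_train_imp = imputer.fit_transform(X_train)
            X_val_imp = imputer.transform(X_val)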
        • Multiclass Classification:
        – Understanding Softmax in Multiclass Classification
        – Precision and Recall for Multiclass Metrics
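
        As a small illustration of multiclass metrics in scikit-learn (y_val and y_pred are
        placeholders for your validation labels and model predictions):

            # Sketch: per-class and macro-averaged precision/recall for multiclass output.
            from sklearn.metrics import classification_report, precision_score, recall_score

            print(precision_score(y_val, y_pred, average="macro"))
            print(recall_score(y_val, y_pred, average="macro"))
            print(classification_report(y_val, y_pred))     # per-class breakdown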
        • Optimizers for Neural Networks: You may use Adam or other optimizers for
        training neural networks.
        • Centering Image Data with Bounding Boxes: If you are working with image
        data, you are allowed to use bounding boxes to center the objects in your images. You
        can use libraries like OpenCV (cv2).
        Tips
        Don’t forget to scale your data as part of preprocessing. Be sure to document any modifications
         you made, including the scaling or normalization techniques you applied.
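
        A minimal sketch of scaling without information leakage: fit the scaler on the training
        split only, then reuse it for the other splits (X_train and X_val are placeholders).

            # Sketch: fit the scaler on the training split only, then reuse it.
            from sklearn.preprocessing import StandardScaler

            scaler = StandardScaler()
            X_train_scaled = scaler.fit_transform(X_train)
            X_val_scaled = scaler.transform(X_val)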
        The following resource might be helpful. Please stick to topics we discussed in class or
        those mentioned above: CS229: Practical Machine Learning Advice
