


        COMP9417 - Machine Learning Homework 1: Regularized Optimization & Gradient Methods
        Introduction
        In this homework we will explore gradient-based optimization. Gradient-based algorithms have been crucial to the development of machine learning in the last few decades. The most famous example is the backpropagation algorithm used in deep learning, which is in fact just a particular application of a simple algorithm known as (stochastic) gradient descent. We will first implement gradient descent from scratch on a deterministic problem (no data), and then extend our implementation to solve a real-world regression problem.

        Points Allocation
        There are a total of 30 marks.

        Question 1 a): 2 marks
        Question 1 b): 4 marks
        Question 1 c): 2 marks
        Question 1 d): 2 marks
        Question 1 e): 6 marks
        Question 1 f): 6 marks
        Question 1 g): 4 marks
        Question 1 h): 2 marks
        Question 1 i): 2 marks
        What to Submit
        A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
        .py file(s) containing all code you used for the project, which should be provided in a separate .zip file.
        This code must match the code provided in the report.
        You may be deducted points for not following these instructions.
        You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

        You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file, or from using a tool such as nbconvert.
        We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions, and please do some basic research online before posting. Only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
        Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
        Please complete your homework on your own, do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
        As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
        You may not use SymPy or any other symbolic programming toolkits to answer the derivation questions. This will result in an automatic grade of zero for the relevant question. You must do the derivations manually.
        When and Where to Submit
        Due date: Week 4, Monday March 4th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
        Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
        Submission must be made on Moodle, no exceptions.
        Question 1. Gradient Based Optimization
        The general framework for a gradient method for finding a minimizer of a function f : ℝⁿ → ℝ is defined by

        x^(k+1) = x^(k) − α_k ∇f(x^(k)),        k = 0, 1, 2, . . . ,        (1)

        where α_k > 0 is known as the step size, or learning rate. Consider the following simple example of minimizing the function g(x) = 2√(x³ + 1). We first note that g′(x) = 3x²(x³ + 1)^(−1/2). We then need to choose a starting value of x, say x^(0) = 1. Let's also take the step size to be constant, α_k = α = 0.1. Then

        we have the following iterations:

        x^(1) = x^(0) − 0.1 × 3(x^(0))²((x^(0))³ + 1)^(−1/2) ≈ 0.7879
        x^(2) = x^(1) − 0.1 × 3(x^(1))²((x^(1))³ + 1)^(−1/2) ≈ 0.6353
        x^(3) ≈ 0.5273
        and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and compare it to the true minimum of the function, which is x* = −1). This idea works for functions that have vector-valued inputs, which is often the case in machine learning. For example, when we minimize a loss function we do so with respect to a weight vector, β. When we take the step size to be constant at each iteration, this algorithm is known as gradient descent. For the entirety of this

        question, do not use any existing implementations of gradient methods; doing so will result in an automatic mark of zero for the entire question.
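
        For reference, here is a minimal sketch of the quick exercise above (iterating on g(x) = 2√(x³ + 1) with a fixed step size); the variable names and the number of iterations are illustrative only.

        import numpy as np

        def g_prime(x):
            # derivative of g(x) = 2 * sqrt(x^3 + 1)
            return 3 * x**2 / np.sqrt(x**3 + 1)

        x = 1.0          # starting point x^(0)
        alpha = 0.1      # constant step size
        for k in range(1, 50):
            x = x - alpha * g_prime(x)
            print(f"x({k}) = {round(x, 4)}")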

        (a) Consider the following optimisation problem:

        min_{x∈ℝⁿ} f(x),        where        f(x) = (1/2)‖Ax − b‖₂² + (γ/2)‖x‖₂²,

        and where A ∈ ℝ^{m×n}, b ∈ ℝ^m are defined as

        A = [ −1   0   3   3
               2   0   0  −1
               2  −4  −2   7 ],            b = (−4, 3, 1)ᵀ,

        and γ is a positive constant. Run gradient descent on f using a step size of α = 0.01, γ = 2, and starting point x^(0) = (1, 1, 1, 1). You will need to terminate the algorithm when the following condition is met: ‖∇f(x^(k))‖₂ < 0.001. In your answer, clearly write down the version of the gradient steps (1) for this problem. Also, print out the first 5 and last 5 values of x^(k), clearly indicating the value of k. Does the algorithm converge to the true minimizer? Why/why not?

        What to submit: an equation outlining the explicit gradient update, a print out of the first 5 (k = 5 inclusive) and last 5 rows of your iterations. Use the round function to round your numbers to 4 decimal places. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
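
        As a rough sketch of the kind of loop this part asks for (not a model solution): assuming the gradient of f works out to Aᵀ(Ax − b) + γx, which you should verify yourself, a from-scratch implementation could be organised as below. The demo matrices are random placeholders; substitute the A and b given above.

        import numpy as np

        def grad_f(x, A, b, gamma):
            # gradient of f(x) = 0.5*||Ax - b||_2^2 + (gamma/2)*||x||_2^2
            return A.T @ (A @ x - b) + gamma * x

        def gradient_descent(A, b, gamma=2.0, alpha=0.01, tol=1e-3):
            x = np.ones(A.shape[1])              # starting point x^(0) = (1, ..., 1)
            iterates = [x.copy()]
            while np.linalg.norm(grad_f(x, A, b, gamma)) >= tol:
                x = x - alpha * grad_f(x, A, b, gamma)
                iterates.append(x.copy())
            return np.array(iterates)

        # illustration only: replace A_demo and b_demo with the A and b from the question
        rng = np.random.default_rng(0)
        A_demo, b_demo = rng.normal(size=(3, 4)), rng.normal(size=3)
        xs = gradient_descent(A_demo, b_demo)
        print(np.round(xs[:5], 4))               # first 5 iterates
        print(np.round(xs[-5:], 4))              # last 5 iterates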

        Consider now a slightly different problem: let y, β ∈ ℝᵖ and λ > 0. Further, we define the (p − 2) × p matrix

        W = [ 1  −2   1
                  1  −2   1
                       ·    ·    ·
                            1  −2   1 ],

        where blanks denote zero elements. (If it is not already clear: for the first row of W, W₁₁ = 1, W₁₂ = −2, W₁₃ = 1 and W₁ⱼ = 0 for any j ≥ 4; for the second row, W₂₁ = 0, W₂₂ = 1, W₂₃ = −2, W₂₄ = 1 and W₂ⱼ = 0 for any j ≥ 5; and so on.) Define the loss function:

        L(β) = (1/(2p)) ‖y − β‖₂² + λ ‖Wβ‖₂².        (2)

        The code that loads in the data needed for this problem is provided in code_student.py.

        Note, the t variable is purely for plotting purposes; it should not appear in any of your calculations.

        (b) Show that

        β̂ = argmin_β L(β) = (I + 2λp WᵀW)⁻¹ y.

        Update the following code (a copy of which is also provided in code_student.py) so that it returns a plot of β̂ and calculates L(β̂). Only in your code implementation, set λ = 0.9.

        def create_W(p):
            ## generate W, which is a (p-2) x p matrix as defined in the question
            W = np.zeros((p-2, p))
            b = np.array([1, -2, 1])
            for i in range(p-2):
                W[i, i:i+3] = b
            return W

        def loss(beta, y, W, L):
            ## compute the loss for a given vector beta, data y, matrix W and
            ## regularization parameter L (lambda)
            # your code here
            return loss_val

        ## your code here, e.g. compute beta_hat and its loss, and set other params

        plt.plot(t_var, y_var, zorder=1, color='red', label='truth')
        plt.plot(t_var, beta_hat, zorder=3, color='blue',
                 linewidth=2, linestyle='--', label='fit')
        plt.legend(loc='best')
        plt.title(f"L(beta_hat) = {loss(beta_hat, y, W, L)}")
        plt.show()

        What to submit: a closed form expression along with your working, a single plot and a screen shot of your code along with a copy of your code in your .py file.
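
        For orientation only, a minimal sketch of how a closed-form expression of this shape can be evaluated with NumPy, assuming y has already been loaded and W built with create_W; the function and variable names are illustrative, not required.

        import numpy as np

        def beta_hat_closed_form(y, W, lam):
            # solve (I + 2*lam*p*W^T W) beta = y rather than forming an explicit inverse
            p = len(y)
            return np.linalg.solve(np.eye(p) + 2 * lam * p * (W.T @ W), y)

        # example usage (assumes y and create_W are defined):
        # beta_hat = beta_hat_closed_form(y, create_W(len(y)), lam=0.9)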

        (c) Write out each of the two terms that make up the loss function, (1/(2p))‖y − β‖₂² and λ‖Wβ‖₂², explicitly using summations. Use this representation to explain the role played by each of the two terms. Be as specific as possible. What to submit: your answer, and any working either typed or handwritten.
        (d) Show that we can write (2) in the following way:

        L(β) = (1/p) Σ_{j=1}^{p} L_j(β),

        where L_j(β) depends on the data y₁, . . . , y_p only through y_j. Further, show that

        ∇L_j(β) = ( 0, . . . , 0, −(y_j − β_j), 0, . . . , 0 )ᵀ + 2λ WᵀW β,        j = 1, . . . , p.

        Note that the first vector is the p-dimensional vector with zero everywhere except for the j-th index. Take a look at the supplementary material if you are confused by the notation. What to submit: your

        answer, and any working either typed or handwritten.

        (e) In this question, you will implement (batch) GD from scratch to solve (2). Use an initial estimate β^(0) = 1_p (the p-dimensional vector of ones) and λ = 0.001, and run the algorithm for 1000 epochs (an epoch is one pass over the entire data, so a single GD step). Repeat this for the following step sizes:

        α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

        To monitor the performance of the algorithm, we will plot the value

        Δ^(k) = L(β^(k)) − L(β̂),

        where β̂ is the true (closed form) solution derived earlier. Present your results in a single 3 × 3 grid plot, with each subplot showing the progression of Δ^(k) when running GD with a specific step size.

        State which step-size you think is best in terms of speed of convergence. What to submit: a single

        plot. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
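
        The 3 × 3 grid of subplots can be produced with matplotlib's subplots; below is a minimal sketch of the plotting structure only. The run_gd function is a stand-in that returns fake, illustrative Δ^(k) values; replace it with your own from-scratch GD loop.

        import numpy as np
        import matplotlib.pyplot as plt

        step_sizes = [0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2]

        def run_gd(alpha, n_epochs=1000):
            # stand-in: should return the sequence Delta^(k) = L(beta^(k)) - L(beta_hat)
            return np.exp(-alpha * np.arange(n_epochs))    # fake values for illustration

        fig, axes = plt.subplots(3, 3, figsize=(12, 10))
        for ax, alpha in zip(axes.ravel(), step_sizes):
            ax.plot(run_gd(alpha))
            ax.set_title(f"alpha = {alpha}")
            ax.set_xlabel("k")
            ax.set_ylabel("Delta(k)")
        plt.tight_layout()
        plt.show()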

        (f) We will now implement SGD from scratch to solve (2). Use an initial estimate β^(0) = 1_p (the vector of ones) and λ = 0.001, and run the algorithm for 4 epochs (this means a total of 4p updates of β). Repeat this for the following step sizes:
        α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

        Present an analogous single 3 × 3 grid plot as in the previous question. Instead of choosing an index randomly at each step of SGD, we will cycle through the observations in the order they are stored in y to ensure consistent results. Report the best step-size choice. In some cases you might observe that the value of Δ^(k) jumps up and down, and this is not something you would have seen using batch GD. Why do you think this might be happening?

        What to submit: a single plot and some commentary. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
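
        A minimal sketch of the cyclic SGD loop is given below, using the expression for ∇L_j(β) stated in part (d). The synthetic y generated here is only a placeholder so the snippet runs on its own; swap in the provided data, and treat the structure as illustrative rather than prescribed.

        import numpy as np

        def create_W(p):
            W = np.zeros((p - 2, p))
            for i in range(p - 2):
                W[i, i:i + 3] = [1, -2, 1]
            return W

        def grad_Lj(beta, y, W, lam, j):
            # gradient of L_j from part (d): -(y_j - beta_j) e_j + 2*lam*W^T W beta
            g = 2 * lam * (W.T @ (W @ beta))
            g[j] -= y[j] - beta[j]
            return g

        p = 50
        rng = np.random.default_rng(0)
        y = rng.normal(size=p)           # synthetic stand-in for the provided data
        W, lam, alpha = create_W(p), 0.001, 0.01
        beta = np.ones(p)                # initial estimate beta^(0) = 1_p
        for epoch in range(4):           # 4 epochs = 4p single updates in total
            for j in range(p):           # cycle through the observations in order
                beta = beta - alpha * grad_Lj(beta, y, W, lam, j)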

        An alternative Coordinate Based scheme: In GD, SGD and mini-batch GD, we always update the entire p-dimensional vector β at each iteration. An alternative approach is to update each of the p parameters individually. To make this idea clearer, we write the loss function of interest L(β) as L(β₁, β₂, . . . , β_p). We initialize β^(0), and then solve, for k = 1, 2, 3, . . . ,

        β₁^(k) = argmin_{β₁} L(β₁, β₂^(k−1), β₃^(k−1), . . . , β_p^(k−1))
        β₂^(k) = argmin_{β₂} L(β₁^(k), β₂, β₃^(k−1), . . . , β_p^(k−1))
            ⋮
        β_p^(k) = argmin_{β_p} L(β₁^(k), β₂^(k), β₃^(k), . . . , β_p).

        Note that each of the minimizations is over a single (one-dimensional) coordinate of β, and also that as soon as we update β_j^(k), we use the new value when solving the update for β_{j+1}^(k), and so on. The idea is then to cycle through these coordinate-level updates until convergence. In the next two parts we will implement this algorithm from scratch for the problem we have been working on (2). A generic sketch of the scheme is given below.
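
        The following is only an illustration of the cyclic structure on a generic loss; the scipy minimize_scalar call is a numerical stand-in for whatever per-coordinate update you use (in parts (g) and (h) you will replace it with your own closed-form expression).

        import numpy as np
        from scipy.optimize import minimize_scalar

        def coordinate_descent(loss, beta0, n_cycles=10):
            # cycle through the coordinates, minimizing the loss over one coordinate
            # at a time while holding the others fixed at their latest values
            beta = beta0.copy()
            for k in range(n_cycles):
                for j in range(len(beta)):
                    def loss_along_j(b):
                        trial = beta.copy()
                        trial[j] = b
                        return loss(trial)
                    beta[j] = minimize_scalar(loss_along_j).x    # numerical stand-in
            return beta

        # tiny illustration on a separable quadratic; prints approximately [1, -2, 3]
        quad = lambda b: np.sum((b - np.array([1.0, -2.0, 3.0]))**2)
        print(coordinate_descent(quad, np.zeros(3)))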

        (g) Derive closed-form expressions for β̂₁, β̂₂, . . . , β̂_p, where for j = 1, 2, . . . , p:

        β̂_j = argmin_{β_j} L(β₁, . . . , β_{j−1}, β_j, β_{j+1}, . . . , β_p).

        What to submit: a closed form expression along with your working.

        Hint: Be careful, this is not as straight-forward as it might seem at first. It is recommended to choose a value for p, e.g. p = 8 and first write out the expression in terms of summations. Then take derivatives to get the closed form expressions.

        (h) Implement both gradient descent and the coordinate scheme in code (from scratch) and apply them to the provided data. In your implementation:
        Use λ = 0.001 for the coordinate scheme, and step size α = 1 for your gradient descent scheme.
        Initialize both algorithms with β = 1_p, the p-dimensional vector of ones.
        For the coordinate scheme, be sure to update the β_j's in order (i.e. 1, 2, 3, . . .).
        For your coordinate scheme, terminate the algorithm after 1000 updates (each time you update a single coordinate, that counts as an update).
        For your GD scheme, terminate the algorithm after 1000 epochs.
        Create a single plot of k vs Δ^(k) = L(β^(k)) − L(β̂), where β̂ is the closed-form expression derived earlier.
        Your plot should have both the coordinate scheme (blue) and GD (green)
        displayed and should start from k = 0. Your plot should have a legend.
        What to submit: a single plot and a screen shot of your code along with a copy of your code in your .py file.
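
        For the required comparison plot, a minimal matplotlib sketch is shown below; the two delta_* arrays are stand-ins for the Δ^(k) sequences produced by your own coordinate and GD runs.

        import numpy as np
        import matplotlib.pyplot as plt

        # stand-ins: replace with the actual Delta^(k) values from your two runs
        delta_coord = np.exp(-0.01 * np.arange(1001))
        delta_gd = np.exp(-0.005 * np.arange(1001))

        plt.plot(delta_coord, color='blue', label='coordinate scheme')
        plt.plot(delta_gd, color='green', label='GD')
        plt.xlabel('k')
        plt.ylabel('Delta(k)')
        plt.legend()
        plt.show()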

        (i) Based on your answer to the previous part, when would you prefer GD? When would you prefer the coordinate scheme? What to submit: Some commentary.

        Supplementary: Background on Gradient Descent
        As noted in the lectures, there are a few variants of gradient descent that we will briefly outline here. Recall that in gradient descent our update rule is

        β^(k+1) = β^(k) − α_k ∇L(β^(k)),        k = 0, 1, 2, . . . ,

        where L(β) is the loss function that we are trying to minimize. In machine learning, it is often the case that the loss function takes the form

        L(β) = (1/n) Σ_{i=1}^{n} L_i(β),

        i.e. the loss is an average of n functions that we have labelled L_i, and each L_i depends on the data only through (x_i, y_i). It then follows that the gradient is also an average of the form

        ∇L(β) = (1/n) Σ_{i=1}^{n} ∇L_i(β).

        We can now define some popular variants of gradient descent.

        (i) Gradient Descent (GD) (also referred to as batch gradient descent): here we use the full gradient, as in we take the average over all n terms, so our update rule is:

        β^(k+1) = β^(k) − (α_k / n) Σ_{i=1}^{n} ∇L_i(β^(k)),        k = 0, 1, 2, . . . .

        (ii) Stochastic Gradient Descent (SGD): instead of considering all n terms, at the k-th step we choose an index i_k randomly from {1, . . . , n}, and update

        β^(k+1) = β^(k) − α_k ∇L_{i_k}(β^(k)),        k = 0, 1, 2, . . . .

        Here, we are approximating the full gradient ∇L(β) using ∇L_{i_k}(β).

        (iii) Mini-Batch Gradient Descent: GD (using all terms) and SGD (using a single term) represent the two possible extremes. In mini-batch GD we choose a batch of size 1 < B < n randomly at each step, call its indices {i_{k₁}, i_{k₂}, . . . , i_{k_B}}, and then we update

        β^(k+1) = β^(k) − (α_k / B) Σ_{j=1}^{B} ∇L_{i_{k_j}}(β^(k)),        k = 0, 1, 2, . . . ,

        so we are still approximating the full gradient but using more than a single element as is done in SGD.
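
        To make the three variants concrete, here is a small sketch comparing the three gradient estimates for a generic average-of-terms loss; the squared-error term L_i used here is purely illustrative and unrelated to the homework data.

        import numpy as np

        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

        def grad_Li(beta, i):
            # gradient of an illustrative term L_i(beta) = 0.5 * (x_i^T beta - y_i)^2
            return (X[i] @ beta - y[i]) * X[i]

        def gd_direction(beta):
            # batch GD: average the gradient over all n terms
            return np.mean([grad_Li(beta, i) for i in range(len(y))], axis=0)

        def sgd_direction(beta):
            # SGD: a single randomly chosen term approximates the full gradient
            return grad_Li(beta, rng.integers(len(y)))

        def minibatch_direction(beta, B=10):
            # mini-batch GD: average over B randomly chosen terms
            idx = rng.choice(len(y), size=B, replace=False)
            return np.mean([grad_Li(beta, i) for i in idx], axis=0)

        beta = np.zeros(5)
        print(gd_direction(beta), sgd_direction(beta), minibatch_direction(beta), sep="\n")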