        Project #2 for CEG5304: Generating Images through Prompting and Diffusion-based Models.
        Spring (Semester 2), AY 2023/2024
        In this exploratory project, you are to explore how to generate (realistic) images with diffusion-based models (such as DALL-E and Stable Diffusion) through prompting, in particular hard prompting. To recap the concepts of prompting, prompt engineering, LLVMs (Large Language Vision Models), and LMMs (Large Multi-modal Models), please refer to the Week 5 slides (“Lect5-DL_prompt.pdf”).
        Before beginning this project, please read the following instructions carefully; failure to comply with them may be penalized:
        1.This project does not involve compulsory coding. Complete your project in this given Word document file by filling in the “TO FILL” spaces, then save the completed file as a PDF for submission. Please do NOT modify anything (including these instructions) in your submission file.
        2.The marking of this project is based on how detailed your descriptions and discussions of the given questions are. To score well, please make sure your descriptions and discussions are readable and that adequate visualizations are provided.
        3.The marking of this project is NOT based on any evaluation criterion (e.g., PSNR) applied to the generated image. Generating a good image does NOT guarantee a high score.
        4.You may use ChatGPT/Claude or any online LLM service for polishing. However, using these services purely for answering the questions is prohibited (and is, in fact, very obvious). If it is suspected that you generated your answers wholesale with these online services, your assignment may be treated as plagiarism.
        5.Submit your completed PDF on Canvas before the deadline: 1759 SGT on 20 April 2024 (updated from the slides). Please note that the deadline is strict: late submissions will be deducted 10 points (out of 100) for every 24 hours.
        6.The report must be done individually. You may discuss with your peers, but NO plagiarism is allowed. The University, College, Department, and the teaching team take plagiarism very seriously. An originality report may be generated from iThenticate when necessary. A zero mark will be given to anyone found plagiarizing and a formal report will be handed to the Department/College for further investigation.

        Task 1: generating an image with Stable Diffusion (via Hugging Face Spaces) and comparing it with the objective real image. (60%)
        In this task, you are to generate an image with the Stable Diffusion model in Hugging Face Spaces. The link is provided here: CLICK ME. You can play with different prompts and negative prompts (prompts that instruct the model NOT to generate something). Your objective is to generate an image that looks like the following image:
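        If you prefer to script the generation rather than use the Spaces UI (entirely optional — this project requires no code), the Hugging Face `diffusers` library exposes the same prompt and negative-prompt controls. The sketch below is illustrative only: the checkpoint id, guidance scale, step count, and seed are assumptions of this example, not settings the assignment prescribes.

```python
# Optional sketch: generating with Stable Diffusion via the `diffusers` library.
# Checkpoint id, guidance scale, steps, and seed are illustrative assumptions.

def generate_image(prompt, negative_prompt="", seed=0):
    """Generate one image for `prompt`, steering away from `negative_prompt`."""
    # Heavy dependencies are imported lazily so the sketch reads standalone.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # an assumed public checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    # Fixing the seed makes before/after prompt comparisons fair.
    generator = torch.Generator("cuda").manual_seed(seed)
    result = pipe(
        prompt,
        negative_prompt=negative_prompt,
        guidance_scale=7.5,      # how strongly generation follows the prompt
        num_inference_steps=50,  # number of denoising steps
        generator=generator,
    )
    return result.images[0]      # a PIL.Image

# Example usage (needs a CUDA GPU plus the `torch` and `diffusers` packages):
#   image = generate_image(
#       "A Singaporean university campus with a courtyard.",
#       negative_prompt="blurry, distorted, low quality",
#   )
#   image.save("task1a.png")
```

        Saving each image together with the exact prompt and seed that produced it makes the side-by-side comparisons in 1b and 1c much easier to write up.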

        1a) First, select a rather coarse text prompt. A coarse text prompt may not include many details, but it should be a good starting point for generating images towards our objective. An example could be “A Singaporean university campus with a courtyard.”. Display your generated image and its corresponding text prompt (as well as the negative prompt, if applicable) below: (10%)
        TO FILL
        TO FILL
        1b) Describe, in detail, how the generated image compares to the objective image. You may discuss, for example, components of the objective image that are missing from the generated image, or anything generated that does not make sense in the real world. (20%)
        TO FILL
        TO FILL
        Next, you are to improve the generated image with prompt engineering. Note that you may still be unable to obtain the objective image. A good reference on prompt engineering can be found here: PROMPT ENGINEERING. 
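        One lightweight way to keep the iteration history that the next question asks for is to log every prompt change together with its rationale. The plain-Python sketch below (no model calls; all prompts, modifiers, and rationales are made-up examples) composes each refined prompt from a coarse base plus accumulated detail modifiers, so each iteration differs from the last in an explicit, documented way.

```python
# Sketch: logging prompt-engineering iterations for the write-up.
# All prompts, modifiers, and rationales here are illustrative examples.

def compose_prompt(base, modifiers):
    """Join a coarse base prompt with comma-separated detail modifiers."""
    return ", ".join([base] + modifiers)

base = "A Singaporean university campus with a courtyard"
modifiers = []
iterations = []

# Iteration 1: add style/quality terms because the first image looked flat.
modifiers.append("photorealistic, wide-angle photo")
iterations.append({
    "prompt": compose_prompt(base, modifiers),
    "negative_prompt": "",
    "rationale": "first image looked like a painting; ask for a photo",
})

# Iteration 2: add scene details missing from the objective image, and a
# negative prompt to suppress common artifacts.
modifiers.append("tropical trees, covered walkways, students walking")
iterations.append({
    "prompt": compose_prompt(base, modifiers),
    "negative_prompt": "blurry, warped buildings, extra limbs",
    "rationale": "courtyard lacked greenery and walkways; fix artifacts",
})

for i, it in enumerate(iterations, 1):
    print(f"Iteration {i}: {it['prompt']}")
```

        Pasting such a log next to the corresponding screenshots gives exactly the per-iteration evidence and reasoning the marking scheme rewards.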
        1c) Describe in detail how you improved your generated image. The description should display the generated images and their corresponding prompts, with detailed reasoning for each change in prompt. If the final improved image was generated over several iterations of prompt improvement, show each step in detail; i.e., display the result of each iteration of prompt change and discuss it. You should also compare your improved image with both the first image you generated above and the objective image. (30%)
        TO FILL
        TO FILL
        TO FILL
        Task 2: generating images with another diffusion-based model, DALL-E (mini-DALL-E, via Hugging Face Spaces). (40%)
        Stable Diffusion is not the only diffusion-based model capable of generating good-quality images; DALL-E is an alternative. However, here we will not discuss the technical differences between the two models, but rather the qualitative (subjective) differences between their generated images. The link for generating with mini-DALL-E is provided here: MINI-DALL-E.
        2a) First use the same prompt as in Task 1a and generate the image with mini-DALL-E. Display the generated image and compare it, in detail, with the one generated by Stable Diffusion. (10%)
        TO FILL
        TO FILL
        2b) As with Stable Diffusion, you are again to improve the generated image with prompt engineering. Describe in detail how you improved your generated image. As before, if the final improved image was generated over several iterations of prompt improvement, show each step in detail. The description should display the generated images and their corresponding prompts, with detailed reasoning for each change in prompt. You should compare your improved image with both the first image you generated above and the objective image.
        In addition, describe how this improvement process is similar to or different from the previous one with Stable Diffusion. (10%)
        TO FILL
        TO FILL
        2c) Based on the generation processes in Task 1 and Task 2, discuss the capabilities and limitations of image generation with off-the-shelf diffusion-based models and prompt engineering. You may further elaborate on possible alternatives or improvements that could generate images that are more realistic or closer to the objective image. (20%)
        TO FILL
        TO FILL
