Image Processing

Learning goals

Welcome to the multimedia processing module. This handout contains information for the image processing tutorial, the individual assignment, and the group assignment. This module has the following learning goals:

  • Goal 1: Understand how image processing can be applied to inform the design of product-service systems or be integrated into such systems (i.e., what can we do using image processing techniques?)
  • Goal 2: Be able to critically reflect on model capability by understanding how input image quality and conditions can affect model performance (i.e., why do image processing techniques work or fail to work as expected?)
  • Goal 3: Understand how to automate an image processing pipeline using Python, document how the code works, and maintain good code quality
  • Goal 4: Understand how to use Teachable Machine to create a machine learning model and to design based on the generated model

Table of contents

  • Individual Assignment
  • Group Assignment
  • Grading Rubric (Group Assignment)

CS231n: Deep Learning for Computer Vision

Stanford, Spring 2023: Assignments

There will be three assignments which will improve both your theoretical understanding and your practical skills. All assignments will contain programming parts and written questions. For practical reasons, in office hours, TAs have been asked to not look at students’ code.

  • Assignment 1 (10%): Image Classification, kNN, SVM, Softmax, Fully-Connected Neural Network
  • Assignment 2 (20%): Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Nets, Network Visualization
  • Assignment 3 (15%): Image Captioning with Vanilla RNNs, LSTMs, Transformers, Generative Adversarial Networks

All assignments are due at 11:59 PM Pacific Time. All deadlines will be posted on Ed and on the Schedule page.

Assignments are submitted via Gradescope. You will be automatically added to the course on Gradescope before the start of the quarter. If that is not the case, please email us to sort it out. If you need to sign up for a Gradescope account, please use your @stanford.edu email address. Further instructions are given in each assignment handout. Do not email us your assignments.

For submission instructions, follow the steps listed on the appropriate assignment handout.

Late Policy

See the late policy on the home page.

Collaboration Policy

Study groups are allowed and students may discuss in groups. However, we expect students to understand and complete their own assignments. Each student must write down the solutions independently (without referring to written notes from the joint session) and hand in one assignment per student. If you worked in a group, please put the names of your study group at the top of your assignment. When in doubt about collaboration details, please ask us on Ed.

Honor Code: There are a number of solutions to assignments from past offerings of CS231n that have been posted online. We are aware of this, and expect that all work submitted by students will be their own. Like all other classes at Stanford, we take the student Honor Code very seriously.


COMP4901L Homework Assignment 1: Image Filtering and Hough Transform


Description

In this assignment you will implement some basic image processing algorithms and put them together to build a Hough Transform based line detector. Your code will be able to find the start and end points of straight line segments in images. We have included a number of images for you to test your line detector code on. Like most vision algorithms, the Hough Transform uses a number of parameters whose optimal values are (unfortunately) data dependent; that is, a set of parameter values that works really well on one image might not be best for another image. By running your code on the test images you will learn what these parameters do and how changing their values affects performance. Many of the algorithms you will be implementing as part of this assignment are functions in the Matlab Image Processing Toolbox. You are not allowed to call those toolbox functions in this assignment. You may, however, compare your output to the output generated by the toolbox to make sure you are on the right track.

Instructions

1. Integrity and collaboration: Students are encouraged to work in groups, but each student must submit their own work. If you work as a group, include the names of your collaborators in your write-up. Code should NOT be shared or copied. Please DO NOT use external code unless permitted. Plagiarism is strictly prohibited and may lead to failure of this course.
2. Start early! Especially those not familiar with Matlab.
3. Write-up: Your write-up should consist of three parts: your answers to the theory questions, the resulting images of each step (that is, the output of houghScript.m), and the discussion of your experiments. Please note that we DO NOT accept handwritten scans for your write-up in this assignment. Please type your answers to the theory questions and your discussions of the experiments electronically.
4. Submission: Your submission for this assignment should be a zip file, <ust login id>.zip, composed of your write-up, your Matlab implementations (including any helper functions), and your implementations and results for the extra credit (optional). Please make sure to remove the data/ and result/ folders, the houghScript.m and drawLine.m scripts, and any other temporary files you generated. Your final upload should have the files arranged in this layout:
   • <ust login id>.zip
     – <ust login id>.pdf
     – matlab/
       ∗ myImageFilter.m
       ∗ myEdgeFilter.m
       ∗ myHoughTransform.m
       ∗ myHoughLines.m
       ∗ any helper functions you need
     – ec/
       ∗ myHoughLineSegments.m
       ∗ ec.m
       ∗ your own images
       ∗ your own results
5. File paths: Please make sure that any file paths you use are relative and not absolute: not imread('/name/Documents/subdirectory/hw1/data/xyz.jpg') but imread('../data/xyz.jpg').

1 Theory questions

Type your answers to the following questions in your write-up. Each question should only take a couple of lines. In particular, the proofs do not require any lengthy calculations. If you are lost in many lines of complicated algebra, you are doing something much too complicated (or wrong).

Q1.1 Hough Transform Line Parametrization (20 points)

1. Show that if you use the line equation ρ = x cos θ + y sin θ, each image point (x, y) results in a sinusoid in (ρ, θ) Hough space. Relate the amplitude and phase of the sinusoid to the point (x, y).
2. Why do we parametrize the line in terms of (ρ, θ) instead of the slope and intercept (m, c)? Express the slope and intercept in terms of (ρ, θ).
3. Assuming that the image points (x, y) lie in an image of width W and height H, that is, x ∈ [1, W], y ∈ [1, H], what is the maximum absolute value of ρ, and what is the range of θ?
4. For the points (10, 10), (20, 20), and (30, 30) in the image, plot the corresponding sinusoids in Hough space and visualize how their intersection point defines a line. What is (m, c) for this line? Please use Matlab to plot the curves and report the result in your write-up.
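To build intuition for question 4, here is a minimal sketch of the three sinusoids in Python with matplotlib; the assignment itself asks for a Matlab plot, so treat this only as an illustration of the geometry:

    import numpy as np
    import matplotlib.pyplot as plt

    # Each image point (x, y) traces rho(theta) = x*cos(theta) + y*sin(theta),
    # a sinusoid in (theta, rho) space. Collinear points produce sinusoids
    # that all pass through the (theta, rho) pair of their common line.
    theta = np.linspace(0, 2 * np.pi, 720)
    for x, y in [(10, 10), (20, 20), (30, 30)]:
        rho = x * np.cos(theta) + y * np.sin(theta)
        plt.plot(theta, rho, label=f"point ({x}, {y})")

    plt.xlabel("theta (radians)")
    plt.ylabel("rho (pixels)")
    plt.legend()
    plt.show()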
2 Implementation

We have included a main script named houghScript.m that takes care of reading in images from a directory, making function calls to the various steps of the Hough transform (the functions that you will be implementing), and generating images showing the output and some of the intermediate steps. You are free to modify the script as you want, but note that the TAs will run the original houghScript.m while grading. Please make sure your code runs correctly with the original script and generates the required output images. Every script and function you write in this section should be included in the matlab/ directory. Please include the resulting images in your write-up.

Q2.1 Convolution (20 points)

Write a function that convolves an image with a given convolution filter:

function [img1] = myImageFilter(img0, h)

As input, the function takes a greyscale image (img0) and a convolution filter stored in matrix h. The output of the function should be an image img1 of the same size as img0 which results from convolving img0 with h. You can assume that the filter h is odd-sized along both dimensions. You will need to handle boundary cases on the edges of the image. For example, when you place a convolution mask on the top-left corner of the image, most of the filter mask will lie outside the image. One solution is to output a zero value at all these locations; the better thing to do is to pad the image so that pixels lying outside the image boundary have the same intensity value as the nearest pixel that lies inside the image. You can call Matlab's padarray function for this. However, your code cannot call Matlab's imfilter, conv2, convn, or filter2 functions, or any other similar functions; you may compare your output against them for debugging. This function should be vectorized: specifically, try to reduce the number of for loops that you use in the function as much as possible.
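As a language-neutral illustration of the pad-then-accumulate idea (not the required Matlab solution), here is a rough Python/NumPy sketch; the function name and the use of NumPy are our own:

    import numpy as np

    def my_image_filter(img0, h):
        """Convolve a greyscale image with an odd-sized filter h, replicating
        the nearest edge pixel for values outside the image boundary."""
        fh, fw = h.shape
        ph, pw = fh // 2, fw // 2
        padded = np.pad(img0, ((ph, ph), (pw, pw)), mode="edge")
        h_flipped = np.flip(h)  # true convolution flips the kernel
        out = np.zeros(img0.shape, dtype=float)
        H, W = img0.shape
        # Loop over the (small) kernel rather than over every pixel: each
        # term adds one shifted copy of the image, keeping the code vectorized.
        for i in range(fh):
            for j in range(fw):
                out += h_flipped[i, j] * padded[i:i + H, j:j + W]
        return out

Looping over the kernel instead of over the image is one common way to satisfy the vectorization requirement: the two remaining loops run only filter-height × filter-width times.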
Q2.2 Edge detection (20 points)

Write a function that finds edge intensity and orientation in an image, and display the output of your function for one of the images given in the handout:

function [img1] = myEdgeFilter(img0, sigma)

The function takes as input a greyscale image (img0) and a scalar (sigma); sigma is the standard deviation of the Gaussian smoothing kernel to be used before edge detection. The function outputs img1, the edge magnitude image. First, use your convolution function to smooth the image with the specified Gaussian kernel; this helps reduce noise and spurious fine edges in the image. Use fspecial to get the kernel for the Gaussian filter. The size of the Gaussian filter should depend on sigma (e.g., hsize = 2 * ceil(3 * sigma) + 1). The edge magnitude image img1 can be calculated from the image gradients in the x and y directions. To find the image gradient imgx in the x direction, convolve the smoothed image with the x-oriented Sobel filter. Similarly, find the image gradient imgy in the y direction by convolving the smoothed image with the y-oriented Sobel filter. You can also output imgx and imgy if needed.

In many cases, the high-gradient-magnitude region along an edge will be quite thick, but for finding lines it is best to have edges that are a single pixel wide. To this end, make your edge filter implement non-maximum suppression: for each pixel, look at the two neighboring pixels along the gradient direction, and if either of those pixels has a larger gradient magnitude, set the edge magnitude at the center pixel to zero. Map the gradient angle to the closest of four cases, where the line is sloped at almost 0°, 45°, 90°, or 135°; for example, 30° would map to 45°. For more details about non-maximum suppression, please refer to the last section of this handout. Your code cannot call Matlab's edge function or any other similar functions; you may use edge for comparison and debugging. A sample result is shown in Figure 1.

[Figure 1: Edge detection result.]
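Continuing the sketch above (and reusing its my_image_filter), a hedged Python outline of the smooth-then-Sobel pipeline might look like this; the kernel-sizing rule mirrors the hsize formula given above, and the function names are again our own:

    import numpy as np

    def gaussian_kernel(sigma):
        # hsize = 2 * ceil(3 * sigma) + 1, as suggested above.
        half = int(np.ceil(3 * sigma))
        ax = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    def my_edge_filter(img0, sigma):
        smoothed = my_image_filter(img0, gaussian_kernel(sigma))
        sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
        imgx = my_image_filter(smoothed, sobel_x)
        imgy = my_image_filter(smoothed, sobel_x.T)
        # Gradient magnitude; non-maximum suppression along the gradient
        # direction (see the last section) would then thin these edges.
        return np.hypot(imgx, imgy)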
Q2.3 The Hough transform (20 points)

Write a function that applies the Hough transform to an edge magnitude image, and display the output for one of the images in your write-up:

function [H, rhoScale, thetaScale] = myHoughTransform(Im, threshold, rhoRes, thetaRes)

Im is the edge magnitude image, threshold (scalar) is an edge strength threshold used to ignore pixels with a low edge filter response, and rhoRes (scalar) and thetaRes (scalar) are the resolutions of the Hough transform accumulator along the ρ and θ axes respectively. H is the Hough transform accumulator, which contains the number of "votes" for all the possible lines passing through the image. rhoScale and thetaScale are the arrays of ρ and θ values over which myHoughTransform generates the Hough transform matrix H. For example, if rhoScale(i) = ρ_i and thetaScale(j) = θ_j, then H(i, j) contains the votes for ρ = ρ_i and θ = θ_j.

First, threshold the edge image. Each pixel (x, y) above the threshold is a possible point on a line and votes in the Hough transform for all the lines it could be a part of. Parametrize lines in terms of θ and ρ such that ρ = x cos θ + y sin θ, with θ ∈ [0, 2π] and ρ ∈ [0, M]. M should be large enough to accommodate all lines that could lie in the image. Each line in the image corresponds to a unique pair (ρ, θ) in this range; therefore, θ values corresponding to negative ρ values are invalid, and you should not count those votes. The accumulator resolution needs to be selected carefully: if the resolution is set too low, the estimated line parameters might be inaccurate; if it is set too high, run time will increase and votes for one line might get split into multiple cells in the array. Your code cannot call Matlab's hough function or any other similar functions; you may use hough for comparison and debugging. A sample visualization of H is shown in Figure 2.

[Figure 2: Hough transform result.]
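For the voting step, a rough Python/NumPy sketch of the accumulator logic (our own illustration, not the required Matlab code) could be:

    import numpy as np

    def my_hough_transform(im, threshold, rho_res, theta_res):
        """Accumulate votes acc[rho_idx, theta_idx] for edge pixels above threshold."""
        img_h, img_w = im.shape
        diag = np.ceil(np.hypot(img_h, img_w))      # upper bound M on rho
        rho_scale = np.arange(0, diag + rho_res, rho_res)
        theta_scale = np.arange(0, 2 * np.pi, theta_res)
        acc = np.zeros((len(rho_scale), len(theta_scale)), dtype=int)
        ys, xs = np.nonzero(im > threshold)
        for x, y in zip(xs, ys):
            rho = x * np.cos(theta_scale) + y * np.sin(theta_scale)
            valid = rho >= 0                        # negative rho: do not vote
            rho_idx = np.round(rho[valid] / rho_res).astype(int)
            acc[rho_idx, np.nonzero(valid)[0]] += 1
        return acc, rho_scale, theta_scale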
Q2.4 Finding lines (15 points)

Write a function that uses the Hough transform output to detect lines:

function [rhos, thetas] = myHoughLines(H, nLines)

where H is the Hough transform accumulator and nLines is the number of lines to return. The outputs rhos and thetas are both nLines × 1 vectors containing the row and column coordinates of peaks in H, that is, the lines found in the image. Ideally, you would want this function to return the ρ and θ coordinates of the nLines highest-scoring cells in the Hough accumulator. But for every cell in the accumulator corresponding to a real line (likely to be a locally maximal value), there will probably be a number of cells in the neighborhood that also scored high but should not be selected. These non-maximal neighbors can be removed using non-maximum suppression. Note that this non-maximum suppression step is different from the one performed earlier: here you will consider all neighbors of a pixel, not just the pixels lying along the gradient direction. You can either implement your own non-maximum suppression code or find a suitable function on the Internet (you must acknowledge and cite the source in your write-up, as well as hand in the source in your matlab/ directory). Another option is to use the Matlab function imdilate. Once you have suppressed the non-maximal cells in the Hough accumulator, return the coordinates corresponding to the strongest peaks in the accumulator. Your code cannot call Matlab's houghpeaks function or other similar functions; you may use houghpeaks for comparison and debugging.

Q2.5 Fitting line segments for visualization (5 points)

Now you have the parameters ρ and θ for each line in an image. However, this is not enough for visualization: we still need to prune the detected lines into line segments that do not extend beyond the objects they belong to. This is done by houghlines and drawLines.m; see the script houghScript.m for more details. You can modify the parameters of houghlines and see how the visualizations change. As shown in Figure 3, the result is not perfect, so do not worry if the performance of your implementation is not good. You can still get full credit as long as your implementation makes sense.

[Figure 3: Line segment result.]

Q2.5x Implement houghlines yourself (extra: 10 points)

In Q2.5, we used the Matlab built-in function houghlines to prune the detected lines into line segments that do not extend beyond the objects they belong to. Now it is your turn to implement one yourself. Write a function named myHoughLineSegments and then compare your results with the Matlab built-in function in your write-up. Show at least one image for each and briefly describe the differences.

function [lines] = myHoughLineSegments(lineRho, lineTheta, Im)

Your function should output lines as a Matlab array of structures containing the pixel locations of the start and end points of each line segment in the image. The start location of the i-th line segment should be stored as a 2 × 1 vector lines(i).start and the end location as a 2 × 1 vector lines(i).stop. Remember to save your implementation in the ec/ directory. Your code cannot call Matlab's houghlines function or any other similar functions; you may use houghlines for comparison and debugging.

3 Experiments

Q3.1 (15 points)

Use the included script to run your Hough detector on the image set and generate the intermediate output images. Include the set of intermediate outputs for one image in your write-up. Did your code work well on all the images with a single set of parameters? How did the optimal set of parameters vary across images? Which step of the algorithm causes the most problems? Did you find any changes you could make to your code or algorithm that improved performance? In your write-up, describe how well your code worked on different images, what effect the parameters have, and any improvements you made to your code to make it work better.

4 Try your own images!

Q4.1x Try your own images (extra: 10 points)

Take five pictures, either with a camera of your own or from the Internet. Write a script ec.m that takes care of reading in your images (use a relative path here, not an absolute one), making function calls to the various steps of the Hough transform, and generating images showing the output and some of the intermediate steps (like houghScript.m). Submit your own images and ec.m in ec/. Please include the resulting images in your write-up.

5 Non-maximum suppression

Non-maximum suppression (NMS) is an algorithm used to find local maxima, using the property that the value of a local maximum is greater than the values of its neighbors. To implement NMS on a 2D image, you can move a 3 × 3 (or 7 × 7, etc.) filter over the image. At every pixel, the filter suppresses the value of the center pixel (by setting it to 0) if it is not greater than the values of its neighbors. To use NMS for edge thinning, compare the gradient magnitude of the center pixel with the neighbors along the gradient direction instead of with all the neighbors. To simplify the implementation, you can quantize the gradient direction into 8 groups and compare the center pixel with two of the 8 neighbors in the 3 × 3 window, according to the gradient direction. For example, if the gradient angle of a pixel is 30°, compare its gradient magnitude with the north-east and south-west neighbors, and suppress its magnitude if it is not greater than both.
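As a closing illustration, here is a minimal window-based NMS sketch in Python; scipy's maximum_filter is our substitute for the imdilate trick mentioned in Q2.4, and ties on plateaus survive, which a real implementation may want to break explicitly:

    import numpy as np
    from scipy.ndimage import maximum_filter

    def nms(values, size=3):
        """Zero out every cell that is not the maximum of its size x size window."""
        local_max = maximum_filter(values, size=size)
        return np.where(values == local_max, values, 0)

    # Example: suppress non-peak cells of a Hough accumulator, then pick
    # the nLines strongest surviving cells as the detected lines.
    acc = np.random.randint(0, 50, size=(180, 360))
    peaks_only = nms(acc, size=7)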



This page contains links to programming assignments. Reference material is available on the Lectures page.


Assignment 1: Analyzing Parallel Program Performance on an Eight-Core CPU

Assignment 2: A Simple CUDA Renderer

Assignment 3: Processing Big Graphs on the Xeon Phi

Assignment 4: A Simple, Parallel Webserver

Assignment 1: Building a Better Contact Sheet

In the lectures for this week you were shown how to make a contact sheet for digital photographers, and how you can take one image and create nine different variants based on the brightness of that image. In this assignment you are going to change the colors of the image, creating variations based on a single photo. There are many complex ways to change a photograph using variations, such as changing a black and white image to either "cool" variants, which have light purples and blues in them, or "warm" variants, which have touches of yellow and may look sepia toned. In this assignment, you'll just be changing the image one color channel at a time.

Your assignment is to learn how to take the stub code provided in the lecture (cleaned up below), and generate the following output image:

[Expected output: a three-by-three contact sheet of color-channel variants]

From the image you can see there are two parameters being varied for each sub-image. First, the rows are changed by color channel, where the top is the red channel, the middle is the green channel, and the bottom is the blue channel. Wait, why don't the colors look more red, green, and blue, in that order? Because the change you are making is to the ratio, or intensity, of that channel in relation to the other channels. We're going to use three different intensities: 0.1 (reduce the channel a lot), 0.5 (reduce the channel by half), and 0.9 (reduce the channel only a little bit).

For instance, a pixel represented as (200, 100, 50) is a sort of burnt orange color. So the top row of changes would create three alternative pixels, varying the first channel (red): one at (20, 100, 50), one at (100, 100, 50), and one at (180, 100, 50). The next row would vary the second channel (green), creating pixels of color values (200, 10, 50), (200, 50, 50), and (200, 90, 50).
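If you want a starting point for the channel scaling itself, here is a rough Pillow sketch; the helper function and the file name are our own placeholders, not part of the provided stub code:

    from PIL import Image

    def scale_channel(image, channel, intensity):
        # Split into R, G, B bands, scale one band pixel-wise, and re-merge.
        bands = list(image.split())
        bands[channel] = bands[channel].point(lambda v: int(v * intensity))
        return Image.merge("RGB", bands)

    # "your_photo.png" is a placeholder; substitute the image from the course.
    img = Image.open("your_photo.png").convert("RGB")
    # Rows vary the channel (red, green, blue); columns vary the intensity.
    variants = [scale_channel(img, channel, intensity)
                for channel in (0, 1, 2)
                for intensity in (0.1, 0.5, 0.9)]

From there, the contact sheet is a matter of pasting the nine variants onto one canvas and labeling each row, which is where the PIL.ImageDraw and PIL.ImageFont hints below come in.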

Note: A font is included for your usage if you would like! It's located in the file readonly/fanwood-webfont.ttf

Need some hints? Use them sparingly; see how much you can get done on your own first! The sample code given in the class has been cleaned up below, so you might want to start from that.

HINT 1

Check out the PIL.ImageDraw module for helpful functions

HINT 2

Did you find the text() function of PIL.ImageDraw?

HINT 3

Have you seen the PIL.ImageFont module? Try loading the font with a size of 75 or so.

HINT 4

These hints aren't really enough; we should probably generate some more.


Creative Assignments: Teaching with Images – Part 1

by Cosette Bruhns | Dec 17, 2019 | Instructional design, Services | 1 comment


Photo by Christopher Flynn on Unsplash

Images (e.g. photographs, illustrations, and visual metaphors) can facilitate student engagement and understanding in classroom assignments by making abstract concepts tangible and providing a different way of illustrating arguments to students. Instructors can assign images as submission requirements in order to encourage students to draw connections across boundaries and disciplines through a visual lens. Used with care, images can also support inclusive teaching practices by inviting students to engage with course content through different points of view, facilitating student access to remote objects or collections, and increasing opportunities for students who excel at visual learning to participate fully in assignments. In these cases, images serve as a portal for engaging with course material through a different framework (i.e., not text- or audio-based). Finally, using images in assignments can invite students to exercise different aspects of critical thinking skills, like visual literacy and lateral thinking, by encouraging students to develop an argument about or relating to some aspect of an image.

A Few Examples

Here are some examples of how images can be incorporated into student assignments to help you get started. The assignment types are listed in order of shallow to steep learning curve.

Image Discussion Board Posts

Discussion board posts are often assigned by instructors in order to invite students to expand their thoughts on a course reading or discussion. One way instructors can continue to broaden student learning about a topic outside of the classroom creatively is by assigning an image submission in a discussion board. By assigning an image as a submission requirement instead of text, instructors can stimulate student imagination and facilitate student ability to make visual connections between different ideas.

For example, in a literature course on Ovid’s Metamorphoses , an instructor could assign an image submission as a way to invite students to think about how to visualize an allegorical theme or passage from the text. Students could submit images in response to the selected theme or passage, along with a short one- to two-sentence explanation for why the image is related to the original theme. When the class next meets, instructors can draw on their image responses to engage students visually and creatively by asking students to further explain their reasons for submitting their image and why they think it is related to the original theme or passage.

For this type of assignment, Canvas-supported tools like Discussion Board can help achieve this goal. Follow the instructions on the Canvas resource page for more on how to create assignments using Canvas Discussion Board.

Example of an image post in a Canvas discussion board

Tip: When creating a discussion post, remember to select “Allow threaded replies” under Options, in order to let students respond to each other’s comments.

Image Annotation

Image annotation is the ability to mark up an image with text or visual symbols in order to highlight some aspect within the image. Applied to an assignment, instructors can use the idea of image annotation to introduce skills like visual literacy or visual analysis, by asking students to annotate images in order to make an argument about or pertaining to an image based on close analysis of an object or aspect of the original image. By emphasizing a specific aspect of an image, instructors can encourage students to think critically about the relationship between the image and concepts or themes addressed in class.

For example, in a class addressing early modern Italian art, an instructor could ask students to individually or collaboratively annotate an image of Duccio’s Maestà in order to analyze different historical, political, and theological themes represented in the painting. The instructor could create an assignment asking students to isolate specific elements of the painting, using annotation methods, in order to identify main themes to explore further through individual projects or in-class discussion, strengthening the relationship between the assignment and the course. It might be a useful exercise to create a working list of objects, ideas, or concepts identified through the image annotation assignment that students can build on during the course. In a course that examines multiple images, instructors could return to that set of student-produced themes to see how they are represented in other images representing the Madonna. By drawing connections between concepts and images, instructors can begin to introduce students to skills like visual literacy, which is important for interpreting, understanding, and making meaning from images.

There are a number of easy-to-use tools for image annotation that are readily available. Google Jamboard is an interactive whiteboard that can be shared with multiple students. Features of Google Jamboard include real-time collaboration and a number of creative drawing tools for visualizing ideas. In an image annotation assignment, students could share a Google Jamboard file with the class that creatively isolates an aspect of the original image in order to share an observation or build an argument about that image.

Duccio's Maesta annotated with Google Jamboard

Ex. Duccio di Buoninsegna, Maestà, c. 1308-1311. Google Jamboard can be used to add simple mark-ups to an image as an assignment or in real-time. The bottom tool in the tool bar is a digital laser pointer that can be used during a presentation to highlight an aspect of an image. As part of the Google Suite, Google Jamboard files can be easily shared with multiple collaborators.

Where to Find Image Resources

A number of images are available for use in teaching and student assignments through fair use laws. There is a list of resources for finding fair use images on the UChicago wiki tools page. Many images are also easily searchable on databases such as LUNA, the University of Chicago's image collections database, the Getty Search Gateway, and the Met Collection, to name a few. Several museums participate in open access policies, allowing their public domain images to be downloaded, used, and reproduced for scholarly and educational purposes. For further information on fair use policies, reach out to the University of Chicago's Copyright Information Center, or the Visual Resource Center, which provides support in researching images or digitizing and developing a collection of images for research and teaching.

Getting Help and Next Steps

If you are interested in using image exercises in your classroom or as assignments, contact Academic Technology Solutions for help. ATS instructional designers can help you create exercises that support your broader learning objectives and select the appropriate software tools to use in your class.

Stay tuned for Part 2, in which we will discuss digital exhibitions!


Comment: Thanks, Cosette, this is great. Take a look at WeVu for this too. Images and pdfs, with group annotation, with private and public replies to annotations. Can be used for whole-class dialogue about parts of images, or for assignments where students' annotations are only seen by instructors.



Assignment 1: Static Web: HTML/CSS

Due Sunday, February 7 11:59pm ET

Accept the Github Classroom assignment and clone this repo that contains stencil code for Assignment 1.

Introduction

This is a multi-part assignment with the objective of making you comfortable working with HTML and CSS. By the end of this assignment, you will have styled some rectangular blocks and created a simple version of Twitter's home page.

If this assignment seems overwhelming to you, please come see a TA at TA hours to talk through some strategies for tackling it. We expect this assignment to be time-consuming, as it covers a lot of fundamental techniques. But with a good strategy, it can be finished in a reasonable amount of time.

Note: Only CSS and HTML will be used for this assignment. If you want to use JavaScript (or libraries such as jQuery) then feel free to, but we will only be grading correctness on your CSS and HTML.

If you can, start early!

Specifications

Now that you understand some of the basics of HTML and CSS, let's take a look at how to align HTML elements. There are multiple ways to align HTML elements, but in this part we recommend using flexboxes, as they are widely used in modern web development (for example, Bootstrap v4 is built on top of flexboxes).

Refer to this great webpage on how to use flexboxes: CSS Flexbox Guide.

Also feel free to use online resources such as Stack Overflow, MDN, W3, and Google for reference.

Screenshot of Part1 at the beginning

As you can see, there are 9 rectangles. The styling and makeup of the first two rectangles are already built for you. Your task is to apply styling and add div elements inside the next 7 green rectangular blocks to create a webpage that looks like this:

Screenshot of Part1 when finished

For the third row, the red and blue end rectangles should remain the same width, and the green space should shrink.

Possible Approach: Have a div with a red background and a div with a blue background, both with fixed width. Use an appropriate value for justify-content.

For the fourth row, the blue end rectangle should remain the same width, and the red rectangle should shrink.

Possible Approach: Have a div with a red background and a div with a blue background. Have a fixed width on the blue div. Use flex-grow.

For the fifth row, the red square should remain the same size, but always remain in the center of the green rectangle.

Hint: Think about how to keep a div fixed size and how to align something in the absolute center of the parent element.

For the sixth row, the blue rectangle should remain the same size, while the red rectangles should shrink. The blue rectangle should remain in the center of the row.

Hint: Use two red divs.

For the seventh row, the red rectangle should remain the same width.

Hint: Nest divs and use background-color: transparent

For the eighth row, the orange rectangles should remain the same size while the green space between them shrinks.

For the ninth row, the green space between the orange rectangles should remain the same width while the orange rectangles narrow.

The examples we provided with the first two rectangular blocks use flexboxes. You are not required to use flexboxes for the next 7 rows, but we recommend it as it will also be useful in part 2 of this assignment.

You should only have to use the div HTML element to complete this assignment. Also, none of the divs you create inside the provided wrapper divs should have background-color: green. But it is valid to specify non-green background colors for any divs, including the wrapper.

  • The colors of the boxes we used are background-color: red, blue, and orange
  • Some width/height values we used are 20px, 40px, 80px

You are not required to use Bootstrap in this part. You can use it if you want, but we actually recommend writing plain CSS. Just for this part, inline CSS is acceptable, but you should generally avoid using inline CSS in the future.

Any images you'll need can be found in the part2/images folder, which can be referenced as ./images (when CSS is in its own file, URLs are relative to the CSS file, not the page it is loaded on). All of your HTML should go in the index.html file and all of your CSS should go in the index.css file.

Twitter page overview

Feel free to go on Twitter and use your browser's inspect element to see how they do font sizes, font weights, margins, paddings, text colors, and background colors. Our solution is a bit different from Twitter's architecture, because Twitter's HTML/CSS setup is far too complicated for a simple web mockup. If you try to copy Twitter's code instead of creating the HTML elements yourself, you'll end up spending far more time trying to figure out what each div does and how to decipher their massive styling code base.

Ethics Requirements

A screen reader needs to know in advance what language your website is in so that it can function properly.

To help it out, make sure to declare the language of your website in the lang= attribute of the html tag.

Blind and low-sighted users often can’t see images on a site.

  • To help them enjoy your site’s content, all images must have alt text.
  • The alt text goes in the alt="..." attribute of the image element.
  • You should give a basic description of what is in the image. Putting "image" in the alt attribute does not count!

Blind or low-sighted users may want to “skim through” a page using their screen reader. To make that easier, the page should have a logical hierarchy using different headings to designate different levels of importance.

Note: your Twitter page won’t have that many headings. Just don’t use headings to style things!

If you want a piece of text that isn't a heading to be big or bold, use HTML elements like em and b, or CSS, to style it rather than a heading element.

For people using screen readers to navigate the page, ARIA landmarks are a big help – they can help users skim the page, or to quickly find the content they need. These are attributes that can be added to any element on the page and appear as role= attributes within a div’s opening tag. The ARIA landmarks you are required to include are:

  • role=navigation (to designate the navigation menu): add this to the navigation bar.
  • role=main (main page content, i.e. the tweets): add this to the div you use to contain your tweets.

Look here for more tips and examples.

Finally, your page should have a skip link (think <a>!) somewhere at the top of the navigation. Skip links are links at the top of the page which allow a user to skip to the main content of the page. They're important for older browsers and screen readers that may not support ARIA landmark navigation.

  • This can be styled any way you like! However, for this project, hide them using display: none;
  • To do this, you'll have to give the div you will be jumping to an ID, and have the link href="..." attribute point to that div's ID. For example, if I wanted to jump to a div with the ID myDiv, I would have the following link: <a href="#myDiv">Jump to myDiv</a>
  • In our case, this means skipping to content-wrapper or content-center, depending on your implementation. More tips and examples can be found here.

We recommend running your page through WAVE’s accessibility checker, which we asked you to add to your Firefox and/or Chrome browsers during lab 1. We’ll be using that tool to test whether your ARIA landmarks and general hierarchy are logical, as well as whether you’ve implemented alt text in your image descriptions.

Note: The Chrome WAVE extension has been a little finicky lately. If you’re having trouble, try running your code on a department machine and/or using Firefox.

For help, take a look at our Accessibility Resource Sheet in Docs or come to TA hours!

Functionality Requirements

In the following, we put together some hints on how to accomplish the functionality requirements. We also encourage you to refer to online resources like MDN and CSSTricks for HTML and CSS properties.

Note: Don't worry about getting the font sizes or font colors exact. That being said, #4AB3F4 is the blue color used in the mockup and #E6ECF0 is the light gray background color.

Twitter page parts dimensions

Twitter's header is fixed, which means that when you scroll down, the header remains at the top of the webpage. We require you to implement your header in a similar manner. To do this, use:

  • position: fixed; Adding this to an element makes it stick to whatever position you specify
  • top: 0; left: 0; These are the positions for the fixed element that will keep the element fixed at the top
  • z-index: 100; Adding this to an element makes it positioned above other elements (You could probably make it work with z-index: 5, but we put 100 just to make sure). Elements without a specified z-index have a default z-index of 0. Elements with higher z-indexes are placed over elements with lower z-indexes.

If you decide to use Bootstrap, you may find Navbar Placement to be useful.

How the Twitter navbar links look

Lastly, we require you to have the Twitter logo stay in the middle of the header when you resize the window.

  • Home <i class="fas fa-home"></i>
  • Moments <i class="fas fa-hashtag"></i>
  • Notifications <i class="far fa-bell"></i>
  • Messages <i class="far fa-envelope"></i>

The file path of the twitter logo is ./images/twitter-logo.png

content-wrapper

  • max-width: 1190px; This sets the maximum width of the element.
  • margin: 56px auto; This sets the vertical margins to 56px so that it is below the header and the horizontal margins to automatically center the element.

content-left

How the left content will look

  • Cover picture (purple)
  • Profile picture (orange)
  • Profile stats (green)

We require you to create the overlapping effect between the profile picture and the cover picture. Usually, to specify priority in a stacked display (think of it as layers), you will use z-index.

Bootstrap section for positioning

  • The file path of the cover picture is ./images/ratatouille-banner.png; ./images/ratatouille.jpg is the profile picture for Remy, and ./images/linguini.png is Linguini's profile picture.

content-center

How the center content will look

We require that you include the profile picture in every one of the tweets. Additionally, in at least one of the tweets you should have a span tag to change the styling of a single word within the tweet.

border-radius: 50%; or Bootstrap class rounded-circle makes an element a circle.

content-right

If you minimize the width of your browser when on Twitter, you will notice that the content on the right disappears at a certain point. This is done using CSS media queries.

We require you to do the same on your mockup. So, use a media query to make content-right disappear when the window’s width is less than or equal to 1200px.

Other than the explicitly stated requirements for this part, we would like you to make your Twitter mockup generally resemble the solution provided above.

If you can, please make your webpage compliant across browsers. But we will be testing your assignment on Chrome.

To access Chrome on a CIT machine: from the command line, type chrome.

General Notes

As a reminder, it's a good idea to run your HTML and CSS syntax through validators. You should also consider using an accessibility checker such as WAVE.

Troubleshooting

There are hundreds of HTML and CSS tags, properties, and values, and CS132 does not expect students to learn each one by heart. However, this assignment and the first lab are intended to help you understand the languages intuitively and to become proficient at tackling a design by the end of the semester.

If you’re having problems, there are many guides on HTML and CSS online (CSSTricks and MDN are your friends), as well as on our resources page.

As always, if you are stuck on a particular part, you can always talk to the friendly TAs or ask questions on the course Piazza (check your email for a signup link).

As a general rule of thumb, do not expect TAs to be able to solve every web problem you have. Even the most adept web developer can struggle a lot with specific CSS rules to use.

To hand in your code for Assignment 1, upload the directory containing your solution to part 1 and part 2 to Gradescope.
