39 skills found · Page 1 of 2
Jonathan-Uy / CSES Solutions - Accepted Solutions to the CSES Competitive Programming Problem Set
molyswu / Hand Detection - Hand detection using Neural Networks (SSD) on Tensorflow.

This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN-based task, the most expensive (and riskiest) part of the process is finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric viewpoint). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements.

The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial).

Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on the video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a Youtube video.

Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some FPS numbers are:

| FPS | Image Size | Device | Comments |
| --- | --- | --- | --- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks?
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Many of these approaches are rule based (e.g. extracting the background based on texture and boundary features, or distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, if sharp changes in lighting conditions cause sharp changes in skin color, or if the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective).

With sufficiently large datasets, neural networks provide the opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks for real-time tracking/detection are that neural networks can be complex, are relatively slow compared to tracking-only algorithms, and assembling a good dataset can be quite expensive. But things are changing with advances in fast neural networks.

Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like SSD, Faster R-CNN and R-FCN (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands). Training a model is a multi-stage process (assembling a dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, a few other tutorials cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) of where hands are located across 4800 images. All images are captured from an egocentric view (Google glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc).

<img src="images/egohandstrain.jpg" width="100%">

If you use the Egohands dataset, you can cite it as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders for the locations where video data was collected (100 images per folder):

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. The script:

- Downloads the egohands dataset
- Renames all files to include their directory names, ensuring each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)

Once the script is done running, you should have an images folder containing three folders - train, test and eval. Each of these folders should also contain a csv label document (`train_labels.csv`, `test_labels.csv`) that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process; a sketch of what goes into one such record follows.
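For orientation, each entry in a `tfrecord` file is a serialized `tf.train.Example` holding the encoded image plus its normalized box coordinates. Below is a minimal sketch using the object detection api's standard feature keys; the helper and argument names are illustrative, not the ones used by this repo's scripts.

```python
import tensorflow as tf

def _bytes_list(values):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

def _floats(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _ints(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def create_tf_example(encoded_jpg, filename, width, height,
                      xmins, xmaxs, ymins, ymaxs):
    # One list entry per hand in the image; coordinates are normalized to [0, 1].
    n = len(xmins)
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': _ints([height]),
        'image/width': _ints([width]),
        'image/filename': _bytes_list([filename.encode('utf8')]),
        'image/encoded': _bytes_list([encoded_jpg]),
        'image/format': _bytes_list([b'jpg']),
        'image/object/bbox/xmin': _floats(xmins),
        'image/object/bbox/xmax': _floats(xmaxs),
        'image/object/bbox/ymin': _floats(ymins),
        'image/object/bbox/ymax': _floats(ymaxs),
        'image/object/class/text': _bytes_list([b'hand'] * n),
        'image/object/class/label': _ints([1] * n),  # single 'hand' class
    }))
```

Each example is then serialized and written out with `tf.python_io.TFRecordWriter` to produce `train.record` and `test.record`.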
## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords generated), the next task is to train a model based on it. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model that has been trained well on a related domain (here, image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours.

Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)), and I chose `ssd_mobilenet_v1_coco` as my starting point given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. It would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.

> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to tell when the training process is complete (total loss no longer decreases with further iterations/steps). I ran my training job for 200k steps (took about 5 hours) and stopped at a total loss value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model on the test data. A commonly used metric for performance is mean average precision (mAP), a single number that summarizes the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0 to 1; the higher, the better.

<img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The detection steps are:

- Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method; a minimal sketch of this step follows below.
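As a minimal sketch of that drawing step (the parameter names here are illustrative, not the exact signature in `detector_utils.py`): scale the normalized box coordinates back to pixel space and hand them to OpenCV.

```python
import cv2

def draw_boxes(num_hands_detect, score_thresh, scores, boxes,
               im_width, im_height, image_np):
    # The detector returns boxes as normalized (ymin, xmin, ymax, xmax).
    for i in range(num_hands_detect):
        if scores[i] > score_thresh:
            ymin, xmin, ymax, xmax = boxes[i]
            left, top = int(xmin * im_width), int(ymin * im_height)
            right, bottom = int(xmax * im_width), int(ymax * im_height)
            cv2.rectangle(image_np, (left, top), (right, bottom),
                          (77, 255, 9), 3)
```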
This repo contains two scripts that tie all these steps together:

- `detect_multi_threaded.py`: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), the image parameters `--width` and `--height`, and the video `--source` (0 for camera).
- `detect_single_threaded.py`: Same as above, but single threaded. This script also works for video files, by setting the video `--source` parameter to the path of a video file.

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things led to noticeable performance increases:

- Threading: It turns out that reading images from a webcam is a heavy I/O operation, and running it on the main application thread can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across worker threads (see the sketch at the end of this section). This mostly led to an FPS increase of about 5 points.
- For those new to Opencv: images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't):

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop. (I used about 320 x 240, compared to the 1280 x 720 which my webcam provides.)
- Model quantization: moving from the current 32 bit to 8 bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. Have ideas for optimizing better? Please share!
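As referenced in the threading bullet above, here is a condensed sketch of the worker-thread capture pattern; the class and method names are illustrative rather than this repo's exact implementation.

```python
import threading
import cv2

class WebcamVideoStream:
    """Grab frames on a daemon thread so the main loop never blocks on camera I/O."""

    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        (self.grabbed, self.frame) = self.stream.read()
        self.stopped = False

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        # Keep overwriting self.frame with the freshest capture.
        while not self.stopped:
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Returns the most recent frame without waiting on the camera.
        return self.frame

    def stop(self):
        self.stopped = True
        self.stream.release()
```

The detection loop then calls `stream.read()` on each iteration and always gets the latest frame instead of stalling on capture.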
<img src="images/general.jpg" width="100%">

> Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is an opportunity to improve on these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a factor of how it was trained). Creating a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects on a single image.

An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
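As a rough sketch of one such structure (the second model path below is hypothetical): keeping each pretrained model in its own `tf.Graph` and `tf.Session` avoids tensor-name collisions, and both sessions can then be run, possibly on separate threads, against the same frame.

```python
import tensorflow as tf

def load_frozen_graph(path):
    # Same loading pattern as load_inference_graph above, packaged as a helper.
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(path, 'rb') as fid:
            graph_def.ParseFromString(fid.read())
        tf.import_graph_def(graph_def, name='')
    return graph, tf.Session(graph=graph)

# The hand detector from this repo plus any second pretrained detector
# (the second path is a placeholder, not a file shipped with this repo).
hand_graph, hand_sess = load_frozen_graph('hand_inference_graph/frozen_inference_graph.pb')
obj_graph, obj_sess = load_frozen_graph('object_inference_graph/frozen_inference_graph.pb')
```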
ankitpriyarup / CSES ProblemSet Solution - No description available
iamprayush / Cses Problemset Solutions - Solutions of the CSES Problem Set in C++
ncduy0303 / Cses Solutions - This repository contains my solutions to the CSES Problem Set
div5252 / CSES Problem Set - My solutions for the CSES Problem Set.
noob-hu-yaar / CSES Problem Set Solutions DP - Solutions to the CSES DP problems.
hieplpvip / Cses Solutions - Solutions to the CSES Problem Set
Manishak798 / CSES ProblemSet - Welcome to my repository containing solutions to problems from the CSES Problem Set.
mdmzfzl / CSES Solutions - My solutions for the CSES Problem Set, implemented in C++
atharvf14t / Cses Solvings - Solutions to 120+ problems I have solved from the CSES problem set, including Introductory Problems, Sorting and Searching, Dynamic Programming, Graphs, Range Queries and Trees.
Priyansh19077 / CSES Problem Set - Solutions to some cool problems from the CSES problem set
nevikw39 / Oj - My low-quality, poor-performance code submitted to several online judges, such as ZeroJudge, GreenJudge, UVa, TIOJ, AtCoder, the CSES problem set, LeetCode, Codeforces & Google Kick Start.
kartik8800 / CSES - CSES Problem Set Solutions
destinationunknown / CSES - Solutions to the CSES problem set in Python3
ks-iitjmu / MyCSES - Competitive programming solutions for the CSES Problem Set in C++. Covers algorithms, data structures, dynamic programming, and mathematics. Optimized for contests with fast I/O and efficient implementations.
Ayush170-Future / Cses Java Solutions - Java solutions to the CSES problem set.
truongcongthanh2000 / CSES Solutions - Solutions of the CSES Problem Set
sumitesh9 / CSES - My solutions for the CSES problem set.
fazeelkhalid / Graph Real Time Problems

Points: 100
Topics: Graphs, topological sort, freedom to decide how to represent data and organize code (while still reading in a graph and performing topological sort).

PLAGIARISM/COLLUSION: You should not read any code (solution) that directly solves this problem (e.g. implements DFS, topological sorting or another component needed for the homework). The graph representation provided on the Code page (which you are allowed to use in your solution) and the pseudocode and algorithm discussed in class provide all the information needed. If anything is unclear in the provided materials, check with us. You can read materials on how to read from a file, how to read a Unix file, or how to tokenize a line, BUT not sample code that deals with graphs or this specific problem. E.g. you can read tutorials about these topics, but not a solution to this problem (or a problem very similar to it). You should not share your code with any classmate or read another classmate's code.

**Part 1: Main program requirements (100 pts)**

Given a list of courses and their prerequisites, compute the order in which courses must be taken so that when taking a course, all its prerequisites have already been taken. All the files that the program reads from are in Unix format (they have the Unix EOL).

Provided files:

- Grading Criteria
- cycle0.txt
- data0.txt
- data0_rev.txt
- data1.txt - like data0.txt, but the order of the prerequisite courses is modified on line 2.
- slides.txt (graph image) - courses given in such a way that they produce the same graph as in the image. (The last digit in the course number is the same as the vertex corresponding to it in the drawn graph. You can also see this in the vertex-to-course-name correspondence in the sample run for this file.)
- run.html
- data0_easy.txt - if you cannot handle the standard file format, this is an easier file format that you can use, but 15 points will be lost in this case. More details are given in Part 3.
- Unix.zip - zipped folder with all data files.
- For your reference: EOL_Mac_Unix_Windows.png - EOL symbols for Unix/Mac/Windows.

Specifications:

1. You can use structs, macros, typedef.
2. All the code must be in C (not C++ or any other language).
3. Global or static variables are NOT allowed. The exception is using macros to define constants for the size limits (e.g. instead of using 30 for the max course name size): `#define MAX_ARRAY_LENGTH 20`
4. You can use static memory (on the frame stack) or dynamic memory. (Do not confuse static memory with static variables.)
5. The program must read a filename from the user. The filename (as given by the user) will include the extension, but NOT the path. E.g.: data0.txt
6. You can open and close the file however many times you want.
7. File format:
   1. Unix file. It will have the Unix EOL (end-of-line).
   2. Size limits: the file name will be at most 30 characters; a course name will be at most 30 characters; a line in the file will be at most 1000 characters.
   3. The file ends with an empty new line.
   4. Each line (except for the last empty line) has one or more course names.
   5. Each course name is a single word (without any spaces). E.g. CSE1310 (with no space between CSE and 1310).
   6. There is no empty space at the end of a line.
   7. There is exactly one empty space between any two consecutive courses on the same line. (You do not need to worry about tabs or more than one empty space between 2 courses.)
   8. The first course name on each line is the course being described, and the following courses are its prerequisites. E.g. given the two lines:

      ```
      CSE2315 CSE1310 MATH1426
      ENGL1301
      ```

      the first line describes course CSE2315 and indicates that CSE2315 has 2 prerequisite courses, namely CSE1310 and MATH1426. The second line describes course ENGL1301 and indicates that ENGL1301 has no prerequisites.
   9. You can assume that there is exactly one line for every course, even for those that do not have prerequisites (see ENGL1301 above). Therefore you can count the number of lines in the file to get the total number of courses.
   10. The courses are not given in any specific order in the file.
8. You must create a directed graph corresponding to the data in the file.
   1. The graph will have as many vertices as different courses listed in the file.
   2. You can represent the vertices and edges however you want.
   3. You do NOT have to use a graph struct. If you can do all the work with just the 2D table (the adjacency matrix), that is fine. You HAVE TO implement the topological sorting covered in class (as this assignment is on Graphs), but you can organize, represent and store the data however you want.
   4. For the edges, you can use either the adjacency matrix representation or the adjacency list. If you use the adjacency list, keep the nodes in each list sorted in increasing order.
   5. For each course that has prerequisites, there is an edge from each prerequisite to that course; thus the direction of the edge indicates the dependency. The actual edge is between the vertices corresponding to these courses. E.g. file data0.txt has:

      ```
      c100
      c300 c200 c100
      c200 c100
      ```

      meaning:

      ```
      c100 -----> c200
         \         |
          \        |
           \       |
            V      V
              c300
      ```

      (The above drawing is provided here to give a picture of how the data in the file should be interpreted and the graph that represents this data. Your program should *NOT* print this drawing. See the sample run for expected program output.) From this data you should create the correspondence:

      ```
      vertex 0 - c100
      vertex 1 - c300
      vertex 2 - c200
      ```

      and you can represent the graph using the adjacency matrix (the row and column indexes are provided for convenience):

      ```
        | 0 1 2
      -----------
      0 | 0 1 1
      1 | 0 0 0
      2 | 0 1 0
      ```

      E.g. E[0][1] is 1 because vertex 0 corresponds to c100, vertex 1 corresponds to c300, and c300 has c100 as a prerequisite. Notice that E[1][0] is not 1. If you use the adjacency list representation, then you can print the adjacency list. Each list must be sorted in increasing order (e.g. see the list for 0), and it should show the corresponding node numbers. For the above example the adjacency list will be:

      ```
      0: 1, 2,
      1:
      2: 1,
      ```
   6. In order for the output to look the same for everyone, use the correspondence given here: vertex 0 for the course on the first line, vertex 1 for the course on the second line, etc.
9. Print the courses in topologically sorted order. This should be done using the DFS (Depth First Search) algorithm that we covered in class and the topological sorting based on DFS discussed in class. There is no topological order if there is a cycle in the graph; in this case print an error message. If in DFS-visit, when looking at the (u,v) edge, the color of v is GRAY, then there is a cycle in the graph (and therefore topological sorting is not possible). See the lecture on topological sorting. (You can find the date based on the table on the Scans page and then watch the video from that day. I have also updated the pseudocode in the slides to show that.
Refresh the slides and check the date on the first page. If it is 11/26/2020, then you have the most recent version.)
10. (6 points) Create and submit 1 test file. It must cover a special case. Indicate what special case you are covering (e.g. no course has any prerequisite); at the top of the file, indicate what makes it a special case. Save this file as special.txt. It should be in Unix EOL format.

**Part 2: Suggestions for improvements (not for grade)**

1. CSE advisors are also mindful of, and point out to students, the "longest path through the degree": the longest chain of course prerequisites (e.g. CSE1310 --> CSE1320 --> CSE3318 --> ...), as this gives a lower bound on the number of semesters needed until graduation. Can you calculate for each course the LONGEST chain ending with it? E.g. in the above example, there are 2 chains ending with c300 (size 2: just c100 --> c300; size 3: c100 --> c200 --> c300) and you want to show the longest path, 3, for c300. Can you calculate this number for each course?
2. Allow the user to enter a list of courses taken so far (from the user or from a file) and print a list of the courses they can now take (i.e. courses for which they have all the prerequisites).
3. Ask the user to enter a desired number of courses per semester and suggest a schedule (by semester).

**Part 3: Implementation suggestions**

1. Reading from the file (15 points): For each line in the file, the code should extract the first course and the prerequisites for it. If you cannot process each line in the file correctly, you can use a modified input file that shows, on each line, the number of courses, but you will lose the 15 points dedicated to line processing. If your program works with the "easy" files, in order to make it easy for the TAs to know which file to provide, please name your C program courses_graph_easy.c. Here is the modification, shown for a new example. Instead of:

   ```
   c100
   c300 c200 c100
   c200
   ```

   the file would have:

   ```
   1 c100
   3 c300 c200 c100
   1 c200
   ```

   That way, the first item on each line is a number that tells how many courses (strings) follow it on that line. Everything is separated by exactly one space. All the other specifications are the same as for the original file (empty line at the end, no space at the end of any line, length of words, etc). Here is data0_easy.txt.
2. Make a direct correspondence between vertex numbers and course names. E.g. the **first** course name on the first line corresponds to vertex 0, the **first** course name on the second line corresponds to vertex 1, etc.
3. The vertex numbers are used to refer to vertices.
4. In order to add an edge to the graph you will need to find the vertex number corresponding to a given course name. E.g. find that c300 corresponds to vertex 1 and c200 corresponds to vertex 2; now you can set E[2][1] to 1. (With the adjacency list, add node 1 to the adjacency list for 2, keeping the list sorted.) To help with this, write a function that takes as arguments the list/array of [unique] course names and one course name, and returns the index of that course in the list. You can use that index as the vertex number. (This is similar to the indexOf method in Java.)
5. To see all the non-printable characters that may be in a file, find an editor that shows them. E.g. in Notepad++: open the file, go to View -> Show Symbol -> Show All Characters. YOU SHOULD TRY THIS! In general (not necessarily for this homework), if you make the text editor show the white spaces, you will know whether what looks like 4 empty spaces comes from 4 spaces or from one tab, and you will see other hidden characters. This can help when you tokenize. E.g. here I am using Notepad++ to see the EOL for files saved with Unix/Mac/Windows EOL (see the CR/LF/CRLF at the end of each line): EOL_Mac_Unix_Windows.png

**How to submit**

Submit courses_graph.c (or courses_graph_easy.c) and special.txt (the special test case you created) in Canvas. (For courses_graph_easy.c you can also submit the "easy" files that you created.) Your program should be named courses_graph.c if it reads from the normal/original files; if instead it reads from the "easy" files, name it courses_graph_easy.c.

As stated on the course syllabus, programs must be in C, and must run on omega.uta.edu or the VM.

IMPORTANT: Pay close attention to all specifications on this page, including file names and submission format. Even in cases where your answers are correct, points will be taken off liberally for non-compliance with the instructions given on this page (such as wrong file names, wrong compression format for the submitted code, and so on). The reason is that non-compliance with the instructions makes the grading process significantly (and unnecessarily) more time consuming. Contact the instructor or TA if you have any questions.