ShelvanLee / XFEM

# XFEM_Fracture2D

### Description

This is a Matlab program for solving fracture problems involving arbitrary multiple crack propagation in a 2D linear-elastic solid, based on the principle of minimum potential energy. The extended finite element method (XFEM) is used to discretise the solid continuum, treating cracks as discontinuities in the displacement field. To this end, a strong-discontinuity enrichment and a square-root singular crack-tip enrichment are used to describe each crack. Several crack growth criteria are available to determine the evolution of cracks over time; apart from the classic maximum tension (or hoop-stress) criterion, the minimum total energy criterion and the local symmetry criterion are implemented implicitly with respect to the discrete time-stepping.

### Key features

* *Fast.* The stiffness matrix, the force vector (i.e. the system of equations) and the enrichment tracking data structures are updated at each time step only with respect to the changes in the fracture topology. As a result, the major part of the computational expense lies in the solution of the linear system of equations rather than in the post-processing of the solution or in the assembly and updating of the equations. As Matlab offers fast and robust direct solvers, computational times are reasonably short.
* *Robust.* Suitable for multiple crack propagation with intersections. The stress intensity factors are computed robustly via the interaction integral approach (including the terms that account for crack surface pressure, residual stresses or strains). The minimum total energy criterion and the principle of local symmetry are implemented implicitly in time. The energy release rates are computed by the stiffness derivative approach using algebraic differentiation (rather than finite differencing of the potential energy). The crack growth direction based on the local symmetry criterion, on the other hand, is determined such that the local mode-II stress intensity factor vanishes; the change in a crack-tip kink angle is approximated using the ratio of the crack-tip stress intensity factors.
* *Easy to run.* Each job has its own input files, which are independent from those of all other jobs. The code especially lends itself to running parametric studies. Various results can be saved relating to the fracture geometry, fracture mechanics parameters, and the elastic fields in the solid domain. An extensive visualisation library is available for plotting results.

### Instructions

1. Get started by running the demo to showcase some of the capabilities of the program and to determine whether it can be useful for you. At the Matlab command line enter:
    ```Matlab
    >> RUN_JOBS.m
    ```
    This will execute a series of jobs located inside the *jobs directory* `./JOBS_LIBRARY/`. These jobs do not take long to execute (around 5 minutes in total).
2. Subsequently, you can pick one of the jobs inside `./JOBS_LIBRARY/` by defining the job title:
    ```Matlab
    >> job_title = 'several_cracks/edge/vertical_tension'
    ```
3. Then you can open all the relevant scripts for this job as follows:
    ```Matlab
    >> open_job
    ```
    The following input scripts for the *job* will be opened in the Matlab editor:
    1. `JOB_MAIN.m`: The job's main script. It is called when executing `RUN_JOB` (or `RUN_JOBS`) and acts as a wrapper. Notably, it can serve as a convenient interface to run parametric studies and to save intermediate simulation results.
    2. `Input_Scope.m`: Defines the scope of the simulation: which crack growth criteria to use, what to compute, and what results to show via plots and/or movies. Put simply, the script is a set of "switches" that tell the program what the user wants done.
    3. `Input_Material.m`: Defines the material's elastic properties in different regions or layers (called "phases") of the computational domain. Moreover, it defines the fracture toughness of the material (assumed to be constant in all material phases).
    4. `Input_Crack.m`: Defines the initial crack geometry.
    5. `Input_BC.m`: Defines boundary conditions, such as displacements, tractions, crack surface pressure (assumed to be constant in all cracks), and body loads (e.g. gravity, pre-stress or pre-strain).
    6. `Mesh_make.m`: In-house structured mesh generator for rectangular domains using either linear triangle or bilinear quadrilateral elements. It is possible to mesh horizontal layers using different mesh sizes.
    7. `Mesh_read.m`: Gmsh-based mesh reader for version-1 mesh files. Of course, you can use your own mesh reader provided the output variables are of the correct format (see below).
    8. `Mesh_file.m`: Specifies the mesh input file (`.msh`). At the moment, only Gmsh mesh files of version 1 are supported.

### Mesh_file.m

A mesh file needs to provide the following data or variables:

* `mNdCrd`: node coordinates, size = `[nNdStd,2]`
* `mLNodS`: element connectivities, size = `[nElemn,nLNodS]`
* `vElPhz`: element material phase (or region) IDs, size = `[nElemn,1]`
* `cBCNod`: cell of boundary nodes, cell size = `{nBound,1}`, cell element size = `[nBnNod,2]`

Example mesh files are located in `./JOBS_LIBRARY/`. The Gmsh version-1 file format is described [here](http://www.manpagez.com/info/gmsh/gmsh-2.4.0/gmsh_60.php).

### Additional notes

* Global variables are defined in `.\Routines_AuxInput\Declare_Global.m`
* External libraries are `.\Other_Libs\distmesh` and `.\Other_Libs\mesh2d`

### References

Two external meshing libraries are used for the local mesh refinement and remeshing at the crack tip during crack propagation, or prior to a crack intersection with another crack or with a boundary of the domain. These libraries, located in `.\Other_Libs\`, are:

* [*mesh2d*](https://people.sc.fsu.edu/~jburkardt/m_src/mesh2d/mesh2d.html) by Darren Engwirda
* [*distmesh*](http://persson.berkeley.edu/distmesh/) by Per-Olof Persson and Gilbert Strang

### Issues and Support

For support or questions please email [sutula.danas@gmail.com](mailto:sutula.danas@gmail.com).

### Authors

Danas Sutula, University of Luxembourg, Luxembourg.

If you find this code useful, we kindly ask that you consider citing us:

* [Minimum energy multiple crack propagation](http://hdl.handle.net/10993/29414)
nyaundid / EC2 AWS AND SHELL

SEIS 665 Assignment 2: Linux & Git

Overview

This week we will focus on becoming familiar with launching a Linux server and working with some basic Linux and Git commands. We will use AWS to launch and host the Linux server. AWS might seem a little confusing at this point. Don't worry, we will gain much more hands-on experience with AWS throughout the course. The goal is to get you comfortable working with the technology, not to overwhelm you with all the details.

Requirements

You need to have a personal AWS account and a GitHub account for this assignment. You should also read the Git Hands-on Guide and Linux Hands-on Guide before beginning this exercise.

A word about grading

One of the key DevOps practices we learn about in this class is the use of automation to increase the speed and repeatability of processes. Automation is utilized during the assignment grading process to review and assess your work. It's important that you follow the instructions in each assignment and type in required files and resources with the proper names. All names are case sensitive, so a name like "Web1" is not the same as "web1". If you misspell a name, use the wrong case, or put a file in the wrong directory location, you will lose points on your assignment. This is the easiest way to lose points, and also the most preventable. Always double-check your work to make sure it accurately reflects the requirements specified in the assignment, and carefully review the content of your files before submitting.

The assignment

Let's get started!

Create GitHub repository

The first step in the assignment is to set up a Git repository on GitHub. We will use a special solution called GitHub Classroom for this course, which automates the process of setting up student assignment repositories.
Here are the basic steps:

1. Click on the following link to open Assignment 2 on the GitHub Classroom site: https://classroom.github.com/a/K4zcVmX-
2. Click on the Accept this assignment button.
3. GitHub Classroom will provide you with a URL (https) to access the assignment repository. Either copy this address to your clipboard or write it down somewhere. You will need it to set up the repository on a Linux server. Example: https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git

At this point your new repository is ready to use. The repository is currently empty. We will put some content in there soon!

Launch Linux server

The second step in the assignment is to launch a Linux server using AWS EC2. The server should have the following characteristics:

* Amazon Linux 2 AMI 64-bit (usually the first option listed)
* Located in a U.S. region (us-east-1)
* t2.micro instance type
* All default instance settings (storage, VPC, security group, etc.)

I've shown you how to launch EC2 instances in class. You can review it on Canvas. Once you launch the new server, it may take a few minutes to provision.

Log into server

The next step is to log into the Linux server using a terminal program with secure shell (SSH) support. You can use iTerm2 on a Mac and Git Bash/PuTTY on a PC. You will need the private server key and the public IP address before attempting to log into the server. The server key is basically your password. If you lose it, you will need to terminate the existing instance and launch a new server. I recommend reusing the same key when launching new servers throughout the class. Note, I make this recommendation to make the learning process easier, not because it is a common security practice.
I've shown you how to use a terminal application to log into the instance using a Windows desktop. Your personal computer or lab computer may be running a different OS version, but the process is still very similar. You can review the videos on Canvas.

Working with Linux

If you've made it this far, congratulations! You've made it over the toughest hurdle. By the end of this course, I promise you will be able to launch and log into servers in your sleep. You should be looking at a login screen that looks something like this:

    Last login: Mon Mar 21 21:17:54 2016 from 174-20-199-194.mpls.qwest.net

           __|  __|_  )
           _|  (     /   Amazon Linux AMI
          ___|\___|___|

    https://aws.amazon.com/amazon-linux-ami/2015.09-release-notes/
    8 package(s) needed for security, out of 17 available
    Run "sudo yum update" to apply all updates.
    [ec2-user@ip-172-31-15-26 ~]$

Your terminal cursor is sitting at the shell prompt, waiting for you to type in your first command. Remember the shell? It is a really cool program that lets you start other programs and manage services on the Linux system. The rest of this assignment will be spent working with the shell. Note, when you are asked to type in a command in the steps below, don't type in the dollar-sign ($) character. This is just meant to represent the command prompt. The actual commands are the characters to the right of the prompt.

Let's start by asking the shell for some help. Type in:

    $ help

The shell provides you with a list of commands you can run along with possible command options. Next, check out one of the pages in the built-in manual:

    $ man ls

A man page will appear with information on how to use the ls command. This command is used to list the contents of file directories. Either space through the contents of the man page or hit q to exit. Most of the core Linux commands have man pages available. But honestly, some of these man pages are a bit hard to understand.
Sometimes your best bet is to search on Google if you are trying to figure out how to use a specific command.

When you initially log into Linux, the system places you in your home directory. Each user on the system has a separate home directory. Let's see where your home directory is located:

    $ pwd

The response should be /home/ec2-user. The pwd command is handy to remember if you ever forget what file directory you are currently located in. If you recall from the Linux Hands-on Guide, this directory is also your current working directory. Type in:

    $ cd /

The cd command lets you change to a new working directory on the server. In this case, we changed to the root (/) directory. This is the parent of all the other directories on the file system. Type in:

    $ ls

The ls command lists the contents of the current directory. As you can see, the root directory contains many other directories. You will become familiar with these directories over time. The ls command provides a very basic directory listing. You need to supply the command with some options if you want to see more detailed information. Type in:

    $ ls -la

See how this command provides you with much more detailed information about the files and directories? You can use this detailed listing to see the owner, group, and access control list settings for each file or directory. Do you see any files listed? Remember, the first character in the access control list column denotes whether a listed item is a file or a directory. You probably see a couple of files with names like .autofsck. How come you didn't see this file when you typed in the ls command without any options? (Try to run this command again to convince yourself.) File names that start with a period are called hidden files. These files won't appear in normal directory listings. Type in:

    $ cd /var

Then, type in:

    $ ls

You will see a directory listing for the /var directory. Next, type in:

    $ ls ..

Huh.
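The navigation commands above can be strung together into one small script. This is just a sketch of the same steps, assuming a standard Linux filesystem layout (any machine with /var will do, not only the EC2 instance):

```shell
#!/bin/bash
# Sketch of the navigation commands above.
cd /            # change to the root directory
pwd             # print the current working directory: /
ls              # basic listing of the root directory
ls -la          # detailed listing, including hidden (dot) files
cd /var         # change into /var...
ls              # ...and list its contents
ls ..           # list the parent directory (the root directory again)
```

Each command here is read-only, so the script is safe to run anywhere.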
This directory listing looks the same as the earlier root directory listing. When you use two periods (..) in a directory path, you are referring to the parent directory of the current directory. Just think of the two dots as meaning the directory above the current directory. Now, type in:

    $ cd ~
    $ pwd

Whoa. We're back at our home directory again. The tilde character (~) is another one of those handy little directory path shortcuts. It always refers to our personal home directory. Keep in mind that since every user has their own home directory, the tilde shortcut will refer to a unique directory for each logged-in user.

Most students are used to navigating a file system by clicking a mouse in nested graphical folders. When they start using a command line to navigate a file system, they sometimes get confused and lose track of their current position in the file system. Remember, you can always use the pwd command to quickly figure out what directory you are currently working in.

Let's make some changes to the file system. We can easily make our own directories. Type:

    $ mkdir test

Now type:

    $ ls

Cool, there's our new test directory. Let's pretend we don't like that directory name and delete it. Type:

    $ rmdir test

Now it's gone. How can you be sure? You should know how to check whether the directory still exists at this point. Go ahead and check. Let's create another directory. Type in:

    $ mkdir documents

Next, change to the new directory:

    $ cd documents

Did you notice that your command prompt displays the name of the current directory? Something like: [ec2-user@ip-172-31-15-26 documents]$. Pretty handy, huh? Okay, let's create our first file in the documents directory. This is just an empty file for training purposes. Type in:

    $ touch paper.txt

Check to see that the new file is in the directory. Now, go back to the previous directory. Remember the double dot shortcut?

    $ cd ..

Okay, we don't like our documents directory any more.
Let's blow it away. Type in:

    $ rmdir documents

Uh oh. The shell didn't like that command because the directory isn't empty. Let's change back into the documents directory. But this time, don't type in the full name of the directory. You can let shell auto-completion do the typing for you. Type in the first couple characters of the directory name and then hit the tab key:

    $ cd doc<tab>

You should use the tab auto-completion feature often. It saves typing and makes working with the Linux file system much, much easier. Tab is your friend. Now, remove the file by typing:

    $ rm paper.txt

Did you try to use the tab key instead of typing in the whole file name? Check to make sure the file was deleted from the directory. Next, create a new file:

    $ touch file1

We like file1 so much that we want to make a backup copy. Type:

    $ cp file1 file1-backup

Check to make sure the new backup copy was created. We don't really like the name of that new file, so let's rename it. Type:

    $ mv file1-backup backup

Moving a file to the same directory and giving it a new name is basically the same thing as renaming it. We could have moved it to a different directory if we wanted. Let's list all of the files in the current directory that start with the letter f:

    $ ls f*

Using wildcard pattern matching in file commands is really useful if you want a command to affect or filter a group of files. Now, go up one directory to the parent directory (remember the double dot shortcut?). We tried to remove the documents directory earlier when it had files in it. Obviously that won't work again. However, we can use a more powerful command to destroy the directory and vanquish its contents. Behold, the all-powerful remove command:

    $ rm -fr documents

Did you remember to use auto-completion when typing in documents? This command and set of options forcibly removes the directory and its contents. It's a dangerous command wielded by the mightiest Linux wizards. Okay, maybe that's a bit of an exaggeration.
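The file-management steps above can also be replayed as one script. This sketch runs in a throwaway scratch directory (via mktemp) rather than your home directory, so it is safe to experiment with:

```shell
#!/bin/bash
# Sketch of the file-management commands above, in a throwaway directory.
set -e
cd "$(mktemp -d)"       # scratch directory, so nothing real is touched
mkdir documents
cd documents
touch paper.txt         # create an empty file
rm paper.txt            # ...and delete it again
touch file1
cp file1 file1-backup   # make a backup copy
mv file1-backup backup  # renaming is just moving within the same directory
ls f*                   # wildcard match: lists file1
cd ..
rm -fr documents        # forcibly remove the directory and its contents
```

Note that rmdir would have refused to delete the non-empty directory; rm -fr is the forceful variant the text describes.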
Just be careful with it. Check to make sure the documents directory is gone before proceeding.

Let's continue. Change to the directory /var and make a directory called test. Ugh. Permission denied. We created this darn Linux server and we paid for it. Shouldn't we be able to do anything we want on it? You logged into the system as a user called ec2-user. While this user can create and manage files in its home directory, it cannot change files all across the system. At least, it can't as a normal user. The ec2-user is a member of the root group, so it can escalate its privileges to super-user status when necessary. Let's try it:

    $ sudo mkdir test

Check to make sure the directory exists now. Using sudo we can execute commands as a super-user. We can do anything we want now that we know this powerful new command. Go ahead and delete the test directory. Did you remember to use sudo before the rmdir command? Check to make sure the directory is gone.

You might be asking yourself: why can we list the contents of the /var directory but not make changes? That's because all users have read access to the /var directory, and the ls command is a read operation. Only the root user, or those acting as a super-user, can write changes to the directory.

Let's go back to our home directory:

    $ cd ~

Editing text files is a really common task on Linux systems because many application configuration files are text files. We can create a text file using a text editor. Type in:

    $ nano myfile.conf

The shell starts up the nano text editor and places your terminal cursor in the editing screen. Nano is a simple text-based word processor. Type in a few lines of text. When you're done writing your novel, hit ctrl-x and answer y to the prompt to save your work. Finally, hit enter to save the text to the filename you specified. Check to see that your file was saved in the directory.
You can take a look at the contents of your file by typing:

    $ cat myfile.conf

The cat command displays your text file content on the terminal screen. This command works fine for displaying small text files. But if your file is hundreds of lines long, the content will scroll down your terminal screen so fast that you won't be able to easily read it. There's a better way to view larger text files. Type in:

    $ less myfile.conf

The less command will page the display of a text file, allowing you to page through the contents of the file using the space bar. Your text file is probably too short to see the paging in action, though. Hit q to quit out of the less text viewer.

Hit the up-arrow key on your keyboard a few times until the command nano myfile.conf appears next to your command prompt. Cool, huh? The up-arrow key allows you to replay a previously run command. Linux maintains a list of all the commands you have run since you logged into the server. This is called the command history. It's a really useful feature if you have to re-run a complex command. Now, hit ctrl-c. This cancels whatever command is displayed on the command line.

Type in the following command to create a couple of empty files in the directory:

    $ touch file1 file2 file3

Confirm that the files were created. Some commands, like touch, allow you to specify multiple files as arguments. You will find that Linux commands have all kinds of ways to make tasks more efficient like this.

Throughout this assignment, we have been running commands and viewing results on the terminal screen. The screen is the standard place for commands to output results. It's known as standard output (stdout). However, it's sometimes really useful to send results to the file system instead. Type in:

    $ ls > listing.txt

Take a look at the directory listing now. You just created a new file. View the contents of the listing.txt file. What do you see?
Instead of sending the output from the ls command to the screen, we sent it to a text file. Let's try another one. Type:

    $ cat myfile.conf > listing.txt

Take a look at the contents of the listing.txt file again. It looks like your myfile.conf file now. It's like you made a copy of it. But what happened to the previous content of the listing.txt file? When you redirect the output of a command using the right angle-bracket character (>), the output overwrites the existing file. Type this command in:

    $ cat myfile.conf >> listing.txt

Now look at the contents of the listing.txt file. You should see your original content displayed twice. When you use two angle-bracket characters in the command, the output appends to (adds to) the file instead of overwriting it.

We redirected the output from a command to a text file. It's also possible to redirect the input to a command. Typically we use a keyboard to provide input, but sometimes it makes more sense to feed a file to a command. For example, how many words are in your new listing.txt file? Let's find out. Type in:

    $ wc -w < listing.txt

Did you get a number? This command feeds the listing.txt file into a word count program called wc. Type in the command:

    $ ls /usr/bin

The terminal screen probably scrolled quickly as filenames flashed by. The /usr/bin directory holds quite a few files. It would be nice if we could page through the contents of this directory. Well, we can. We can use a special shell feature called pipes. In the previous steps, we redirected I/O using the file system. Pipes allow us to redirect I/O between programs: we can redirect the output from one program into another. Type in:

    $ ls /usr/bin | less

Now the directory listing is paged. Hit the spacebar to page through the listing. The pipe, represented by a vertical bar character (|), takes the output from the ls command and redirects it to the less command, where the resulting output is paged.
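The redirection and pipe examples above can be sketched non-interactively. In this sketch, printf stands in for the file you edited with nano, and head stands in for less (less is interactive and would block a script); everything runs in a scratch directory:

```shell
#!/bin/bash
# Sketch of the redirection and pipe examples above, in a scratch directory.
set -e
cd "$(mktemp -d)"
printf 'one line of text\n' > myfile.conf   # stand-in for the nano-edited file
ls > listing.txt                # > creates (or overwrites) listing.txt
cat myfile.conf > listing.txt   # listing.txt now holds one copy of myfile.conf
cat myfile.conf >> listing.txt  # >> appends a second copy instead of overwriting
wc -w < listing.txt             # < feeds the file to wc's standard input
ls /usr/bin | head -3           # a pipe sends ls output into another program
```

The same three operators (>, >>, <) and the pipe (|) work with any command, which is what makes them so broadly useful.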
Pipes are super powerful and used all the time by savvy Linux operators. Hit the q key to quit the paginated directory listing command.

Working with shell scripts

Now things are going to get interesting. We've been manually typing in commands throughout this exercise. If we were running a set of repetitive tasks, we would want to automate the process as much as possible. The shell makes it really easy to automate tasks using shell scripts. The shell provides many of the same features as a basic procedural programming language. Let's write some code. Type in these commands:

    $ j=123
    $ echo $j

We just created a variable named j referencing the string 123. The echo command printed out the value of the variable. We had to use a dollar sign ($) when referencing the variable in another command. Next, type in:

    $ j=1+1
    $ echo $j

Is that what you expected? The shell just interprets the variable value as a string. It's not going to do any sort of computation.

Typing shell script commands at the command line one at a time is of limited use. We want to be able to create scripts that we can run over and over. Let's create our first shell script. Use the nano editor to create a file named myscript. When the file is open in the editor, type in the following lines of code:

    #!/bin/bash
    echo Hello $1

Now quit the editor and save your file. We can run our script by typing:

    $ ./myscript World

Er, what happened? Permission denied. Didn't we create this file? Why can't we run it? We can't run the script file because we haven't set the execute permission on it. Type in:

    $ chmod u+x myscript

This modifies the file's access control list to allow the owner of the file to execute it. Let's try to run the command again. Hit the up-arrow key a couple times until the ./myscript World command is displayed, and hit enter. Hooray! Our first shell script. It's probably a bit underwhelming. No problem, we'll make it a little more complex. The script took a single argument called World.
Any arguments provided to a shell script are represented as consecutively numbered variables inside the script ($1, $2, etc.). Pretty simple.

You might be wondering why we had to type the ./ characters before the name of our script file. Try to type in the command without them:

    $ myscript World

Command not found. That seems a little weird. Aren't we currently in the directory where the shell script is located? Well, that's just not how the shell works. When you enter a command into the shell, it looks for the command in a predefined set of directories on the server called your PATH. Since your script file isn't in your path, the shell reports it as not found. By typing the ./ characters before the command name, you are forcing the shell to look for your script in the current directory instead of the default path.

Create another file called cleanup using nano. In the file editor window type:

    #!/bin/bash
    # My cleanup script
    mkdir archive
    mv file* archive

Exit the editor window and save the file. Change the permissions on the script file so that you can execute it. Now run the command:

    $ ./cleanup

Take a look at the file directory listing. Notice the archive directory? List the contents of that directory. The script automatically created a new directory and moved three files into it. Anything you can do manually at a command prompt can be automated with a shell script.

Let's create one more shell script. Use nano to create a script called namelist. Here is the content of the script:

    #!/bin/bash
    # for-loop test script
    names='Jason John Jane'
    for i in $names
    do
      echo Hello $i
    done

Change the permissions on the script file so that you can execute it. Run the command:

    $ ./namelist

The script will loop through a set of names stored in a variable, displaying each one. Shell scripts support several programming constructs such as for loops, while loops, and if-then-else statements. These building blocks allow you to create fairly complex scripts for automating tasks.
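The write/chmod/run cycle above can be reproduced without an interactive editor. This sketch uses cat with a here-document in place of nano to write the same two scripts, then makes them executable and runs them in a scratch directory:

```shell
#!/bin/bash
# Recreate myscript and namelist from the text above, using here-documents
# instead of the nano editor, in a throwaway directory.
set -e
cd "$(mktemp -d)"

cat > myscript <<'EOF'
#!/bin/bash
echo Hello $1
EOF
chmod u+x myscript   # grant the owner execute permission
./myscript World     # prints: Hello World

cat > namelist <<'EOF'
#!/bin/bash
# for-loop test script
names='Jason John Jane'
for i in $names
do
  echo Hello $i
done
EOF
chmod u+x namelist
./namelist           # greets Jason, John and Jane in turn
```

Without the chmod step, both ./myscript and ./namelist would fail with "Permission denied", exactly as described above.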
Installing packages and services

We're nearing the end of this assignment. But before we finish, let's install some new software packages on our server. The first thing we should do is make sure all the packages currently installed on our Linux server are up to date. Type in:

    $ sudo yum update -y

This is one of those really powerful commands that requires sudo access. The system will review the currently installed packages, go out to the Internet, and download the appropriate updates. Next, let's install an Apache web server on our system. Type in:

    $ sudo yum install httpd -y

Bam! You probably never knew that installing a web server was so easy. We're not going to actually use the web server in this exercise, but we will in future assignments. We installed the web server, but is it actually running? Let's check. Type in:

    $ sudo service httpd status

Nope. Let's start it. Type:

    $ sudo service httpd start

We can use the service command to control the services running on the system. Let's set up the service so that it automatically starts when the system boots. Type in:

    $ sudo chkconfig httpd on

Cool. We installed the Apache web server on our system, but what other programs are currently running? We can use the ps command to find out. Type in:

    $ ps -ax

Lots of processes are running on our system. We can even look at the overall performance of our system using the top command. Let's try that now. Type in:

    $ top

The display might seem a little overwhelming at first. You should see lots of performance information displayed, including CPU usage, free memory, and a list of running tasks.

We're almost across the finish line. Let's make sure all of our valuable work is stored in a git repository. First, we need to install git. Type in the command:

    $ sudo yum install git -y

Check your work

It's very important to check your work before submitting it for grading. A misspelled, misplaced, or missing file will cost you points.
This may seem harsh, but the reality is that these sorts of mistakes have consequences in the real world. For example, a server instance could fail to launch properly and impact customers because a single required file is missing. Here is what the contents of your git repository should look like before final submission:

    ┣ archive
    ┃ ┣ file1
    ┃ ┣ file2
    ┃ ┗ file3
    ┣ namelist
    ┗ myfile.conf

Saving our work in the git repository

Next, make sure you are still in your home directory (/home/ec2-user). We will clone the git repository you created at the beginning of this exercise. You will need to modify this command by typing in the GitHub repository URL you copied earlier:

    $ git clone <your GitHub URL here>.git

Example: git clone https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git

The git application will ask you for your GitHub username and password. Note, if you have multi-factor authentication enabled on your GitHub account, you will need to provide a personal access token instead of your password. Git will clone (copy) the repository from GitHub to your Linux server. Since the repository is empty, the clone happens almost instantly. Check to make sure that a sub-directory called hw2-seis665-02-spring2019-<username> exists in the current directory (where <username> is your GitHub account name). Git automatically created this directory as part of the cloning process. Change to the hw2-seis665-02-spring2019-<username> directory and type:

    $ ls -la

Notice the .git hidden directory? This is where git actually stores all of the file changes in your repository. Nothing is actually in your repository yet. Change back to the parent directory (cd ..). Next, let's move some of our files into the repository. Type:

    $ mv archive hw2-seis665-02-spring2019-<username>
    $ mv namelist hw2-seis665-02-spring2019-<username>
    $ mv myfile.conf hw2-seis665-02-spring2019-<username>

Hopefully, you remembered to use the auto-complete function to reduce some of that typing.
Change to the hw2-seis665-02-spring2019-<username> directory and list the directory contents. Your files are in the working directory, but are not actually stored in the repository because they haven't been committed yet. Type in:

    $ git status

You should see a list of untracked files. Let's tell git that we want these files tracked. Type in:

    $ git add *

Now type in the git status command again. Notice how all the files are now being tracked and are ready to be committed. These files are in the git staging area. We'll commit them to the repository next. Type:

    $ git commit -m 'assignment 2 files'

Next, take a look at the commit log. Type:

    $ git log

You should see your commit listed along with an assigned hash (a long string of random-looking characters). Finally, let's save the repository to our GitHub account. Type in:

    $ git push origin master

The git client will ask you for your GitHub username and password before pushing the repository. Go back to the GitHub.com website and log in if you have been logged out. Click on the repository link for the assignment. Do you see your files listed there? Congratulations, you completed the exercise!

Terminate server

The last step is to terminate your Linux instance. AWS will bill you for every hour the instance is running. The cost is nominal, but there's no need to rack up unnecessary charges. Here are the steps to terminate your instance:

1. Log into your AWS account and click on the EC2 dashboard.
2. Click the Instances menu item.
3. Select your server in the instances table.
4. Click on the Actions drop-down menu above the instances table.
5. Select the Instance State menu option.
6. Click on the Terminate action.

Your Linux instance will shut down and disappear in a few minutes. The EC2 dashboard will continue to display the instance in your instance listing for another day or so. However, the state of the instance will be terminated.

Submitting your assignment — IMPORTANT!
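The status/add/commit/log workflow can be rehearsed entirely offline before you touch the real assignment repository. In this sketch, a throwaway local repository created with git init stands in for the GitHub Classroom clone, the identity (student / student@example.com) is a made-up placeholder, and the push step is omitted since it needs the real remote and credentials:

```shell
#!/bin/bash
# Rehearse the add/commit/log workflow in a throwaway local repository.
# `git init` stands in for the GitHub Classroom clone; no remote is used.
set -e
cd "$(mktemp -d)"
git init -q hw2-demo
cd hw2-demo
mkdir archive
touch archive/file1 archive/file2 archive/file3 namelist myfile.conf
git status --short             # shows the files as untracked
git add .                      # stage everything (the git staging area)
git -c user.name=student -c user.email=student@example.com \
    commit -q -m 'assignment 2 files'
git log --oneline              # one commit, with its abbreviated hash
```

The -c flags set a one-off commit identity so the sketch works even on a machine where git config has never been run; on your own server you would normally configure user.name and user.email once instead.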
If you haven’t already, please e-mail me your GitHub username in order to receive credit for this assignment. There is no need to email me to tell me that you have committed your work to GitHub or to ask me if your GitHub submission worked. If you can see your work in your GitHub repository, I can see your work.
abhilash12iec002 / Penetration Depth Evaluation Of L And S Band SAR Signals: We study the functional relationship between the dielectric constant of a soil-water mixture and the penetration depth of microwave signals into the ground at different frequency bands (L and S) and incidence angles. The penetration depth of microwave signals into the ground depends on the incidence angle and wavelength of the radar pulses, and also on soil properties such as moisture content and textural composition. It has been observed that longer wavelengths penetrate deeper into the soil, but the penetration capability decreases with the increasing dielectric behaviour of the soil. Moisture content in the soil can significantly increase its dielectric constant. Various empirical models have been proposed that evaluate the dielectric behaviour of a soil-water mixture as a function of the moisture content and texture of the soil. In this analysis we have used two such empirical models, the Dobson model and the Hallikainen model, to calculate the penetration depth at L- and S-band in soil and compared their results. We found that the two models give different penetration depths and show different sensitivities towards the soil composition; the Hallikainen model is more sensitive to soil composition than the Dobson model. Finally, we explore the penetration depth at different incidence angles for the proposed L- and S-band sensor of the upcoming NASA-ISRO Synthetic Aperture Radar (NISAR) mission using the Hallikainen empirical model. We found that the penetration depth of SAR signals into the ground decreases with increasing soil moisture content, incidence angle, and frequency.

References

[1] A. Singh, G. K. Meena, S. Kumar and K. Gaurav, "Evaluation of the Penetration Depth of L- and S-Band (NISAR mission) Microwave SAR Signals into Ground," 2019 URSI Asia-Pacific Radio Science Conference (AP-RASC), New Delhi, India, 2019, pp. 1-1.
doi: 10.23919/URSIAP-RASC.2019.8738217. Keywords: Synthetic aperture radar; Dielectrics; Moisture; Soil moisture; Sensors; Remote sensing. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8738217&isnumber=8738126

[2] Singh, A., Meena, G. K., Kumar, S., and Gaurav, K.: "Analysis of the Effect of Incidence Angle and Moisture Content on the Penetration Depth of L- and S-Band SAR Signals into the Ground Surface," ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-5, 197-202, https://doi.org/10.5194/isprs-annals-IV-5-197-2018, 2018.

[3] Abhilash Singh (2019). Penetration depth evaluation at L- and S-band SAR signals (https://www.mathworks.com/matlabcentral/fileexchange/73040-penetration-depth-evaluation-at-l-and-s-band-sar-signals), MATLAB Central File Exchange. Retrieved October 19, 2019.
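As a rough illustration of the relationship described above, here is a minimal Python sketch using the standard low-loss approximation for penetration depth. This is a generic textbook formula, not the Dobson or Hallikainen model from the referenced papers, and the dielectric values below are made up for illustration only:

```python
import math

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """Low-loss approximation of microwave penetration depth:
    delta_p = lambda * sqrt(eps') / (2 * pi * eps''), valid when eps'' << eps'."""
    return wavelength_m * math.sqrt(eps_real) / (2 * math.pi * eps_imag)

# Illustrative (made-up) dielectric values for dry vs. moist soil at L-band (~23.5 cm).
dry = penetration_depth(0.235, eps_real=4.0, eps_imag=0.1)
wet = penetration_depth(0.235, eps_real=20.0, eps_imag=2.0)
# Moisture raises the imaginary part of the dielectric constant sharply,
# so the wet soil absorbs more and the signal penetrates less deep.
print(dry > wet)
```

The actual models compute eps' and eps'' from moisture content and soil texture; this sketch only shows how the resulting dielectric constant drives the depth.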
DevashishPrasad / Angle Distance: Finding the distance of objects from a reference object or origin, and their angle with respect to the x-axis.
ScottJohnson2718 / ChangeOfBasis: Performs transforms between reference frames on vectors, quaternions, matrices, and Euler angles.
NetFabric / NetFabric.Angle: An immutable double-precision angle type that supports revolutions, radians, degrees, and gradians.
OptimizationExpert / GAMS DC OPF: The current DC optimal power flow (OPF) model finds the optimal operating schedules of generating units considering transmission line constraints. The objective function is defined as the total operating cost of the generating units. The model type is LP, and it is applied to a 5-bus PJM test system as described in the following reference: F. Li and R. Bo, "Small test systems for power system economic studies," IEEE PES General Meeting, Minneapolis, MN, 2010, pp. 1-4. doi: 10.1109/PES.2010.5589973. The proposed model is able to handle large-scale data sets for practical power system transmission networks.

Inputs:
- Generator costs and technical characteristics (min/max output, connection point)
- Network characteristics (reactances, demands, line flow limits)

Outputs:
- Locational marginal prices (LMP)
- Generator outputs
- Line flows
- Angle values

Contributed by Dr. Alireza Soroudi, University College Dublin, Dublin, Ireland. email: alireza.sor
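To illustrate the structure of a DC OPF as an LP, here is a toy 2-bus sketch in Python (not the GAMS model or the PJM 5-bus data from the description; all costs, reactances, and limits below are invented for illustration):

```python
# Toy 2-bus DC OPF: minimize generation cost subject to DC power balance
# and a thermal line limit. Variables: p1, p2 (MW) and theta2 (rad); bus 1 is slack.
from scipy.optimize import linprog

x12, limit, demand2 = 0.1, 60.0, 100.0   # line reactance, flow limit (MW), load (MW)
c = [10.0, 30.0, 0.0]                    # linear costs of p1 and p2; theta2 is free

# DC balance: p1 = flow12 at bus 1 and p2 + flow12 = demand at bus 2,
# where flow12 = (theta1 - theta2) / x12 and theta1 = 0.
A_eq = [[1.0, 0.0, 1.0 / x12],
        [0.0, 1.0, -1.0 / x12]]
b_eq = [0.0, demand2]
# Thermal limit: |flow12| <= limit, written as two one-sided inequalities.
A_ub = [[0.0, 0.0, -1.0 / x12],
        [0.0, 0.0, 1.0 / x12]]
b_ub = [limit, limit]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 150), (0, 150), (None, None)])
p1, p2, theta2 = res.x
# The cheap remote unit is capped by the 60 MW line, so the expensive
# local unit must cover the remaining 40 MW of load.
print(round(p1), round(p2))  # 60 40
```

In the full model, the duals of the power-balance constraints give the locational marginal prices listed among the outputs.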
jnuber / RaspberryPi Object Tracking USB WebCam
# This object tracking solution utilizes the "Triangle Similarity" method.
# In brief, triangle similarity takes an object (marker) with a known
# width. The object is placed at some known distance from the camera, preferably
# the same camera to be used for detection and tracking.
#
# A photo/image is taken of the object using the same camera to be used for
# object detection. We then measure the apparent width in pixels. Using
# this image as a fixed reference point, if the camera moves away from the
# object, the number of pixels measuring the object's width decreases. If
# the camera moves closer to the object, the number of pixels increases.
# One can calculate the distance with pretty good accuracy. The higher the
# quality of the reference image, the better the accuracy. These attributes
# include: good lighting, good color contrast, an accurate distance, and a camera
# angle as close to 90 degrees as possible. In some cases, camera calibration and
# focal length may be required. However, I found that using the piCam, pinhole or
# fish-eye distortion wasn't an issue.
#
# Solution Approach
# This solution uses a reference or calibration image of the object to
# track. The object/marker width is determined by the number of pixels.
# Once this has been established, the main loop does the following:
# - looks for an object with similar contours as the reference object
# - if found, the target is identified
# - determines the target width in pixels
# - calculates distance based on the % difference from the
#   reference/calibration image
# - locates the target center
# - displays/prints target details
# - for Raspberry Pi, gets the CPU temperature as well.
#
# This version previews the target, paints the target's borders and displays
# target data on the preview screen. This is an excellent method of viewing
# and debugging the code. Also included are performance stats for the
# various functions.
# If using this for robotics or in autonomous mode, all of the
# above can be commented out/removed for maximum performance. Through testing,
# with good target LIGHTING, I was achieving 30+ FPS (Raspberry Pi 3,
# piCam ver2, multithreaded camera streaming feed, Python 3.6).
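The triangle-similarity calculation described above can be sketched in a few lines of Python. The function names and calibration numbers here are hypothetical, not taken from the project's code:

```python
def focal_length_px(known_distance, known_width, width_in_pixels):
    """Calibrate from one reference photo: F = (P * D) / W."""
    return (width_in_pixels * known_distance) / known_width

def distance_to_camera(known_width, focal_px, width_in_pixels):
    """Triangle similarity: D' = (W * F) / P."""
    return (known_width * focal_px) / width_in_pixels

# Hypothetical calibration: a 30 cm wide marker photographed at 100 cm
# appears 300 px wide in the reference image.
F = focal_length_px(known_distance=100.0, known_width=30.0, width_in_pixels=300.0)

# Later the same marker measures only 150 px wide: half the pixels,
# so it is twice as far from the camera.
print(distance_to_camera(30.0, F, 150.0))  # 200.0 (cm)
```

In the real loop, `width_in_pixels` would come from the contour-matching step that finds the target in each frame.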
RashmiS5 / SAM Classification Of Satellite Images: The project aims at explaining the usage of the SAM algorithm for satellite image classification. A hyperspectral image provides a per-pixel spectrum that carries detailed information about a surface, making it possible to identify and distinguish between spectrally similar (but unique) materials. The hyperspectral image sensor placed on board a remote sensing satellite captures hyperspectral images across various bands of the spectrum. Experiments are carried out for the implementation of the Spectral Angle Mapper (SAM) on hyperspectral images for the classification of pixels on the surface. A false color composite of the image is also produced for better visualization of surface differences. The hyperspectral images of the various bands are stacked one after the other to form a three-dimensional cube of images for the SAM implementation. SAM is a supervised classification algorithm which identifies the various classes in the image based on the calculation of the spectral angle. The spectral angle is calculated between the test vector built for each pixel and the reference vector built for each reference class selected by the user. Results are obtained by reading and reorganizing multiple 2-D datasets into a single compact 3-D dataset cube. The reference vector is built for performing SAM classification, and the angle between the reference vector and the pixel vector is calculated and compared with a determined threshold angle value. Color coding is then applied to distinguish between the various classes that have been recognized by the SAM algorithm. Hence, using SAM, hyperspectral images are analyzed to extract thematic information such as land cover, water bodies, and clouds.
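The core of SAM, as described above, is the angle between a pixel's spectrum and each reference class spectrum. A minimal NumPy sketch (toy 3-band spectra, not the project's data) might look like this:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum and a
    reference class spectrum; a smaller angle means a closer spectral match."""
    cos_a = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def classify(pixel, references, threshold):
    """Assign the pixel to the reference class with the smallest angle,
    or return -1 (unclassified) if no angle falls below the threshold."""
    angles = [spectral_angle(pixel, r) for r in references]
    best = int(np.argmin(angles))
    return best if angles[best] <= threshold else -1

# Toy 3-band reference spectra for two classes.
refs = [np.array([0.2, 0.5, 0.9]), np.array([0.8, 0.3, 0.1])]
# This pixel is a scaled copy of class 0, so its spectral angle to it is ~0;
# SAM is insensitive to overall brightness, only to spectral shape.
print(classify(np.array([0.4, 1.0, 1.8]), refs, threshold=0.1))  # 0
```

On a full image cube, the same computation runs per pixel over all bands, and color coding is assigned from the returned class index.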
SullyVo / Lendingclub Loan

1 Introduction

LendingClub is a peer-to-peer lending company whose product allows consumers to both invest in and borrow loans. They offer multiple kinds of loans, such as student loans, personal loans, auto refinancing loans, and even business loans. Borrowers who are interested in obtaining loans are assigned a loan grade, which affects their interest rates and the amount of money they can borrow. A lot of the LendingClub data leads to insightful conclusions about the borrowing and investing patterns of all kinds of individuals. Through our investigation, we will explain patterns and similarities in the behaviors of borrowers and investors.

1.1 Questions of Interest

We intend to start off with exploratory data analysis of all the factors involved to find patterns and relationships. We will look at the data from multiple angles to get a sense of the intricacies that lie within the data. We will additionally match the trends we see in the data to external events to try to explain why they happen. We will also conduct a time series decomposition of the average loan amounts being requested. We will take a look at the trend and the seasonality so that we can better forecast spikes in demand. Afterwards, we will try to fit prediction models in order to answer a couple of questions: namely, whether a loan request from a client should be funded or not, from the perspective of the bank, and what interest rate a borrower would get for a loan, from the perspective of a client. After finding good models, we will deconstruct them in order to get a deeper sense of the important aspects of such decisions.

1.2 Dataset

The dataset we are using is a compilation of data on loans issued by LendingClub over the period 2007 to 2015. The data includes information on the current loan status (how much has been funded so far, how much has been paid off, etc.) as well as information about the borrower (occupation, income, credit score, etc.).
This data lends itself to a variety of interesting financial analyses, notably time series analysis, since the data is date-stamped. We will touch on a number of variables present in the dataset throughout the course of this analysis. We consolidate the meanings of all these variables here for future reference.

• loan_amnt: listed amount of the loan applied for by the borrower
• funded_amnt: total amount committed to that loan at that point in time
• funded_amnt_inv: total amount committed by investors for that loan at that point in time
• term: number of payments on the loan. Values are in months and can be either 36 or 60
• int_rate: interest rate on the loan
• installment: monthly payment owed by the borrower
• grade: loan grade that corresponds to the risk of the loan
• loan_status: current status of the loan
• total_bal_il: total current balance of all installment accounts
• emp_title: job title of the borrower
• next_pymnt_d: next scheduled payment date
• sec_app_mort_acc: number of mortgage accounts at time of application for the secondary applicant

University of California, Davis

More information about the dataset can be found here: https://www.kaggle.com/wendykan/lending-club-loan-data
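The monthly-average step that precedes the time series decomposition mentioned above can be sketched with pandas. The data here is synthetic, and `issue_d` is an assumed name for the loan's issue-date column (only `loan_amnt` appears in the data dictionary above):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the LendingClub data: random issue dates over
# 2007-2015 and random requested amounts. `issue_d` is an assumed column name.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "issue_d": pd.to_datetime(rng.choice(pd.date_range("2007-01-01", "2015-12-31"), 500)),
    "loan_amnt": rng.integers(1_000, 35_000, 500),
})

# Average requested amount per month: the univariate series one would feed
# to a trend/seasonality decomposition (e.g. statsmodels' seasonal_decompose).
monthly_avg = df.set_index("issue_d")["loan_amnt"].resample("MS").mean()
print(monthly_avg.head())
```

With the real dataset, the same `resample("MS").mean()` call on the issue-date index produces the demand series whose trend and seasonal spikes the analysis sets out to forecast.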
ZulqarnainZilli / 9 Email Marketing Tips For Content Marketers

Even "agnostics" of email marketing can't argue with the following evidence: the average ROI from this promotional practice is close to 3,800%. Limitless opportunities to scale up and relative cheapness compared to other outreach channels are two reasons why email marketing is favored by businesses. However, this is not about price and reach alone. The chief advantage is a better alignment of communication with customers. If you hope that a certain content strategy will bring the desired results, overlooking the quality of your mailing messages would be a sorry pitfall. Always keep in mind that newsletters, welcome emails, retention emails, and other messages are not just a brand's facade, but a powerful tool for generating conversions. By aligning your email and content strategies, you can create synergy between the two. In this guide, we'll cover a few recommendations for content marketers on how to write email messages that work.

Tips for email marketing

Segment your list

Split the batch of email recipients into smaller groups based on chosen criteria, and mail a distinct, relevant message to each. You can use recipients' geography, demographic characteristics, or purchase history to distinguish homogeneous clusters and proceed with content planning. Segmentation is the basic premise for personalization, and if you still doubt why you should bother with the latter, here are just a few numbers we took from Instapage:

- 52% of customers claim they do care whether a message was tailor-made or not
- 82% of marketers say that mail personalization increases the open rate
- custom emails have 41% more unique clicks than mass-produced ones.

To avoid a fragmented approach, use data from CRMs, website analytics tools, and other sources to define segments. Concerning phrasing, a good idea is to create buyer persona profiles.
Thus, you'll be able to choose the appropriate message length and wording. Say you're designing a newsletter to promote a paid subscription to an email validator service. You've decided to distinguish corporate clients based on their company size and determined the following groups: #1 - B2Bs and #2 - sole entrepreneurs. Possible messages for the two:

#1. Our "XXL" plan is perfect for agencies and enterprises. You can add unlimited users and conduct up to 100,000 checks per month.

#2. With our "S" plan you get 1,000 credits and 5,000 unique recipients - for only $33 per month. Plus a 7-day free trial.

Use interactive content

The best content marketers know that interactive content came into vogue a long time ago. As for emails, here are the most common examples:

CSS-animated buttons

If you include CTA buttons (which we hope you do), liven them up a bit. Add an animated hover effect, so that every time a recipient puts a cursor on a button, it changes shape, shade, color, or text. ("Add hover to emphasise objects", source) This shouldn't necessarily be something dramatic; add tiny accents that will still grab the user's attention.

Star ratings

("Add a star rating component to engage readers with content", source) Including ranking or reviewing widgets in the email body is one of the most effective ways to engage the reader with the message. Ask recipients to assess your product or service with stars. Add a link to Google Forms if you want to receive an extended opinion on overall customer satisfaction.

Picture rollovers

("Use animated images to describe goods better", source) This effect is eagerly used by those who promote online stores. The rollover allows you to show goods from different angles or even play with recipients, if relevant. Take into account that this feature only works on desktops; mobile mail users will see only the first picture.
Image carousels

("Add pieces of text directly on images", source) If you want to enhance product cards with descriptive content, say, price and shipping details, use a carousel instead of a rollover. This way, you can add more informative pictures to the email body and, hopefully, convert more recipients into customers.

Countdowns

("Countdowns work well for time-limited offers", source) Again, this type of interactive content fits the online shopping niche. Animated clocks amplify urgency and, in theory, increase conversions. But it's important to be careful and not to sound desperate; otherwise, the newsletter will end up in the recipient's spam folder.

Improve design

The attractiveness of an email is, indeed, a matter of context. Not all emails need to be flashy or include expensive designs. However, there are some prevailing trends in the matter. By following them, you show the recipient that your company is moving in step with the times, not stuck in the 2000s. Here's a shortlist from the top email design trends that 99designs lists for 2021:

Magazine-styled

("Make newsletters look a bit editorial", source) More and more newsletters tend to look like a centerfold from good old print media, strictly following the "less is more" principle: clear fonts, short phrases, HD-quality images with few objects in them, and short CTAs.

Hand-made illustrations

("Unique pictures create a distinct flavour of your brand", source) Tailored icons or sketchy images - whatever fits your mailing purpose; just make sure it's not too bright, too high-contrast, or overloaded with details. Give preference to clean colors.

Skeuomorphic objects

This is when a design resembles a real object. To see an example, just open a reader app on your smartphone.
("A skeuomorphic bookshelf", source)

HD photography

("If you operate in the luxury segment, do not skimp on email visuals", source) This is expensive content, but if you work in fashion or other chic industries, it may be worth the effort.

Animated content

Yep, we've covered this in a previous tip.

Single scroll

("Looks especially good on smartphones", source) Place the entire email content, including buttons, on one endless-looking long frame.

Focus on conversions

Stay focused on your mailing purpose. Don't forget that everybody expects to see a good ROI from email activities at the end of the reporting period. Craft effective CTAs - perceive these not as a lone button with a "Download now" label, but as the entire message that you write. To create captivating CTA copy, follow the advice below:

Include win-win propositions. Even if you're not providing a customer with a discount or cash refund at the moment, your proposition may include a non-monetary incentive. New arrivals, a selection of the latest news, free copies, advice from experts - the only rule here is to offer what the recipient will hold in high esteem.

Appeal to emotions. Don't long-windedly list benefits. Instead, simulate a life situation and show how your product or service can help.

Use several CTAs throughout the email. An email body may take several scrolls to view, especially on small mobile screens. If you add a call to action only at the beginning of the message, few users will get back to it after finishing the content, and you may lose potential conversions. Include several buttons throughout the email body, but don't sound repetitive - vary the calls' form and wording.

Encourage readers to reply

Driving recipients to reply is challenging but doable. First, choose the proper writing tone. According to an extensive study of emails that didn't get a response, the most effective is a 3rd-grade reading level.
("Too elementary or too proficient a tone may scare away readers", source) Of course, you must apply this recommendation with an eye on the recipient. If you mail a professor or a government agency, the "3rd-grade" rule isn't applicable. But all else being equal, simplify the lexicon to a level a schoolchild could understand. Another trick is to sound happy overall. Emails enhanced with positive emotions get 10-15% more replies, on average, than neutral ones. The best approach is a slightly warm tone; exaggerated excitement may look weird and even suspicious, especially when reaching out to business partners. And don't forget about courtesy. A rare person will respond if you address him or her with a hair-raising "To whom it may concern".

Make it personal

Personification shouldn't be confused with personalization. The latter is about mailing fitting content from a commercial perspective, while the former is about addressing the recipient as an individual. Personal emails start with the recipient's name - and no other way. They include references to the user's interests or past actions. For example, if your tourist agency's client is interested in island vacations, you should approach him or her with corresponding offers. They should also contain personalized promotions, if any. The best way to extend this approach to hundreds or thousands of recipients is to launch trigger-based email campaigns. Create delivery scenarios for different segments or stages of a sales pipeline. Then prepare a fitting sequence of relevant content for every single scenario. To give mailing a human face, you can also practice greetings. Birthdays, state holidays, anniversaries, a new status in the loyalty program - there are a lot of occasions on which you can congratulate the customer.

Keep your emails out of spam folders

It is better not to launch a mailing at all than to use an untrustworthy email database.
The risks are much higher than a slew of undelivered messages - from harming the sender's reputation to being banned by mailing systems. So it's better to stay proactive:

- tidy away broken, misspelled, temporary, or otherwise worrisome emails from the database, either manually or with the help of software
- collect only valid email addresses - through email finders
- avoid spam-trigger words
- establish double opt-in validation
- set the correct mailing frequency.

Make sure your emails look clean and crisp

Newsletters should, after all, bring revenue. But in the pursuit of quantity, don't lose overall content integrity and sense:

- the subject line, pre-header, header, email body, and calls should be consistent with one another
- the copy must be of a proper size; although the length depends on many factors, stick to an "ideal" interval of 50 to 125 words
- if you can, don't attach too many files or links to external websites - spam filters are suspicious of these
- adapt the layout to fit smaller screens - nothing looks worse than broken email elements when opened on mobile.

Wrapping up

It doesn't make much difference whether you create mailing content for personal or business purposes - these email marketing tips will serve both. No strain here - the recipient's interest should be at the forefront. If you can hook him or her with content using the tricks we've covered, you'll never fall short on conversions.
Tajamul21 / Counter Top Corner Detection

The robot needed to find the extreme edges of the counter-top and the sink. To detect the corners, we used three methods to obtain the coordinates of 8 corners (4 for the counter-top and 4 for the sink) as shown in the figure. The three methods were:

1. Finding corner points using P4P (the 4-point algorithm)
2. Finding corner points using RANSAC
3. Finding corner points using colour segmentation

They are explained below.

4.1 Method 1: Finding corner points using P4P

The counter-top had round colour labels stuck on it at the four outer and inner corners (see Figure 13 for an example). The outer points A, B, C, D in Figure 13 were segmented based on colour, and the coordinates were saved. Using the 4-point algorithm, we were able to re-project the outer and inner corners of the counter-top with an average re-projection error of 0.489. We can also compute the 3D locations of these points with respect to the camera coordinate frame.

4.2 Method 2: Finding corner points using RANSAC

RANSAC divides data into inliers and outliers and yields an estimate of the counter-top plane, computed from a minimal set of inliers with the greatest support. We improved this initial estimate with a least-squares (L.S.) estimation over all inliers (i.e., standard minimization), then found the inliers with respect to the L.S. fit and re-estimated the plane using L.S. one more time. We used the 3D points (given by the camera) to find the counter plane with this method. We then identified the pixels corresponding to the plane in the corresponding colour image. To find the corners, we used the Harris corner detector. The Harris corner detector takes the differential of the corner score with respect to direction directly, instead of using shifted patches at every 45° angle, and has proved to be more accurate in distinguishing between edges and corners. The implementation was done as follows:

1. Compute image intensity gradients in the x- and y-directions.
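The Harris pipeline that starts with those gradients can be sketched in plain NumPy (this is an illustrative sketch, not the project's implementation; in practice OpenCV's `cv2.cornerHarris` does the same with proper Gaussian windowing):

```python
import numpy as np

def harris_response(img, k=0.04):
    # 1. Image intensity gradients in the x- and y-directions (central differences).
    Iy, Ix = np.gradient(img.astype(float))

    # 2. Structure-tensor entries, smoothed here with a crude 3x3 box window.
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)

    # 3. Corner score R = det(M) - k * trace(M)^2: large and positive at corners,
    #    negative along edges, near zero in flat regions.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on a black background: the strongest responses sit
# near the square's four corners, not along its edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))
```

In the project, this score would be evaluated only on pixels belonging to the RANSAC-fitted counter plane, and local maxima above a threshold kept as corner candidates.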
NOSALRO / Orientation Math: Reference implementation of common orientation math (Lie algebra, axis-angle, quaternions, Quat-Trick, Euler angles).
hassanyousuf / H Yousuf A Chugh AutoApp

AUTO BUILD

Part 1: Project Description

The basis of this assignment is to create an interactive application using 3D, video, audio, and still images. Design and develop an engaging experience for the end user with which they can view video, explore specs, etc., for a car of your choice. This project will be a collaborative effort between the MMED-3001 class and the MMED-1012 class. The assets you develop in MMED-3001 will be implemented, manipulated, and controlled dynamically in the MMED-1012 class.

Project Requirements:
• All 3D elements must be created in Cinema 4D - no exceptions.
• Animation and timing are to be created by you.
• Brand-consistent animation of your chosen car.
• A final video rendered out in web-ready .mp4 format.
• Video size for the final demo reel render will be 1280 pixels wide by 720 pixels high, with 100% H.264 codec compression.

Pass/Fail Grading Scheme:
Overall Accuracy, Design, Editing, Following Directions: 25 marks
Split-Screen Animation/Video Look: 20 marks
Use of Object Buffers: 10 marks
Texturing: 10 marks
Camera Angles: 10 marks
Lighting Setup: 10 marks
Timing to Music: 10 marks
Batch Render Settings Set-up: 5 marks

Part 2

Each team will develop a responsive, mobile-first application to showcase an interactive implementation of your design. Please note that this means you should consider how the graphics and interactivity evolve as a web app. Start with a splash page with a "hero shot" - but use a video produced in 3001 instead. See the provided example as a reference. Include some UI elements for navigation - you'll need to move from this page to an interactive "details" page that showcases selected features. The "details" page should also have controls to navigate and view product details.

DESIGNER (Apurav): Create the look and feel of the application. Start at mobile, and evolve your design for tablet and desktop.
You are responsible for prototyping the application with a tool of your choice (Adobe XD, InVision, etc.). Your prototype will assist the developer in creating interactive elements and in accessing remote data (text, video clips, images, etc.). Remember that you'll need a few different videos/sizes depending on the device.

DEVELOPER (Hassan): Create a repository on GitHub including a detailed Readme file. Put the appropriate information in the Readme file - you will likely have to update it throughout the lifecycle of the project.

Features
• Interactive specs info
• 360° view of car

Built With
• CSS Grid
• CSS Flexbox
• Javascript - used for the custom video player

Progress
• Create WireFrames
• Create GUI for Thermo
• Create Thermo Model in C4D
• Code HTML
• Code CSS
• Code JS
• Implement back-end elements
• Clean Up Folders
• Submit

Authors
Hassan - Developer
Apurav - 3D Designer

References