5 skills found
Botspot / Bvm: User-friendly, high-performance Windows 11 Virtual Machine on ARM Linux
rramatchandran / Big O Performance Java: # big-o-performance A simple HTML app to demonstrate performance costs of data structures. - Clone the project - Navigate to the root of the project in a terminal or command prompt - Run 'npm install' - Run 'npm start' - Go to the URL specified in the terminal or command prompt to try out the app. # This app was created with the Create React App NPM package. Below are instructions from that project on how to perform common tasks. You can find the most recent version of this guide [here](https://github.com/facebookincubator/create-react-app/blob/master/template/README.md). ## Table of Contents - [Updating to New Releases](#updating-to-new-releases) - [Sending Feedback](#sending-feedback) - [Folder Structure](#folder-structure) - [Available Scripts](#available-scripts) - [npm start](#npm-start) - [npm run build](#npm-run-build) - [npm run eject](#npm-run-eject) - [Displaying Lint Output in the Editor](#displaying-lint-output-in-the-editor) - [Installing a Dependency](#installing-a-dependency) - [Importing a Component](#importing-a-component) - [Adding a Stylesheet](#adding-a-stylesheet) - [Post-Processing CSS](#post-processing-css) - [Adding Images and Fonts](#adding-images-and-fonts) - [Adding Bootstrap](#adding-bootstrap) - [Adding Flow](#adding-flow) - [Adding Custom Environment Variables](#adding-custom-environment-variables) - [Integrating with a Node Backend](#integrating-with-a-node-backend) - [Proxying API Requests in Development](#proxying-api-requests-in-development) - [Deployment](#deployment) - [Now](#now) - [Heroku](#heroku) - [Surge](#surge) - [GitHub Pages](#github-pages) - [Something Missing?](#something-missing) ## Updating to New Releases Create React App is divided into two packages: * `create-react-app` is a global command-line utility that you use to create new projects. * `react-scripts` is a development dependency in the generated projects (including this one). You almost never need to update `create-react-app` itself: it delegates all the setup to `react-scripts`. When you run `create-react-app`, it always creates the project with the latest version of `react-scripts` so you’ll get all the new features and improvements in newly created apps automatically. To update an existing project to a new version of `react-scripts`, [open the changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md), find the version you’re currently on (check `package.json` in this folder if you’re not sure), and apply the migration instructions for the newer versions. In most cases bumping the `react-scripts` version in `package.json` and running `npm install` in this folder should be enough, but it’s good to consult the [changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md) for potential breaking changes. We commit to keeping the breaking changes minimal so you can upgrade `react-scripts` painlessly. ## Sending Feedback We are always open to [your feedback](https://github.com/facebookincubator/create-react-app/issues). ## Folder Structure After creation, your project should look like this: ``` my-app/ README.md index.html favicon.ico node_modules/ package.json src/ App.css App.js index.css index.js logo.svg ``` For the project to build, **these files must exist with exact filenames**: * `index.html` is the page template; * `favicon.ico` is the icon you see in the browser tab; * `src/index.js` is the JavaScript entry point.
You can delete or rename the other files. You may create subdirectories inside `src`. For faster rebuilds, only files inside `src` are processed by Webpack. You need to **put any JS and CSS files inside `src`**, or Webpack won’t see them. You can, however, create more top-level directories. They will not be included in the production build so you can use them for things like documentation. ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.<br> Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.<br> You will also see any lint errors in the console. ### `npm run build` Builds the app for production to the `build` folder.<br> It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.<br> Your app is ready to be deployed! ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Displaying Lint Output in the Editor >Note: this feature is available with `react-scripts@0.2.0` and higher. Some editors, including Sublime Text, Atom, and Visual Studio Code, provide plugins for ESLint. They are not required for linting. You should see the linter output right in your terminal as well as the browser console. However, if you prefer the lint results to appear right in your editor, there are some extra steps you can do. You would need to install an ESLint plugin for your editor first. >**A note for Atom `linter-eslint` users** >If you are using the Atom `linter-eslint` plugin, make sure that **Use global ESLint installation** option is checked: ><img src="http://i.imgur.com/yVNNHJM.png" width="300"> Then make sure `package.json` of your project ends with this block: ```js { // ... "eslintConfig": { "extends": "./node_modules/react-scripts/config/eslint.js" } } ``` Projects generated with `react-scripts@0.2.0` and higher should already have it. If you don’t need ESLint integration with your editor, you can safely delete those three lines from your `package.json`. Finally, you will need to install some packages *globally*: ```sh npm install -g eslint babel-eslint eslint-plugin-react eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-flowtype ``` We recognize that this is suboptimal, but it is currently required due to the way we hide the ESLint dependency. The ESLint team is already [working on a solution to this](https://github.com/eslint/eslint/issues/3458) so this may become unnecessary in a couple of months. ## Installing a Dependency The generated project includes React and ReactDOM as dependencies. 
It also includes a set of scripts used by Create React App as a development dependency. You may install other dependencies (for example, React Router) with `npm`: ``` npm install --save <library-name> ``` ## Importing a Component This project setup supports ES6 modules thanks to Babel. While you can still use `require()` and `module.exports`, we encourage you to use [`import` and `export`](http://exploringjs.com/es6/ch_modules.html) instead. For example: ### `Button.js` ```js import React, { Component } from 'react'; class Button extends Component { render() { // ... } } export default Button; // Don’t forget to use export default! ``` ### `DangerButton.js` ```js import React, { Component } from 'react'; import Button from './Button'; // Import a component from another file class DangerButton extends Component { render() { return <Button color="red" />; } } export default DangerButton; ``` Be aware of the [difference between default and named exports](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281). It is a common source of mistakes. We suggest that you stick to using default imports and exports when a module only exports a single thing (for example, a component). That’s what you get when you use `export default Button` and `import Button from './Button'`. Named exports are useful for utility modules that export several functions. A module may have at most one default export and as many named exports as you like. Learn more about ES6 modules: * [When to use the curly braces?](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281) * [Exploring ES6: Modules](http://exploringjs.com/es6/ch_modules.html) * [Understanding ES6: Modules](https://leanpub.com/understandinges6/read#leanpub-auto-encapsulating-code-with-modules) ## Adding a Stylesheet This project setup uses [Webpack](https://webpack.github.io/) for handling all assets. Webpack offers a custom way of “extending” the concept of `import` beyond JavaScript. To express that a JavaScript file depends on a CSS file, you need to **import the CSS from the JavaScript file**: ### `Button.css` ```css .Button { padding: 20px; } ``` ### `Button.js` ```js import React, { Component } from 'react'; import './Button.css'; // Tell Webpack that Button.js uses these styles class Button extends Component { render() { // You can use them as regular CSS styles return <div className="Button" />; } } ``` **This is not required for React** but many people find this feature convenient. You can read about the benefits of this approach [here](https://medium.com/seek-ui-engineering/block-element-modifying-your-javascript-components-d7f99fcab52b). However you should be aware that this makes your code less portable to other build tools and environments than Webpack. In development, expressing dependencies this way allows your styles to be reloaded on the fly as you edit them. In production, all CSS files will be concatenated into a single minified `.css` file in the build output. If you are concerned about using Webpack-specific semantics, you can put all your CSS right into `src/index.css`. It would still be imported from `src/index.js`, but you could always remove that import if you later migrate to a different build tool. ## Post-Processing CSS This project setup minifies your CSS and adds vendor prefixes to it automatically through [Autoprefixer](https://github.com/postcss/autoprefixer) so you don’t need to worry about it. 
For example, this: ```css .App { display: flex; flex-direction: row; align-items: center; } ``` becomes this: ```css .App { display: -webkit-box; display: -ms-flexbox; display: flex; -webkit-box-orient: horizontal; -webkit-box-direction: normal; -ms-flex-direction: row; flex-direction: row; -webkit-box-align: center; -ms-flex-align: center; align-items: center; } ``` There is currently no support for preprocessors such as Less, or for sharing variables across CSS files. ## Adding Images and Fonts With Webpack, using static assets like images and fonts works similarly to CSS. You can **`import` an image right in a JavaScript module**. This tells Webpack to include that image in the bundle. Unlike CSS imports, importing an image or a font gives you a string value. This value is the final image path you can reference in your code. Here is an example: ```js import React from 'react'; import logo from './logo.png'; // Tell Webpack this JS file uses this image console.log(logo); // /logo.84287d09.png function Header() { // Import result is the URL of your image return <img src={logo} alt="Logo" />; } export default Header; ``` This works in CSS too: ```css .Logo { background-image: url(./logo.png); } ``` Webpack finds all relative module references in CSS (they start with `./`) and replaces them with the final paths from the compiled bundle. If you make a typo or accidentally delete an important file, you will see a compilation error, just like when you import a non-existent JavaScript module. The final filenames in the compiled bundle are generated by Webpack from content hashes. If the file content changes in the future, Webpack will give it a different name in production so you don’t need to worry about long-term caching of assets. Please be advised that this is also a custom feature of Webpack. **It is not required for React** but many people enjoy it (and React Native uses a similar mechanism for images). However it may not be portable to some other environments, such as Node.js and Browserify. If you prefer to reference static assets in a more traditional way outside the module system, please let us know [in this issue](https://github.com/facebookincubator/create-react-app/issues/28), and we will consider support for this. ## Adding Bootstrap You don’t have to use [React Bootstrap](https://react-bootstrap.github.io) together with React but it is a popular library for integrating Bootstrap with React apps. If you need it, you can integrate it with Create React App by following these steps: Install React Bootstrap and Bootstrap from NPM. React Bootstrap does not include Bootstrap CSS so this needs to be installed as well: ``` npm install react-bootstrap --save npm install bootstrap@3 --save ``` Import Bootstrap CSS and optionally Bootstrap theme CSS in the ```src/index.js``` file: ```js import 'bootstrap/dist/css/bootstrap.css'; import 'bootstrap/dist/css/bootstrap-theme.css'; ``` Import required React Bootstrap components within the ```src/App.js``` file or your custom component files: ```js import { Navbar, Jumbotron, Button } from 'react-bootstrap'; ``` Now you are ready to use the imported React Bootstrap components within your component hierarchy defined in the render method. Here is an example [`App.js`](https://gist.githubusercontent.com/gaearon/85d8c067f6af1e56277c82d19fd4da7b/raw/6158dd991b67284e9fc8d70b9d973efe87659d72/App.js) redone using React Bootstrap.
## Adding Flow Flow typing is currently [not supported out of the box](https://github.com/facebookincubator/create-react-app/issues/72) with the default `.flowconfig` generated by Flow. If you run it, you might get errors like this: ```js node_modules/fbjs/lib/Deferred.js.flow:60 60: Promise.prototype.done.apply(this._promise, arguments); ^^^^ property `done`. Property not found in 495: declare class Promise<+R> { ^ Promise. See lib: /private/tmp/flow/flowlib_34952d31/core.js:495 node_modules/fbjs/lib/shallowEqual.js.flow:29 29: return x !== 0 || 1 / (x: $FlowIssue) === 1 / (y: $FlowIssue); ^^^^^^^^^^ identifier `$FlowIssue`. Could not resolve name src/App.js:3 3: import logo from './logo.svg'; ^^^^^^^^^^^^ ./logo.svg. Required module not found src/App.js:4 4: import './App.css'; ^^^^^^^^^^^ ./App.css. Required module not found src/index.js:5 5: import './index.css'; ^^^^^^^^^^^^^ ./index.css. Required module not found ``` To fix this, change your `.flowconfig` to look like this: ```ini [libs] ./node_modules/fbjs/flow/lib [options] esproposal.class_static_fields=enable esproposal.class_instance_fields=enable module.name_mapper='^\(.*\)\.css$' -> 'react-scripts/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> 'react-scripts/config/flow/file' suppress_type=$FlowIssue suppress_type=$FlowFixMe ``` Re-run flow, and you shouldn’t get any extra issues. If you later `eject`, you’ll need to replace `react-scripts` references with the `<PROJECT_ROOT>` placeholder, for example: ```ini module.name_mapper='^\(.*\)\.css$' -> '<PROJECT_ROOT>/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> '<PROJECT_ROOT>/config/flow/file' ``` We will consider integrating more tightly with Flow in the future so that you don’t have to do this. ## Adding Custom Environment Variables >Note: this feature is available with `react-scripts@0.2.3` and higher. Your project can consume variables declared in your environment as if they were declared locally in your JS files. By default you will have `NODE_ENV` defined for you, and any other environment variables starting with `REACT_APP_`. These environment variables will be defined for you on `process.env`. For example, having an environment variable named `REACT_APP_SECRET_CODE` will be exposed in your JS as `process.env.REACT_APP_SECRET_CODE`, in addition to `process.env.NODE_ENV`. These environment variables can be useful for displaying information conditionally based on where the project is deployed or consuming sensitive data that lives outside of version control. First, you need to have environment variables defined, which can vary between OSes. For example, let's say you wanted to consume a secret defined in the environment inside a `<form>`: ```jsx render() { return ( <div> <small>You are running this application in <b>{process.env.NODE_ENV}</b> mode.</small> <form> <input type="hidden" defaultValue={process.env.REACT_APP_SECRET_CODE} /> </form> </div> ); } ``` The above form is looking for a variable called `REACT_APP_SECRET_CODE` from the environment. In order to consume this value, we need to have it defined in the environment: ### Windows (cmd.exe) ```cmd set REACT_APP_SECRET_CODE=abcdef&&npm start ``` (Note: the lack of whitespace is intentional.) ### Linux, OS X (Bash) ```bash REACT_APP_SECRET_CODE=abcdef npm start ``` > Note: Defining environment variables in this manner is temporary for the life of the shell session. 
Setting permanent environment variables is outside the scope of these docs. With our environment variable defined, we start the app and consume the values. Remember that the `NODE_ENV` variable will be set for you automatically. When you load the app in the browser and inspect the `<input>`, you will see its value set to `abcdef`, and the bold text will show the environment provided when using `npm start`: ```html <div> <small>You are running this application in <b>development</b> mode.</small> <form> <input type="hidden" value="abcdef" /> </form> </div> ``` Having access to the `NODE_ENV` is also useful for performing actions conditionally: ```js if (process.env.NODE_ENV !== 'production') { analytics.disable(); } ``` ## Integrating with a Node Backend Check out [this tutorial](https://www.fullstackreact.com/articles/using-create-react-app-with-a-server/) for instructions on integrating an app with a Node backend running on another port, and using `fetch()` to access it. You can find the companion GitHub repository [here](https://github.com/fullstackreact/food-lookup-demo). ## Proxying API Requests in Development >Note: this feature is available with `react-scripts@0.2.3` and higher. People often serve the front-end React app from the same host and port as their backend implementation. For example, a production setup might look like this after the app is deployed: ``` / - static server returns index.html with React app /todos - static server returns index.html with React app /api/todos - server handles any /api/* requests using the backend implementation ``` Such setup is **not** required. However, if you **do** have a setup like this, it is convenient to write requests like `fetch('/api/todos')` without worrying about redirecting them to another host or port during development. To tell the development server to proxy any unknown requests to your API server in development, add a `proxy` field to your `package.json`, for example: ```js "proxy": "http://localhost:4000", ``` This way, when you `fetch('/api/todos')` in development, the development server will recognize that it’s not a static asset, and will proxy your request to `http://localhost:4000/api/todos` as a fallback. Conveniently, this avoids [CORS issues](http://stackoverflow.com/questions/21854516/understanding-ajax-cors-and-security-considerations) and error messages like this in development: ``` Fetch API cannot load http://localhost:4000/api/todos. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. ``` Keep in mind that `proxy` only has effect in development (with `npm start`), and it is up to you to ensure that URLs like `/api/todos` point to the right thing in production. You don’t have to use the `/api` prefix. Any unrecognized request will be redirected to the specified `proxy`. Currently the `proxy` option only handles HTTP requests, and it won’t proxy WebSocket connections. If the `proxy` option is **not** flexible enough for you, alternatively you can: * Enable CORS on your server ([here’s how to do it for Express](http://enable-cors.org/server_expressjs.html)). * Use [environment variables](#adding-custom-environment-variables) to inject the right server host and port into your app. ## Deployment By default, Create React App produces a build assuming your app is hosted at the server root. 
To override this, specify the `homepage` in your `package.json`, for example: ```js "homepage": "http://mywebsite.com/relativepath", ``` This will let Create React App correctly infer the root path to use in the generated HTML file. ### Now See [this example](https://github.com/xkawi/create-react-app-now) for a zero-configuration single-command deployment with [now](https://zeit.co/now). ### Heroku Use the [Heroku Buildpack for Create React App](https://github.com/mars/create-react-app-buildpack). You can find instructions in [Deploying React with Zero Configuration](https://blog.heroku.com/deploying-react-with-zero-configuration). ### Surge Install the Surge CLI if you haven't already by running `npm install -g surge`. Run the `surge` command and log in you or create a new account. You just need to specify the *build* folder and your custom domain, and you are done. ```sh email: email@domain.com password: ******** project path: /path/to/project/build size: 7 files, 1.8 MB domain: create-react-app.surge.sh upload: [====================] 100%, eta: 0.0s propagate on CDN: [====================] 100% plan: Free users: email@domain.com IP Address: X.X.X.X Success! Project is published and running at create-react-app.surge.sh ``` Note that in order to support routers that use html5 `pushState` API, you may want to rename the `index.html` in your build folder to `200.html` before deploying to Surge. This [ensures that every URL falls back to that file](https://surge.sh/help/adding-a-200-page-for-client-side-routing). ### GitHub Pages >Note: this feature is available with `react-scripts@0.2.0` and higher. Open your `package.json` and add a `homepage` field: ```js "homepage": "http://myusername.github.io/my-app", ``` **The above step is important!** Create React App uses the `homepage` field to determine the root URL in the built HTML file. Now, whenever you run `npm run build`, you will see a cheat sheet with a sequence of commands to deploy to GitHub pages: ```sh git commit -am "Save local changes" git checkout -B gh-pages git add -f build git commit -am "Rebuild website" git filter-branch -f --prune-empty --subdirectory-filter build git push -f origin gh-pages git checkout - ``` You may copy and paste them, or put them into a custom shell script. You may also customize them for another hosting provider. Note that GitHub Pages doesn't support routers that use the HTML5 `pushState` history API under the hood (for example, React Router using `browserHistory`). This is because when there is a fresh page load for a url like `http://user.github.io/todomvc/todos/42`, where `/todos/42` is a frontend route, the GitHub Pages server returns 404 because it knows nothing of `/todos/42`. If you want to add a router to a project hosted on GitHub Pages, here are a couple of solutions: * You could switch from using HTML5 history API to routing with hashes. If you use React Router, you can switch to `hashHistory` for this effect, but the URL will be longer and more verbose (for example, `http://user.github.io/todomvc/#/todos/42?_k=yknaj`). [Read more](https://github.com/reactjs/react-router/blob/master/docs/guides/Histories.md#histories) about different history implementations in React Router. * Alternatively, you can use a trick to teach GitHub Pages to handle 404 by redirecting to your `index.html` page with a special redirect parameter. 
You would need to add a `404.html` file with the redirection code to the `build` folder before deploying your project, and you’ll need to add code handling the redirect parameter to `index.html`. You can find a detailed explanation of this technique [in this guide](https://github.com/rafrex/spa-github-pages). ## Something Missing? If you have ideas for more “How To” recipes that should be on this page, [let us know](https://github.com/facebookincubator/create-react-app/issues) or [contribute some!](https://github.com/facebookincubator/create-react-app/edit/master/template/README.md)
nyaundid / EC2 AWS AND SHELL: SEIS 665 Assignment 2: Linux & Git Overview This week we will focus on becoming familiar with launching a Linux server and working with some basic Linux and Git commands. We will use AWS to launch and host the Linux server. AWS might seem a little confusing at this point. Don’t worry, we will gain much more hands-on experience with AWS throughout the course. The goal is to get you comfortable working with the technology and not overwhelm you with all the details. Requirements You need to have a personal AWS account and GitHub account for this assignment. You should also read the Git Hands-on Guide and Linux Hands-on Guide before beginning this exercise. A word about grading One of the key DevOps practices we learn about in this class is the use of automation to increase the speed and repeatability of processes. Automation is utilized during the assignment grading process to review and assess your work. It’s important that you follow the instructions in each assignment and type in required files and resources with the proper names. All names are case sensitive, so a name like "Web1" is not the same as "web1". If you misspell a name, use the wrong case, or put a file in the wrong directory location you will lose points on your assignment. This is the easiest way to lose points, and also the most preventable. You should always double-check your work to make sure it accurately reflects the requirements specified in the assignment. You should always carefully review the content of your files before submitting your assignment. The assignment Let’s get started! Create GitHub repository The first step in the assignment is to set up a Git repository on GitHub. We will use a special solution called GitHub Classroom for this course which automates the process of setting up student assignment repositories. Here are the basic steps: Click on the following link to open Assignment 2 on the GitHub Classroom site: https://classroom.github.com/a/K4zcVmX- Click on the Accept this assignment button. GitHub Classroom will provide you with a URL (https) to access the assignment repository. Either copy this address to your clipboard or write it down somewhere. You will need to use this address to set up the repository on a Linux server. Example: https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git At this point your new repository is ready to use. The repository is currently empty. We will put some content in there soon! Launch Linux server The second step in the assignment is to launch a Linux server using AWS EC2. The server should have the following characteristics: Amazon Linux 2 AMI 64-bit (usually the first option listed) Located in a U.S. region (us-east-1) t2.micro instance type All default instance settings (storage, VPC, security group, etc.) I’ve shown you how to launch EC2 instances in class. You can review it on Canvas. Once you launch the new server, it may take a few minutes to provision. Log into server The next step is to log into the Linux server using a terminal program with secure shell (SSH) support. You can use iTerm2 on a Mac and GitBash/PuTTY on a PC. You will need to have the private server key and the public IP address before attempting to log into the server. The server key is basically your password.
If you lose it, you will need to terminate the existing instance and launch a new server. I recommend reusing the same key when launching new servers throughout the class. Note, I make this recommendation to make the learning process easier and not because it is a common security practice. I’ve shown you how to use a terminal application to log into the instance using a Windows desktop. Your personal computer or lab computer may be running a different OS version, but the process is still very similar. You can review the videos on the Canvas. Working with Linux If you’ve made it this far, congratulations! You’ve made it over the toughest hurdle. By the end of this course, I promise you will be able to launch and log into servers in your sleep. You should be looking at a login screen that looks something like this: Last login: Mon Mar 21 21:17:54 2016 from 174-20-199-194.mpls.qwest.net __| __|_ ) _| ( / Amazon Linux AMI ___|\___|___| https://aws.amazon.com/amazon-linux-ami/2015.09-release-notes/ 8 package(s) needed for security, out of 17 available Run "sudo yum update" to apply all updates. ec2-user@ip-172-31-15-26 ~]$ Your terminal cursor is sitting at the shell prompt, waiting for you to type in your first command. Remember the shell? It is a really cool program that lets you start other programs and manage services on the Linux system. The rest of this assignment will be spent working with the shell. Note, when you are asked to type in a command in the steps below, don’t type in the dollar-sign ($) character. This is just meant to represent the command prompt. The actual commands are represented by the characters to the right of the command prompt. Let’s start by asking the shell for some help. Type in: $ help The shell provides you with a list of commands you can run along with possible command options. Next, check out one of the pages in the built-in manual: $ man ls A man page will appear with information on how to use the ls command. This command is used to list the contents of file directories. Either space through the contents of the man page or hit q to exit. Most of the core Linux commands have man pages available. But honestly, some of these man pages are a bit hard to understand. Sometimes your best bet is to search on Google if you are trying to figure out how to use a specific command. When you initially log into Linux, the system places you in your home directory. Each user on the system has a separate home directory. Let’s see where your home directory is located: $ pwd The response should be /home/ec2-user. The pwd command is handy to remember if you ever forget what file directory you are currently located in. If you recall from the Linux Hands-on Guide, this directory is also your current working directory. Type in: $ cd / The cd command let’s you change to a new working directory on the server. In this case, we changed to the root (/) directory. This is the parent of all the other directories on the file system. Type in: $ ls The ls command lists the contents of the current directory. As you can see, root directory contains many other directories. You will become familiar with these directories over time. The ls command provides a very basic directory listing. You need to supply the command with some options if you want to see more detailed information. Type in: $ ls -la See how this command provides you with much more detailed information about the files and directories? 
You can use this detailed listing to see the owner, group, and access control list settings for each file or directory. Do you see any files listed? Remember, the first character in the access control list column denotes whether a listed item is a file or a directory. You probably see a couple files with names like .autofsck. How come you didn’t see this file when you typed in the lscommand without any options? (Try to run this command again to convince yourself.) Files names that start with a period are called hidden files. These files won’t appear on normal directory listings. Type in: $ cd /var Then, type in: $ ls You will see a directory listing for the /var directory. Next, type in: $ ls .. Huh. This directory listing looks the same as the earlier root directory listing. When you use two periods (..) in a directory path that means you are referring to the parent directory of the current directory. Just think of the two dots as meaning the directory above the current directory. Now, type in: $ cd ~ $ pwd Whoa. We’re back at our home directory again. The tilde character (~) is another one of those handy little directory path shortcuts. It always refers to our personal home directory. Keep in mind that since every user has their own home directory, the tilde shortcut will refer to a unique directory for each logged-in user. Most students are used to navigating a file system by clicking a mouse in nested graphical folders. When they start using a command-line to navigate a file system, they sometimes get confused and lose track of their current position in the file system. Remember, you can always use the pwd command to quickly figure out what directory you are currently working in. Let’s make some changes to the file system. We can easily make our own directories on the file system. Type: mkdir test Now type: ls Cool, there’s our new test directory. Let’s pretend we don’t like that directory name and delete it. Type: rmdir test Now it’s gone. How can you be sure? You should know how to check to see if the directory still exists at this point. Go ahead and check. Let’s create another directory. Type in: $ mkdir documents Next, change to the new directory: $ cd documents Did you notice that your command prompt displays the name of the current directory? Something like: [ec2-user@ip-172-31-15-26 documents]$. Pretty handy, huh? Okay, let’s create our first file in the documents directory. This is just an empty file for training purposes. Type in: $ touch paper.txt Check to see that the new file is in the directory. Now, go back to the previous directory. Remember the double dot shortcut? $ cd .. Okay, we don’t like our documents directory any more. Let’s blow it away. Type in: $ rmdir documents Uh oh. The shell didn’t like that command because the directory isn’t empty. Let’s change back into the documents directory. But this time don’t type in the full name of the directory. You can let shell auto-completion do the typing for you. Type in the first couple characters of the directory name and then hit the tab key: $ cd doc<tab> You should use the tab auto-completion feature often. It saves typing and makes working with the Linux file system much much easier. Tab is your friend. Now, remove the file by typing: $ rm paper.txt Did you try to use the tab key instead of typing in the whole file name? Check to make sure the file was deleted from the directory. Next, create a new file: $ touch file1 We like file1 so much that we want to make a backup copy. 
Type: $ cp file1 file1-backup Check to make sure the new backup copy was created. We don’t really like the name of that new file, so let’s rename it. Type: $ mv file1-backup backup Moving a file to the same directory and giving it a new name is basically the same thing as renaming it. We could have moved it to a different directory if we wanted. Let’s list all of the files in the current directory that start with the letter f: $ ls f* Using wildcard pattern matching in file commands is really useful if you want the command to impact or filter a group of files. Now, go up one directory to the parent directory (remember the double dot shortcut?) We tried to remove the documents directory earlier when it had files in it. Obviously that won’t work again. However, we can use a more powerful command to destroy the directory and vanquish its contents. Behold, the all powerful remove command: $ rm -fr documents Did you remember to use auto-completion when typing in documents? This command and set of options forcibly removes the directory and its contents. It’s a dangerous command wielded by the mightiest Linux wizards. Okay, maybe that’s a bit of an exaggeration. Just be careful with it. Check to make sure the documents directory is gone before proceeding. Let’s continue. Change to the directory /var and make a directory called test. Ugh. Permission denied. We created this darn Linux server and we paid for it. Shouldn’t we be able to do anything we want on it? You logged into the system as a user called ec2-user. While this user can create and manage files in its home directory, it cannot change files all across the system. At least it can’t as a normal user. The ec2-user is a member of the root group, so it can escalate its privileges to super-user status when necessary. Let’s try it: $ sudo mkdir test Check to make sure the directory exists now. Using sudo we can execute commands as a super-user. We can do anything we want now that we know this powerful new command. Go ahead and delete the test directory. Did you remember to use sudo before the rmdir command? Check to make sure the directory is gone. You might be asking yourself the question: why can we list the contents of the /var directory but not make changes? That’s because all users have read access to the /var directory and the ls command is a read function. Only the root users or those acting as a super-user can write changes to the directory. Let’s go back to our home directory: $ cd ~ Editing text files is a really common task on Linux systems because many of the application configuration files are text files. We can create a text file by using a text editor. Type in: $ nano myfile.conf The shell starts up the nano text editor and places your terminal cursor in the editing screen. Nano is a simple text-based word processor. Type in a few lines of text. When you’re done writing your novel, hit ctrl-x and answer y to the prompt to save your work. Finally, hit enter to save the text to the filename you specified. Check to see that your file was saved in the directory. You can take a look at the contents of your file by typing: $ cat myfile.conf The cat command displays your text file content on the terminal screen. This command works fine for displaying small text files. But if your file is hundreds of lines long, the content will scroll down your terminal screen so fast that you won’t be able to easily read it. There’s a better way to view larger text files. 
Type in: $ less myfile.conf The less command will page the display of a text file, allowing you to page through the contents of the file using the space bar. Your text file is probably too short to see the paging in action though. Hit q to quit out of the less text viewer. Hit the up-arrow key on your keyboard a few times until the command nano myfile.conf appears next to your command prompt. Cool, huh? The up-arrow key allows you to replay a previously run command. Linux maintains a list of all the commands you have run since you logged into the server. This is called the command history. It’s a really useful feature if you have to re-run a complex command again. Now, hit ctrl-c. This cancels whatever command is displayed on the command line. Type in the following command to create a couple empty files in the directory: $ touch file1 file2 file3 Confirm that the files were created. Some commands, like touch, allow you to specify multiple files as arguments. You will find that Linux commands have all kinds of ways to make tasks more efficient like this. Throughout this assignment, we have been running commands and viewing results on the terminal screen. The screen is the standard place for commands to output results. It’s known as the standard out (stdout). However, it’s really useful to output results to the file system sometimes. Type in: $ ls > listing.txt Take a look at the directory listing now. You just created a new file. View the contents of the listing.txt file. What do you see? Instead of sending the output from the ls command to the screen we sent it to a text file. Let’s try another one. Type: $ cat myfile.conf > listing.txt Take a look at the contents of the listing.txt file again. It looks like your myfile.conf file now. It’s like you made a copy of it. But what happened to the previous content in the listing.txt file? When you redirect the output of a command using the right angle-bracket character (>), the output overwrites the existing file. Type this command in: $ cat myfile.conf >> listing.txt Now look at the contents of the listing.txt file. You should see your original content displayed twice. When you use two angle-bracket characters in the command, the output appends (or adds to) the file instead of overwriting it. We redirected the output from a command to a text file. It’s also possible to redirect the input to a command. Typically we use a keyboard to provide input, but sometimes it makes more sense to input a file to a command. For example, how many words are in your new listing.txt file? Let’s find out. Type in: $ wc -w < listing.txt Did you get a number? This command inputs the listing.txt file into a word count program called wc. Type in the command: $ ls /usr/bin The terminal screen probably scrolled quickly as filenames flashed by. The /usr/bin directory holds quite a few files. It would be nice if we could page through the contents of this directory. Well, we can. We can use a special shell feature called pipes. In previous steps, we redirected I/O using the file system. Pipes allow us to redirect I/O between programs. We can redirect the output from one program into another. Type in: $ ls /usr/bin | less Now the directory listing is paged. Hit the spacebar to page through the listing. The pipe, represented by a vertical bar character (|), takes the output from the ls command and redirects it to the less command where the resulting output is paged. Pipes are super powerful and used all the time by savvy Linux operators.
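For quick reference, here is a condensed recap of the redirection and pipe operators covered above. This is a sketch only; it reuses the `myfile.conf` and `listing.txt` files created earlier in this exercise:

```sh
ls > listing.txt               # redirect stdout to a file, overwriting whatever was there
cat myfile.conf > listing.txt  # overwrite listing.txt with the contents of myfile.conf
cat myfile.conf >> listing.txt # append to listing.txt instead of overwriting it
wc -w < listing.txt            # redirect a file into a command's stdin (word count)
ls /usr/bin | less             # pipe one command's output into another for paging
```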
Hit the q key to quit the paginated directory listing command. Working with shell scripts Now things are going to get interesting. We’ve been manually typing in commands throughout this exercise. If we were running a set of repetitive tasks, we would want to automate the process as much as possible. The shell makes it really easy to automate tasks using shell scripts. The shell provides many of the same features as a basic procedural programming language. Let’s write some code. Type in this command: $ j=123 $ echo $j We just created a variable named j referencing the string 123. The echo command printed out the value of the variable. We had to use a dollar sign ($) when referencing the variable in another command. Next, type in: $ j=1+1 $ echo $j Is that what you expected? The shell just interprets the variable value as a string. It’s not going to do any sort of computation. Typing in shell script commands on the command line is sort of pointless. We want to be able to create scripts that we can run over-and-over. Let’s create our first shell script. Use the nano editor to create a file named myscript. When the file is open in the editor, type in the following lines of code: #!/bin/bash echo Hello $1 Now quit the editor and save your file. We can run our script by typing: $ ./myscript World Er, what happened? Permission denied. Didn’t we create this file? Why can’t we run it? We can’t run the script file because we haven’t set the execute permission on the file. Type in: $ chmod u+x myscript This modifies the file access control list to allow the owner of the file to execute it. Let’s try to run the command again. Hit the up-arrow key a couple times until the ./myscript World command is displayed and hit enter. Hooray! Our first shell script. It’s probably a bit underwhelming. No problem, we’ll make it a little more complex. The script took a single argument called World. Any arguments provided to a shell script are represented as consecutively numbered variables inside the script ($1, $2, etc). Pretty simple. You might be wondering why we had to type the ./ characters before the name of our script file. Try to type in the command without them: $ myscript World Command not found. That seems a little weird. Aren’t we currently in the directory where the shell script is located? Well, that’s just not how the shell works. When you enter a command into the shell, it looks for the command in a predefined set of directories on the server called your PATH. Since your script file isn’t in your special path, the shell reports it as not found. By typing in the ./ characters before the command name you are basically forcing the shell to look for your script in the current directory instead of the default path. Create another file called cleanup using nano. In the file editor window type: #!/bin/bash # My cleanup script mkdir archive mv file* archive Exit the editor window and save the file. Change the permissions on the script file so that you can execute it. Now run the command: $ ./cleanup Take a look at the file directory listing. Notice the archive directory? List the contents of that directory. The script automatically created a new directory and moved three files into it. Anything you can do manually at a command prompt can be automated using a shell script. Let’s create one more shell script. Use nano to create a script called namelist. 
Here is the content of the script: #!/bin/bash # for-loop test script names='Jason John Jane' for i in $names do echo Hello $i done Change the permissions on the script file so that you can execute it. Run the command: $ ./namelist The script will loop through a set of names stored in a variable, displaying each one. Scripts support several programming constructs like for-loops, do-while loops, and if-then-else. These building blocks allow you to create fairly complex scripts for automating tasks. Installing packages and services We’re nearing the end of this assignment. But before we finish, let’s install some new software packages on our server. The first thing we should do is make sure all the current packages installed on our Linux server are up-to-date. Type in: $ sudo yum update -y This is one of those really powerful commands that requires sudo access. The system will review the currently installed packages and go out to the Internet and download appropriate updates. Next, let’s install an Apache web server on our system. Type in: $ sudo yum install httpd -y Bam! You probably never knew that installing a web server was so easy. We’re not going to actually use the web server in this exercise, but we will in future assignments. We installed the web server, but is it actually running? Let’s check. Type in: $ sudo service httpd status Nope. Let’s start it. Type: $ sudo service httpd start We can use the service command to control the services running on the system. Let’s set up the service so that it automatically starts when the system boots up. Type in: $ sudo chkconfig httpd on Cool. We installed the Apache web server on our system, but what other programs are currently running? We can use the ps command to find out. Type in: $ ps -ax Lots of processes are running on our system. We can even look at the overall performance of our system using the top command. Let’s try that now. Type in: $ top The display might seem a little overwhelming at first. You should see lots of performance information displayed including the CPU usage, free memory, and a list of running tasks. We’re almost across the finish line. Let’s make sure all of our valuable work is stored in a git repository. First, we need to install git. Type in the command: $ sudo yum install git -y Check your work It’s very important to check your work before submitting it for grading. A misspelled, misplaced or missing file will cost you points. This may seem harsh, but the reality is that these sorts of mistakes have consequences in the real world. For example, a server instance could fail to launch properly and impact customers because a single required file is missing. Here is what the contents of your git repository should look like before final submission: ┣archive ┃ ┣ file1 ┃ ┣ file2 ┃ ┗ file3 ┣ namelist ┗ myfile.conf Saving our work in the git repository Next, make sure you are still in your home directory (/home/ec2-user). We will clone the git repository you created at the beginning of this exercise. You will need to modify this command by typing in the GitHub repository URL you copied earlier. $ git clone <your GitHub URL here>.git Example: git clone https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git The git application will ask you for your GitHub username and password. Note, if you have multi-factor authentication enabled on your GitHub account you will need to provide a personal token instead of your password. Git will clone (copy) the repository from GitHub to your Linux server.
Since the repository is empty the clone happens almost instantly. Check to make sure that a sub-directory called "hw2-seis665-02-spring2019-<username>" exists in the current directory (where <username> is your GitHub account name). Git automatically created this directory as part of the cloning process. Change to the hw2-seis665-02-spring2019-<username> directory and type: $ ls -la Notice the .git hidden directory? This is where git actually stores all of the file changes in your repository. Nothing is actually in your repository yet. Change back to the parent directory (cd ..). Next, let’s move some of our files into the repository. Type: $ mv archive hw2-seis665-02-spring2019-<username> $ mv namelist hw2-seis665-02-spring2019-<username> $ mv myfile.conf hw2-seis665-02-spring2019-<username> Hopefully, you remembered to use the auto-complete function to reduce some of that typing. Change to the hw2-seis665-02-spring2019-<username> directory and list the directory contents. Your files are in the working directory, but are not actually stored in the repository because they haven’t been committed yet. Type in: $ git status You should see a list of untracked files. Let’s tell git that we want these files tracked. Type in: $ git add * Now type in the git status command again. Notice how all the files are now being tracked and are ready to be committed. These files are in the git staging area. We’ll commit them to the repository next. Type: $ git commit -m 'assignment 2 files' Next, take a look at the commit log. Type: $ git log You should see your commit listed along with an assigned hash (long string of random-looking characters). Finally, let’s save the repository to our GitHub account. Type in: $ git push origin master The git client will ask you for your GitHub username and password before pushing the repository. Go back to the GitHub.com website and login if you have been logged out. Click on the repository link for the assignment. Do you see your files listed there? Congratulations, you completed the exercise! Terminate server The last step is to terminate your Linux instance. AWS will bill you for every hour the instance is running. The cost is nominal, but there’s no need to rack up unnecessary charges. Here are the steps to terminate your instance: Log into your AWS account and click on the EC2 dashboard. Click the Instances menu item. Select your server in the instances table. Click on the Actions drop down menu above the instances table. Select the Instance State menu option Click on the Terminate action. Your Linux instance will shutdown and disappear in a few minutes. The EC2 dashboard will continue to display the instance on your instance listing for another day or so. However, the state of the instance will be terminated. Submitting your assignment — IMPORTANT! If you haven’t already, please e-mail me your GitHub username in order to receive credit for this assignment. There is no need to email me to tell me that you have committed your work to GitHub or to ask me if your GitHub submission worked. If you can see your work in your GitHub repository, I can see your work.
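To pull the repository steps together, here is a condensed sketch of the git sequence used in this assignment. It is not a replacement for the full instructions above; substitute your own assignment repository URL and directory name:

```sh
cd ~                                # start from your home directory
git clone https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git
mv archive namelist myfile.conf hw2-seis665-02-spring2019-<your github id>/
cd hw2-seis665-02-spring2019-<your github id>
git status                          # the moved files show as untracked
git add *                           # stage the files
git commit -m 'assignment 2 files'  # commit them to the local repository
git log                             # confirm the commit and its hash
git push origin master              # push the commit to GitHub
```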
Ch-Jad / CH JaDi Rajput1: # Cmder Cmder is a **software package** created out of pure frustration over the absence of a usable console emulator on Windows. It is based on [ConEmu](https://conemu.github.io/) with *major* config overhaul, comes with a Monokai color scheme, amazing [clink](https://chrisant996.github.io/clink/) (further enhanced by [clink-completions](https://github.com/vladimir-kotikov/clink-completions)) and a custom prompt layout. ## Why use it The main advantage of Cmder is portability. It is designed to be totally self-contained with no external dependencies, which makes it great for **USB Sticks** or **cloud storage**. So you can carry your console, aliases and binaries (like wget, curl and git) with you anywhere. Cmder's user interface is also designed to be more eye-pleasing, and you can compare the main differences between Cmder and ConEmu [here](https://conemu.github.io/en/cmder.html). ## Installation ### Single User Portable Config 1. Download the [latest release](https://github.com/cmderdev/cmder/releases/) 2. Extract the archive. *Note: This path should not be `C:\Program Files` or anywhere else that would require Administrator access for modifying configuration files* 3. (optional) Place your own executable files into the `%cmder_root%\bin` folder to be injected into your PATH. 4. Run `Cmder.exe` ### Shared Cmder install with Non-Portable Individual User Config 1. Download the [latest release](https://github.com/cmderdev/cmder/releases/) 2. Extract the archive to a shared location. 3. (optional) Place your own executable files and custom app folders into the `%cmder_root%\bin`. See: [bin/README.md](./bin/Readme.md) - This folder will be injected into your PATH by default. - See `/max_depth [1-5]` in the 'Command Line Arguments for `init.bat`' table to add subdirectories recursively. 4. (optional) Place your own custom app folders into the `%cmder_root%\opt`. See: [opt/README.md](./opt/Readme.md) - This folder will NOT be injected into your PATH so you have total control of what gets added. 5. Run `Cmder.exe` with `/C` command line argument. Example: `cmder.exe /C %userprofile%\cmder_config` * This will create the following directory structure if it is missing. ``` c:\users\[CH JaDi Rajput]\cmder_config ├───bin ├───config │ └───profile.d └───opt ``` - (optional) Place your own executable files and custom app folders into `%userprofile%\cmder_config\bin`. - This folder will be injected into your PATH by default. - See `/max_depth [1-5]` in the 'Command Line Arguments for `init.bat`' table to add subdirectories recursively. - (optional) Place your own custom app folders into the `%userprofile%\cmder_config\opt`. - This folder will NOT be injected into your PATH so you have total control of what gets added. * Both the shared install and the individual user config locations can contain a full set of init and profile.d scripts enabling shared config with user overrides. See below. ## Cmder.exe Command Line Arguments | Argument | Description | | ------------------- | ----------------------------------------------------------------------- | | `/C [user_root_path]` | Individual user Cmder root folder. Example: `%userprofile%\cmder_config` | | `/M` | Use `conemu-%computername%.xml` for ConEmu settings storage instead of `user_conemu.xml` | | `/REGISTER [ALL, USER]` | Register a Windows Shell Menu shortcut. 
| | `/UNREGISTER [ALL, USER]` | Un-register a Windows Shell Menu shortcut. | | `/SINGLE` | Start Cmder in single mode. | | `/START [start_path]` | Folder path to start in. | | `/TASK [task_name]` | Task to start after launch. | | `/X [ConEmu extras pars]` | Forwards parameters to ConEmu | ## Context Menu Integration So you've experimented with Cmder a little and want to give it a shot in a more permanent home; ### Shortcut to open Cmder in a chosen folder 1. Open a terminal as an Administrator 2. Navigate to the directory you have placed Cmder 3. Execute `.\cmder.exe /REGISTER ALL` _If you get a message "Access Denied" ensure you are executing the command in an **Administrator** prompt._ In a file explorer window right click in or on a directory to see "Cmder Here" in the context menu. ## Keyboard shortcuts ### Tab manipulation * <kbd>Ctrl</kbd> + <kbd>T</kbd> : New tab dialog (maybe you want to open cmd as admin?) * <kbd>Ctrl</kbd> + <kbd>W</kbd> : Close tab * <kbd>Ctrl</kbd> + <kbd>D</kbd> : Close tab (if pressed on empty command) * <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>#Number</kbd> : Fast new tab: <kbd>1</kbd> - CMD, <kbd>2</kbd> - PowerShell * <kbd>Ctrl</kbd> + <kbd>Tab</kbd> : Switch to next tab * <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>Tab</kbd> : Switch to previous tab * <kbd>Ctrl</kbd> + <kbd>#Number</kbd> : Switch to tab #Number * <kbd>Alt</kbd> + <kbd>Enter</kbd>: Fullscreen ### Shell * <kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>U</kbd> : Traverse up in directory structure (lovely feature!) * <kbd>End</kbd>, <kbd>Home</kbd>, <kbd>Ctrl</kbd> : Traversing text with as usual on Windows * <kbd>Ctrl</kbd> + <kbd>R</kbd> : History search * <kbd>Shift</kbd> + Mouse : Select and copy text from buffer _(Some shortcuts are not yet documented, though they exist - please document them here)_ ## Features ### Access to multiple shells in one window using tabs You can open multiple tabs each containing one of the following shells: | Task | Shell | Description | | ---- | ----- | ----------- | | Cmder | `cmd.exe` | Windows `cmd.exe` shell enhanced with Git, Git aware prompt, Clink (GNU Readline), and Aliases. | | Cmder as Admin | `cmd.exe` | Administrative Windows `cmd.exe` Cmder shell. | | PowerShell | `powershell.exe` | Windows PowerShell enhanced with Git and Git aware prompt . | | PowerShell as Admin | `powershell.exe` | Administrative Windows `powershell.exe` Cmder shell. | | Bash | `bash.exe` | Unix/Linux like bash shell running on Windows. | | Bash as Admin | `bash.exe` | Administrative Unix/Linux like bash shell running on Windows. | | Mintty | `bash.exe` | Unix/Linux like bash shell running on Windows. See below for Mintty configuration differences | | Mintty as Admin | `bash.exe` | Administrative Unix/Linux like bash shell running on Windows. See below for Mintty configuration differences | Cmder, PowerShell, and Bash tabs all run on top of the Windows Console API and work as you might expect in Cmder with access to use ConEmu's color schemes, key bindings and other settings defined in the ConEmu Settings dialog. ⚠ *NOTE:* Only the full edition of Cmder comes with a pre-installed bash, using a vendored [git-for-windows](https://gitforwindows.org/) installation. The pre-configured Bash tabs may not work on Cmder mini edition without additional configuration. 
You may however, choose to use an external installation of bash, such as Microsoft's [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) (called WSL) or the [Cygwin](https://cygwin.com/) project which provides POSIX support on windows. ⚠ *NOTE:* Mintty tabs use a program called 'mintty' as the terminal emulator that is not based on the Windows Console API, rather it's rendered graphically by ConEmu. Mintty differs from the other tabs in that it supports xterm/xterm-256color TERM types, and does not work with ConEmu settings like color schemes and key bindings. As such, some differences in functionality are to be expected, such as Cmder not being able to apply a system-wide configuration to it. As a result mintty specific config is done via the `[%USERPROFILE%|$HOME]/.minttyrc` file. You may read more about Mintty and its config file [here](https://github.com/mintty/mintty). An example of setting Cmder portable terminal colors for mintty: From a bash/mintty shell: ``` cd $CMDER_ROOT/vendor git clone https://github.com/karlin/mintty-colors-solarized.git cd mintty-colors-solarized/ echo source \$CMDER_ROOT/vendor/mintty-colors-solarized/mintty-solarized-dark.sh>>$CMDER_ROOT/config/user_profile.sh ``` You may find some Monokai color schemes for mintty to match Cmder [here](https://github.com/oumu/mintty-color-schemes/blob/master/base16-monokai-mod.minttyrc). ### Changing Cmder Default `cmd.exe` Prompt Config File The default Cmder shell `cmd::Cmder` prompt is customized using `Clink` and is configured by editing a config file that exists in one of two locations: - Single User Portable Config `%CMDER_ROOT%\config\cmder_prompt_config.lua` - Shared Cmder install with Non-Portable Individual User Config `%CMDER_USER_CONFIG%\cmder_prompt_config.lua` If your Cmder setup does not have this file create it from `%CMDER_ROOT%\vendor\cmder_prompt_config.lua.default` Customizations include: - Colors. - Single/Multi-line. - Full path/Folder only. - `[user]@[host]` to the beginning of the prompt. - `~` for home directory. - `λ` symbol Documentation is in the file for each setting. ### Changing Cmder Default `cmd.exe` Shell Startup Behaviour Using Task Arguments 1. Press <kbd>Win</kbd> + <kbd>Alt</kbd> + <kbd>T</kbd> 1. Click either: * `1. {cmd::Cmder as Admin}` * `2. {cmd::Cmder}` 1. Add command line arguments where specified below: *Note: Pay attention to the quotes!* ``` cmd /s /k ""%ConEmuDir%\..\init.bat" [ADD ARGS HERE]" ``` ##### Command Line Arguments for `init.bat` | Argument | Description | Default | | ----------------------------- | ---------------------------------------------------------------------------------------------- | ------------------------------------- | | `/c [user cmder root]` | Enables user bin and config folders for 'Cmder as admin' sessions due to non-shared environment. | not set | | `/d` | Enables debug output. | not set | | `/f` | Enables Cmder Fast Init Mode. This disables some features, see pull request [#1492](https://github.com/cmderdev/cmder/pull/1942) for more details. | not set | | `/t` | Enables Cmder Timed Init Mode. This displays the time taken run init scripts | not set | | `/git_install_root [file path]` | User specified Git installation root path. | `%CMDER_ROOT%\vendor\Git-for-Windows` | | `/home [home folder]` | User specified folder path to set `%HOME%` environment variable. 
| `%userprofile%` | | `/max_depth [1-5]` | Define max recurse depth when adding to the path for `%cmder_root%\bin` and `%cmder_user_bin%` | 1 | | `/nix_tools [0-2]` | Define how `*nix` tools are added to the path. Prefer Windows Tools: 1, Prefer *nix Tools: 2, No `/usr/bin` in `%PATH%`: 0 | 1 | | `/svn_ssh [path to ssh.exe]` | Define `%SVN_SSH%` so we can use git svn with ssh svn repositories. | `%GIT_INSTALL_ROOT%\bin\ssh.exe` | | `/user_aliases [file path]` | File path pointing to user aliases. | `%CMDER_ROOT%\config\user_aliases.cmd` | | `/v` | Enables verbose output. | not set | | (custom arguments) | User defined arguments processed by `cexec`. Type `cexec /?` for more usage. | not set | ### Cmder Shell User Config Single user portable configuration is possible using the cmder specific shell config files. Edit the below files to add your own configuration: | Shell | Cmder Portable User Config | | ------------- | ----------------------------------------- | | Cmder | `%CMDER_ROOT%\config\user_profile.cmd` | | PowerShell | `$ENV:CMDER_ROOT\config\user_profile.ps1` | | Bash/Mintty | `$CMDER_ROOT/config/user_profile.sh` | Note: Bash and Mintty sessions will also source the `$HOME/.bashrc` file if it exists after it sources `$CMDER_ROOT/config/user_profile.sh`. You can write `*.cmd|*.bat`, `*.ps1`, and `*.sh` scripts and just drop them in the `%CMDER_ROOT%\config\profile.d` folder to add startup config to Cmder. | Shell | Cmder `Profile.d` Scripts | | ------------- | -------------------------------------------------- | | Cmder | `%CMDER_ROOT%\config\profile.d\*.bat and *.cmd` | | PowerShell | `$ENV:CMDER_ROOT\config\profile.d\*.ps1` | | Bash/Mintty | `$CMDER_ROOT/config/profile.d/*.sh` | #### Git Status Opt-Out To disable Cmder prompt git status globally add the following to `~/.gitconfig` or locally for a single repo `[repo]/.git/config` and start a new session. *Note: This configuration is not portable* ``` [cmder] status = false # Opt out of Git status for 'ALL' Cmder supported shells. cmdstatus = false # Opt out of Git status for 'Cmd.exe' shells. psstatus = false # Opt out of Git status for 'Powershell.exe and 'Pwsh.exe' shells. shstatus = false # Opt out of Git status for 'bash.exe' shells. ``` ### Aliases #### Cmder(`Cmd.exe`) Aliases You can define simple aliases for `cmd.exe` sessions with a command like `alias name=command`. Cmd.exe aliases support optional parameters through the `$1-9` or the `$*` special characters so the alias `vi=vim.exe $*` typed as `vi [filename]` will open `[filename]` in `vim.exe`. Cmd.exe aliases can also be more complex. See: [DOSKEY.EXE documentation](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/doskey) for additional details on complex aliases/macros for `cmd.exe` Aliases defined using the `alias.bat` command will automatically be saved in the `%CMDER_ROOT%\config\user_aliases.cmd` file To make an alias and/or any other profile settings permanent add it to one of the following: Note: These are loaded in this order by `$CMDER_ROOT/vendor/init.bat`. Anything stored in `%CMDER_ROOT%` will be a portable setting and will follow cmder to another machine. * `%CMDER_ROOT%\config\profile.d\*.cmd` and `\*.bat` * `%CMDER_ROOT%\config\user_aliases.cmd` * `%CMDER_ROOT%\config\user_profile.cmd` #### Bash.exe|Mintty.exe Aliases Bash shells support simple and complex aliases with optional parameters natively so they work a little different. Typing `alias name=command` will create an alias only for the current running session. 
To make an alias and/or any other profile settings permanent add it to one of the following: Note: These are loaded in this order by `$CMDER_ROOT/vendor/git-for-windows/etc/profile.d/cmder.sh`. Anything stored in `$CMDER_ROOT` will be a portable setting and will follow cmder to another machine. * `$CMDER_ROOT/config/profile.d/*.sh` * `$CMDER_ROOT/config/user_profile.sh` * `$HOME/.bashrc` If you add bash aliases to `$CMDER_ROOT/config/user_profile.sh` they will be portable and follow your Cmder folder if you copy it to another machine. `$HOME/.bashrc` defined aliases are not portable. #### PowerShell.exe Aliases PowerShell has native simple alias support, for example `[new-alias | set-alias] alias command`, so complex aliases with optional parameters are not supported in PowerShell sessions. Type `get-help [new-alias|set-alias] -full` for help on PowerShell aliases. To make an alias and/or any other profile settings permanent add it to one of the following: Note: These are loaded in this order by `$ENV:CMDER_ROOT\vendor\user_profile.ps1`. Anything stored in `$ENV:CMDER_ROOT` will be a portable setting and will follow cmder to another machine. * `$ENV:CMDER_ROOT\config\profile.d\*.ps1` * `$ENV:CMDER_ROOT\config\user_profile.ps1` ### SSH Agent To start the vendored SSH agent simply call `start-ssh-agent`, which is in the `vendor/git-for-windows/cmd` folder. If you want to run SSH agent on startup, include the line `@call "%GIT_INSTALL_ROOT%/cmd/start-ssh-agent.cmd"` in `%CMDER_ROOT%/config/user_profile.cmd` (usually just uncomment it). ### Vendored Git Cmder is by default shipped with a vendored Git installation. On each instance of launching Cmder, an attempt is made to locate any other user provided Git binaries. Upon finding a `git.exe` binary, Cmder further compares its version against the vendored one _by executing_ it. The vendored `git.exe` binary is _only_ used when it is more recent than the user-installed one. You may use your favorite version of Git by including its path in the `%PATH%` environment variable. Moreover, the **Mini** edition of Cmder (found on the [downloads page](https://github.com/cmderdev/cmder/releases)) excludes any vendored Git binaries. ### Using external Cygwin/Babun, MSys2, WSL, or Git for Windows SDK with Cmder. You may run bash (the default shell used on Linux, macOS and GNU/Hurd) externally on Cmder, using the following instructions: 1. Setup a new task by pressing <kbd>Win</kbd> +<kbd>Alt</kbd> + <kbd>T</kbd>. 1. Click the `+` button to add a task. 1. Name the new task in the top text box. 1. Provide task parameters, this is optional. 1. Add `cmd /c "[path_to_external_env]\bin\bash --login -i" -new_console` to the `Commands` text box. **Recommended Optional Steps:** Copy the `vendor/cmder_exinit` file to the Cygwin/Babun, MSys2, or Git for Windows SDK environments `/etc/profile.d/` folder to use portable settings in the `$CMDER_ROOT/config` folder. Note: MinGW could work if the init scripts include `profile.d` but this has not been tested. The destination file extension depends on the shell you use in that environment. For example: * bash - Copy to `/etc/profile.d/cmder_exinit.sh` * zsh - Copy to `/etc/profile.d/cmder_exinit.zsh` Uncomment and edit the below line in the script to use Cmder config even when launched from outside Cmder. ``` # CMDER_ROOT=${USERPROFILE}/cmder # This is not required if launched from Cmder. ``` ### Customizing user sessions using `init.bat` custom arguments. 
You can pass custom arguments to `init.bat` and use `cexec.cmd` in your `user_profile.cmd` to evaluate these arguments then execute commands based on a particular flag being detected or not. `init.bat` creates two shortcuts for using `cexec.cmd` in your profile scripts. #### `%ccall%` - Evaluates flags, runs commands if found, and returns to the calling script and continues. ``` ccall=call C:\Users\user\cmderdev\vendor\bin\cexec.cmd ``` Example: `%ccall% /startnotepad start notepad.exe` #### `%cexec%` - Evaluates flags, runs commands if found, and does not return to the calling script. ``` cexec=C:\Users\user\cmderdev\vendor\bin\cexec.cmd ``` Example: `%cexec% /startnotepad start notepad.exe` It is useful when you have multiple tasks to execute `cmder` and need it to initialize the session differently depending on the task chosen. To conditionally start `notepad.exe` when you start a specific `cmder` task: * Press <kbd>win</kbd>+<kbd>alt</kbd>+<kbd>t</kbd> * Click `+` to add a new task. * Add the below to the `Commands` block: ```batch cmd.exe /k ""%ConEmuDir%\..\init.bat" /startnotepad" ``` * Add the below to your `%cmder_root%\config\user_profile.cmd` ```batch %ccall% "/startNotepad" "start" "notepad.exe"` ``` To see detailed usage of `cexec`, type `cexec /?` in cmder. ### Integrating Cmder with [Hyper](https://github.com/zeit/hyper), [Microsoft VS Code](https://code.visualstudio.com/), and your favorite IDEs Cmder by default comes with a vendored ConEmu installation as the underlying terminal emulator, as stated [here](https://conemu.github.io/en/cmder.html). However, Cmder can in fact run in a variety of other terminal emulators, and even integrated IDEs. Assuming you have the latest version of Cmder, follow the following instructions to get Cmder working with your own terminal emulator. For instructions on how to integrate Cmder with your IDE, please read our [Wiki section](https://github.com/cmderdev/cmder/wiki#cmder-integration). ## Upgrading The process of upgrading Cmder depends on the version/build you are currently running. If you have a `[cmder_root]/config/user[-|_]conemu.xml`, you are running a newer version of Cmder, follow the below process: 1. Exit all Cmder sessions and relaunch `[cmder_root]/cmder.exe`, this backs up your existing `[cmder_root]/vendor/conemu-maximus5/conemu.xml` to `[cmder_root]/config/user[-|_]conemu.xml`. * The `[cmder_root]/config/user[-|_]conemu.xml` contains any custom settings you have made using the 'Setup Tasks' settings dialog. 2. Exit all Cmder sessions and backup any files you have manually edited under `[cmder_root]/vendor`. * Editing files under `[cmder_root]/vendor` is not recommended since you will need to re-apply these changes after any upgrade. All user customizations should go in `[cmder_root]/config` folder. 3. Delete the `[cmder_root]/vendor` folder. 4. Extract the new `cmder.zip` or `cmder_mini.zip` into `[cmder_root]/` overwriting all files when prompted. If you do not have a `[cmder_root]/config/user[-|_]conemu.xml`, you are running an older version of cmder, follow the below process: 1. Exit all Cmder sessions and backup `[cmder_root]/vendor/conemu-maximus5/conemu.xml` to `[cmder_root]/config/user[-|_]conemu.xml`. 2. Backup any files you have manually edited under `[cmder_root]/vendor`. * Editing files under `[cmder_root]/vendor` is not recommended since you will need to re-apply these changes after any upgrade. All user customizations should go in `[cmder_root]/config` folder. 3. Delete the `[cmder_root]/vendor` folder. 4. 
Extract the new `cmder.zip` or `cmder_mini.zip` into `[cmder_root]/`, overwriting all files when prompted. ## Current development builds You can download builds of the current development branch from AppVeyor via the following link: [AppVeyor artifacts](https://ci.appveyor.com/project/MartiUK/cmder/branch/master/artifacts) ## License All software included is bundled with its own license. The MIT License (MIT) Copyright (c) 2016 Samuel Vasko Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
BholaHrishikesh / Psychic Rotary PhoneIf you like this build, I've also written other posts on building a simple voice controlled Magic Mirror with the Raspberry Pi and the AIY Projects Voice Kit, and a face-tracking cyborg dinosaur called "Do-you-think-he-saurs" with the Raspberry Pi and the AIY Projects Vision Kit. At the tail end of last month, just ahead of the announcement of the pre-order availability of the new Google AIY Project Voice Kit, I finally decided to take the kit I'd managed to pick up with issue 57 of the MagPi out of its box and put it together. However, inspired by the 1986 Google Pi Intercom build put together by Martin Mander, I've been thinking ever since about venturing beyond the cardboard box and building my own retro-computing enclosure around the Voice Kit. I was initially thinking about using an old radio until I came across the GPO 746 Rotary Telephone. This is a modern-day replica of what must be the most iconic rotary dial phone in the United Kingdom. This is the phone that sat on everybody's desk, and in their front halls, throughout the 1970s. It was the standard rental phone, right up until British Telecom was privatised in the middle of the 1980s. The GPO 746 Rotary Telephone. While the GPO 746 is available in the United States, it's half the price, and there are a lot more colours to choose from, if you're buying the phone in the United Kingdom. A definite business opportunity for someone there, because it turns out that, on the inside, it's a rather interesting bit of hardware. Gathering your Tools For this project you'll need a small Phillips "00" watchmaker's screwdriver, a craft knife, scissors, a set of small wire snips, a drill and a 2 to 4mm bit, a soldering iron, solder, some jumper wires, female header blocks, a couple of LEDs, some electrical tape, a cable tie, and possibly some Sugru and heat shrink tubing, depending on how neat you want to be about things. While I did end up soldering a few things during the build, it was mostly restricted to joining wires together and should definitely be approachable for beginners. Opening the Box Ahead of the new Voice Kit hitting the shelves next month, I managed to get my hands on a few pre-production kits, which fortunately meant that I didn't have to take my cardboard box apart to put together a new build. The new AIY Project Voice Kit. The new AIY Voice Kit comes in a box very similar to the original kit distributed with the MagPi magazine. The box might be a bit thinner, but otherwise things look much the same. Missing from my pre-production kits were the two little plastic spacers that keep the Voice HAT from bending down and hitting the top of the Raspberry Pi. I'm presuming they'll be included in the production kits; without them the underside of the HAT tends to push downwards, and the solder tails of the speaker screw terminal short out against the Raspberry Pi's HDMI connector. I fixed this by adding some electrical tape to separate the two boards, but the spacers would have worked a lot better and added more stability. The only component swap was the arcade button: gone are the separate lamp, holder, microswitch and button; all four components have been replaced by a single button with everything integrated. Since it was somewhat fiddly to get that assembled last time, this is a definite improvement.
While my pre-production kits didn't include it, I'm told the retail version will have a copy of the MagPi Essentials AIY Projects book written by Lucy Hattersley on how to "Create a Voice Kit with your Raspberry Pi." Other than that, things went together much as before, at which point I quickly put together the Voice Kit; this time, however, I didn't bother with the cardboard box. Opening the Phone Pulling the replica GPO 746 out of its box, you'll find it comes in two parts: the main phone with the dial, and a separate handset which plugs in underneath the base. The first thing I needed to do was take the base unit of the phone apart and figure out how it worked. Until I knew what I had to work with, it was going to be impossible to figure out a sensible plan to integrate the Voice Kit. Opening up the GPO 746. The main PCB is mounted on the base along with a steel weight to give the impression of "heft" to the replica phone. There's also a large bell, which makes that distinctive ringing noise familiar to anyone that owned or used a GPO 746 back in the 1970s. The circuitry attached to the base of the GPO 746. To the left of the PCB is the jack socket where the telephone line is connected (two wires, red and green). To the top are two switches: one for the handset, and the other for the ringer volume. At the bottom is another jack socket (four wires: red, black, yellow, and green) where the handset is attached. The only thing of real interest on the PCB is the Hualon Microelectronics HM9102D, a switchable tone/pulse dialer chip, which we're actually not going to use. In fact, since the line voltage in the UK is +50V, pretty much none of it was going to be any use to me. So after measuring the voltage on the cable connecting the dialer to the PCB, I snipped the wires to the switches and the jacks (leaving them in place with as much trailing wire as possible in case they came in useful) and then removed both the PCB and the bell. After that, I filed down the plastic moulding that held everything in place, leaving me with a large flat area which was perfectly sized for the Raspberry Pi and the Voice HAT. The moulded top of the phone has two assemblies: a simple microswitch toggled by a hinged and sprung plastic plate when the phone handset is taken on and off the hook, and the dialer assembly, which is connected to the base and the PCB using a ribbon cable. How the Dialer Works It was time to break out the logic analyser. While I've got a Saleae Logic Pro 16 on my desk, if you're thinking about picking one up for the first time I'd really recommend the much cheaper Logic 8, or even the lower-specification Logic 4, rather than splashing out on the higher-end model. Either will take you a long way before you get the itch to upgrade. Logic analyser attached to the dial of the GPO 746, powered up using a bench power supply. Stripping the connector from the cable that connected the dialer to the PCB and powering it up with a bench power supply at +5V (which is more or less what I'd measured on the cable, and something I could reasonably expect to get from the Raspberry Pi), I connected the rest of the cables to my logic analyser and started turning the dial, confidently expecting to see something interesting going on. I found nothing: flat lines, no signal going down the wires at all. After playing around with the voltage for a few minutes, with no results, I stripped the dialer assembly out of the case for a closer look.
Dialer assembly removed from the GPO 746. The back of the dialer assembly has two LEDs, which I thought was rather odd since the dial isn't illuminated in any way, at least not from the outside. Interestingly, these two LEDs flash briefly when the dial is turned all the way around to hit the stop. Cracking the case open reveals something else interesting: it's a light box. Designed to keep the light from the LEDs inside, it has a hole which rotates around as you dial a number. Taking apart the dial assembly. The hole exposes one of twelve photoresistors to the light from the LEDs, and the number (or symbol) you're dialing determines which of the resistors will be under the hole when the dial stop is reached. The photoresistors inside the dial assembly. It was all passive circuitry. No wonder I hadn't seen anything on the logic analyser; there wasn't any logic to analyse. It was all analogue. Unfortunately for me, the Raspberry Pi has no built-in analogue inputs. That means I'd have to pull a Microchip MCP3008, or something similar, from the shelf and build some circuitry. I'd also have to figure out how the resistance of twelve photoresistors ended up travelling down just eight wires, which sort of had me puzzled at this point. That all sounded like a lot of effort. Since I really only wanted to dial a single digit to activate the Voice Kit, and I didn't care what that was, I decided to ignore the photoresistors and concentrate on the dial stop. The dialer mechanism showing the back of the dial stop (left) with microswitch. Unlike the original GPO 746, the dial stop on this replica moves. It drops when you hit it with the side of your finger when dialling a number. It turned out that it was connected to a microswitch, and when the microswitch was activated, this was the thing that briefly flashed the LEDs and exposed the appropriate photoresistor. It was actually all rather clever, and a really neat way to minimise the bill of materials cost for the phone. Startups thinking about building hardware could learn a lesson or two in economy from this phone. Using the logic analyser on the microswitch. Just to be sure I had this right, I dialled down the bench power supply to a Raspberry Pi-friendly +3.3V and wired up the microswitch to the logic analyser. Applying +3.3V (middle trace) and "dialing" shows the microswitch toggling (lower trace). Dialling a number on the dialer assembly worked as expected. We could ignore the dial itself and those photoresistors (which would be a pain to use with the Raspberry Pi) and just make use of the microswitch. In fact, we could more or less just replace the arcade button with this switch. Integrating the AIY Project Kit Moving on, I really wanted to reuse both the speaker and the microphone already in the handset instead of the ones that came with the Voice Kit. Handset stripped of its speaker and microphone. Taking apart the handset (the end caps holding the speaker and microphone just screw off) showed that there were four wires inside the curled cable: two for the speaker, and two for the electret condenser microphone. The Voice Kit makes use of two InvenSense ICS-43434 MEMS microphones which use I2S to communicate. They're a solid replacement for traditional 2-wire analog microphones like the one in the handset of the GPO 746. The Voice HAT Microphone daughter board.
Looking at the Voice HAT microphone daughter board, it has been designed so that you can break the two microphones away from the board at the perforations and then solder the wiring harness directly to the pads. So long as you keep the signals consistent you should be able to place the mics pretty much anywhere, and with a clock rate of ~3MHz, a longer cable should be fine. Unfortunately, I2S uses more wires than I had available. Unless I wanted to replace the curled cable, and I didn't really want to have to do that, I was in trouble. Putting that aside for a moment, I decided to start with the dialer assembly. Refitting it to the case, I snipped the wires leading to the microswitch and, grabbing the wiring harness for the arcade button, soldered the microswitch to the relevant wires in the harness. Soldering the Voice HAT button wiring harness to the phone's microswitch. I then grabbed an ultra-bright LED and a 220Ω resistor from the shelves and soldered the resistor in-line with the LED. I then attached my new LED assembly to the other two wires in the arcade button wiring harness. At this point I had a replacement for the arcade button that came with the Voice Kit. Attaching a current limiting resistor to my LED. Giving up on putting microphones into the handset, I pulled out a drill and, measuring the spacing between the two microphones, drilled a couple of holes in the external shell of the phone. Drilling two holes in the shell of the phone. These weren't going to be visible from the outside, as there is a void beneath the top of the phone where the handset rests. This forms a carrying handle where you can tuck your hand in and pick up the phone. In the old days this let you pick up the phone and wander around the room, at least so long as the cable tying you to the wall was long enough. Attaching the Voice HAT microphone board to the phone shell. I then went ahead and tucked the microphone board behind the spring which operated the hook mechanism. There was just enough room to secure it there with a cable tie and some Sugru. After that I plugged the handset into the jack on the base and connected the two wires from the handset jack that were attached to the speaker to the screw terminals on the Voice HAT. The re-wired internals of the modified GPO 746. Microphone board and Voice Kit both fixed in place with Sugru. Stripping out the jack where the phone line originally ran left two upright pillars that used to sit on either side of it. I threaded the end of a 2.5A micro-USB charger through the hole and tied it around the pillars for strain relief, which completed the re-wiring. The arcade button had been replaced with the dial stop microswitch and an LED, which I was going to tuck just ahead of the microphone board in a convenient clip-like part of the body moulding. The speaker had been swapped out directly with the one in the handset (fortunately the impedance match wasn't too far off), and the microphone had been mounted somewhere convenient inside the main body of the phone. A Working Phone Screwing everything back together, we once again have something that looks like a phone. The assembled phone. I booted the Raspberry Pi, logged in via SSH and went ahead and ran the src/assistant_library_with_button_demo.py script from the dev console. A working build, but it's not quite there yet. Success. Picking up the handset and dialling a number, any number, let you talk to the Voice Assistant. But it wasn't quite there yet. While it worked, it didn't feel like a phone.
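If you are reproducing the wiring above and want to convince yourself that the repurposed dial-stop microswitch and LED really are standing in for the arcade button before involving the Assistant, a few lines of plain RPi.GPIO are enough. This is a minimal sketch of my own, not part of the original build: it assumes the usual Voice Kit button wiring (switch on GPIO23 pulling the pin to ground when pressed, LED on GPIO25), so adjust the pin numbers if your harness differs.

```python
# Quick bench test for the dial-stop microswitch and LED wired onto the
# Voice HAT button harness. Pin numbers are assumptions based on the
# stock Voice Kit wiring (button on GPIO23, LED on GPIO25); change them
# to match your own harness. The switch is assumed to close to ground.
import time

import RPi.GPIO as GPIO

BUTTON_PIN = 23  # dial-stop microswitch (assumed)
LED_PIN = 25     # ultra-bright LED with its 220 ohm resistor (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    print("Dial a number; the LED should flash when the dial stop is hit.")
    while True:
        # With the pull-up enabled, the pin falls low when the switch closes.
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING, bouncetime=200)
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
except KeyboardInterrupt:
    pass
finally:
    GPIO.cleanup()
```

If the LED flashes on every spin of the dial, the harness is good and the assistant demo should behave exactly as if the original arcade button were still attached.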
Adding a Dial Tone What the phone needed was a dial tone. It needed to play when the handset was lifted and shut off when the phone was dialled, or the handset replaced. The phone hook works the opposite way to what you might expect: when the handset is in the cradle, the microswitch that simulates the hook is open, as the bar below it is pushed down by the hook. When the handset is off the hook, the microswitch is closed as the bar moves upwards. Conveniently, the Voice HAT breaks out most of the unused GPIO pins from the Raspberry Pi, so at least in theory wiring the microswitch attached to the hook mechanism to one of them should be fairly simple. Available unpopulated connectors on the Voice HAT. (Image credit: Google) Thinking about how to approach this in software, however, left us with a bit of a quandary. While the underlying Python GPIO library allows us to detect both the rising and falling edge events when a switch is toggled, the AIY wrapper code in the Voice Kit doesn't. While I could have gone in and modified the wrapper code to add that functionality, I decided I didn't want to mess around with that (perhaps I'll get around to it later and send them a pull request); instead I decided to fix it in hardware and wire the hook switch into both GPIO4 and GPIO17. That way I could use one pin to monitor for GPIO.RISING, and the other for GPIO.FALLING. Wiring up the phone hook. It's easy enough to do that using the aiy._drivers._button.Button class and two callback methods: one called when the handset is taken off the hook, and the other called when it is replaced. All the additional wiring in place and working. We can then use the pygame library to play a WAV file in the background when the handset is lifted, and stop it when the handset is replaced (a minimal sketch of this hook and dial-tone logic appears at the end of this post). We also have to add a stop command inside the _on_button_pressed() method so that the dial tone stops when the phone is dialled, and a call to stop_conversation() to stop the Voice Assistant talking if the handset is returned to the hook while Google is answering our question. Adding a Greeting and a Hang Up Noise We're not quite there yet: we can also use aiy.audio.play_wave() to add that distinctive disconnect noise when Google finishes talking and "hangs up", before returning to our dial tone. We can also use the aiy.audio.say('…') call to add a greeting when Google "picks up" the phone to talk to us. The final build. It's surprising how much atmosphere just adding these simple sounds brought to the build, and how much the user experience was improved. It now doesn't just look like a rotary phone; it sort of feels, and perhaps more importantly, sounds like one too. The Script The final version of the script has an amazingly small number of modifications from the original version distributed by Google, which sort of shows how simple it is to build something that looks and feels very different from the original cardboard box with not a lot of effort, at least on the software side. If you want to replicate the build you can grab the two mono WAV files I used for the build from Dropbox. Although, if you're outside the United Kingdom, you might want to replace the standard British dial tone of 350Hz and 450Hz (which you hear any time you lift a phone off the hook) with something more appropriate. Available to Preorder The new kits are being produced by Google, and are available to pre-order at Micro Center and through resellers like Adafruit and SeeedStudio.
The AIY Voice Kit is priced at $25 on its own, but you can pick one up for free if you order a Raspberry Pi 3 at $35 for in-store pickup from Micro Center. My new retro rotary phone build next to my original Voice Kit. The kit will be available in the United Kingdom through Pimoroni for £25, and you can expect shipping dates for kits ordered through them to be similar to those ordered from Micro Center.
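For anyone replicating the dial-tone and hook behaviour described earlier, here is a minimal sketch of the idea rather than the author's actual script. It assumes the hook microswitch is wired to both GPIO4 and GPIO17 as in the build (closing to ground, with the internal pull-ups enabled), that a mono `dial_tone.wav` sits next to the script (a hypothetical path), and it uses plain RPi.GPIO edge callbacks plus pygame in place of the aiy._drivers._button.Button class used in the real build.

```python
# Minimal sketch of the hook-switch / dial-tone behaviour described above.
# Assumptions (not taken from the original script): the hook microswitch is
# wired to both GPIO4 and GPIO17 and closes to ground, pull-ups are enabled,
# and the dial tone lives in dial_tone.wav next to this script. Plain
# RPi.GPIO callbacks stand in for the aiy._drivers._button.Button class.
import pygame
import RPi.GPIO as GPIO

OFF_HOOK_PIN = 4                  # watched for the handset being lifted
ON_HOOK_PIN = 17                  # watched for the handset being replaced
DIAL_TONE_WAV = 'dial_tone.wav'   # hypothetical path to a mono WAV file

pygame.mixer.init()
pygame.mixer.music.load(DIAL_TONE_WAV)

GPIO.setmode(GPIO.BCM)
GPIO.setup(OFF_HOOK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(ON_HOOK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def handset_lifted(channel):
    # Switch closes to ground when the handset comes off the hook, so the
    # pin falls low: start the dial tone and loop it until stopped.
    pygame.mixer.music.play(-1)

def handset_replaced(channel):
    # Handset back on the cradle: the switch opens and the pin rises.
    # Kill the dial tone (the real build also stops any conversation here).
    pygame.mixer.music.stop()

# One pin watches the falling edge and the other the rising edge, mirroring
# the two-pin trick used in the build; swap the edges if your switch is
# wired the other way around.
GPIO.add_event_detect(OFF_HOOK_PIN, GPIO.FALLING, callback=handset_lifted, bouncetime=200)
GPIO.add_event_detect(ON_HOOK_PIN, GPIO.RISING, callback=handset_replaced, bouncetime=200)

try:
    input('Dial tone demo running; press Enter to quit.\n')
finally:
    pygame.mixer.music.stop()
    GPIO.cleanup()
```

In the full build the dial tone is also stopped inside the _on_button_pressed() handler when a number is dialled, and stop_conversation() is called if the handset goes back on the hook while the Assistant is still talking, as described in the write-up above.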