external-secrets / External Secrets: External Secrets Operator reads information from a third-party service like AWS Secrets Manager and automatically injects the values as Kubernetes Secrets.
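External Secrets Operator itself is configured with Kubernetes manifests rather than application code. Purely as a hedged sketch of the consuming side, assuming the synced Secret is exposed to the pod as an environment variable or a mounted file (both standard Kubernetes mechanisms; the names and paths below are placeholders, not from this project), an application might read the value like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reading a value that External Secrets Operator has synced into a Kubernetes
// Secret, assuming the Secret is exposed to the pod in one of the usual ways.
public class SecretReader {
    public static void main(String[] args) throws IOException {
        // Case 1: the Secret key is injected as an environment variable (placeholder name).
        String secretValue = System.getenv("DATABASE_PASSWORD");

        // Case 2: the Secret is mounted as a file under a volume path (placeholder path).
        Path mounted = Path.of("/etc/secrets/database-password");
        if (secretValue == null && Files.exists(mounted)) {
            secretValue = Files.readString(mounted).trim();
        }

        System.out.println(secretValue == null ? "Secret not found" : "Secret loaded");
    }
}
```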
ftchirou / PredicateKit: 🎯 PredicateKit allows Swift developers to write expressive and type-safe predicates for CoreData using key-paths, comparisons and logical operators, literal values, and functions.
ManojKumarPatnaik / Major Project List: A list of practical projects that anyone can solve in any programming language (see solutions). These projects are divided into multiple categories, and each category has its own folder. To get started, simply fork this repo. CONTRIBUTING: See ways of contributing to this repo. You can contribute solutions (they will be published in this repo) to existing problems, add new projects, or remove existing ones. Make sure you follow all instructions properly. Solutions: You can find implementations of these projects in many other languages by other users in this repo. Credits: Problems are motivated by the ones shared at Martyr2’s Mega Project List and Rosetta Code.
Table of Contents: Numbers, Classic Algorithms, Graph, Data Structures, Text, Networking, Classes, Threading, Web, Files, Databases, Graphics and Multimedia, Security.
Numbers: Find PI to the Nth Digit - Enter a number and have the program generate PI up to that many decimal places. Keep a limit to how far the program will go. Find e to the Nth Digit - Just like the previous problem, but with e instead of PI. Enter a number and have the program generate e up to that many decimal places. Keep a limit to how far the program will go. Fibonacci Sequence - Enter a number and have the program generate the Fibonacci sequence to that number or to the Nth number. Prime Factorization - Have the user enter a number and find all Prime Factors (if there are any) and display them. Next Prime Number - Have the program find prime numbers until the user chooses to stop asking for the next one. Find Cost of Tile to Cover W x H Floor - Calculate the total cost of the tile it would take to cover a floor plan of width and height, using a cost entered by the user. Mortgage Calculator - Calculate the monthly payments of a fixed-term mortgage over given Nth terms at a given interest rate. Also, figure out how long it will take the user to pay back the loan. For added complexity, add an option for users to select the compounding interval (Monthly, Weekly, Daily, Continually). Change Return Program - The user enters a cost and then the amount of money given. The program will figure out the change and the number of quarters, dimes, nickels, and pennies needed for the change (see the sketch below). Binary to Decimal and Back Converter - Develop a converter to convert a decimal number to binary or a binary number to its decimal equivalent. Calculator - A simple calculator to do basic operations. Make it a scientific calculator for added complexity. Unit Converter (temp, currency, volume, mass, and more) - Converts various units between one another. The user enters the type of unit being entered, the type of unit they want to convert to, and then the value. The program will then make the conversion. Alarm Clock - A simple clock where it plays a sound after X number of minutes/seconds or at a particular time. Distance Between Two Cities - Calculates the distance between two cities and allows the user to specify a unit of distance. This program may require finding coordinates for the cities, like latitude and longitude. Credit Card Validator - Takes in a credit card number from a common credit card vendor (Visa, MasterCard, American Express, Discover) and validates it to make sure that it is a valid number (look into how credit cards use a checksum). Tax Calculator - Asks the user to enter a cost and either a country or state tax. It then returns the tax plus the total cost with tax.
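Several of the Numbers projects above need only a little integer arithmetic. As a hedged sketch of one possible approach to the Change Return Program (class and method names are illustrative, not from the repository; amounts are handled in whole cents to avoid floating-point rounding):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Scanner;

// Change Return Program: given a cost and the amount tendered, report the
// change broken down into quarters, dimes, nickels, and pennies.
public class ChangeReturn {

    // Greedy breakdown of a change amount (in cents) into coin counts.
    static Map<String, Integer> breakDown(int changeCents) {
        int[] values = {25, 10, 5, 1};
        String[] names = {"quarters", "dimes", "nickels", "pennies"};
        Map<String, Integer> result = new LinkedHashMap<>();
        for (int i = 0; i < values.length; i++) {
            result.put(names[i], changeCents / values[i]);
            changeCents %= values[i];
        }
        return result;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Cost: ");
        double cost = in.nextDouble();
        System.out.print("Amount given: ");
        double given = in.nextDouble();

        // Work in whole cents so floating-point rounding errors cannot creep in.
        int changeCents = (int) Math.round((given - cost) * 100);
        if (changeCents < 0) {
            System.out.println("Not enough money given.");
            return;
        }
        System.out.printf("Change due: $%.2f%n", changeCents / 100.0);
        breakDown(changeCents).forEach((coin, count) ->
                System.out.println("  " + coin + ": " + count));
    }
}
```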
Factorial Finder - The Factorial of a positive integer, n, is defined as the product of the sequence n, n-1, n-2, ...1, and the factorial of zero, 0, is defined as being 1. Solve this using both loops and recursion. Complex Number Algebra - Show addition, multiplication, negation, and inversion of complex numbers in separate functions. (Subtraction and division operations can be made with pairs of these operations.) Print the results for each operation tested. Happy Numbers - A happy number is defined by the following process. Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers, while those that do not end in 1 are unhappy numbers. Display an example of your output here. Find the first 8 happy numbers. Number Names - Show how to spell out a number in English. You can use a preexisting implementation or roll your own, but you should support inputs up to at least one million (or the maximum value of your language's default bounded integer type if that's less). Optional: Support for inputs other than positive integers (like zero, negative integers, and floating-point numbers). Coin Flip Simulation - Write some code that simulates flipping a single coin however many times the user decides. The code should record the outcomes and count the number of tails and heads. Limit Calculator - Ask the user to enter f(x) and the limit value, then return the value of the limit statement Optional: Make the calculator capable of supporting infinite limits. Fast Exponentiation - Ask the user to enter 2 integers a and b and output a^b (i.e. pow(a,b)) in O(LG n) time complexity. Classic Algorithms Collatz Conjecture - Start with a number n > 1. Find the number of steps it takes to reach one using the following process: If n is even, divide it by 2. If n is odd, multiply it by 3 and add 1. Sorting - Implement two types of sorting algorithms: Merge sort and bubble sort. Closest pair problem - The closest pair of points problem or closest pair problem is a problem of computational geometry: given n points in metric space, find a pair of points with the smallest distance between them. Sieve of Eratosthenes - The sieve of Eratosthenes is one of the most efficient ways to find all of the smaller primes (below 10 million or so). Graph Graph from links - Create a program that will create a graph or network from a series of links. Eulerian Path - Create a program that will take as an input a graph and output either an Eulerian path or an Eulerian cycle, or state that it is not possible. An Eulerian path starts at one node and traverses every edge of a graph through every node and finishes at another node. An Eulerian cycle is an eulerian Path that starts and finishes at the same node. Connected Graph - Create a program that takes a graph as an input and outputs whether every node is connected or not. Dijkstra’s Algorithm - Create a program that finds the shortest path through a graph using its edges. Minimum Spanning Tree - Create a program that takes a connected, undirected graph with weights and outputs the minimum spanning tree of the graph i.e., a subgraph that is a tree, contains all the vertices, and the sum of its weights is the least possible. Data Structures Inverted index - An Inverted Index is a data structure used to create full-text search. 
Given a set of text files, implement a program to create an inverted index. Also, create a user interface to do a search using that inverted index which returns a list of files that contain the query term/terms. The search index can be in memory. Text Fizz Buzz - Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. Reverse a String - Enter a string and the program will reverse it and print it out. Pig Latin - Pig Latin is a game of alterations played in the English language game. To create the Pig Latin form of an English word the initial consonant sound is transposed to the end of the word and an ay is affixed (Ex.: "banana" would yield anana-bay). Read Wikipedia for more information on rules. Count Vowels - Enter a string and the program counts the number of vowels in the text. For added complexity have it report a sum of each vowel found. Check if Palindrome - Checks if the string entered by the user is a palindrome. That is that it reads the same forwards as backward like “racecar” Count Words in a String - Counts the number of individual words in a string. For added complexity read these strings in from a text file and generate a summary. Text Editor - Notepad-style application that can open, edit, and save text documents. Optional: Add syntax highlighting and other features. RSS Feed Creator - Given a link to RSS/Atom Feed, get all posts and display them. Quote Tracker (market symbols etc) - A program that can go out and check the current value of stocks for a list of symbols entered by the user. The user can set how often the stocks are checked. For CLI, show whether the stock has moved up or down. Optional: If GUI, the program can show green up and red down arrows to show which direction the stock value has moved. Guestbook / Journal - A simple application that allows people to add comments or write journal entries. It can allow comments or not and timestamps for all entries. Could also be made into a shoutbox. Optional: Deploy it on Google App Engine or Heroku or any other PaaS (if possible, of course). Vigenere / Vernam / Ceasar Ciphers - Functions for encrypting and decrypting data messages. Then send them to a friend. Regex Query Tool - A tool that allows the user to enter a text string and then in a separate control enter a regex pattern. It will run the regular expression against the source text and return any matches or flag errors in the regular expression. Networking FTP Program - A file transfer program that can transfer files back and forth from a remote web sever. Bandwidth Monitor - A small utility program that tracks how much data you have uploaded and downloaded from the net during the course of your current online session. See if you can find out what periods of the day you use more and less and generate a report or graph that shows it. Port Scanner - Enter an IP address and a port range where the program will then attempt to find open ports on the given computer by connecting to each of them. On any successful connections mark the port as open. Mail Checker (POP3 / IMAP) - The user enters various account information include web server and IP, protocol type (POP3 or IMAP), and the application will check for email at a given interval. Country from IP Lookup - Enter an IP address and find the country that IP is registered in. Optional: Find the Ip automatically. 
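The Port Scanner project described above in the Networking list boils down to attempting a TCP connection with a short timeout on each port. A minimal sketch in Java (the target host and port range are placeholders; scan only machines you control):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Port Scanner: try to open a TCP connection to each port in a range and
// report the ports that accept the connection.
public class PortScanner {

    static boolean isOpen(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;              // connection succeeded, port is open
        } catch (IOException e) {
            return false;             // refused, filtered, or timed out
        }
    }

    public static void main(String[] args) {
        String host = "127.0.0.1";    // placeholder target
        for (int port = 1; port <= 1024; port++) {
            if (isOpen(host, port, 200)) {
                System.out.println("Port " + port + " is open");
            }
        }
    }
}
```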
Whois Search Tool - Enter an IP or host address and have it look it up through whois and return the results to you. Site Checker with Time Scheduling - An application that attempts to connect to a website or server every so many minute or a given time and check if it is up. If it is down, it will notify you by email or by posting a notice on the screen. Classes Product Inventory Project - Create an application that manages an inventory of products. Create a product class that has a price, id, and quantity on hand. Then create an inventory class that keeps track of various products and can sum up the inventory value. Airline / Hotel Reservation System - Create a reservation system that books airline seats or hotel rooms. It charges various rates for particular sections of the plane or hotel. For example, first class is going to cost more than a coach. Hotel rooms have penthouse suites which cost more. Keep track of when rooms will be available and can be scheduled. Company Manager - Create a hierarchy of classes - abstract class Employee and subclasses HourlyEmployee, SalariedEmployee, Manager, and Executive. Everyone's pay is calculated differently, research a bit about it. After you've established an employee hierarchy, create a Company class that allows you to manage the employees. You should be able to hire, fire, and raise employees. Bank Account Manager - Create a class called Account which will be an abstract class for three other classes called CheckingAccount, SavingsAccount, and BusinessAccount. Manage credits and debits from these accounts through an ATM-style program. Patient / Doctor Scheduler - Create a patient class and a doctor class. Have a doctor that can handle multiple patients and set up a scheduling program where a doctor can only handle 16 patients during an 8 hr workday. Recipe Creator and Manager - Create a recipe class with ingredients and put them in a recipe manager program that organizes them into categories like desserts, main courses, or by ingredients like chicken, beef, soups, pies, etc. Image Gallery - Create an image abstract class and then a class that inherits from it for each image type. Put them in a program that displays them in a gallery-style format for viewing. Shape Area and Perimeter Classes - Create an abstract class called Shape and then inherit from it other shapes like diamond, rectangle, circle, triangle, etc. Then have each class override the area and perimeter functionality to handle each shape type. Flower Shop Ordering To Go - Create a flower shop application that deals in flower objects and use those flower objects in a bouquet object which can then be sold. Keep track of the number of objects and when you may need to order more. Family Tree Creator - Create a class called Person which will have a name, when they were born, and when (and if) they died. Allow the user to create these Person classes and put them into a family tree structure. Print out the tree to the screen. Threading Create A Progress Bar for Downloads - Create a progress bar for applications that can keep track of a download in progress. The progress bar will be on a separate thread and will communicate with the main thread using delegates. Bulk Thumbnail Creator - Picture processing can take a bit of time for some transformations. Especially if the image is large. Create an image program that can take hundreds of images and converts them to a specified size in the background thread while you do other things. 
For added complexity, have one thread handling re-sizing, have another bulk renaming of thumbnails, etc. Web Page Scraper - Create an application that connects to a site and pulls out all links, or images, and saves them to a list. Optional: Organize the indexed content and don’t allow duplicates. Have it put the results into an easily searchable index file. Online White Board - Create an application that allows you to draw pictures, write notes and use various colors to flesh out ideas for projects. Optional: Add a feature to invite friends to collaborate on a whiteboard online. Get Atomic Time from Internet Clock - This program will get the true atomic time from an atomic time clock on the Internet. Use any one of the atomic clocks returned by a simple Google search. Fetch Current Weather - Get the current weather for a given zip/postal code. Optional: Try locating the user automatically. Scheduled Auto Login and Action - Make an application that logs into a given site on a schedule and invokes a certain action and then logs out. This can be useful for checking webmail, posting regular content, or getting info for other applications and saving it to your computer. E-Card Generator - Make a site that allows people to generate their own little e-cards and send them to other people. Do not use Flash. Use a picture library and perhaps insightful mottos or quotes. Content Management System - Create a content management system (CMS) like Joomla, Drupal, PHP Nuke, etc. Start small. Optional: Allow for the addition of modules/addons. Web Board (Forum) - Create a forum for you and your buddies to post, administer and share thoughts and ideas. CAPTCHA Maker - Ever see those images with letters numbers when you signup for a service and then ask you to enter what you see? It keeps web bots from automatically signing up and spamming. Try creating one yourself for online forms. Files Quiz Maker - Make an application that takes various questions from a file, picked randomly, and puts together a quiz for students. Each quiz can be different and then reads a key to grade the quizzes. Sort Excel/CSV File Utility - Reads a file of records, sorts them, and then writes them back to the file. Allow the user to choose various sort style and sorting based on a particular field. Create Zip File Maker - The user enters various files from different directories and the program zips them up into a zip file. Optional: Apply actual compression to the files. Start with Huffman Algorithm. PDF Generator - An application that can read in a text file, HTML file, or some other file and generates a PDF file out of it. Great for a web-based service where the user uploads the file and the program returns a PDF of the file. Optional: Deploy on GAE or Heroku if possible. Mp3 Tagger - Modify and add ID3v1 tags to MP3 files. See if you can also add in the album art into the MP3 file’s header as well as other ID3v2 tags. Code Snippet Manager - Another utility program that allows coders to put in functions, classes, or other tidbits to save for use later. Organized by the type of snippet or language the coder can quickly lookup code. Optional: For extra practice try adding syntax highlighting based on the language. Databases SQL Query Analyzer - A utility application in which a user can enter a query and have it run against a local database and look for ways to make it more efficient. Remote SQL Tool - A utility that can execute queries on remote servers from your local computer across the Internet. 
It should take in a remote host, user name, and password, run the query and return the results. Report Generator - Create a utility that generates a report based on some tables in a database. Generates sales reports based on the order/order details tables or sums up the day's current database activity. Event Scheduler and Calendar - Make an application that allows the user to enter a date and time of an event, event notes, and then schedule those events on a calendar. The user can then browse the calendar or search the calendar for specific events. Optional: Allow the application to create re-occurrence events that reoccur every day, week, month, year, etc. Budget Tracker - Write an application that keeps track of a household’s budget. The user can add expenses, income, and recurring costs to find out how much they are saving or losing over a period of time. Optional: Allow the user to specify a date range and see the net flow of money in and out of the house budget for that time period. TV Show Tracker - Got a favorite show you don’t want to miss? Don’t have a PVR or want to be able to find the show to then PVR it later? Make an application that can search various online TV Guide sites, locate the shows/times/channels and add them to a database application. The database/website then can send you email reminders that a show is about to start and which channel it will be on. Travel Planner System - Make a system that allows users to put together their own little travel itinerary and keep track of the airline/hotel arrangements, points of interest, budget, and schedule. Graphics and Multimedia Slide Show - Make an application that shows various pictures in a slide show format. Optional: Try adding various effects like fade in/out, star wipe, and window blinds transitions. Stream Video from Online - Try to create your own online streaming video player. Mp3 Player - A simple program for playing your favorite music files. Add features you think are missing from your favorite music player. Watermarking Application - Have some pictures you want copyright protected? Add your own logo or text lightly across the background so that no one can simply steal your graphics off your site. Make a program that will add this watermark to the picture. Optional: Use threading to process multiple images simultaneously. Turtle Graphics - This is a common project where you create a floor of 20 x 20 squares. Using various commands you tell a turtle to draw a line on the floor. You have moved forward, left or right, lift or drop the pen, etc. Do a search online for "Turtle Graphics" for more information. Optional: Allow the program to read in the list of commands from a file. GIF Creator A program that puts together multiple images (PNGs, JPGs, TIFFs) to make a smooth GIF that can be exported. Optional: Make the program convert small video files to GIFs as well. Security Caesar cipher - Implement a Caesar cipher, both encoding, and decoding. The key is an integer from 1 to 25. This cipher rotates the letters of the alphabet (A to Z). The encoding replaces each letter with the 1st to 25th next letter in the alphabet (wrapping Z to A). So key 2 encrypts "HI" to "JK", but key 20 encrypts "HI" to "BC". This simple "monoalphabetic substitution cipher" provides almost no security, because an attacker who has the encoded message can either use frequency analysis to guess the key, or just try all 25 keys.
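The Caesar cipher described at the end of this entry is a short exercise in modular arithmetic. A minimal sketch in Java that reproduces the examples given above (key 2 turns "HI" into "JK", key 20 turns it into "BC"); the class name is illustrative:

```java
// Caesar cipher: rotate each letter of the alphabet by a fixed key (1-25),
// wrapping Z back around to A.
public class CaesarCipher {

    static String encode(String text, int key) {
        StringBuilder out = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isUpperCase(c)) {
                out.append((char) ('A' + (c - 'A' + key) % 26));
            } else if (Character.isLowerCase(c)) {
                out.append((char) ('a' + (c - 'a' + key) % 26));
            } else {
                out.append(c);            // leave spaces and punctuation alone
            }
        }
        return out.toString();
    }

    static String decode(String text, int key) {
        return encode(text, 26 - key);    // shifting the rest of the way undoes the key
    }

    public static void main(String[] args) {
        System.out.println(encode("HI", 2));                // JK
        System.out.println(encode("HI", 20));               // BC
        System.out.println(decode(encode("HELLO", 7), 7));  // HELLO
    }
}
```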
egeozcan / Ppipe: pipes values through functions, an alternative to using the proposed pipe operator (|>) for ES.
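Ppipe itself is a JavaScript utility, but the idea it implements, threading a value through a chain of functions, can be sketched in any language. Purely as an illustration of that idea (this is not the library's API), Java's Function composition expresses the same pipeline:

```java
import java.util.function.Function;

// Illustration only: piping a value through a chain of functions, the idea
// behind the proposed |> operator. Each step feeds its result to the next.
public class PipeDemo {
    public static void main(String[] args) {
        Function<String, String> trim = String::trim;
        Function<String, String> upper = String::toUpperCase;
        Function<String, String> exclaim = s -> s + "!";

        // Equivalent to: "  hello  " |> trim |> upper |> exclaim
        String result = trim.andThen(upper).andThen(exclaim).apply("  hello  ");
        System.out.println(result); // HELLO!
    }
}
```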
jettbrains / L: W3C Strategic Highlights September 2019. This report was prepared for the September 2019 W3C Advisory Committee Meeting (W3C Member link). See the accompanying W3C Fact Sheet — September 2019. For the previous edition, see the April 2019 W3C Strategic Highlights. For future editions of this report, please consult the latest version. A Chinese translation is available.
Contents: Introduction; Future Web Standards; Meeting Industry Needs; Web Payments; Digital Publishing; Media and Entertainment; Web & Telecommunications; Real-Time Communications (WebRTC); Web & Networks; Automotive; Web of Things; Strengthening the Core of the Web; HTML; CSS; Fonts; SVG; Audio; Performance; Web Performance; WebAssembly; Testing; Browser Testing and Tools; WebPlatform Tests; Web of Data; Web for All; Security, Privacy, Identity; Internationalization (i18n); Web Accessibility; Outreach to the world; W3C Developer Relations; W3C Training; Translations; W3C Liaisons.
Introduction: This report highlights recent work to enhance the existing landscape of the Web platform and to drive innovation for the growth and strength of the Web. 33 working groups and a dozen interest groups enable W3C to pursue its mission through the creation of Web standards, guidelines, and supporting materials. We track the tremendous work done across the Consortium through homogeneous workspaces in GitHub, which enables better monitoring and management. We are in the middle of a period where we are chartering numerous working groups, which demonstrates the rapid pace of change for the Web platform: After 4 years, we are nearly ready to publish a Payment Request API Proposed Recommendation and we need to soon charter follow-on work. In the last year we chartered the Web Payment Security Interest Group. In the last year we chartered the Web Media Working Group with 7 specifications for next-generation media support on the Web. We have Accessibility Guidelines under W3C Member review, which includes Silver, a new approach. We have just launched the Decentralized Identifier Working Group, which has tremendous potential because a Decentralized Identifier (DID) is an identifier that is globally unique, resolvable with high availability, and cryptographically verifiable. We have Privacy IG (PING) under W3C Member review, which strengthens our focus on the tradeoff between privacy and function. We have a new CSS charter under W3C Member review, which maps the group's work for the next three years. In this period, W3C and the WHATWG have successfully completed the negotiation of a Memorandum of Understanding rooted in the mutual belief that having two distinct specifications claiming to be normative is generally harmful for the Web community. The MOU, signed last May, describes how the two organizations are to collaborate on the development of a single authoritative version of the HTML and DOM specifications. W3C subsequently rechartered the HTML Working Group to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and for the production of W3C Recommendations from WHATWG Review Drafts. As the Web evolves continuously, some groups are looking for ways for specifications to do so as well. So-called "evergreen recommendations" or "living standards" aim to track continuous development (and maintenance) of features, on a feature-by-feature basis, while getting review and patent commitments. We see the maturation and further development of an incredible number of new technologies coming to the Web.
Continued progress in many areas demonstrates the vitality of the W3C and the Web community, as the rest of the report illustrates. Future Web Standards W3C has a variety of mechanisms for listening to what the community thinks could become good future Web standards. These include discussions with the Membership, discussions with other standards bodies, the activities of thousands of participants in over 300 community groups, and W3C Workshops. There are lots of good ideas. The W3C strategy team has been identifying promising topics and invites public participation. Future, recent and under consideration Workshops include: Inclusive XR (5-6 November 2019, Seattle, WA, USA) to explore existing and future approaches on making Virtual and Augmented Reality experiences more inclusive, including to people with disabilities; W3C Workshop on Data Models for Transportation (12-13 September 2019, Palo Alto, CA, USA) W3C Workshop on Web Games (27-28 June 2019, Redmond, WA, USA), view report Second W3C Workshop on the Web of Things (3-5 June 2019, Munich, Germany) W3C Workshop on Web Standardization for Graph Data; Creating Bridges: RDF, Property Graph and SQL (4-6 March 2019, Berlin, Germany), view report Web & Machine Learning. The Strategy Funnel documents the staff's exploration of potential new work at various phases: Exploration and Investigation, Incubation and Evaluation, and eventually to the chartering of a new standards group. The Funnel view is a GitHub Project where new area are issues represented by “cards” which move through the columns, usually from left to right. Most cards start in Exploration and move towards Chartering, or move out of the funnel. Public input is welcome at any stage but particularly once Incubation has begun. This helps W3C identify work that is sufficiently incubated to warrant standardization, to review the ecosystem around the work and indicate interest in participating in its standardization, and then to draft a charter that reflects an appropriate scope. Ongoing feedback can speed up the overall standardization process. Since the previous highlights document, W3C has chartered a number of groups, and started discussion on many more: Newly Chartered or Rechartered Web Application Security WG (03-Apr) Web Payment Security IG (17-Apr) Patent and Standards IG (24-Apr) Web Applications WG (14-May) Web & Networks IG (16-May) Media WG (23-May) Media and Entertainment IG (06-Jun) HTML WG (06-Jun) Decentralized Identifier WG (05-Sep) Extended Privacy IG (PING) (30-Sep) Verifiable Claims WG (30-Sep) Service Workers WG (31-Dec) Dataset Exchange WG (31-Dec) Web of Things Working Group (31-Dec) Web Audio Working Group (31-Dec) Proposed charters / Advance Notice Accessibility Guidelines WG Privacy IG (PING) RDF Literal Direction WG Timed Text WG CSS WG Web Authentication WG Closed Internationalization Tag Set IG Meeting Industry Needs Web Payments All Web Payments specifications W3C's payments standards enable a streamlined checkout experience, enabling a consistent user experience across the Web with lower front end development costs for merchants. Users can store and reuse information and more quickly and accurately complete online transactions. The Web Payments Working Group has republished Payment Request API as a Candidate Recommendation, aiming to publish a Proposed Recommendation in the Fall 2019, and is discussing use cases and features for Payment Request after publication of the 1.0 Recommendation. 
Browser vendors have been finalizing implementation of features added in the past year (view the implementation report). As work continues on the Payment Handler API and its implementation (currently in Chrome and Edge Canary), one focus in 2019 is to increase adoption in other browsers. Recently, Mastercard demonstrated the use of Payment Request API to carry out EMVCo's Secure Remote Commerce (SRC) protocol whose payment method definition is being developed with active participation by Visa, Mastercard, American Express, and Discover. Payment method availability is a key factor in merchant considerations about adopting Payment Request API. The ability to get uniform adoption of a new payment method such as Secure Remote Commerce (SRC) also depends on the availability of the Payment Handler API in browsers, or of proprietary alternatives. Web Monetization, which the Web Payments Working Group will discuss again at its face-to-face meeting in September, can be used to enable micropayments as an alternative revenue stream to advertising. Since the beginning of 2019, Amazon, Brave Software, JCB, Certus Cybersecurity Solutions and Netflix have joined the Web Payments Working Group. In April, W3C launched the Web Payment Security Group to enable W3C, EMVCo, and the FIDO Alliance to collaborate on a vision for Web payment security and interoperability. Participants will define areas of collaboration and identify gaps between existing technical specifications in order to increase compatibility among different technologies, such as: How do SRC, FIDO, and Payment Request relate? The Payment Services Directive 2 (PSD2) regulations in Europe are scheduled to take effect in September 2019. What is the role of EMVCo, W3C, and FIDO technologies, and what is the current state of readiness for the deadline? How can we improve privacy on the Web at the same time as we meet industry requirements regarding user identity? Digital Publishing All Digital Publishing specifications, Publication milestones The Web is the universal publishing platform. Publishing is increasingly impacted by the Web, and the Web increasingly impacts Publishing. Topic of particular interest to Publishing@W3C include typography and layout, accessibility, usability, portability, distribution, archiving, offline access, print on demand, and reliable cross referencing. And the diverse publishing community represented in the groups consist of the traditional "trade" publishers, ebook reading system manufacturers, but also publishers of audio book, scholarly journals or educational materials, library scientists or browser developers. The Publishing Working Group currently concentrates on Audiobooks which lack a comprehensive standard, thus incurring extra costs and time to publish in this booming market. Active development is ongoing on the future standard: Publication Manifest Audiobook profile for Web Publications Lightweight Packaging Format The BD Comics Manga Community Group, the Synchronized Multimedia for Publications Community Group, the Publishing Community Group and a future group on archival, are companions to the working group where specific work is developed and incubated. The Publishing Community Group is a recently launched incubation channel for Publishing@W3C. The goal of the group is to propose, document, and prototype features broadly related to: publications on the Web reading modes and systems and the user experience of publications The EPUB 3 Community Group has successfully completed the revision of EPUB 3.2. 
The Publishing Business Group fosters ongoing participation by members of the publishing industry and the overall ecosystem in the development of Web infrastructure to better support the needs of the industry. The Business Group serves as an additional conduit to the Publishing Working Group and several Community Groups for feedback between the publishing ecosystem and W3C. The Publishing BG has played a vital role in fostering and advancing the adoption and continued development of EPUB 3. In particular the BG provided critical support to the update of EPUBCheck to validate EPUB content to the new EPUB 3.2 specification. This resulted in the development, in conjunction with the EPUB3 Community Group, of a new generation of EPUBCheck, i.e., EPUBCheck 4.2 production-ready release. Media and Entertainment All Media specifications The Media and Entertainment vertical tracks media-related topics and features that create immersive experiences for end users. HTML5 brought standard audio and video elements to the Web. Standardization activities since then have aimed at turning the Web into a professional platform fully suitable for the delivery of media content and associated materials, enabling missing features to stream video content on the Web such as adaptive streaming and content protection. Together with Microsoft, Comcast, Netflix and Google, W3C received an Technology & Engineering Emmy Award in April 2019 for standardization of a full TV experience on the Web. Current goals are to: Reinforce core media technologies: Creation of the Media Working Group, to develop media-related specifications incubated in the WICG (e.g. Media Capabilities, Picture-in-picture, Media Session) and maintain maintain/evolve Media Source Extensions (MSE) and Encrypted Media Extensions (EME). Improve support for Media Timed Events: data cues incubation. Enhance color support (HDR, wide gamut), in scope of the CSS WG and in the Color on the Web CG. Reduce fragmentation: Continue annual releases of a common and testable baseline media devices, in scope of the Web Media APIs CG and in collaboration with the CTA WAVE Project. Maintain the Road-map of Media Technologies for the Web which highlights Web technologies that can be used to build media applications and services, as well as known gaps to enable additional use cases. Create the future: Discuss perspectives for Media and Entertainment for the Web. Bring the power of GPUs to the Web (graphics, machine learning, heavy processing), under incubation in the GPU for the Web CG. Transition to a Working Group is under discussion. Determine next steps after the successful W3C Workshop on Web Games of June 2019. View the report. Timed Text The Timed Text Working Group develops and maintains formats used for the representation of text synchronized with other timed media, like audio and video, and notably works on TTML, profiles of TTML, and WebVTT. Recent progress includes: A robust WebVTT implementation report poises the specification for publication as a proposed recommendation. Discussions around re-chartering, notably to add a TTML Profile for Audio Description deliverable to the scope of the group, and clarify that rendering of captions within XR content is also in scope. Immersive Web Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications are now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges. 
The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment. The Immersive Web Working Group has been stabilizing the WebXR Device API while the companion Immersive Web Community Group incubates the next series of features identified as key for the future of the Immersive Web. W3C plans a workshop focused on the needs and benefits at the intersection of VR & Accessibility (Inclusive XR), on 5-6 November 2019 in Seattle, WA, USA, to explore existing and future approaches on making Virtual and Augmented Reality experiences more inclusive. Web & Telecommunications The Web is the Open Platform for Mobile. Telecommunication service providers and network equipment providers have long been critical actors in the deployment of Web technologies. As the Web platform matures, it brings richer and richer capabilities to extend existing services to new users and devices, and propose new and innovative services. Real-Time Communications (WebRTC) All Real-Time Communications specifications WebRTC has reshaped the whole communication landscape by making any connected device a potential communication end-point, bringing audio and video communications anywhere, on any network, vastly expanding the ability of operators to reach their customers. WebRTC serves as the corner-stone of many online communication and collaboration services. The WebRTC Working Group aims to bringing WebRTC 1.0 (and companion specification Media Capture and Streams) to Recommendation by the end of 2019. Intense efforts are focused on testing (supported by a dedicated hackathon at IETF 104) and interoperability. The group is considering pushing features that have not gotten enough traction to separate modules or to a later minor revision of the spec. Beyond WebRTC 1.0, the WebRTC Working Group will focus its efforts on WebRTC NV which the group has started documenting by identifying use cases. Web & Networks Recently launched, in the wake of the May 2018 Web5G workshop, the Web & Networks Interest Group is chaired by representatives from AT&T, China Mobile and Intel, with a goal to explore solutions for web applications to achieve better performance and resource allocation, both on the device and network. The group's first efforts are around use cases, privacy & security requirements and liaisons. Automotive All Automotive specifications To create a rich application ecosystem for vehicles and other devices allowed to connect to the vehicle, the W3C Automotive Working Group is delivering a service specification to expose all common vehicle signals (engine temperature, fuel/charge level, range, tire pressure, speed, etc.) The Vehicle Information Service Specification (VISS), which is a Candidate Recommendation, is seeing more implementations across the industry. It provides the access method to a common data model for all the vehicle signals –presently encapsulating a thousand or so different data elements– and will be growing to accommodate the advances in automotive such as autonomous and driver assist technologies and electrification. The group is already working on a successor to VISS, leveraging the underlying data model and the VIWI submission from Volkswagen, for a more robust means of accessing vehicle signals information and the same paradigm for other automotive needs including location-based services, media, notifications and caching content. 
The Automotive and Web Platform Business Group acts as an incubator for prospective standards work. One of its task forces is using W3C VISS in performing data sampling and off-boarding the information to the cloud. Access to the wealth of information that W3C's auto signals standard exposes is of interest to regulators, urban planners, insurance companies, auto manufacturers, fleet managers and owners, service providers and others. In addition to components needed for data sampling and edge computing, capturing user and owner consent, information collection methods and handling of data are in scope. The upcoming W3C Workshop on Data Models for Transportation (September 2019) is expected to focus on the need of additional ontologies around transportation space. Web of Things All Web of Things specifications W3C's Web of Things work is designed to bridge disparate technology stacks to allow devices to work together and achieve scale, thus enabling the potential of the Internet of Things by eliminating fragmentation and fostering interoperability. Thing descriptions expressed in JSON-LD cover the behavior, interaction affordances, data schema, security configuration, and protocol bindings. The Web of Things complements existing IoT ecosystems to reduce the cost and risk for suppliers and consumers of applications that create value by combining multiple devices and information services. There are many sectors that will benefit, e.g. smart homes, smart cities, smart industry, smart agriculture, smart healthcare and many more. The Web of Things Working Group is finishing the initial Web of Things standards, with support from the Web of Things Interest Group: Web of Things Architecture Thing Descriptions Strengthening the Core of the Web HTML The HTML Working Group was chartered early June to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts. A few days before, W3C and the WHATWG signed a Memorandum of Understanding outlining the agreement to collaborate on the development of a single version of the HTML and DOM specifications. Issues and proposed solutions for HTML and DOM done via the newly rechartered HTML Working Group in the WHATWG repositories The HTML Working Group is targetting November 2019 to bring HTML and DOM to Candidate Recommendations. CSS All CSS specifications CSS is a critical part of the Open Web Platform. The CSS Working Group gathers requirements from two large groups of CSS users: the publishing industry and application developers. Within W3C, those groups are exemplified by the Publishing groups and the Web Platform Working Group. The former requires things like better pagination support and advanced font handling, the latter needs intelligent (and fast!) scrolling and animations. What we know as CSS is actually a collection of almost a hundred specifications, referred to as ‘modules’. The current state of CSS is defined by a snapshot, updated once a year. The group also publishes an index defining every term defined by CSS specifications. Fonts All Fonts specifications The Web Fonts Working Group develops specifications that allow the interoperable deployment of downloadable fonts on the Web, with a focus on Progressive Font Enrichment as well as maintenance of WOFF Recommendations. 
Recent and ongoing work includes: Early API experiments by Adobe and Monotype have demonstrated the feasibility of a font enrichment API, where a server delivers a font with minimal glyph repertoire and the client can query the full repertoire and request additional subsets on-the-fly. In other experiments, the Brotli compression used in WOFF 2 was extended to support shared dictionaries and patch update. Metrics to quantify improvement are a current hot discussion topic. The group will meet at ATypi 2019 in Japan, to gather requirements from the international typography community. The group will first produce a report summarizing the strengths and weaknesses of each prototype solution by Q2 2020. SVG All SVG specifications SVG is an important and widely-used part of the Open Web Platform. The SVG Working Group focuses on aligning the SVG 2.0 specification with browser implementations, having split the specification into a currently-implemented 2.0 and a forward-looking 2.1. Current activity is on stabilization, increased integration with the Open Web Platform, and test coverage analysis. The Working Group was rechartered in March 2019. A new work item concerns native (non-Web-browser) uses of SVG as a non-interactive, vector graphics format. Audio The Web Audio Working Group was extended to finish its work on the Web Audio API, expecting to publish it as a Recommendation by year end. The specification enables synthesizing audio in the browser. Audio operations are performed with audio nodes, which are linked together to form a modular audio routing graph. Multiple sources — with different types of channel layout — are supported. This modular design provides the flexibility to create complex audio functions with dynamic effects. The first version of Web Audio API is now feature complete and is implemented in all modern browsers. Work has started on the next version, and new features are being incubated in the Audio Community Group. Performance Web Performance All Web Performance specifications There are currently 18 specifications in development in the Web Performance Working Group aiming to provide methods to observe and improve aspects of application performance of user agent features and APIs. The W3C team is looking at related work incubated in the W3C GPU for the Web (WebGPU) Community Group which is poised to transition to a W3C Working Group. A preliminary draft charter is available. WebAssembly All WebAssembly specifications WebAssembly improves Web performance and power by being a virtual machine and execution environment enabling loaded pages to run native (compiled) code. It is deployed in Firefox, Edge, Safari and Chrome. The specification will soon reach Candidate Recommendation. WebAssembly enables near-native performance, optimized load time, and perhaps most importantly, a compilation target for existing code bases. While it has a small number of native types, much of the performance increase relative to Javascript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages and the byte code is optimized for compactness and streaming (the web page starts executing while the rest of the code downloads). Network and API access all occurs through accompanying Javascript libraries -- the security model is identical to that of Javascript. 
Requirements gathering and language development occur in the Community Group while the Working Group manages test development, community review and progression of specifications on the Recommendation Track. Testing Browser testing plays a critical role in the growth of the Web by: Improving the reliability of Web technology definitions; Improving the quality of implementations of these technologies by helping vendors to detect bugs in their products; Improving the data available to Web developers on known bugs and deficiencies of Web technologies by publishing results of these tests. Browser Testing and Tools The Browser Testing and Tools Working Group is developing WebDriver version 2, having published last year the W3C Recommendation of WebDriver. WebDriver acts as a remote control interface that enables introspection and control of user agents, provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of Web, and emulates the actions of a real person using the browser. WebPlatform Tests The WebPlatform Tests project now provides a mechanism which allows to fully automate tests that previously needed to be run manually: TestDriver. TestDriver enables sending trusted key and mouse events, sending complex series of trusted pointer and key interactions for things like in-content drag-and-drop or pinch zoom, and even file upload. Since 2014 W3C began work on this coordinated open-source effort to build a cross-browser test suite for the Web Platform, which WHATWG, and all major browsers adopted. Web of Data All Data specifications There have been several great success stories around the standardization of data on the web over the past year. Verifiable Claims seems to have significant uptake. It is also significant that the Distributed Identifier WG charter has received numerous favorable reviews, and was just recently launched. JSON-LD has been a major success with the large deployment on Web sites via schema.org. JSON-LD 1.1 completed technical work, about to transition to CR More than 25% of websites today include schema.org data in JSON-LD The Web of Things description is in CR since May, making use of JSON-LD Verifiable Credentials data model is in CR since July, also making use of JSON-LD Continued strong interest in decentralized identifiers Engagement from the TAG with reframing core documents, such as Ethical Web Principles, to include data on the web within their scope Data is increasingly important for all organizations, especially with the rise of IoT and Big Data. W3C has a mature and extensive suite of standards relating to data that were developed over two decades of experience, with plans for further work on making it easier for developers to work with graph data and knowledge graphs. Linked Data is about the use of URIs as names for things, the ability to dereference these URIs to get further information and to include links to other data. There are ever-increasing sources of open Linked Data on the Web, as well as data services that are restricted to the suppliers and consumers of those services. The digital transformation of industry is seeking to exploit advanced digital technologies. This will facilitate businesses to integrate horizontally along the supply and value chains, and vertically from the factory floor to the office floor. W3C is seeking to make it easier to support enterprise-wide data management and governance, reflecting the strategic importance of data to modern businesses. 
Traditional approaches to data have focused on tabular databases (SQL/RDBMS), Comma Separated Value (CSV) files, and data embedded in PDF documents and spreadsheets. We're now in midst of a major shift to graph data with nodes and labeled directed links between them. Graph data is: Faster than using SQL and associated JOIN operations More favorable to integrating data from heterogeneous sources Better suited to situations where the data model is evolving In the wake of the recent W3C Workshop on Graph Data we are in the process of launching a Graph Standardization Business Group to provide a business perspective with use cases and requirements, to coordinate technical standards work and liaisons with external organizations. Web for All Security, Privacy, Identity All Security specifications, all Privacy specifications Authentication on the Web As the WebAuthn Level 1 W3C Recommendation published last March is seeing wide implementation and adoption of strong cryptographic authentication, work is proceeding on Level 2. The open standard Web API gives native authentication technology built into native platforms, browsers, operating systems (including mobile) and hardware, offering protection against hacking, credential theft, phishing attacks, thus aiming to end the era of passwords as a security construct. You may read more in our March press release. Privacy An increasing number of W3C specifications are benefitting from Privacy and Security review; there are security and privacy aspects to every specification. Early review is essential. Working with the TAG, the Privacy Interest Group has updated the Self-Review Questionnaire: Security and Privacy. Other recent work of the group includes public blogging further to the exploration of anti-patterns in standards and permission prompts. Security The Web Application Security Working Group adopted Feature Policy, aiming to allow developers to selectively enable, disable, or modify the behavior of some of these browser features and APIs within their application; and Fetch Metadata, aiming to provide servers with enough information to make a priori decisions about whether or not to service a request based on the way it was made, and the context in which it will be used. The Web Payment Security Interest Group, launched last April, convenes members from W3C, EMVCo, and the FIDO Alliance to discuss cooperative work to enhance the security and interoperability of Web payments (read more about payments). Internationalization (i18n) All Internationalization specifications, educational articles related to Internationalization, spec developers checklist Only a quarter or so current Web users use English online and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the "World Wide" portion of its name, and for the Web to truly work for stakeholders all around the world engaging with content in various languages, it must support the needs of worldwide users as they engage with content in the various languages. The growth of epublishing also brings requirements for new features and improved typography on the Web. It is important to ensure the needs of local communities are captured. The W3C Internationalization Initiative was set up to increase in-house resources dedicated to accelerating progress in making the World Wide Web "worldwide" by gathering user requirements, supporting developers, and education & outreach. 
For an overview of current projects see the i18n radar. W3C's Internationalization efforts progressed on a number of fronts recently: Requirements: New African and European language groups will work on the gap analysis, errata and layout requirements. Gap analysis: Japanese, Devanagari, Bengali, Tamil, Lao, Khmer, Javanese, and Ethiopic updated in the gap-analysis documents. Layout requirements document: notable progress tracked in the Southeast Asian Task Force while work continues on Chinese layout requirements. Developer support: Spec reviews: the i18n WG continues active review of specifications of the WHATWG and other W3C Working Groups. Short review checklist: easy way to begin a self-review to help spec developers understand what aspects of their spec are likely to need attention for internationalization, and points them to more detailed checklists for the relevant topics. It also helps those reviewing specs for i18n issues. Strings on the Web: Language and Direction Metadata lays out issues and discusses potential solutions for passing information about language and direction with strings in JSON or other data formats. The document was rewritten for clarity, and expanded. The group is collaborating with the JSON-LD and Web Publishing groups to develop a plan for updating RDF, JSON-LD and related specifications to handle metadata for base direction of text (bidi). User-friendly test format: a new format was developed for Internationalization Test Suite tests, which displays helpful information about how the test works. This particularly useful because those tests are pointed to by educational materials and gap-analysis documents. Web Platform Tests: a large number of tests in the i18n test suite have been ported to the WPT repository, including: css-counter-styles, css-ruby, css-syntax, css-test, css-text-decor, css-writing-modes, and css-pseudo. Education & outreach: (for all educational materials, see the HTML & CSS Authoring Techniques) Web Accessibility All Accessibility specifications, WAI resources The Web Accessibility Initiative supports W3C's Web for All mission. Recent achievements include: Education and training: Inaccessibility of CAPTCHA updated to bring our analysis and recommendations up to date with CAPTCHA practice today, concluding two years of extensive work and invaluable input from the public (read more on the W3C Blog Learn why your web content and applications should be accessible. The Education and Outreach Working Group has completed revision and updating of the Business Case for Digital Accessibility. Accessibility guidelines: The Accessibility Guidelines Working Group has continued to update WCAG Techniques and Understanding WCAG 2.1; and published a Candidate Recommendation of Accessibility Conformance Testing Rules Format 1.0 to improve inter-rater reliability when evaluating conformance of web content to WCAG An updated charter is being developed to host work on "Silver", the next generation accessibility guidelines (WCAG 2.2) There are accessibility aspects to most specifications. Check your work with the FAST checklist. 
Outreach to the world W3C Developer Relations To foster the excellent feedback loop between Web Standards development and Web developers, and to grow participation from that diverse community, recent W3C Developer Relations activities include: @w3cdevs tracks the enormous amount of work happening across W3C W3C Track during the Web Conference 2019 in San Francisco Tech videos: W3C published the 2019 Web Games Workshop videos The 16 September 2019 Developer Meetup in Fukuoka, Japan, is open to all and will combine a set of technical demos prepared by W3C groups, and a series of talks on a selected set of W3C technologies and projects W3C is involved with Mozilla, Google, Samsung, Microsoft and Bocoup in the organization of ViewSource 2019 in Amsterdam (read more on the W3C Blog) W3C Training In partnership with EdX, W3C's MOOC training program, W3Cx offers a complete "Front-End Web Developer" (FEWD) professional certificate program that consists of a suite of five courses on the foundational languages that power the Web: HTML5, CSS and JavaScript. We count nearly 900K students from all over the world. Translations Many Web users rely on translations of documents developed at W3C whose official language is English. W3C is extremely grateful to the continuous efforts of its community in ensuring our various deliverables in general, and in our specifications in particular, are made available in other languages, for free, ensuring their exposure to a much more diverse set of readers. Last Spring we developed a more robust system, a new listing of translations of W3C specifications and updated the instructions on how to contribute to our translation efforts. W3C Liaisons Liaisons and coordination with numerous organizations and Standards Development Organizations (SDOs) is crucial for W3C to: make sure standards are interoperable coordinate our respective agenda in Internet governance: W3C participates in ICANN, GIPO, IGF, the I* organizations (ICANN, IETF, ISOC, IAB). ensure at the government liaison level that our standards work is officially recognized when important to our membership so that products based on them (often done by our members) are part of procurement orders. W3C has ARO/PAS status with ISO. W3C participates in the EU MSP and Rolling Plan on Standardization ensure the global set of Web and Internet standards form a compatible stack of technologies, at the technical and policy level (patent regime, fragmentation, use in policy making) promote Standards adoption equally by the industry, the public sector, and the public at large Coralie Mercier, Editor, W3C Marketing & Communications $Id: Overview.html,v 1.60 2019/10/15 12:05:52 coralie Exp $ Copyright © 2019 W3C ® (MIT, ERCIM, Keio, Beihang) Usage policies apply.
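The WebDriver protocol described in the Testing section of this entry is usually driven from test code through a client library. As a hedged sketch, here is how a Java program might remotely control a browser using Selenium, one widely used client of the W3C WebDriver protocol (the target URL is a placeholder, and a matching browser driver such as ChromeDriver must be available on the machine):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// WebDriver in practice: start a browser session, load a page, inspect it,
// and shut the session down. Selenium speaks the W3C WebDriver wire protocol.
public class WebDriverDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // requires a matching ChromeDriver
        try {
            driver.get("https://example.org/");  // placeholder URL
            String heading = driver.findElement(By.tagName("h1")).getText();
            System.out.println("Title: " + driver.getTitle());
            System.out.println("Heading: " + heading);
        } finally {
            driver.quit();                        // always end the session
        }
    }
}
```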
rick2785 / JavaCodeI specifically cover the following topics: Java primitive data types, declaration statements, expression statements, importing class libraries, accepting user input, checking for valid input, catching errors in input, math functions, if statement, relational operators, logical operators, ternary operator, switch statement, and looping. How class variables differ from local variables, Java Exception handling, the difference between run time and checked exceptions, Arrays, and UML Diagrams. Monsters gameboard, Java collection classes, Java ArrayLists, Linked Lists, manipulating Strings and StringBuilders, Polymorphism, Inheritance, Protected, Final, Instanceof, interfaces, abstract classes, abstract methods. You need interfaces and abstract classes because Java doesn't allow you to inherit from more than one other class. Java threads, Regular Expressions, Graphical User Interfaces (GUI) using Java Swing and its components, GUI Event Handling, ChangeListener, JOptionPane, combo boxes, list boxes, JLists, DefaultListModel, using JScrollPane with JList, JSpinner, JTree, Flow, Border, and Box Layout Managers. Created a calculator layout with Java Swing's GridLayout, GridBagLayout, GridBagConstraints, Font, and Insets. JLabel, JTextField, JComboBox, JSpinner, JSlider, JRadioButton, ButtonGroup, JCheckBox, JTextArea, JScrollPane, ChangeListener, pack, create and delete files and directories. How to pull lists of files from directories and manipulate them, write to and read character streams from files. PrintWriter, BufferedWriter, FileWriter, BufferedReader, FileReader, common file exceptions. Binary Streams - DataOutputStream, FileOutputStream, BufferedOutputStream, all of the reading and writing primitive type methods, set up Java JDBC in Eclipse, connect to a MySQL database, query it and get the results of a query. JTables, the JEditorPane Swing component. HyperlinkEvent and HyperlinkListener. Java JApplet, Java Servlets with Tomcat, GET and POST methods, Java Server Pages, parsing XML with Java, Java XPath, the JDOM2 library, and 2D graphics. *Created a Java Paint Application using Swing, events, mouse events, Graphics2D, ArrayList *Designed a Java Video Game like Asteroids with collision detection and shooting torpedoes which also played sound in a JFrame, and removed items from the screen when they were destroyed. Rotating polygons, and Making Java Executable. Model View Controller (MVC): The Model is the class that contains the data and the methods needed to use the data. The View is the interface. The Controller coordinates interactions between the Model and View. DESIGN PATTERNS: The Strategy design pattern is used if you need to dynamically change an algorithm used by an object at run time. The pattern also allows you to eliminate code duplication. It separates behavior from super and subclasses. The Observer pattern is a software design pattern in which an object, called the subject (Publisher), maintains a list of its dependents, called observers (Subscribers), and notifies them automatically of any state changes, usually by calling one of their methods. The Factory design pattern is used when you want to define the class of an object at runtime. It also allows you to encapsulate object creation so that you can keep all object creation code in one place. The Abstract Factory Design Pattern is like a factory, but everything is encapsulated. The Singleton pattern is used when you want to eliminate the option of instantiating more than one object.
(Scrabble letters app) The Builder Design Pattern is used when you want to have many classes help in the creation of an object. By having different classes build the object you can then easily create many different types of objects without being forced to rewrite code. The Builder pattern provides a different way to make complex objects like you'd make using the Abstract Factory design pattern. The Prototype design pattern is used for creating new objects (instances) by cloning (copying) other objects. It allows for the adding of any subclass instance of a known super class at run time. It is used when there are numerous potential classes that you want to only use if needed at runtime. The major benefit of using the Prototype pattern is that it reduces the need for creating potentially unneeded subclasses. Java Reflection is an API and it's used to manipulate classes and everything in a class including fields, methods, constructors, private data, etc. (TestingReflection.java) The Decorator allows you to modify an object dynamically. You would use it when you want the capabilities of inheritance with subclasses, but you need to add functionality at run time. It is more flexible than inheritance. The Decorator Design Pattern simplifies code because you add functionality using many simple classes. Also, rather than rewrite old code you can extend it with new code and that is always good. (Pizza app) The Command design pattern allows you to store a list of commands for later use. With it you can store multiple commands in a class to use over and over. (ElectronicDevice app) The Adapter pattern is used when you want to translate one interface of a class into another interface. Allows 2 incompatible interfaces to work together. It allows the use of the available interface and the target interface. Any class can work together as long as the Adapter solves the issue that all classes must implement every method defined by the shared interface. (EnemyAttacker app) The Facade pattern basically says that you should simplify your methods so that much of what is done is in the background. In technical terms you should decouple the client from the sub components needed to perform an operation. (Bank app) The Bridge Pattern is used to decouple an abstraction from its implementation so that the two can vary independently. Progressively adding functionality while separating out major differences using abstract classes. (EntertainmentDevice app) In a Template Method pattern, you define a method (algorithm) in an abstract class. It contains both abstract methods and non-abstract methods. The subclasses that extend this abstract class then override those methods that don't make sense for them to use in the default way. (Sandwich app) The Iterator pattern provides you with a uniform way to access different collections of Objects. You can also write polymorphic code because you can refer to each collection of objects because they'll implement the same interface. (SongIterator app) The Composite design pattern is used to structure data into its individual parts as well as represent the inner workings of every part of a larger object. The composite pattern also allows you to treat both groups of parts in the same way as you treat the parts polymorphically. You can structure data, or represent the inner working of every part of a whole object individually. (SongComponent app) The flyweight design pattern is used to dramatically increase the speed of your code when you are using many similar objects. 
To reduce memory usage the flyweight design pattern shares Objects that are the same rather than creating new ones. (FlyWeightTest app) The State pattern allows an object to alter its behavior when its internal state changes. The object will appear to change its class. (ATMState) The Proxy design pattern limits access to just the methods you want made accessible in another class. It can be used for security reasons, because an Object is expensive to create, or because it is accessed from a remote location. You can think of it as a gatekeeper that blocks access to another Object. (TestATMMachine) The Chain of Responsibility pattern has a group of objects that are expected, between them, to be able to solve a problem. If the first Object can't solve it, it passes the data to the next Object in the chain. (TestCalcChain) The Interpreter pattern is used to convert one representation of data into another. The context contains the information that will be interpreted. The expression is an abstract class that defines all the methods needed to perform the different conversions. The terminal or concrete expressions provide specific conversions on different types of data. (MeasurementConversion) The Mediator design pattern is used to handle communication between related objects (Colleagues). All communication is handled by a Mediator Object and the Colleagues don't need to know anything about each other to work together. (TestStockMediator) The Memento design pattern provides a way to store previous states of an Object easily. It has 3 main classes: 1) Memento: The basic object that is stored in different states. 2) Originator: Sets and gets values from the currently targeted Memento. Creates new Mementos and assigns current values to them. 3) Caretaker: Holds an ArrayList that contains all previous versions of the Memento. It can store and retrieve stored Mementos. (TestMemento) The Visitor design pattern allows you to add methods to classes of different types without much alteration to those classes. You can make completely different methods depending on the class used with this pattern. (VisitorTest)
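To make one of the patterns described above concrete, here is a minimal, language-agnostic sketch of the Observer pattern written in Python for brevity (the repository's own examples are in Java, and the class names below are hypothetical, not taken from the repo):

```python
class Subject:
    """The Publisher: keeps a list of observers and notifies them of state changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def detach(self, observer):
        self._observers.remove(observer)

    def set_state(self, state):
        self._state = state
        self._notify()                      # every state change pushes an update

    def _notify(self):
        for observer in self._observers:
            observer.update(self._state)


class PriceDisplay:
    """A Subscriber: reacts whenever the subject's state changes."""
    def __init__(self, name):
        self.name = name

    def update(self, state):
        print(f"{self.name} received new state: {state}")


if __name__ == "__main__":
    stock = Subject()
    stock.attach(PriceDisplay("display-1"))
    stock.attach(PriceDisplay("display-2"))
    stock.set_state(42.5)                   # both displays are notified automatically
```

The same structure carries over directly to Java: the subject holds a list of a listener interface type and calls a method such as update() on each registered listener when its state changes.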
nyaundid / EC2 AWS AND SHELLSEIS 665 Assignment 2: Linux & Git Overview This week we will focus on becoming familiar with launching a Linux server and working with some basic Linux and Git commands. We will use AWS to launch and host the Linux server. AWS might seem a little confusing at this point. Don’t worry, we will gain much more hands-on experience with AWS throughout the course. The goal is to get you comfortable working with the technology and not overwhelm you with all the details. Requirements You need to have a personal AWS account and GitHub account for this assignment. You should also read the Git Hands-on Guide and Linux Hands-on Guide before beginning this exercise. A word about grading One of the key DevOps practices we learn about in this class is the use of automation to increase the speed and repeatability of processes. Automation is utilized during the assignment grading process to review and assess your work. It’s important that you follow the instructions in each assignment and type in required files and resources with the proper names. All names are case sensitive, so a name like "Web1" is not the same as "web1". If you misspell a name, use the wrong case, or put a file in the wrong directory location you will lose points on your assignment. This is the easiest way to lose points, and also the most preventable. You should always double-check your work to make sure it accurately reflects the requirements specified in the assignment. You should always carefully review the content of your files before submitting your assignment. The assignment Let’s get started! Create GitHub repository The first step in the assignment is to set up a Git repository on GitHub. We will use a special solution called GitHub Classroom for this course which automates the process of setting up student assignment repositories. Here are the basic steps: Click on the following link to open Assignment 2 on the GitHub Classroom site: https://classroom.github.com/a/K4zcVmX- Click on the Accept this assignment button. GitHub Classroom will provide you with a URL (https) to access the assignment repository. Either copy this address to your clipboard or write it down somewhere. You will need to use this address to set up the repository on a Linux server. Example: https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git At this point your new repository is ready to use. The repository is currently empty. We will put some content in there soon! Launch Linux server The second step in the assignment is to launch a Linux server using AWS EC2. The server should have the following characteristics: Amazon Linux 2 AMI 64-bit (usually the first option listed) Located in a U.S. region (us-east-1) t2.micro instance type All default instance settings (storage, VPC, security group, etc.) I’ve shown you how to launch EC2 instances in class. You can review it on Canvas. Once you launch the new server, it may take a few minutes to provision. Log into server The next step is to log into the Linux server using a terminal program with secure shell (SSH) support. You can use iTerm2 on a Mac and GitBash/PuTTY on a PC. You will need to have the private server key and the public IP address before attempting to log into the server. The server key is basically your password.
If you lose it, you will need to terminate the existing instance and launch a new server. I recommend reusing the same key when launching new servers throughout the class. Note, I make this recommendation to make the learning process easier and not because it is a common security practice. I’ve shown you how to use a terminal application to log into the instance using a Windows desktop. Your personal computer or lab computer may be running a different OS version, but the process is still very similar. You can review the videos on Canvas. Working with Linux If you’ve made it this far, congratulations! You’ve made it over the toughest hurdle. By the end of this course, I promise you will be able to launch and log into servers in your sleep. You should be looking at a login screen that looks something like this: Last login: Mon Mar 21 21:17:54 2016 from 174-20-199-194.mpls.qwest.net __| __|_ ) _| ( / Amazon Linux AMI ___|\___|___| https://aws.amazon.com/amazon-linux-ami/2015.09-release-notes/ 8 package(s) needed for security, out of 17 available Run "sudo yum update" to apply all updates. ec2-user@ip-172-31-15-26 ~]$ Your terminal cursor is sitting at the shell prompt, waiting for you to type in your first command. Remember the shell? It is a really cool program that lets you start other programs and manage services on the Linux system. The rest of this assignment will be spent working with the shell. Note, when you are asked to type in a command in the steps below, don’t type in the dollar-sign ($) character. This is just meant to represent the command prompt. The actual commands are represented by the characters to the right of the command prompt. Let’s start by asking the shell for some help. Type in: $ help The shell provides you with a list of commands you can run along with possible command options. Next, check out one of the pages in the built-in manual: $ man ls A man page will appear with information on how to use the ls command. This command is used to list the contents of file directories. Either space through the contents of the man page or hit q to exit. Most of the core Linux commands have man pages available. But honestly, some of these man pages are a bit hard to understand. Sometimes your best bet is to search on Google if you are trying to figure out how to use a specific command. When you initially log into Linux, the system places you in your home directory. Each user on the system has a separate home directory. Let’s see where your home directory is located: $ pwd The response should be /home/ec2-user. The pwd command is handy to remember if you ever forget what file directory you are currently located in. If you recall from the Linux Hands-on Guide, this directory is also your current working directory. Type in: $ cd / The cd command lets you change to a new working directory on the server. In this case, we changed to the root (/) directory. This is the parent of all the other directories on the file system. Type in: $ ls The ls command lists the contents of the current directory. As you can see, the root directory contains many other directories. You will become familiar with these directories over time. The ls command provides a very basic directory listing. You need to supply the command with some options if you want to see more detailed information. Type in: $ ls -la See how this command provides you with much more detailed information about the files and directories?
You can use this detailed listing to see the owner, group, and access control list settings for each file or directory. Do you see any files listed? Remember, the first character in the access control list column denotes whether a listed item is a file or a directory. You probably see a couple files with names like .autofsck. How come you didn’t see this file when you typed in the ls command without any options? (Try to run this command again to convince yourself.) File names that start with a period are called hidden files. These files won’t appear on normal directory listings. Type in: $ cd /var Then, type in: $ ls You will see a directory listing for the /var directory. Next, type in: $ ls .. Huh. This directory listing looks the same as the earlier root directory listing. When you use two periods (..) in a directory path that means you are referring to the parent directory of the current directory. Just think of the two dots as meaning the directory above the current directory. Now, type in: $ cd ~ $ pwd Whoa. We’re back at our home directory again. The tilde character (~) is another one of those handy little directory path shortcuts. It always refers to our personal home directory. Keep in mind that since every user has their own home directory, the tilde shortcut will refer to a unique directory for each logged-in user. Most students are used to navigating a file system by clicking a mouse in nested graphical folders. When they start using a command-line to navigate a file system, they sometimes get confused and lose track of their current position in the file system. Remember, you can always use the pwd command to quickly figure out what directory you are currently working in. Let’s make some changes to the file system. We can easily make our own directories on the file system. Type: mkdir test Now type: ls Cool, there’s our new test directory. Let’s pretend we don’t like that directory name and delete it. Type: rmdir test Now it’s gone. How can you be sure? You should know how to check to see if the directory still exists at this point. Go ahead and check. Let’s create another directory. Type in: $ mkdir documents Next, change to the new directory: $ cd documents Did you notice that your command prompt displays the name of the current directory? Something like: [ec2-user@ip-172-31-15-26 documents]$. Pretty handy, huh? Okay, let’s create our first file in the documents directory. This is just an empty file for training purposes. Type in: $ touch paper.txt Check to see that the new file is in the directory. Now, go back to the previous directory. Remember the double dot shortcut? $ cd .. Okay, we don’t like our documents directory any more. Let’s blow it away. Type in: $ rmdir documents Uh oh. The shell didn’t like that command because the directory isn’t empty. Let’s change back into the documents directory. But this time don’t type in the full name of the directory. You can let shell auto-completion do the typing for you. Type in the first couple characters of the directory name and then hit the tab key: $ cd doc<tab> You should use the tab auto-completion feature often. It saves typing and makes working with the Linux file system much, much easier. Tab is your friend. Now, remove the file by typing: $ rm paper.txt Did you try to use the tab key instead of typing in the whole file name? Check to make sure the file was deleted from the directory. Next, create a new file: $ touch file1 We like file1 so much that we want to make a backup copy.
Type: $ cp file1 file1-backup Check to make sure the new backup copy was created. We don’t really like the name of that new file, so let’s rename it. Type: $ mv file1-backup backup Moving a file to the same directory and giving it a new name is basically the same thing as renaming it. We could have moved it to a different directory if we wanted. Let’s list all of the files in the current directory that start with the letter f: $ ls f* Using wildcard pattern matching in file commands is really useful if you want the command to impact or filter a group of files. Now, go up one directory to the parent directory (remember the double dot shortcut?) We tried to remove the documents directory earlier when it had files in it. Obviously that won’t work again. However, we can use a more powerful command to destroy the directory and vanquish its contents. Behold, the all powerful remove command: $ rm -fr documents Did you remember to use auto-completion when typing in documents? This command and set of options forcibly removes the directory and its contents. It’s a dangerous command wielded by the mightiest Linux wizards. Okay, maybe that’s a bit of an exaggeration. Just be careful with it. Check to make sure the documents directory is gone before proceeding. Let’s continue. Change to the directory /var and make a directory called test. Ugh. Permission denied. We created this darn Linux server and we paid for it. Shouldn’t we be able to do anything we want on it? You logged into the system as a user called ec2-user. While this user can create and manage files in its home directory, it cannot change files all across the system. At least it can’t as a normal user. The ec2-user is a member of the root group, so it can escalate its privileges to super-user status when necessary. Let’s try it: $ sudo mkdir test Check to make sure the directory exists now. Using sudo we can execute commands as a super-user. We can do anything we want now that we know this powerful new command. Go ahead and delete the test directory. Did you remember to use sudo before the rmdir command? Check to make sure the directory is gone. You might be asking yourself the question: why can we list the contents of the /var directory but not make changes? That’s because all users have read access to the /var directory and the ls command is a read function. Only the root users or those acting as a super-user can write changes to the directory. Let’s go back to our home directory: $ cd ~ Editing text files is a really common task on Linux systems because many of the application configuration files are text files. We can create a text file by using a text editor. Type in: $ nano myfile.conf The shell starts up the nano text editor and places your terminal cursor in the editing screen. Nano is a simple text-based word processor. Type in a few lines of text. When you’re done writing your novel, hit ctrl-x and answer y to the prompt to save your work. Finally, hit enter to save the text to the filename you specified. Check to see that your file was saved in the directory. You can take a look at the contents of your file by typing: $ cat myfile.conf The cat command displays your text file content on the terminal screen. This command works fine for displaying small text files. But if your file is hundreds of lines long, the content will scroll down your terminal screen so fast that you won’t be able to easily read it. There’s a better way to view larger text files. 
Type in: $ less myfile.conf The less command will page the display of a text file, allowing you to page through the contents of the file using the space bar. Your text file is probably too short to see the paging in action though. Hit q to quit out of the less text viewer. Hit the up-arrow key on your keyboard a few times until the command nano myfile.conf appears next to your command prompt. Cool, huh? The up-arrow key allows you to replay a previously run command. Linux maintains a list of all the commands you have run since you logged into the server. This is called the command history. It’s a really useful feature if you have to re-run a complex command again. Now, hit ctrl-c. This cancels whatever command is displayed on the command line. Type in the following command to create a couple empty files in the directory: $ touch file1 file2 file3 Confirm that the files were created. Some commands, like touch, allow you to specify multiple files as arguments. You will find that Linux commands have all kinds of ways to make tasks more efficient like this. Throughout this assignment, we have been running commands and viewing results on the terminal screen. The screen is the standard place for commands to output results. It’s known as standard output (stdout). However, it’s really useful to output results to the file system sometimes. Type in: $ ls > listing.txt Take a look at the directory listing now. You just created a new file. View the contents of the listing.txt file. What do you see? Instead of sending the output from the ls command to the screen we sent it to a text file. Let’s try another one. Type: $ cat myfile.conf > listing.txt Take a look at the contents of the listing.txt file again. It looks like your myfile.conf file now. It’s like you made a copy of it. But what happened to the previous content in the listing.txt file? When you redirect the output of a command using the right angle-bracket character (>), the output overwrites the existing file. Type this command in: $ cat myfile.conf >> listing.txt Now look at the contents of the listing.txt file. You should see your original content displayed twice. When you use two angle-bracket characters in the command the output appends to (or adds to) the file instead of overwriting it. We redirected the output from a command to a text file. It’s also possible to redirect the input to a command. Typically we use a keyboard to provide input, but sometimes it makes more sense to input a file to a command. For example, how many words are in your new listing.txt file? Let’s find out. Type in: $ wc -w < listing.txt Did you get a number? This command inputs the listing.txt file into a word count program called wc. Type in the command: $ ls /usr/bin The terminal screen probably scrolled quickly as filenames flashed by. The /usr/bin directory holds quite a few files. It would be nice if we could page through the contents of this directory. Well, we can. We can use a special shell feature called pipes. In previous steps, we redirected I/O using the file system. Pipes allow us to redirect I/O between programs. We can redirect the output from one program into another. Type in: $ ls /usr/bin | less Now the directory listing is paged. Hit the spacebar to page through the listing. The pipe, represented by a vertical bar character (|), takes the output from the ls command and redirects it to the less command where the resulting output is paged. Pipes are super powerful and used all the time by savvy Linux operators.
Hit the q key to quit the paginated directory listing command. Working with shell scripts Now things are going to get interesting. We’ve been manually typing in commands throughout this exercise. If we were running a set of repetitive tasks, we would want to automate the process as much as possible. The shell makes it really easy to automate tasks using shell scripts. The shell provides many of the same features as a basic procedural programming language. Let’s write some code. Type in this command: $ j=123 $ echo $j We just created a variable named j referencing the string 123. The echo command printed out the value of the variable. We had to use a dollar sign ($) when referencing the variable in another command. Next, type in: $ j=1+1 $ echo $j Is that what you expected? The shell just interprets the variable value as a string. It’s not going to do any sort of computation. Typing in shell script commands on the command line is sort of pointless. We want to be able to create scripts that we can run over-and-over. Let’s create our first shell script. Use the nano editor to create a file named myscript. When the file is open in the editor, type in the following lines of code: #!/bin/bash echo Hello $1 Now quit the editor and save your file. We can run our script by typing: $ ./myscript World Er, what happened? Permission denied. Didn’t we create this file? Why can’t we run it? We can’t run the script file because we haven’t set the execute permission on the file. Type in: $ chmod u+x myscript This modifies the file access control list to allow the owner of the file to execute it. Let’s try to run the command again. Hit the up-arrow key a couple times until the ./myscript World command is displayed and hit enter. Hooray! Our first shell script. It’s probably a bit underwhelming. No problem, we’ll make it a little more complex. The script took a single argument called World. Any arguments provided to a shell script are represented as consecutively numbered variables inside the script ($1, $2, etc). Pretty simple. You might be wondering why we had to type the ./ characters before the name of our script file. Try to type in the command without them: $ myscript World Command not found. That seems a little weird. Aren’t we currently in the directory where the shell script is located? Well, that’s just not how the shell works. When you enter a command into the shell, it looks for the command in a predefined set of directories on the server called your PATH. Since your script file isn’t in your special path, the shell reports it as not found. By typing in the ./ characters before the command name you are basically forcing the shell to look for your script in the current directory instead of the default path. Create another file called cleanup using nano. In the file editor window type: #!/bin/bash # My cleanup script mkdir archive mv file* archive Exit the editor window and save the file. Change the permissions on the script file so that you can execute it. Now run the command: $ ./cleanup Take a look at the file directory listing. Notice the archive directory? List the contents of that directory. The script automatically created a new directory and moved three files into it. Anything you can do manually at a command prompt can be automated using a shell script. Let’s create one more shell script. Use nano to create a script called namelist. 
Here is the content of the script: #!/bin/bash # for-loop test script names='Jason John Jane' for i in $names do echo Hello $i done Change the permissions on the script file so that you can execute it. Run the command: $ ./namelist The script will loop through a set of names stored in a variable displaying each one. Scripts support several programming constructs like for-loops, do-while loops, and if-then-else. These building blocks allow you to create fairly complex scripts for automating tasks. Installing packages and services We’re nearing the end of this assignment. But before we finish, let’s install some new software packages on our server. The first thing we should do is make sure all the current packages installed on our Linux server are up-to-date. Type in: $ sudo yum update -y This is one of those really powerful commands that requires sudo access. The system will review the currently installed packages and go out to the Internet and download appropriate updates. Next, let’s install an Apache web server on our system. Type in: $ sudo yum install httpd -y Bam! You probably never knew that installing a web server was so easy. We’re not going to actually use the web server in this exercise, but we will in future assignments. We installed the web server, but is it actually running? Let’s check. Type in: $ sudo service httpd status Nope. Let’s start it. Type: $ sudo service httpd start We can use the service command to control the services running on the system. Let’s set up the service so that it automatically starts when the system boots up. Type in: $ sudo chkconfig httpd on Cool. We installed the Apache web server on our system, but what other programs are currently running? We can use the ps command to find out. Type in: $ ps -ax Lots of processes are running on our system. We can even look at the overall performance of our system using the top command. Let’s try that now. Type in: $ top The display might seem a little overwhelming at first. You should see lots of performance information displayed including the CPU usage, free memory, and a list of running tasks. We’re almost across the finish line. Let’s make sure all of our valuable work is stored in a git repository. First, we need to install git. Type in the command: $ sudo yum install git -y Check your work It’s very important to check your work before submitting it for grading. A misspelled, misplaced or missing file will cost you points. This may seem harsh, but the reality is that these sorts of mistakes have consequences in the real world. For example, a server instance could fail to launch properly and impact customers because a single required file is missing. Here is what the contents of your git repository should look like before final submission: ┣ archive ┃ ┣ file1 ┃ ┣ file2 ┃ ┗ file3 ┣ namelist ┗ myfile.conf Saving our work in the git repository Next, make sure you are still in your home directory (/home/ec2-user). We will clone the git repository you created at the beginning of this exercise. You will need to modify this command by typing in the GitHub repository URL you copied earlier. $ git clone <your GitHub URL here>.git Example: git clone https://github.com/UST-SEIS665/hw2-seis665-02-spring2019-<your github id>.git The git application will ask you for your GitHub username and password. Note, if you have multi-factor authentication enabled on your GitHub account you will need to provide a personal token instead of your password. Git will clone (copy) the repository from GitHub to your Linux server.
Since the repository is empty the clone happens almost instantly. Check to make sure that a sub-directory called "hw2-seis665-02-spring2019-<username>" exists in the current directory (where <username> is your GitHub account name). Git automatically created this directory as part of the cloning process. Change to the hw2-seis665-02-spring2019-<username> directory and type: $ ls -la Notice the .git hidden directory? This is where git actually stores all of the file changes in your repository. Nothing is actually in your repository yet. Change back to the parent directory (cd ..). Next, let’s move some of our files into the repository. Type: $ mv archive hw2-seis665-02-spring2019-<username> $ mv namelist hw2-seis665-02-spring2019-<username> $ mv myfile.conf hw2-seis665-02-spring2019-<username> Hopefully, you remembered to use the auto-complete function to reduce some of that typing. Change to the hw2-seis665-02-spring2019-<username> directory and list the directory contents. Your files are in the working directory, but are not actually stored in the repository because they haven’t been committed yet. Type in: $ git status You should see a list of untracked files. Let’s tell git that we want these files tracked. Type in: $ git add * Now type in the git status command again. Notice how all the files are now being tracked and are ready to be committed. These files are in the git staging area. We’ll commit them to the repository next. Type: $ git commit -m 'assignment 2 files' Next, take a look at the commit log. Type: $ git log You should see your commit listed along with an assigned hash (long string of random-looking characters). Finally, let’s save the repository to our GitHub account. Type in: $ git push origin master The git client will ask you for your GitHub username and password before pushing the repository. Go back to the GitHub.com website and login if you have been logged out. Click on the repository link for the assignment. Do you see your files listed there? Congratulations, you completed the exercise! Terminate server The last step is to terminate your Linux instance. AWS will bill you for every hour the instance is running. The cost is nominal, but there’s no need to rack up unnecessary charges. Here are the steps to terminate your instance: Log into your AWS account and click on the EC2 dashboard. Click the Instances menu item. Select your server in the instances table. Click on the Actions drop down menu above the instances table. Select the Instance State menu option Click on the Terminate action. Your Linux instance will shutdown and disappear in a few minutes. The EC2 dashboard will continue to display the instance on your instance listing for another day or so. However, the state of the instance will be terminated. Submitting your assignment — IMPORTANT! If you haven’t already, please e-mail me your GitHub username in order to receive credit for this assignment. There is no need to email me to tell me that you have committed your work to GitHub or to ask me if your GitHub submission worked. If you can see your work in your GitHub repository, I can see your work.
qiskit-community / Povm ToolboxA toolbox for the implementation of positive operator-valued measures (POVMs).
shams-ahmed / SAStepperControlUIStepper subclass that displays the current value between the increment/decrement operators
AI4Science-WestlakeU / Beno[ICLR24] A boundary-embedded neural operator that incorporates complex boundary shape and inhomogeneous boundary values
BlockchainLabs / SpreadCoinSpreadCoin October 5, 2014 Introduction In proof-of-work cryptocurrencies new coins are generated by the network through the process of mining. One of the purposes of mining is to protect the network from double-spending attacks and history rewriting. Miners generate new blocks and check the contents of the blocks generated by other peers for conformance to the network rules. However, many miners now delegate all the checking work crucial to cryptocurrency security to pools. This means that pool operators do not have any large hashing power but have control over the generation of new blocks. This brings unnecessary centralization to an otherwise decentralized system. Controlling more than 50% of mining power allows an attacker to perform double-spending attacks with a 100% chance of success, but even with less than 50% control it is possible to perform attacks which have a chance to succeed [1]. The core idea of SpreadCoin is to prevent the creation of pools and thus make mining more decentralized and the whole system more secure. Pool Prevention In pooled mining, miners perform only the work which is necessary to fulfill the proof-of-work requirements, and pools take care of block generation and broadcasting and distribute the reward among miners according to the shares they submit. In this scheme a miner has two alternatives: 1. Solo mining. In this case the miner cannot send shares to the pool because they will not be accepted. 2. Pooled mining. The miner’s shares will be accepted by the pool, but if the miner actually generates a new block its reward will go to the pool, which will redistribute it to all miners. This allows the organization of pools because miners have no way to cheat and steal the generated money. To prevent the creation of pools we must remove this possibility, so that if a pool is created a miner can mine in the pool, submit shares as usual and get rewarded for them, but upon actually finding a block the miner can send it directly to the network instead of the pool and keep the full reward. In SpreadCoin mining is organized in such a way that the miner must know the following things: 1. The private key corresponding to the coinbase transaction. 2. The whole block, not only its header. This ensures that the miner can broadcast the mined block and spend the coins generated in that block. It may seem that it is necessary to know only the private key to spend the coinbase transaction. However, if two conflicting transactions appear on the network, the one that was broadcast first has a much higher probability of being included in a block, because each peer remembers and retransmits only the first of the conflicting transactions. If both the miner and the pool know the private key but only the pool knows the content of the block, then the pool can generate and broadcast the spending transaction earlier than the miner. If both the miner and the pool know the content of the block, then the miner will be the first one who can broadcast the block and the spending transaction. To prove knowledge of the private key and the whole block there are two new fields in the block header: MinerSignature and hashWholeBlock. MinerSignature is a digital signature of all fields of the block header except for hashWholeBlock. Changing any information in the block requires regeneration of this signature, which means that it is necessary to recalculate it during each iteration of the mining process. This implies that the miner must be able to sign any arbitrary data. [1] Double-spending. Bitcoin Wiki. https://en.bitcoin.it/wiki/Double-spending
hashWholeBlock is a SHA-256 hash of the block data arranged as follows (the original document shows a block layout diagram here: Block | Padding, repeated twice and padded to MAX_BLOCK_SIZE). Padding ensures that there is no incentive to mine empty blocks without transactions. Padding values are computed using a simple algorithm which initializes the last 32 bytes (8 uint32 values) with hashPrevBlock and then goes backward and computes the remaining uint32 values using the following recursive formula: $I_i = I_{i+3} \cdot I_{i+7}$. This algorithm ensures that there is no efficient way to compute padding values on the fly during hash computation, which otherwise could potentially give some advantage to mining empty blocks in certain computing environments. It is important that the block is hashed twice. If it were hashed only once then a pool could hash the beginning of the block and send the resulting hash state to the miners. Each miner would then modify some information at the end of the block and recalculate the hash based on the known state without actual knowledge of what is contained in the beginning of the block. Appending the block data to itself makes it necessary to know the whole block to recalculate hashWholeBlock. A pool may detect and ban cheating miners. However, many miners may still prefer to cheat, so that the pool will be completely unusable for honest miners. Miners that have a low probability of finding a block will profit more by stealing the reward for an accidentally found block even if the pool bans them thereafter. Miners that have enough mining power to find blocks consistently can still connect to a pool and submit shares for some time but steal the first found block. This way they can get both the reward for their shares and the actual mined block. Given all this it is expected that no one will create a pool. But even if someone does, it can be countered by releasing block-stealing miner software which many miners will switch to. Compact Transactions SpreadCoin as well as Bitcoin uses ECDSA signatures. Each address in Bitcoin is a hash of an ECDSA public key. To spend coins sent to an address it is necessary to provide the public key matching that hash and a signature. This results in 139 or 107 bytes for each transaction input script (scriptSig) depending on whether a compact public key is used. However, it is possible to recover the public key from the signature [2], which means that it is not necessary to provide it in the transaction input. Together with a compact representation of the signature [3], this allows the size of the transaction input script to be reduced from 139 or 107 bytes in Bitcoin to 67 bytes in SpreadCoin. Recovering the public key has almost no extra CPU cost compared to the usual signature verification process used in Bitcoin. This is important because the CPU cost of ECDSA signature verification is a bottleneck for Bitcoin transaction processing. The usual output script (scriptPubKey) in Bitcoin looks as follows: OP_DUP OP_HASH160 5bd18804e4bb43a4bb8b6bc88408970bafaf4a38 OP_EQUALVERIFY OP_CHECKSIG In SpreadCoin the semantics of the OP_CHECKSIG instruction were changed to checking the signature against the hash of the public key (it recovers the public key and compares its hash with the provided one). This results in a much simpler script in SpreadCoin: 5bd18804e4bb43a4bb8b6bc88408970bafaf4a38 OP_CHECKSIG This results in an additional minor space saving because this script is 3 bytes smaller. Smooth Supply The block reward in Bitcoin is computed using the following formula: $R_h = R_0 \cdot 2^{-\lfloor h/p \rfloor}$, where $h$ is the block height, $p$ is the reward halving period, $R_0$ is the initial reward, $R_h$ is the reward for block $h$, and $\lfloor\cdot\rfloor$ is the floor function.
This method results in abrupt reward changes near halving points. SpreadCoin uses simple linear interpolation between halving points to make the reward decrease much smoother. This is achieved by modifying the reward using the following formula: $R'_h = \frac{4}{3}\left(R_h - R_h \cdot \frac{h \bmod p}{2p}\right)$. SpreadCoin uses $p = 2 \cdot 10^6$ as its reward halving period. [2] ECDSA Signatures allow recovery of the public key. Bitcoin Forum. https://bitcointalk.org/?topic=6430.0%29%3F [3] Why the signature is always 65 (1+32+32) bytes long? Bitcoin Stack Exchange. https://bitcoin.stackexchange.com/questions/12554/why-the-signature-is-always-65-13232-bytes-long NO YEAR 2106 PROBLEM The timestamp field in the block header is now 64-bit instead of 32-bit (Bitcoin), so that much later dates are possible (> year 2106). Upcoming features that are in development and will be introduced over the next weeks and months: SERVICENODES A servicenode is a node which runs continuously (24/7) on a server and which provides services within the SpreadCoin network. You have to pay a collateral to be able to install a servicenode (in return your servicenode will earn a steady income). This collateral is determined by free-market price discovery. (No fixed collateral. The price is allowed to fluctuate over time.) COMPETITIVE COLLATERAL Furthermore, to introduce a competitive nature to the servicenodes there will only ever be a limited number of allowed servicenodes worldwide. Since the collateral isn't set in stone, but the number of servicenodes is fixed, the price of a servicenode will be determined by the participants themselves. It is expected that the price will vary widely over time, which exposes it to the same market forces that hashrate and currency value are exposed to as well. SERVICE APPS There are a number of decentralized applications that will run on servicenodes. Most likely those apps will include: 1) "Spread the message" (an in-wallet encrypted messaging system, which allows you to send a message to an SPR address) 2) "Spread the Search" (a decentralized search engine that lets the servicenodes crawl and map the entire internet). SPREADX11 SpreadX11 differs from plain X11 by introducing a sophisticated pool prevention mechanism. With SpreadX11 every block header contains additional information (MinerSignature and hashWholeBlock). With the help of this information the protocol ensures that the miner of a new block is always also the first one to know the content of the whole block and the private key to spend the coinbase transaction (contrary to pool mining, where the pool operator is the first one to know those things). So when a miner finds a block, he must himself sign and transmit the block to the network (like solo mining), instead of having a pool handle this for him. This effectively prevents pools by making their rules non-enforceable, since any miner in any assumed pool can always just steal the block reward instead of following the rules set up by the pool. COMPACT TRANSACTIONS SpreadCoin uses a more compact representation for signatures in transactions. SpreadCoin as well as Bitcoin uses ECDSA signatures. While Bitcoin keeps a copy of the public key of the corresponding signature around, SpreadCoin omits this by recovering the public key on the fly directly from the signature. This way it is not necessary to keep the public key of every ECDSA signature in the blockchain, so this leads to *smaller transactions and hence a smaller blockchain (at the cost of a few CPU cycles more).
(*reduction in size of a transaction from 139 or 107 bytes in Bitcoin to 67 bytes in SpreadCoin.) SMOOTH HALVING Unlike Bitcoin, there are no abrupt reward halvings in SpreadCoin. The block reward decreases smoothly over time. UNIQUE DESIGN WITH IN-WALLET VANITYGEN One of the first apps to be built into the wallet is the vanity generator (or vanity gen) which allows anyone to create personalised payment addresses. The easy-to-use wallet lets you search through trillions of payment addresses, allowing you to find one or multiple vanity addresses, which are then stored safely along with the private keys on your own computer - and nowhere else. Searching using the vanity gen is probabilistic, so the amount of time required to find your chosen address patterns depends on how complex the pattern is, the speed of your computer, and a little bit of luck. You can use the vanity gen for a bit of fun, to make your address stand out from the crowd or to create a link to a brand, business or other organisation. You can even search for addresses that others might be willing to buy from you. SpreadCoin is a new cryptocurrency which is more decentralized than Bitcoin. It prevents centralization of hashing power in pools, which is one of the main concerns of Bitcoin security. SpreadCoin was fairly launched on 29 July 2014, 9:00 UTC with no premine.
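As a rough illustration of the padding rule and the smooth-supply formula described in this entry, here is a small Python sketch. The word counts, the initial reward of 50 and the other constants are assumptions chosen for illustration, not values taken from the SpreadCoin source code:

```python
# Illustrative sketch of two formulas from the SpreadCoin description above:
# the backward padding fill and the smoothed block reward schedule.

def fill_padding(hash_prev_block_words, total_words):
    """Fill padding backwards: the last 8 uint32 words are seeded from
    hashPrevBlock, earlier words follow I_i = I_{i+3} * I_{i+7} (mod 2^32)."""
    pad = [0] * total_words
    pad[-8:] = hash_prev_block_words            # seed with the previous block hash words
    for i in range(total_words - 9, -1, -1):    # walk backwards over the rest
        pad[i] = (pad[i + 3] * pad[i + 7]) & 0xFFFFFFFF
    return pad

def bitcoin_style_reward(h, r0, p):
    """Bitcoin-style reward: R_h = R_0 * 2^(-floor(h / p))."""
    return r0 * 2 ** -(h // p)

def spreadcoin_reward(h, r0, p):
    """Smoothed reward: R'_h = (4/3) * (R_h - R_h * (h mod p) / (2p))."""
    r_h = bitcoin_style_reward(h, r0, p)
    return (4 / 3) * (r_h - r_h * (h % p) / (2 * p))

if __name__ == "__main__":
    # Padding demo with a made-up 8-word "previous block hash".
    print(fill_padding(list(range(1, 9)), total_words=16)[:4])

    p = 2 * 10**6                               # halving period stated in the text
    r0 = 50.0                                   # assumed initial reward for illustration
    for h in (0, 1_000_000, 1_999_999, 2_000_000):
        print(h, round(spreadcoin_reward(h, r0, p), 4))
```

Evaluating the last loop shows the point of the interpolation: the reward just before block height $p$ and the reward at height $p$ are (almost) equal, so there is no abrupt halving jump.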
HlG4399 / FRSVTI implemented the following article in MATLAB. Reference: Oh T H, Matsushita Y, Tai Y W, et al. Fast Randomized Singular Value Thresholding for Low-rank Optimization[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015, PP(99):1-1. Abstract: Rank minimization can be converted into tractable surrogate problems, such as Nuclear Norm Minimization (NNM) and Weighted NNM (WNNM). The problems related to NNM, or WNNM, can be solved iteratively by applying a closed-form proximal operator, called Singular Value Thresholding (SVT), or Weighted SVT, but they suffer from the high computational cost of Singular Value Decomposition (SVD) at each iteration. We propose a fast and accurate approximation method for SVT, that we call fast randomized SVT (FRSVT), with which we avoid direct computation of the SVD. The key idea is to extract an approximate basis for the range of the matrix from its compressed matrix. Given the basis, we compute the partial singular values of the original matrix from the small factored matrix. In addition, by developing a range propagation method, our method further speeds up the extraction of the approximate basis at each iteration. Our theoretical analysis shows the relationship between the approximation bound of the SVD and its effect on NNM
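For context, here is a minimal NumPy sketch of the exact SVT operator that FRSVT approximates. This is the plain SVD-based proximal operator, not the fast randomized algorithm from the paper, and the function name is illustrative:

```python
import numpy as np

def svt(X, tau):
    """Soft-threshold the singular values of X by tau (the proximal operator of
    tau * nuclear norm), using a full SVD at each call - the step FRSVT speeds up."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft-thresholding of singular values
    return U @ np.diag(s_shrunk) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A noisy low-rank matrix: rank-2 signal plus small Gaussian noise.
    X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
    X += 0.01 * rng.standard_normal(X.shape)
    Y = svt(X, tau=1.0)
    print(np.linalg.matrix_rank(Y, tol=1e-6))    # small singular values are zeroed out
```

The cost of the full SVD in every call is exactly what motivates replacing it with a randomized range approximation in FRSVT.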
gyfora / StreamKVA streaming key-value store implementation using native Flink Streaming operators
operalib / OperalibLearning with operator-valued kernels
louisdh / Microcontroller KitValue types, operators and functions for tinkering with microcontrollers in software.
Coly010 / Rxjs Debug OperatorDebug operator to log observable values
merantix-momentum / Gnn Bvp SolverSource code for paper "Learning the Solution Operator of Boundary Value Problems using Graph Neural Networks"
ultranet1 / APACHE AIRFLOW DATA PIPELINESProject Description: A music streaming company wants to introduce more automation and monitoring to their data warehouse ETL pipelines and they have come to the conclusion that the best tool to achieve this is Apache Airflow. As their Data Engineer, I was tasked to create a reusable production-grade data pipeline that incorporates data quality checks and allows for easy backfills. Several analysts and Data Scientists rely on the output generated by this pipeline, and it is expected that the pipeline runs daily on a schedule, pulling new data from the source and storing the results in the destination. Data Description: The source data resides in S3 and needs to be processed in a data warehouse in Amazon Redshift. The source datasets consist of JSON logs that tell about user activity in the application and JSON metadata about the songs the users listen to. Data Pipeline design: At a high level the pipeline does the following tasks: Extract data from multiple S3 locations. Load the data into the Redshift cluster. Transform the data into a star schema. Perform data validation and data quality checks. Calculate the most played songs for the specified time interval. Load the result back into S3. (Figure: structure of the Airflow DAG.) Design Goals: Based on the requirements of our data consumers, our pipeline is required to adhere to the following guidelines: The DAG should not have any dependencies on past runs. On failure, the task is retried 3 times. Retries happen every 5 minutes. Catchup is turned off. Do not email on retry. Pipeline Implementation: Apache Airflow is a Python framework for programmatically creating workflows in DAGs, e.g. ETL processes, generating reports, and retraining models on a daily basis. The Airflow UI automatically parses our DAG and creates a natural representation for the movement and transformation of data. A DAG simply is a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. A DAG describes how you want to carry out your workflow, and Operators determine what actually gets done. By default, Airflow comes with some simple built-in operators like PythonOperator, BashOperator, DummyOperator, etc.; however, Airflow lets you extend the features of BaseOperator and create custom operators. For this project, I developed several custom operators; a sketch of one follows the list below. The description of each of these operators follows: StageToRedshiftOperator: Stages data to a specific Redshift cluster from a specified S3 location. The operator uses templated fields to handle partitioned S3 locations. LoadFactOperator: Loads data to the given fact table by running the provided SQL statement. Supports delete-insert and append style loads. LoadDimensionOperator: Loads data to the given dimension table by running the provided SQL statement. Supports delete-insert and append style loads. SubDagOperator: Two or more operators can be grouped into one task using the SubDagOperator. Here, I am grouping the tasks of checking if the given table has rows and then running a series of data quality SQL commands. HasRowsOperator: Data quality check to ensure that the specified table has rows. DataQualityOperator: Performs data quality checks by running SQL statements to validate the data. SongPopularityOperator: Calculates the top ten most popular songs for a given interval. The interval is dictated by the DAG schedule. UnloadToS3Operator: Stores the analysis result back to the given S3 location.
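As a rough sketch of what a row-count check operator of this kind might look like (Airflow 1.10-style imports), here is a minimal example. The parameter names and the hook used are illustrative assumptions; the project's actual implementations live in plugins/operators:

```python
from airflow.models import BaseOperator
from airflow.hooks.postgres_hook import PostgresHook
from airflow.utils.decorators import apply_defaults

class HasRowsOperator(BaseOperator):
    """Fail the task if the given table is empty (a simple data quality check)."""

    @apply_defaults
    def __init__(self, redshift_conn_id, table, *args, **kwargs):
        super(HasRowsOperator, self).__init__(*args, **kwargs)
        self.redshift_conn_id = redshift_conn_id   # Airflow connection id for Redshift
        self.table = table                         # table to validate

    def execute(self, context):
        # Redshift speaks the Postgres protocol, so a Postgres hook can query it.
        redshift = PostgresHook(postgres_conn_id=self.redshift_conn_id)
        records = redshift.get_records(f"SELECT COUNT(*) FROM {self.table}")
        if not records or records[0][0] < 1:
            raise ValueError(f"Data quality check failed: {self.table} has no rows")
        self.log.info("Data quality check passed: %s has %s rows", self.table, records[0][0])
```

Raising an exception is what makes the check useful in a pipeline: the task fails, Airflow retries it according to the DAG's retry policy, and downstream tasks do not run on bad data.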
Code for each of these operators is located in the plugins/operators directory. Pipeline Schedule and Data Partitioning: The events data residing on S3 is partitioned by year (2018) and month (11). Our task is to incrementally load the event JSON files and run them through the entire pipeline to calculate song popularity and store the result back into S3. In this manner, we can obtain the top songs per day in an automated fashion using the pipeline. Please note, this is a trivial analysis, but you can imagine other complex queries that follow a similar structure. S3 Input events data: s3://<bucket>/log_data/2018/11/ 2018-11-01-events.json 2018-11-02-events.json 2018-11-03-events.json .. 2018-11-28-events.json 2018-11-29-events.json 2018-11-30-events.json S3 Output song popularity data: s3://skuchkula-topsongs/ songpopularity_2018-11-01 songpopularity_2018-11-02 songpopularity_2018-11-03 ... songpopularity_2018-11-28 songpopularity_2018-11-29 songpopularity_2018-11-30 The DAG can be configured by giving it some default_args which specify the start_date, end_date and other design choices which I have mentioned above. default_args = { 'owner': 'shravan', 'start_date': datetime(2018, 11, 1), 'end_date': datetime(2018, 11, 30), 'depends_on_past': False, 'email_on_retry': False, 'retries': 3, 'retry_delay': timedelta(minutes=5), 'catchup_by_default': False, 'provide_context': True, } How to run this project? Step 1: Create an AWS Redshift Cluster using either the console or the notebook provided in create-redshift-cluster. Run the notebook to create the AWS Redshift Cluster. Make a note of: DWH_ENDPOINT :: dwhcluster.c4m4dhrmsdov.us-west-2.redshift.amazonaws.com DWH_ROLE_ARN :: arn:aws:iam::506140549518:role/dwhRole Step 2: Start Apache Airflow Run docker-compose up from the directory containing docker-compose.yml. Ensure that you have mapped the volume to point to the location where you have your DAGs. NOTE: You can find details of how to manage Apache Airflow on a Mac here: https://gist.github.com/shravan-kuchkula/a3f357ff34cf5e3b862f3132fb599cf3 Step 3: Configure Apache Airflow Hooks On the left is the S3 connection. The Login and Password are the IAM user's access key and secret key that you created. Basically, by using these credentials, we are able to read data from S3. On the right is the Redshift connection. These values can be easily gathered from your Redshift cluster connections. Step 4: Execute the create-tables-dag This DAG will create the staging, fact and dimension tables. The reason we need to trigger this manually is that we want to keep it out of the main DAG. Normally, creation of tables can be handled by just triggering a script. But for the sake of illustration, I created a DAG for this and had Airflow trigger the DAG. You can turn off the DAG once it is completed. After running this DAG, you should see all the tables created in AWS Redshift. Step 5: Turn on the load_and_transform_data_in_redshift DAG As the execution start date is 2018-11-01 with a schedule interval of @daily and the execution end date is 2018-11-30, Airflow will automatically trigger and schedule the DAG runs once per day, 30 times. The 30 DAG runs range from start_date to end_date and are triggered by Airflow once per day. (Figure: DAG run schedule.)
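To show how the default_args above translate into a daily-scheduled DAG, here is a minimal sketch (Airflow 1.10-style imports). The task wiring is reduced to two placeholder tasks; the real DAG contains the full staging/load/check/unload chain described earlier:

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

# default_args copied from the README above.
default_args = {
    'owner': 'shravan',
    'start_date': datetime(2018, 11, 1),
    'end_date': datetime(2018, 11, 30),
    'depends_on_past': False,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'catchup_by_default': False,
    'provide_context': True,
}

dag = DAG(
    'load_and_transform_data_in_redshift',   # one run per day from start_date to end_date
    default_args=default_args,
    schedule_interval='@daily',
)

start = DummyOperator(task_id='begin_execution', dag=dag)
end = DummyOperator(task_id='stop_execution', dag=dag)
start >> end                                  # placeholder for the real task chain
```

With start_date 2018-11-01, end_date 2018-11-30 and an @daily schedule, the scheduler creates exactly the 30 runs shown in the schedule figure.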
bhaveshjaggi / PestDetection PEST DETECTION USING IMAGE PROCESSING The principal idea which motivated us to work on the project PEST DETECTION USING IMAGE PROCESSING is to ensure improved and better farming techniques for farmers. Our Solution: Image analysis techniques are extensively applied in agricultural science; they provide maximum protection to crops with much less use of pesticides, which ultimately leads to better crop management and production. The following software is required for the project: OpenCV with C++/Python: a library designed for computational efficiency with a strong focus on real-time applications. Pest Detection System: The following are the image processing steps used in the proposed system. > Color Image to Gray Image Conversion: Images are converted into grayscale so that they can be handled easily and require less storage. The following equation shows how images are converted into grayscale: I(x,y) = 0.2989*R + 0.5870*G + 0.1140*B. > Image Filtering: The PSNR value is calculated for both the average-filtered and median-filtered images; the average filter gives a better result than the median filter, so the project uses the average filter for further processing. > Image Segmentation: To detect the pests in the images, the image background is estimated using morphological operators and then subtracted from the original image, so that the resulting image contains only the objects of interest with pixel value 1 and the background with pixel value 0. > Noise Removal: Noise includes dew drops, dust and other visible parts of leaves. As only the object of interest should remain visible in the images, the aim is to remove the noise to get better and more effective results. The erosion algorithm is used to remove isolated noisy pixels and to smooth object boundaries. After noise removal, the next goal is to enhance the detected pests after segmentation, which is done using the dilation algorithm. > Feature Extraction: Different properties of the images are calculated, and the image is classified on the basis of those attributes. For image properties, the gray-level co-occurrence matrix and regional properties of the images are calculated. These properties are used to train a support vector machine to classify images. > Pest Counting: Counting the pests on the leaves is the main purpose, so that it gives an idea of how many pests are on a leaf. It uses the Moore neighborhood tracing algorithm and Jacob's stopping criterion. Feasibility: The present framework of pest detection is quite tedious and laborious for farmers, as they have to carry out acre-by-acre surveys themselves, which requires vigorous effort. Image analysis provides a realistic opportunity for the automation of insect pest detection. Through this system, crop technicians can easily count the pests from the collected specimens, and the right pest management can be applied to increase both the quantity and quality of production. Using the automated system, crop technicians can make the monitoring process easier. In order to improve the existing system, we came up with a more productive and well-organised system based on our idea; with this automation, profitability increases and labour is reduced.
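The processing chain described above (grayscale conversion, average filtering, morphological background estimation and subtraction, thresholding, erosion/dilation, counting) could be sketched in OpenCV roughly as follows. This is an assumed illustration, not the project's code: the file name, kernel sizes and threshold choices are placeholders, and connected-component labelling stands in for the Moore neighborhood tracing and Jacob's stopping criterion mentioned in the text.

```python
# Rough OpenCV sketch of the described pest-detection chain (illustrative parameters).
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # I = 0.2989*R + 0.5870*G + 0.1140*B
smooth = cv2.blur(gray, (5, 5))                   # average (mean) filter

# Estimate the slowly varying leaf background with a large morphological opening and
# subtract it, leaving mostly the small pest blobs. Whether opening or closing is the
# right operator depends on whether pests are darker or brighter than the leaf.
kernel_bg = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
background = cv2.morphologyEx(smooth, cv2.MORPH_OPEN, kernel_bg)
foreground = cv2.absdiff(smooth, background)

# Binarize, then clean up: erosion removes isolated noisy pixels, dilation restores the blobs.
_, binary = cv2.threshold(foreground, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.dilate(cv2.erode(binary, kernel), kernel)

# Count the remaining blobs (label 0 is the background).
num_labels, _ = cv2.connectedComponents(cleaned)
print("pests detected:", num_labels - 1)
```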
jainsee24 / Parallel Face Detection Image segmentation is the process of dividing an image into multiple parts. It is typically used to identify objects or other relevant information in digital images. There are many ways to perform image segmentation, including thresholding methods, color-based segmentation and transform methods, among others. Alternatively, edge detection can be used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. Image thresholding is a simple, yet effective, way of partitioning an image into a foreground and background. This image analysis technique is a type of image segmentation that isolates objects by converting grayscale images into binary images; it is most effective in images with high levels of contrast. Otsu's method, named after Nobuyuki Otsu, is one such implementation of image thresholding. It iterates through all possible threshold values and calculates a measure of spread for the pixel levels on each side of the threshold, i.e. the pixels that fall in either the foreground or the background; the aim is to find the threshold value where the sum of the foreground and background spreads is at its minimum. Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. An image can have horizontal, vertical or diagonal edges. The Sobel operator detects two kinds of edges in an image by using a derivative mask, one for horizontal edges and one for vertical edges (an OpenCV sketch of both techniques appears after this entry). 1. Introduction: Face detection is a computer technology, used in a variety of applications, that identifies human faces in digital images. Face detection also refers to the psychological process by which humans locate and attend to faces in a visual scene. Face detection can be regarded as a specific case of object-class detection. In object-class detection, the task is to find the locations and sizes of all objects in an image that belong to a given class; examples include upper torsos, pedestrians, and cars. Face-detection algorithms focus on the detection of frontal human faces. It is analogous to image matching, in which the image of a person is matched bit by bit against the images stored in a database; any change to the facial features in the database will invalidate the matching process. 2. Needs/Problems: Much research has been devoted to face recognition systems. Such systems are commonly used for video surveillance, human-computer interaction, robot navigation, and so on. Along with the use of these systems comes the need for faster system response, for example in robot navigation or applications for public safety. A number of classification algorithms have been applied to face recognition, but computing time remains a problem; in this system, the computing time of classification and feature extraction is an important concern. To improve the efficiency of face detection, we combine the eigenface method with Haar-like features to detect both the eyes and the face, and the Roberts cross edge detector to locate the position of the human face. The combined method uses the integral image representation and simple rectangular features to eliminate the need for the expensive calculation of a multi-scale image pyramid. 3. Objectives: Some techniques used in this application are: 1. Eigenface technique 2. KLT algorithm 3. Parallel for loop in OpenMP 4.
OpenCV for face detection 5. Further uses of the techniques.
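As a quick illustration of the techniques listed above, the sketch below applies Otsu thresholding, the Sobel operator and a stock Haar-cascade face detector with OpenCV. It is a minimal, assumed example, not the repository's implementation: the input file is a placeholder, and the eigenface/KLT and OpenMP parallelisation steps are not shown.

```python
# Minimal OpenCV sketch of Otsu thresholding, Sobel edges, and Haar-cascade face detection.
import cv2

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame

# Otsu's method: choose the threshold that minimizes the combined foreground/background spread.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sobel operator: separate derivative masks for horizontal and vertical edges.
edges_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
edges_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

# Haar-cascade face detection using the cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("faces found:", len(faces))
```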