<p align="center"> <img src="logo.png" alt="mdxfind" width="128"> </p>

mdxfind

Multi-threaded, multi-algorithm hash search engine. Searches wordlists against large hash collections across 994 hash types simultaneously, using Judy arrays for memory-efficient hash storage and SIMD acceleration on supported platforms. Includes mdsplit, a companion tool that separates solved hashes by type into organized output files.

See HASH_TYPES.md for the complete list of supported hash types with hashcat mode mappings. See docs/HOWTO.md for a practical guide to hash recovery workflows. See docs/EXAMPLES.md for detailed examples of iterations, rotations, salts, and advanced features. See docs/RULES.md for the complete rule reference. See docs/BENCHMARK.md for performance comparisons.

Uses yarn.c for threading, libJudy for compressed hash lookup, and hashpipe for hash verification.

When to use mdxfind

  • Mixed hash types — you have a pile of hashes and don't know (or don't care) what types they are
  • Large hash collections — 100M+ hashes, where O(1) Judy array lookups shine
  • Quick triage — rapidly cull common passwords from a massive hashlist before using other tools
  • Arbitrary iteration — try thousands of iteration counts in a single run
  • Unknown algorithms — let mdxfind try 994+ types simultaneously and tell you what matched
  • CPU-friendly algorithms — bcrypt, PBKDF2, scrypt, and other algorithms that don't benefit from GPU acceleration
  • Salted hashes without known types — mdxfind can try many salted algorithms with auto-generated salt combinations

When NOT to use mdxfind

  • Mask/brute-force attacks on GPU-friendly algorithms — use hashcat
  • Distributed cracking clusters — hashcat + hashtopolis is better suited
  • Single known hash type with a small hash list — hashcat's GPU speed wins here

GPU acceleration

mdxfind supports OpenCL GPU acceleration for salted hash types, using multiple GPUs simultaneously across AMD and NVIDIA hardware. On a 5-GPU system (2x AMD RDNA3 + RTX 4070 Ti + RTX 3080 + AMD iGPU), mdxfind solved 1,000,000 salted MD5 hashes against the rockyou wordlist in 40 seconds at 69 GH/s. See docs/BENCHMARK.md for detailed GPU performance comparisons.

Antivirus note

Some antivirus vendors occasionally flag mdxfind as a coin miner due to its hashing features. This is a false positive — mdxfind does not mine cryptocurrency.

Acknowledgments

Many thanks to @tychotithonus for years of hosting mdxfind at techsolvency.com, maintaining the changelog, generating metadata, and writing comprehensive documentation that helped countless users get started with mdxfind. The previous distribution site remains available as an archive.

History

MDXfind was created as a result of frustration. I renewed my interest in hashes after being absent for more than a quarter of a century, and found it quite enjoyable as a pastime, but I quickly grew frustrated with the tools available. Around 2011, I started working with John the Ripper, and later hashcat (and a host of other programs), and on one of the forums of the time I encountered a file of 50 million "MD5" hashes. No tools of the day could process that easily, and worse, I found that though all of the hashes were 32-hex, not all of them were MD5. Some were MD5x2, and there were even more at higher counts. So I created a quick program to test for "MD5x" iterations of MD5 — thus the name MDXfind. And find them I did: MD5x01 through MD5x99. And it kept going. MD5x200. MD5x1000. MD5x1000000! But that was just the start, and there were hundreds of different algorithms mixed into that "MD5" list.

In 2013, the first early versions of MDXfind appeared. They were trivial, but I continued to work on it over the years, adding more and more algorithms and improving the speed. Processing 500M hashes was no longer a problem.

Sometime around June of 2013, I found that hashcat was dropping or mangling passwords with control characters in them (like 0x0a or 0x0d), and I wanted to fix that, so I created the $HEX[] encoding. This was memorialized on the hashcat forum, where I laid out the reasoning for it, and later most programs adopted it.
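The idea behind $HEX[] can be sketched in a few lines: plaintexts containing bytes that would break line-oriented tools are emitted as hex inside a $HEX[...] wrapper. This is an illustrative sketch, not mdxfind's actual implementation; the exact set of bytes that triggers encoding is an assumption here.

```python
def hex_encode(password: bytes) -> str:
    """Wrap a plaintext in $HEX[...] if it contains control bytes
    (0x0a/0x0d were the motivating case); otherwise pass it through."""
    if any(b < 0x20 for b in password):
        return "$HEX[%s]" % password.hex()
    return password.decode("latin-1")

def hex_decode(field: str) -> bytes:
    """Decode a $HEX[...] field back to raw bytes; pass others through."""
    if field.startswith("$HEX[") and field.endswith("]"):
        return bytes.fromhex(field[5:-1])
    return field.encode("latin-1")
```

For example, a plaintext containing an embedded newline round-trips safely: hex_encode(b"pass\nword") yields $HEX[706173730a776f7264], which any $HEX[]-aware tool can recover byte-for-byte.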

But for various reasons, MDXfind was just a personal project, and could not be released in source form. Those reasons have now ceased to exist, so here it is. It was not created from the ground up to be a perfect, ideal codebase — it was written in my spare time, and with ideas that occurred to me as I encountered issues. It has been quite resilient, and thanks to the efforts of the CynosurePrime team, and others (in particular @tychotithonus), mdxfind has had a home and a small group of people giving feedback. Thank you to each of you.

Likewise, mdsplit was born out of absolute frustration with dealing with large lists. It gives a way to split out "solved" hashes from an unsolved list, and runs orders of magnitude faster than trying to do this in other applications. Now, with hashpipe, mdsplit, rling, and mdxfind — you can finally really deal with vast quantities of hash lists, and process them effectively. Enjoy!

Overview

mdxfind is designed for processing very large hash collections (100+ million hashes) against wordlists, with optional rules, salts, usernames, peppers, and hybrid mask attacks. It can:

  • Test every word against every loaded hash across all selected algorithms in a single pass
  • Apply password mangling rules in concatenated or dot-product form
  • Append or prepend character masks to each candidate
  • Handle salts, usernames, peppers, and suffixes from separate files or embedded in the hash file
  • Deduplicate wordlists on the fly
  • Expand passwords to Unicode, XML-escape special characters, or munge email addresses
  • Rotate calculated hashes to match truncated or manipulated input hashes
  • Output results in a standardized TYPE hash[:salt]:password format consumed by mdsplit

Typical workflow

                 wordlists
                    |
                    v
hash file ---> mdxfind ---> stdout (solved) ---> mdsplit ---> per-type .txt files
                    |
                    v
               stderr (progress/stats)
  1. Load hashes from stdin or -f file
  2. Load salts (-s), usernames (-u), peppers (-j), suffixes (-k) if needed
  3. Process wordlists with optional rules (-r, -R) and masks (-n, -N)
  4. Pipe solved results through mdsplit to organize by hash type
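Step 4 above (what mdsplit does with mdxfind's output) can be sketched as a simple group-by on the hash type prefix. This is an illustration of the data flow, assuming the TYPE hash[:salt]:password line format described in the Overview; mdsplit itself is a much faster C tool that writes one file per type.

```python
from collections import defaultdict

def split_by_type(lines):
    """Group solved-hash lines of the form 'TYPE hash[:salt]:password'
    by their hash-type prefix, mimicking mdsplit's per-type output."""
    buckets = defaultdict(list)
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        hash_type, _, rest = line.partition(" ")
        buckets[hash_type].append(rest)
    return dict(buckets)
```

Each bucket's contents would then be written to a per-type .txt file, leaving the remaining unsolved hashes ready for another pass.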

Usage

mdxfind [options] [wordlist ...] < hashfile
mdxfind -f hashfile [options] [wordlist ...]

Options

Hash selection:

| Option | Description |
|--------|-------------|
| -h REGEX | Select hash types by regex, comma-separated. Use ! to negate, . for all. Multiple -h allowed |
| -m MODE | Select by hashcat mode or internal index: -m 0 (MD5), -m e1-e10 (range), -m 0,100,e369 (mixed) |
| -M TYPE | Select type for -F embedded-salt loading (e.g., -M e373) |

Hash input:

| Option | Description |
|--------|-------------|
| -f FILE | Read hashes from file (instead of stdin). Allows stdin to be used for wordlists |
| -F FILE | Read hashes with embedded salts in hash:salt format. Requires -M to select type |
| -s FILE | Read salts from file (one per line) |
| -u FILE | Read usernames from file |
| -j FILE | Read peppers (prefixes) from file |
| -k FILE | Read suffixes from file |
| -i N | Iteration count for iterated hash types |

Password manipulation:

| Option | Description |
|--------|-------------|
| -r FILE | Apply rules (concatenated form) |
| -R FILE | Apply rules (dot-product form) |
| -n SPEC | Append mask/digits: -n 2 (2 digits), -n 3x (3 hex), -n '?l?d' (letter+digit), -n '?[0-9a-f]?[0-9a-f]' |
| -N SPEC | Prepend mask/digits (same syntax as -n) |
| -a | Email address munging (try local part, domain, variations) |
| -b | Expand each word to Unicode (UTF-16LE), best effort |
| -c | Replace special chars (<>&, etc.) with XML equivalents |
| -d | Deduplicate wordlists, best effort |
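The -n/-N mask specs combine placeholder sets (?l, ?d, ...) and bracketed custom sets like ?[0-9a-f]. The sketch below shows how such a mask expands into candidate suffixes; the placeholder charsets are assumptions mirroring common hashcat-style conventions, not mdxfind's source.

```python
import itertools
import string

# Assumed placeholder charsets (hashcat-style convention).
CHARSETS = {
    "l": string.ascii_lowercase,
    "u": string.ascii_uppercase,
    "d": string.digits,
}

def expand_mask(mask: str):
    """Yield every string a mask describes, e.g. '?l?d' -> a0 .. z9."""
    sets = []
    i = 0
    while i < len(mask):
        if mask[i] != "?":
            raise ValueError("expected '?' at position %d" % i)
        if mask[i + 1] == "[":                 # custom set: ?[0-9a-f]
            end = mask.index("]", i)
            spec = mask[i + 2:end]
            chars = []
            j = 0
            while j < len(spec):
                if j + 2 < len(spec) and spec[j + 1] == "-":
                    # expand a range like 0-9 or a-f
                    chars += [chr(c) for c in range(ord(spec[j]), ord(spec[j + 2]) + 1)]
                    j += 3
                else:
                    chars.append(spec[j])
                    j += 1
            sets.append("".join(chars))
            i = end + 1
        else:                                  # built-in set: ?l, ?u, ?d
            sets.append(CHARSETS[mask[i + 1]])
            i += 2
    for combo in itertools.product(*sets):
        yield "".join(combo)
```

So -n '?l?d' appends all 260 letter+digit pairs to each candidate, and -n '?[0-9a-f]?[0-9a-f]' appends all 256 hex byte values.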

Search behavior:

| Option | Description |
|--------|-------------|
| -e | Extended search for truncated hashes |
| -g | Rotate calculated hashes to attempt match against input |
| -q N | Internal iteration count for composed types (SHA1MD5x, etc.) |
| -v | Do not mark salts as found (continue searching all salts) |
| -w N | Skip N lines from first wordlist |
| -y | Enable directory recursion for wordlists |

GPU acceleration:

| Option | Description |
|--------|-------------|
| -G list | List available GPU devices and exit |
| -G 0,2,4 | Use only the specified GPU devices (comma-separated indices or ranges like 0-2,5) |
| -G none | Disable GPU acceleration entirely |

Output and control:

| Option | Description |
|--------|-------------|
| -t N | Number of threads (default: number of CPUs) |
| -p | Print source filename of found plaintexts |
| -l | Append CR/LF/CRLF and print in hex |
| -z | Debug mode: print all computed hash results |
| -Z | Histogram of rule hits |
| -V | Display version and exit |

Hash Type Selection

The -h option accepts Perl-compatible regular expressions to filter hash types by name:

# All MD5 variants
mdxfind -h MD5 -f hashes.txt wordlist.txt

# SHA1 and SHA256 only
mdxfind -h 'SHA1$,SHA256$' -f hashes.txt wordlist.txt

# Everything except NTLM
mdxfind -h '!NTLM' -f hashes.txt wordlist.txt

# All types
mdxfind -h '.' -f hashes.txt wordlist.txt
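The selection semantics above (comma-separated patterns, ! negation, . for everything) can be modeled roughly as follows. This is an illustration of the behavior implied by the examples, not mdxfind's internals; mdxfind uses Perl-compatible regexes, approximated here with Python's re module.

```python
import re

def select_types(all_types, spec):
    """Model of -h selection: each comma-separated regex adds matching
    type names; a leading '!' removes matches instead (starting from
    all types if nothing was selected yet); '.' matches everything."""
    selected = set()
    for pat in spec.split(","):
        negate = pat.startswith("!")
        if negate:
            pat = pat[1:]
        rx = re.compile(pat)
        matches = {t for t in all_types if rx.search(t)}
        if negate:
            selected = (selected or set(all_types)) - matches
        else:
            selected |= matches
    return sorted(selected)
```

For instance, against a type list containing MD5, MD5x01, SHA1, SHA256, and NTLM, the spec 'SHA1$,SHA256$' selects exactly the two anchored SHA types, while '!NTLM' selects everything else.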
