colseph / frwl

This project forked from mrcbax/frwl

From Russia with love, let's traceroute the coming shutdown.

License: GNU General Public License v3.0

Shell 97.13% Dockerfile 2.87%

frwl's Issues

[iteration branch] ITER and COMP_ITER not unique per server

The ITER and COMP_ITER values are shared by all servers, so when one server's data gets tarballed, the counters reset for all of them. You end up with a listing similar to:

total 160K
drwxr-sr-x    2 root group 4.0K Feb 14 16:34 ./
drwxr-sr-x 3927 root group 136K Feb 14 16:20 ../
-rw-r--r--    1 root group  653 Feb 14 16:20 0.1550161127.old
-rw-r--r--    1 root group  634 Feb 14 16:24 1.1550161127.new
-rw-r--r--    1 root group  634 Feb 14 16:34 1.1550161900.old
-rw-r--r--    1 root group  576 Feb 14 16:36 2.1550161900.new
-rw-r--r--    1 root group  634 Feb 15 11:00 1.1550167000.new <-- these files
-rw-r--r--    1 root group  634 Feb 15 11:00 1.1550167000.old <--

You still get the needed info (the UTC timestamp in the name and the actual file timestamp), but it might make parsing after the fact harder. Although, with everyone pooling data for processing, there will be lots of files with the same ITER but different timestamps anyway, so it's not clear how important this actually is.

The timestamps shown above are made up for illustration and not accurate.
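
One way to make the counters unique per server would be a counter file per $SERVER, stored outside the data that gets tarballed, so archiving one server no longer resets the others. This is only a rough sketch of the idea; the ITER_FILE path and the _tarBall call are assumptions, not the current code:

    # hypothetical per-server counter files (sketch only, not the current layout)
    ITER_FILE="${WORKING_DIR}/.iter.${SERVER}"         # lives outside the data that gets tarballed
    [ -f "$ITER_FILE" ] || printf '0' > "$ITER_FILE"   # seed the counter on first use
    ITER=$(cat "$ITER_FILE")                           # value used for this run's filenames
    printf '%s' "$((ITER + 1))" > "$ITER_FILE"         # store the next value

    # on tarball, reset only this server's counter
    # (assumes _tarBall can be pointed at a single server; its real signature is not shown here)
    _tarBall "$SERVER" && printf '0' > "$ITER_FILE"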

duplicate lines in `$ITER_SAVE_FILE`

Duplicate lines appear in $ITER_SAVE_FILE when using tmux_wrapper.sh.

This also seems to affect $COMP_ITER_SAVE_FILE (it was only affected because du was checking the size of $WORKING_DIR as a whole rather than the individual $SERVER directories, which caused _tarBall() to run too often).
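
For reference, a per-server size check could look something like the sketch below; MAX_SIZE_KB and passing a directory to _tarBall() are assumptions for illustration only:

    # measure each server's directory on its own instead of $WORKING_DIR as a whole
    for SERVER_DIR in "${WORKING_DIR}"/*/; do
        SIZE_KB=$(du -sk "$SERVER_DIR" | cut -f1)      # size of this server's data only
        [ "$SIZE_KB" -ge "$MAX_SIZE_KB" ] && _tarBall "$SERVER_DIR"
    done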

The duplicate lines only seem to appear sometimes.
It actually happens a lot, it just doesn't show in the save file very often; most of the time it shows up in the actual filenames.

From testing it's usually only the first one or two addresses, so it seems to be the initial creation of the save file that causes it.

from _increment()

 *)
            #~adds the server's entry to the save file if it doesn't exist
            [[ "$(cat "$1")" == *"${SERVER}${SAVE_FILE_DELIM}"* ]] || printf '%s\n' "${SERVER}${SAVE_FILE_DELIM}0" >> "$1"
            #~grabs current iter state from list
            RETURN_VAL=$(cat "$1" | grep "${SERVER}$(_escapeString "$SAVE_FILE_DELIM")")
            _log date "[_increment]RETURN_VAL: '$RETURN_VAL' (pre parse)"
            RETURN_VAL=${RETURN_VAL#*"$SAVE_FILE_DELIM"}
            _log date "[_increment]RETURN_VAL: '$RETURN_VAL' (post parse)"
            #~increments value in the save file for next read
            sed -i "s/${SERVER}$(_escapeString "$SAVE_FILE_DELIM")\([0-9]\+\)/${SERVER}$(_escapeString "$SAVE_FILE_DELIM")$(($RETURN_VAL + 1))/g" "$1"
            printf '%s' "$RETURN_VAL" #returns value read from $1
            _log date "[_increment]read value for $SERVER in $1 as $RETURN_VAL, and set new as $(($RETURN_VAL + 1)) "
            ;;
    esac

If the value doesn't exist in the file, an initial value is added.
If the save file itself does not exist, it is created by adding the initial value.

Then the value is read from the file.

The problem occurs when one instance of the script creates the file and another instance tries to read it before it is fully "created", so the second instance thinks it needs to add an initial value as well, and you end up with two.

This is only my guess as to what's happening.

Right now the logical thing to do would be to try touching the file before the loop starts and see if it still occurs.

It turns out it is caused by too many scripts reading the file at the same time.

To fix it, I'll change the defaults in ping_russia.sh so that when tmux_wrapper.sh is used, the raw data and save file are separated per instance.

The $COMP_ITER_SAVE_FILE will still be shared, as it shouldn't be hit often enough to really cause trouble. I'll still do some testing just in case.
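
An alternative approach (not what the fix above does) would be to keep the save file shared but serialize access to it with flock(1) from util-linux, so only one instance at a time can run the create-if-missing check. A minimal sketch, reusing the variable names from _increment() and skipping the _escapeString handling:

    (
        flock -x 200                                   # wait for an exclusive lock on fd 200
        # the same substring check _increment() does, now protected by the lock
        if ! grep -qF -- "${SERVER}${SAVE_FILE_DELIM}" "$ITER_SAVE_FILE" 2>/dev/null; then
            printf '%s\n' "${SERVER}${SAVE_FILE_DELIM}0" >> "$ITER_SAVE_FILE"
        fi
    ) 200>"${ITER_SAVE_FILE}.lock"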

the "filesystem issue"

Having too many items (files or folders; I would assume it also applies to links) in a single flat directory makes file systems sluggish and angry.
From what I've read, this usually starts happening as you get close to the 30K range. I've never actually hit that many files in a single directory, so I can't say for certain what happens beyond a drop in performance.

This has already been implemented in the original repo owned by @LogoiLab, implemented by @morrowc in #17. You can also find more information in this comment.
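
As a generic illustration of the idea (not the actual patch from #17), output could be bucketed into one subdirectory per UTC day so that no single directory grows without bound:

    # one subdirectory per UTC day keeps each directory well below the problematic range
    OUT_DIR="${WORKING_DIR}/${SERVER}/$(date -u +%Y%m%d)"    # hypothetical layout
    mkdir -p "$OUT_DIR"
    OUT_FILE="${OUT_DIR}/${ITER}.$(date -u +%s).new"         # keeps the existing ITER.timestamp naming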
