
linux-bench's People

Contributors

chuckleb · codewizardry · handruin · stevensb · sth-dev

linux-bench's Issues

Add flag for e-mail address

Capture the e-mail address via a command-line -email flag.

Suggestion from Ian @ Anandtech - after the benchmark ends, send an e-mail with the result link so that one knows that the benchmark completed and has easy access to the result.
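
Something like this might work for the flag parsing in linux-bench.sh; the flag name, variable names, and loop below are just a sketch, not the final implementation:

EMAIL=""
OTHER_ARGS=""
while [ $# -gt 0 ]; do
    case "$1" in
        -email)
            EMAIL="$2"                        # address to send the result link to
            shift 2
            ;;
        *)
            OTHER_ARGS="$OTHER_ARGS $1"       # everything else keeps its existing handling
            shift
            ;;
    esac
done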

Working on results viewer

Will still need to figure out how to parse the log file and get it into the DB, but:
Here is the quick proof of concept for the results viewer: http://gmapv3.sql01.com/

Hopefully done by Friday.

Also hoping to hear whether the healthiest of the failed drives can be recovered. Even if not, the main STH site will be up by this weekend and the forums (in some form) maybe Sunday/Monday.

lscpu options not supported in Ubuntu

I ran into an error with lscpu. Tested inside a clean container and also in a VM. I will try a few more machines as well.

root@eeb42c70dadb:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04 LTS
Release: 14.04
Codename: trusty

root@eeb42c70dadb:/# lscpu -V
lscpu: invalid option -- 'V'

Usage:
lscpu [options]

Options:
-h, --help          print this help
-p, --parse         print out a parsable instead of a readable format
-s, --sysroot <dir> use directory DIR as system root
-x, --hex           print hexadecimal masks rather than lists of CPUs

root@eeb42c70dadb:/# lscpu -e
lscpu: invalid option -- 'e'

Usage:
lscpu [options]

Options:
-h, --help          print this help
-p, --parse         print out a parsable instead of a readable format
-s, --sysroot <dir> use directory DIR as system root
-x, --hex           print hexadecimal masks rather than lists of CPUs

OpenSSL error on Ubuntu

My AWS instance tests (Ubuntu 14.04 AMI) are dying here:

2014-06-22 18:04:19 (1.00 MB/s) - ‘openssl-1.0.1g.tar.gz’ saved [4509047/4509047]

Building OpenSSL
ar: creating ../libcrypto.a
Use of uninitialized value $output in pattern match (m//) at asm/md5-x86_64.pl line 115.

ARM - Better detection of cores so apps use them properly

https://forums.servethehome.com/index.php?threads/arm-benchmarks.4634/#post-39358

Thanks for looking into the issue(s)! Another quick observation: something is keeping UnixBench from recognizing the multiple cores in multi-core ARM chips. I've run the bench on a Banana Pi (2 cores) and an ODROID-C1 (4 cores), but both times UnixBench only ran the single-thread benchmarks. Looking at the logs:

  • SysInfo (lscpu) reports the correct # of CPUs (cores/threads/sockets) in its test.
  • HardInfo doesn't report the number of cores/CPUs.
  • UnixBench self-reports 0 (zero) CPUs and runs only the single-thread benchmark.
  • C-Ray greps /proc/cpuinfo to find out how many threads to use.
  • Stream greps /proc/cpuinfo.
  • OpenSSL calls nproc.
  • SysBench calls nproc - even though nproc returns the correct number of CPUs/cores, sysbench runs the multi-thread benchmark on one core.
  • REDIS doesn't appear to care about threads/cores.
  • NPB is non-functional, but greps /proc/cpuinfo for cpus/cores.
  • NAMD is non-functional, but greps /proc/cpuinfo for cpus/cores.
  • p7zip does not seem to care.

Maybe there needs to be a standardized way of determining the core/cpu count and passing that to the benchmarks, to ensure that all cores are used on the multi-core benchmarks.
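
One way to do that would be a single helper in linux-bench.sh that every test reads. A rough sketch (function and variable names are assumptions, as are the fallbacks):

detect_num_cpus() {
    local n
    n=$(nproc 2>/dev/null)                                 # preferred when coreutils provides it
    [ -z "$n" ] && n=$(grep -c ^processor /proc/cpuinfo)   # fallback C-Ray/Stream already use
    if [ -z "$n" ] || [ "$n" -lt 1 ]; then n=1; fi         # last resort: assume a single core
    echo "$n"
}

NUM_CPU_CORES=$(detect_num_cpus)
# Each benchmark would then use $NUM_CPU_CORES instead of its own detection.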

Anonymize HardInfo Output

Need to pull out:

  • Partitions
  • IP addresses
  • Usernames

Likely will still want to keep system configuration information.

NPB Error on ARM (Rpi)

Error in NPB module on Raspberry Pi.

make[2]: Entering directory `/root/NPB3.3.1/NPB3.3-OMP/BT'
gfortran -c -O3 -fopenmp -mcmodel=medium bt.f
f951: error: unrecognized command line option ‘-mcmodel=medium’
make[2]: *** [bt.o] Error 1
make[2]: Leaving directory `/root/NPB3.3.1/NPB3.3-OMP/BT'
make[1]: *** [../bin/bt.A.x] Error 2
make[1]: Leaving directory `/root/NPB3.3.1/NPB3.3-OMP/BT'
Running NPB tests
linux-bench.sh: line 470: bin/bt.A.x: No such file or directory
linux-bench.sh: line 471: bin/ft.A.x: No such file or directory
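
A possible workaround, assuming the Fortran flags end up in NPB's generated build config (exact file and variable may differ), would be to only pass -mcmodel=medium on x86_64:

NPB_FFLAGS="-O3 -fopenmp"
if [ "$(uname -m)" = "x86_64" ]; then
    NPB_FFLAGS="$NPB_FFLAGS -mcmodel=medium"    # the RPi's gfortran rejects this flag
fi
# ...then substitute $NPB_FFLAGS into NPB's build config before running make.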

Log parser/ web viewer next step

I sent Patriot the file (hopefully) on Jakarta airport WiFi.

The new version should have a function to upload the results to a test server at 199.195.128.138.

It should also give a reference ID that users can put into the box and see their results (try 123456 on the test server)

Hopefully he can add the new code to this branch. From there we can move over to the production server, update the script, and change to linux-bench.com IDs.

Will still need to figure out how to get these into the public results viewer/ do the UI for the application, but getting closer.
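
For reference, the upload itself could be as simple as a curl POST of the log file; the endpoint path, form-field name, and log path below are placeholders, not the real API:

LOG_FILE="./tmpbench/data/linux-bench.log"                       # placeholder path
curl -F "logfile=@${LOG_FILE}" "http://199.195.128.138/upload"   # hypothetical endpoint and field name
# The response would carry the reference ID that users type into the results box.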

New level for C-ray

Perhaps only for >8 threads so we don't kill the low-end boxes, but a harder level would let the heavy iron distinguish itself.
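
A rough sketch of gating the heavier scene; the 8-thread threshold, scene size, and ray count are placeholders, and it assumes the usual c-ray-mt invocation plus a $NUM_CPU_CORES variable:

# Only run the harder scene on machines with more than 8 threads (all values are assumptions)
if [ "$NUM_CPU_CORES" -gt 8 ]; then
    ./c-ray-mt -t $((NUM_CPU_CORES * 8)) -s 3840x2160 -r 16 -i sphfract -o output.ppm
fi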

Unable to upload results in docker

ref_link: 56271422388196
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (26) couldn't open file "./tmpbench//data/linux-bench12.12_be6d6e72b0de_2015_0127_183956.log"

Virtualization Check

I think a Docker version would be awesome. Can we add something in the script to identify if it is being run in Docker?

The other bit is we should have something that checks whether it is being run under some sort of virtualization. That way we can easily categorize results and know what type of processor environment we are working in.
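
A lightweight check could combine the Docker marker file with systemd-detect-virt where it exists; a sketch (the category names are up for discussion):

VIRT_ENV="bare-metal"
if [ -f /.dockerenv ]; then
    VIRT_ENV="docker"                            # Docker creates /.dockerenv inside containers
elif command -v systemd-detect-virt >/dev/null 2>&1; then
    v=$(systemd-detect-virt 2>/dev/null)         # prints e.g. kvm, vmware, xen, or "none"
    if [ -n "$v" ] && [ "$v" != "none" ]; then
        VIRT_ENV="$v"
    fi
fi
echo "Virtualization: $VIRT_ENV"                 # logged so results can be categorized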

Firefox Compile Benchmark

I think this is a good one to add and Ian @ Anandtech requested it also.

Basic idea:
Download the FF source to a RAM disk.
Compile FF.
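
Roughly like the sketch below; the tarball URL and tmpfs size are placeholders, and we would need to pin a source snapshot for repeatability:

# Build a pinned Firefox source snapshot on a tmpfs RAM disk (URL and size are placeholders)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
cd /mnt/ramdisk
wget -O firefox-src.tar.bz2 "http://example.com/firefox-pinned-source.tar.bz2"
tar xjf firefox-src.tar.bz2
cd firefox-*
time ./mach build          # wall-clock build time is the benchmark result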

Parser errors

Noticed a few errors in the parser:

  1. If the benchmark is run a second time on a system, it gets an error 500 upon upload. Need to reboot from a LiveCD to make this work.
  2. UnixBench is missing two values in the parsing.
  3. sysbench is parsing the wrong multi-threaded value. It should be parsing the value for time in seconds.
  4. c-ray 1.1 should be the label, not "Cray", in the parser output.

Enhancement:
Need a command-line way to upload a log file to the parser. Unsure if this already exists.

Will see if I can get someone to fix.

Markers between tests: time and progress (e.g. "Test # of 15")

Adding this one to the list since we captured it on the forums.

The idea is that if we need to profile tests, having timestamps between them is probably good.

Also, having text that says "Now running test 5 of 15" or something similar lets one know the progress.
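
Something like a tiny helper called before each test would cover both; the names and the total count below are assumptions:

TEST_TOTAL=15                            # assumption: total number of tests in a run
TEST_INDEX=0
mark_test() {
    TEST_INDEX=$((TEST_INDEX + 1))
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Now running test ${TEST_INDEX} of ${TEST_TOTAL}: $1"
}

mark_test "c-ray"                        # example call before a benchmark section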

Enhancement: Add -e flag to specify e-mail

Enhancement request to add a -e flag so a user can specify an e-mail address that the results get mailed to after a run.

Then we would likely need the parser to take this information and generate an e-mail with the ref ID or a direct link.

Script fails to run in Ubuntu LTS 14.04 Live CD

This was reported on the forums by user americanparks.

http://forums.servethehome.com/index.php?threads/sthbench-2014-updated-linux-benchmark-suite.3741/page-2#post-32997

The cause is that the LTS Live CD enables a very limited selection of apt repositories, so the apt-get installs fail. This could be addressed by additional checking and by adding the additional repos that are needed.

There will be many different variations; we need to decide how to handle these consistently as well.

(Two screenshots of the failure were attached, taken 2014-05-17 at 8:14 PM and 8:11 PM.)

NAMD Error on ARM (RPi)

NAMD Fails on Raspberry Pi

Using 2 threads
Running NAMD benchmark... (will take a while)
linux-bench.sh: line 504: ./namd2: cannot execute binary file

Name change

Silent revision time?
STHbench-2014-Dev

If you want to know the revision, that's what -R and -V are for...
Perhaps keeping a printout of revhist in that thread might be nice.

This would allow people to pull the dev version directly from GitHub and have main releases at the site.

Move downloads into the test section

I propose that we move the downloads into the respective tests. That way, if we remove tests, we don't have to track down the downloads, and you don't download what you don't need, especially once we get to choosing specific tests. It also makes it easier to see what version we are using, and it allows me to pre-download and build for an image as well.

STREAM for ARM architectures

At line 353 we need to change the compile settings for STREAM on ARM (the current x86 way is good):
if x86 run current: gcc stream.c -O3 -march=native -fopenmp -o stream-me
if ARM run: gcc stream.c -O3 -fopenmp -o stream-me
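
In the script this could key off uname -m instead of maintaining two hard-coded lines; a sketch (the arch patterns are an assumption):

STREAM_CFLAGS="-O3 -fopenmp"
case "$(uname -m)" in
    arm*|aarch64)
        ;;                                              # ARM: keep the generic flags only
    *)
        STREAM_CFLAGS="$STREAM_CFLAGS -march=native"    # x86: current behaviour
        ;;
esac
gcc stream.c $STREAM_CFLAGS -o stream-me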

Potential GPGPU bench

The Parboil benchmarks are a set of throughput computing applications...Each benchmark includes several implementations. Some implementations we provide as readable base implementations from which new optimization efforts can begin, and others as examples of the current state-of-the-art targeting specific CPU and GPU architectures.

http://impact.crhc.illinois.edu/Parboil/parboil.aspx

Logins to store results

I think it could be nice to have logins so you can associate results. This would help with the e-mail feature since you can save e-mail credentials. OAuth2 and Google accounts could be an option? Otherwise, local accounts.

Of course anonymous by default.

With accounts you could have notes per machine/run as well that are personal and don't clutter the full results.

Remove "update/upgrade" steps from script

We should not have to update the system to run benchmarks. When you yum install or apt-get install, you will install the latest version of whatever you need anyway, since it pulls from the live repos. If there are dependencies, it will update those.

It is not the job of the tester to force a system update if not needed or desired, that decision lies with the admin. Any vulnerabilities or such are their responsibility.

Pros: Speeds up runtime on systems that don't need an update. Removes potential to break a system that is stable and wants to avoid updates.

Screenfetch

More information gathering about the system the bench is run on...
I think it's cool... we may not want it... I will play with it tomorrow. I think my lab lost power... :/

Features:

  • Detects Desktop Environment, Kernel, Uptime, Window Manager, Window Manager Theme, GTK Theme, GTK Icon Theme, and GTK Font. It also attempts to detect running distribution, but it's a little sketchy at the moment and needs a lot of testing.
  • Supported distros: Arch Linux (yay!), Linux Mint, LMDE, Ubuntu, Debian, Fedora (untested), SuSe (untested), Mandriva/Mandrake (untested), Crunchbang (somewhat tested), Gentoo (untested), Red Hat Linux (untested), and Tiny Core.
  • Supported Desktop Environments: KDE, GNOME, XFCE, LXDE, and "Not Present".
  • Supported Window Managers: PekWM, FluxBox, OpenBox, BlackBox, Xfwm4, Metacity, Kwin (reported as not working), IceWM, FVWM, DWM, Awesome, WindowMaker, XMonad, Musca, i3, Ratpoison, ScrotWM, WMFS, wmii, subtle, E16, Sawfish, and Beryl (if you can find files or syntaxes to detect the theme of more window managers, let me know!)
  • Outputs ASCII logo with information, though you may specify to only output information and no ASCII art.
  • Flag to execute a scrot command to take screenshot named 'screenFetch-(YEAR)-(MONTH)-(DAY).png'
  • Ability to specify custom distribution name and custom screenshot command in execution on command-line.
  • Verbose output

Redis benchmark connection refused

Just tried on a clean 14.04 installation (and yes you can see my super secret test system user name):

2014-05-03 13:28:00 (751 KB/s) - ‘6379.conf’ saved [29602/29602]

cp: cannot stat ‘/home/patrick/6379.conf’: No such file or directory
Starting Redis server...
[5789] 03 May 13:28:00.493 # Fatal error, can't open config file '/etc/redis/6379.conf'
Running Redis test
Writing to socket: Connection refused
Writing to socket: Connection refused
Writing to socket: Connection refused

This was done using the Dev012.11 version.

STREAM fails on ARM

From RPi run.

Building STREAM
--2015-01-25 08:31:28-- http://www.cs.virginia.edu/stream/FTP/Code/stream.c
Resolving www.cs.virginia.edu (www.cs.virginia.edu)... 128.143.137.29
Connecting to www.cs.virginia.edu (www.cs.virginia.edu)|128.143.137.29|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19967 (19K) [text/plain]
Saving to: `stream.c'

 0K .......... .........                                  100%  133K=0.1s

2015-01-25 08:31:28 (133 KB/s) - `stream.c' saved [19967/19967]

cc1: error: bad value (native) for -march switch
0--1
Running STREAM test
/dev/fd/63: line 316: ./stream-me: No such file or directory

Add -hide flag as a command line option

The new Linux-Bench web application has a comparison tool. We also added the ability to hide results from public view. Need to add a flag so any pre-production hardware that gets benchmarked can be hidden before public release.

Additional compile benchmarks

Anand used to run a Chromium compile benchmark (in Windows), which was well received.
Obviously, with all of this there needs to be a constant build version for repeatability, so it might be worth hosting elsewhere...

VLC? VideoLAN - VideoLAN developers - VLC media player
OpenOffice?
Second Life viewer? Open Source Portal - Second Life Wiki

TODO:

  • Ensure we are always running against the same code base; may mean cloning a copy off.
  • Time the compile from the start of make to the end (see the sketch below).
  • Maximize the number of cores assigned to the test.
  • Store results.
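
For the timing part, the pattern would be the same for whichever code base we pick; a sketch with a placeholder source directory:

cd ./compile-bench-src                   # placeholder: pinned source tree checked out earlier
make clean >/dev/null 2>&1
START=$(date +%s)
make -j "$(nproc)" >/dev/null 2>&1       # use every core, discard build output
END=$(date +%s)
echo "Compile time: $((END - START)) seconds"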

<4GB Memory - 7-zip fails at compile.

I have yet to figure out why, but running 7-Zip on the t1.micro instance type causes the SSH session to terminate.

Over a dozen other instance types and no issues. I even made a mini script just for 7z and it fails.

Marking as a wontfix because I do not want to waste more time on this.

OpenSSL test broken

./apps/openssl speed rsa4096 -multi $NUM_CPU_CORES

$NUM_CPU_CORES is undefined.

This results in:
bad multi count
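
A minimal fix would be to define the variable right before the call, using nproc as the OpenSSL section already does for core counts (sketch):

NUM_CPU_CORES=$(nproc)
./apps/openssl speed rsa4096 -multi "$NUM_CPU_CORES"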

Move required files into a branch of tests

We should not be dependent on external links being available; GitHub will host files up to 100 MB, so anything that we can move locally will be better. This also shifts the burden of maintaining the files currently hosted at STH to a shared responsibility.

Proposed:

source/redis
source/p7zip
source/namd
... etc

If it's too large for GitHub, then link to the original source as the first choice and STH as a last resort.

We will also need to update the script to point to the new locations.

Uninstall in each routine or in one area?

We should discuss whether to leave the uninstall in each routine or make it global. I commented on it at
573cd02

If it is in each call, then you don't get errors when removing en masse, and it keeps everything together. This is similar to the install routine, which only installs when needed.
