Comments (7)
Hello, does your scoring program include a metadata
file? If so, what are its contents?
from codalab-competitions.
Yes it does. It contains python $program/score.py $output. I admit I don't fully understand where $output actually points, so I'm unsure whether there's additional information I need in the scoring program.
Hi,
I am not sure I understand your problem. Where does the error message come from?
You can draw inspiration from this example bundle if it helps:
The scoring program is here:
Note that the metadata file contains command: python $program/score.py $input $output. Here $input points to the reference data, so it is usually useful for scoring submissions (though that depends on your specific problem).
I admit I don't fully understand where $output is actually pointing to
The variables $program and $output point to the right folders while the submission is running: the scoring program and the path where results are saved, respectively.
You can learn more on this Wiki page:
This error comes from submitting an example submission; I am testing the scoring portion of the competition before its publication.
For this competition, no input data is needed: the reinforcement learning agent is simply run on a scenario specified by the custom Docker container it runs in. I know the agent is being evaluated and scored, but the results are written to a folder other than the normal output folder (/cage/CybORG/Evaluation/), simply because of how the validation script is written. I'm trying to open the results file in score.py and write it out to scores.txt, which is (hopefully) written to the output folder.
I know for sure that the submission is being evaluated; I am just unsure whether the exception is caused by an issue with score.py, my metadata file, or something else.
Sorry for any confusion.
To have the scores reflected on the leaderboard, you need to write them in the right format, in a scores.txt file in the right folder.
The format is:
score_1: 0.44
score_2: 0.77
The numbers here are just examples. score_1 and score_2 should be replaced by the keys of your leaderboard columns, as defined in the competition.yaml file.
For the path, it should be:
# Scoring program
import sys, os, os.path
input_dir = sys.argv[1]
output_dir = sys.argv[2]
submit_dir = os.path.join(input_dir, 'res')
truth_dir = os.path.join(input_dir, 'ref')
output_filename = os.path.join(output_dir, 'scores.txt')
This example bundle may be more relevant, as it is lighter and does not involve ground truth, just like your problem:
https://github.com/codalab/competition-examples/blob/master/codalab/Compute_pi/compute_pi_competition_bundle/program/evaluate.py
I hope this helps.
So I have updated the score.py script I am using for the scoring program to extract the score and write out the scores.txt file. It now looks like this:
import sys, os, os.path
import re

input_dir = sys.argv[1]
output_dir = sys.argv[2]
submit_dir = os.path.join(input_dir, 'res')
truth_dir = os.path.join(input_dir, 'ref')
output_filename = os.path.join(output_dir, 'scores.txt')

results_file_name = "/cage/CybORG/Evaluation/"
evaluation_files = os.listdir(results_file_name)
for f in evaluation_files:
    if "summary_text" in f:
        results_file_name = f"{results_file_name}{f}"
print(results_file_name)

with open(results_file_name, "r") as fin:
    results = fin.read()

with open(output_filename, "w+") as fout:
    # Regex to find the reward
    reward = re.findall(
        r"Average reward is: (-?[0-9]\d*\.\d+?) with a standard",
        results,
    )
    if reward:
        fout.write(f"avg_reward: {reward[0]}")
print(reward)
My metadata file for the scoring program just contains:
command: python $program/score.py $input $output
I am no longer getting the "Program command is not specified" error, but there is still no score being pulled to the leaderboard. The logic the Docker image follows evaluates the submission immediately, writes a summary file to a specified path (/cage/CybORG/Evaluation/), and also prints it to the console. The score.py I've written looks for the file in that folder and should write "avg_reward: some_reward" to $output/scores.txt. However, the submission fails at the scoring step, and the console contents are printed to the output log, so I am not sure score.py is actually being run. I have also verified that the leaderboard key does match up with what is being written to scores.txt.
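As a sanity check on the regex itself, it can be tested in isolation against a sample line. The exact wording of the summary line below is my assumption about what the CybORG summary file contains, not something I have verified:

```python
import re

# Hypothetical summary line; the exact wording in the real
# summary_text file is an assumption here.
sample = "Average reward is: -12.34 with a standard deviation of 1.50"

pattern = r"Average reward is: (-?[0-9]\d*\.\d+?) with a standard"
matches = re.findall(pattern, sample)
# matches -> ["-12.34"]
```

Note that this pattern only matches rewards containing a decimal point; a line like "Average reward is: -12 with a standard deviation of 1.50" would produce no match, so the if reward: branch would silently skip writing scores.txt.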
the submission fails at the scoring step
Your scoring program is probably crashing. Try checking the "scoring error logs".
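One generic Python pattern (not anything CodaLab-specific) for making the crash visible in those logs is to wrap the scoring logic so any exception prints its full traceback to stderr before exiting nonzero. A minimal sketch, where run_scoring and score_fn are hypothetical names:

```python
import sys
import traceback

def run_scoring(score_fn):
    """Run score_fn; on failure, print the traceback so the failing
    line shows up in the scoring error logs, and report failure."""
    try:
        score_fn()
        return 0
    except Exception:
        traceback.print_exc(file=sys.stderr)
        return 1

if __name__ == "__main__":
    # Replace this lambda with the body of your score.py.
    sys.exit(run_scoring(lambda: None))
```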