openwebwork / pg
Problem rendering engine for WeBWorK

Home Page: http://webwork.maa.org/wiki/Category:Authors

License: Other

Perl 81.65% Prolog 6.65% HTML 2.20% CSS 0.06% JavaScript 7.45% Raku 1.13% Dockerfile 0.02% SCSS 0.53% TeX 0.11% Mathematica 0.21%

pg's People

Contributors

alex-jordan, andrew-gardener, apizer, aubreyja, bldewolf, cubranic, d-torrance, dependabot[bot], djun-kim, dlglin, doombot-exe, dpvc, drdrew42, drgrice1, drjt, duffee, glennricster, goehle, heiderich, jasongrout, jwj61, kellyblack, mgage, mikeshulman, paultpearson, pschan-gh, pstaabp, selinger, somiaj, taniwallach


pg's Issues

Refreshing page resubmits answers

If a student submits an answer on a problem, then refreshes the page after it reloads with the results and confirms the form-resubmission prompt, the browser resubmits the form from its cache. The server then grades the problem again and records another attempt, so one of the student's attempts on the problem is wasted.

Of course, that is only a problem if the problem is set up with a limited number of attempts. However, things get worse if the course is set up to offer a new version of the problem after a certain number of attempts. In this case, if the student submits answers until the "Request New Version" button appears and then refreshes the page and confirms form resubmission, the server still grades the problem again, increments the number of attempts, and reloads the same problem with 2 more attempts left. A new seed and problem are not generated. Using this, the student can effectively work the same problem an unlimited number of times.
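WeBWorK's actual submission handling lives in webwork2 rather than PG, but the classic web-level mitigation for this is Post/Redirect/GET combined with a one-time token embedded in the form, so a cached re-POST is recognized and not regraded. A minimal, purely illustrative sketch of the token idea (Python; `SubmissionGuard` is a hypothetical helper, not WeBWorK code):

```python
import secrets

class SubmissionGuard:
    """Track one-time form tokens so a cached re-POST is not regraded."""

    def __init__(self):
        self._pending = set()

    def issue_token(self):
        # Embed this in the form as a hidden field when the page is rendered.
        token = secrets.token_hex(16)
        self._pending.add(token)
        return token

    def accept(self, token):
        # Grade only the first submission carrying a given token; a browser
        # refresh re-posts the same token and is rejected.
        if token in self._pending:
            self._pending.remove(token)
            return True
        return False
```

On the grading side, a rejected token would mean "re-render the stored results without recording an attempt", which also closes the unlimited-reworking loophole described above.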

Complex number simplification error

I noticed the following incorrect simplification.

DOCUMENT();
loadMacros(
    "PGstandard.pl",
    "MathObjects.pl",
    "PGML.pl",
);

TEXT(beginproblem());

Context("Complex");
$answer = Compute("sqrt((-2)^2-8)");
$answer2 = Compute("sqrt(-4)");

BEGIN_PGML
This gives [`[$answer]`] and [`[$answer2]`].
END_PGML

ENDDOCUMENT();

This shows that $answer is -2i instead of 2i. However, $answer2 is 2i. What is going on here?

A cheat to the exact_no_trig

A student told me that they were able to get around the no-decimals restriction: instead of entering .25, they entered 25*10^(-2). Since that expression contains no decimal point, it was accepted by the answer checker.
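The underlying issue is that a string-level "contains no decimal point" test does not constrain the form of the value. A hypothetical stricter pre-check (illustrative Python, not PG's checker; in PG the robust fix would be to restrict the Context's operators instead) would also refuse power and e-notation:

```python
import re

def looks_exact(ans: str) -> bool:
    """Hypothetical pre-check: reject decimal points AND the power or
    scientific notation that can smuggle one in (e.g. 25*10^(-2))."""
    if "." in ans:
        return False
    # Disallow ^, **, and e-notation so 25*10^(-2) or 25e-2 are refused.
    if re.search(r"\^|\*\*|\d[eE][-+]?\d", ans):
        return False
    return True
```

Under this filter, `1/4` is still accepted while `.25`, `25*10^(-2)`, and `25e-2` are all refused.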

from Nathan Wallach -- comments on getUniqueName, problemUUID -- refactoring will be needed eventually

The current change is certainly necessary so different images will have different names, just as the older version of macros/PGgraphmacros.pl made use of $imageNum in the line:

my $resource = $main::PG->{PG_alias}->make_resource_object("image$imageNum","png");

However, I suspect that some additional work may be needed here. The subroutine getUniqueName() being fixed here was recently added in #432. It is intended to give a unique file name, but the current code may not be sufficiently general for its purpose. Prior experience is that additional care is needed when creating such names, as not all use contexts have the same data available to them to differentiate problems.

A new item called problemUUID was added in webwork/PG 2.15 to make sure that the unique IDs also work well in the html2xml framework, where there is often no meaningful value for setNumber and
probNum as there would be in the regular setting inside a course.

See:

openwebwork/webwork2#976
#415

The code here has $prob_name depending on

studentLogin,
problemSeed,
setNumber
probNum

However, in the html2xml context the last two parameters will often fail to have meaningful values which serve the purpose of differentiating problems.

In contrast, in lib/PGalias.pg we have $unique_id_seed depend also on

psvn,
problemUUID, and
courseID.

(psvn is an alternative to problemSeed for seeding the PRNG in cases where several problems in a set/context need to share parameters. It can be set in the html2xml context, and is provided in the regular context from the set_user table.)

I suspect that it might be necessary to add problemUUID and probably also psvn to the definition of $prob_name in the setting under discussion, unless it is certain that these values will be taken into account elsewhere in the path/filename generation (probably based on the definition of $unique_id_stub which is made during the initialize() method in /lib/PGalias.pm ). I did not trace the code flow to see if that will happen.

If the path already depends on the courseID somehow, then that parameter may not be needed in the current setting; if not, it probably should also be added to the definition of $prob_name in getUniqueName().

Perl 5.18 Can't Render Problems

As far as I can tell, in Perl 5.18 Translator.pm is either unable to load pg macro files and keep them around, or unable to test whether they are actually loaded. This behavior occurs in both Fedora 20 and Ubuntu 14.04. When you try to render any problem you get the error:

Undefined subroutine &main::_PG_init called at Translator.pm line 581

If I change line 554 of Translator.pm from

my $macro_file_loaded = ref($init_subroutine) =~ /CODE/;

to

my $macro_file_loaded = 0;

then I don't get those errors and problems render correctly. Of course, this forces the macro files to be loaded every time, so it's probably not a permanent solution. Any insight?

Two solutions using Context("Inequalities")

Hello!

Consider the following problem:

DOCUMENT();

loadMacros(
  "PGstandard.pl",
  "MathObjects.pl",
  "contextInequalities.pl"
);

Context("Inequalities");

$ans = Compute("x=1 or x=2");

TEXT(beginproblem());
BEGIN_TEXT
\{ ans_rule(20) \}
END_TEXT

ANS($ans->cmp());

ENDDOCUMENT();

The solution "{1,2}" is marked correct, but "x=1 or x=2" (which we used to define the answer in the first place) is marked incorrect.

This is an issue for many of the problems in [1].

[1] https://github.com/openwebwork/webwork-open-problem-library/tree/master/OpenProblemLibrary/CollegeOfIdaho/setAlgebra_04_03_AbsoluteValue

Develop is behind master

At the moment develop is behind master by 39 commits.

master...develop

It seems to me that this should happen rarely. Some of the commits are empty "merge" commits but some of them are real changes -- such as the fix that prevents printing a problem source for students when an error occurs (Made on Dec 4) -- and as far as I can tell this is not in develop although it is in release/2.8.1.

On the other hand that missing commit doesn't show up in the comparison

release/2.8.1...develop

release/2.8.1 seems to be clean additions to master.
master...release/2.8.1
all changes made since Jan 29.

Some of this is probably because I don't properly understand github compare.
Merging release/2.8.1 into develop and into master (which we should do pretty soon
-- at the very least we should merge it into develop very soon )
will still leave a few commits currently in master that have not made it over to develop.

My suggestion is that one of us merge release/2.8.1 into develop right away (what could go wrong? :-) ) and then run the comparisons again to see whether develop is missing important commits that exist in master.

Comments?

-- Mike

MathQuill not compatible with area units

Not sure if this belongs here or webwork2.

As discovered by a student today, if MathQuill is on, one cannot correctly answer a question with area units like ft^2.

The problem is that MathQuill automatically adds parentheses around exponents when rendering.

So if you type 5ft^2, MathQuill turns this into 5(ft)^(2). If you type 5 ft^2, it becomes 5 ft^(2).

Either way, the parentheses prevent the input from being recognized as a number with units, making it impossible for a student to enter a correct answer in this context.
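One possible server-side workaround (purely a sketch, not existing PG code; `strip_unit_parens` is a hypothetical helper) would be to normalize away the parentheses MathQuill inserts around unit exponents before the string reaches the units parser:

```python
import re

def strip_unit_parens(ans: str) -> str:
    """Hypothetical normalization: rewrite (ft)^(2)-style unit powers
    back to ft^2 before handing the string to the units parser."""
    # (unit)^(n) -> unit^n, e.g. 5(ft)^(2) -> 5ft^2
    s = re.sub(r"\(([A-Za-z]+)\)\^\((-?\d+)\)", r"\1^\2", ans)
    # unit^(n) -> unit^n, e.g. 5 ft^(2) -> 5 ft^2
    s = re.sub(r"([A-Za-z]+)\^\((-?\d+)\)", r"\1^\2", s)
    return s
```

The alternative, fixing this on the client by teaching MathQuill not to parenthesize exponents after unit-like tokens, may be cleaner but touches webwork2 rather than PG.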

Graph Issue

I think there is an issue with PG again. I'm seeing some strange behavior. There seem to be two develop lines on the graph: a green one, which I think should be develop, and a brown one, which actually has the develop tag. I also see that release/2.8.1 is ahead of master by 25 commits, even though I am sure I pulled it into master at least a week ago.

keeping pg up to date

pg is looking fairly good. release/2.8.1 is not missing anything from the master branch.
The commits missing from develop will be restored when we merge release/2.8.1 back in to develop. (I think. :-) )

Add a method to set the mathQuillOpts flag to disabled for an input item

Certain types of input may not be handled well by MathQuill, so it would be helpful if an input item could declare that MathQuill should not handle it. In my opinion, this is a feature needed in the long term, but is not very urgent.

Adding an option for an input item to set the mathQuillOpts flag to disabled would allow an easy method for a specific input item to avoid being handled by MathQuill when it is enabled in a course.

See: openwebwork/webwork2#1071 (comment)

An alternate approach, should it be difficult to enable setting this flag, would be to set a special CSS class to trigger the bypass. However, as @drgrice1 explained in the reference above, that would be harder to do.

error in PGmatrixmacros.pl

The culprit is line 771 of PGmatrixmacros.pl

causes an error in:

$L = Matrix([[2,5],[2,7]]);
BEGIN_TEXT
\{ display_matrix($L) \}
END_TEXT

see bugzilla [bug4411]

Hints and Hardcopies

In dealing with bug 2707 (instructor's PDF cannot omit solutions), the issue was that always_show_solutions and always_show_hints were working as intended. Mike and I talked about it and decided that the rationale behind those settings (that they help problem writers easily check their work) doesn't really apply to hardcopies.

As a result I changed the behavior of always_show_solutions and always_show_hints so that they are not triggered in TeX mode. This brought up another issue though. The current behavior was that hints are never shown in TeX mode unless it was because of always_show_hints. So now it is impossible to show hints in TeX mode.

There are a couple of fixes for this:

  1. Lazy approach: remove the hints checkbox from the hardcopy page. It never did anything anyway, since hints were only displayed via "always_show_hints".
  2. Controversial approach: print the hint if it's available and the user has permission to see hints. The issue with this is that you have to ignore the number of attempts because it's a hardcopy. It doesn't make as much sense to show a hint "after so many attempts".

Thoughts?

draggableMacros.pl migration and javascript support

draggableProof.pl (currently living in OPL/macros/MC/) has a hardcoded reference to webwork2_course_files rather than webwork2_files, meaning that anyone who wants to use draggableProof has to store a local version of the javascript.

The draggableProof macro currently lives in the OPL, and should probably be migrated here, to PG; and along with that, the associated jQuery module should also be included in the webwork2/htdocs/ tree.

I don't know how worried we should be, but the suggested migration poses a potential issue:

  • suppose a site upgrades PG without upgrading ww2
  • draggableProof macro from PG now has priority over macro in the OPL
  • the PG macro points to the global webwork2/htdocs/ location
  • but unless ww2 is upgraded along with PG, the requested jQuery module is missing
  • situation cannot be remedied (as it is now) by adding the jQuery module to the course/html location

Looking for suggestions on handling this migration... I've tried testing for the existence of the javascript file from within the draggableProof.pl macro, but WWSafe prevents file tests such as if (-e $file_path) { ... }.

Disabling autocomplete in problem answer fields

Autocomplete is fairly frustrating when attempting to type in an answer (and being prompted previous answers). I do not believe there is any scenario where having other answers autocomplete is beneficial.
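For what it's worth, browsers already expose a standard opt-out: setting the autocomplete attribute on the input (or on the whole form) suppresses the previous-answers dropdown. A minimal sketch of the markup (the name and class shown are illustrative, not necessarily what PG actually emits):

```html
<!-- autocomplete="off" suppresses the browser's answer-history dropdown;
     the surrounding attributes are illustrative, not PG's actual output -->
<input type="text" name="AnSwEr0001" autocomplete="off">
```

Wherever the answer-rule HTML is generated (PG or webwork2), adding this attribute there would disable the behavior for all problems at once.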

Note: I am not entirely sure this is the right repo for this, let me know if it should be moved elsewhere.

basis_checker_columns() from pg/macros/MatrixCheckers.pl is not as dependable as would be desired

I have been coding some Linear Algebra problems, and started to make use of basis_checker_columns() from pg/macros/MatrixCheckers.pl. I have found that it is not as dependable as would be desired. It also gives minimal feedback and does not give partial credit.

In order to overcome these, I found that I had need for a reasonably dependable rank() function for MathObject matrices.

There is a rudimentary rank.pl at https://github.com/openwebwork/webwork-open-problem-library/blob/master/OpenProblemLibrary/NAU/setLinearAlgebra/rank.pl which seems to work reasonably well for matrices with integer or fraction entries. It was not intended to handle uglier numbers well. There is also the `$M->order_LR` function, but as explained by @dpvc in a forum post, that misbehaves due to doing exact comparisons to 0, when floating-point linear algebra algorithms need to have some tolerance to them. (Matlab's rank function claims to be an approximation to the rank for similar reasons, and takes a tolerance as an optional second argument.)
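The fuzzy-comparison idea is language-agnostic; here is a minimal sketch in Python (not the PG code) of a rank computed by elimination with a pivot tolerance, showing why exact comparison to 0 misbehaves — a pivot like 1e-12 is pure floating-point noise but would otherwise count:

```python
def rank_with_tol(rows, tol=1e-7):
    """Rank via Gaussian elimination, treating pivots below `tol` as zero."""
    m = [list(map(float, r)) for r in rows]
    nrows, ncols = len(m), len(m[0])
    rank, row = 0, 0
    for col in range(ncols):
        # Largest pivot candidate in this column (partial pivoting).
        pivot = max(range(row, nrows), key=lambda i: abs(m[i][col]), default=None)
        if pivot is None or abs(m[pivot][col]) <= tol:
            continue  # pivot indistinguishable from 0 at this tolerance
        m[row], m[pivot] = m[pivot], m[row]
        for i in range(row + 1, nrows):
            f = m[i][col] / m[row][col]
            for j in range(col, ncols):
                m[i][j] -= f * m[row][j]
        rank += 1
        row += 1
    return rank
```

With an exact zero test, the matrix [[1, 1], [1, 1 + 1e-12]] would be reported as rank 2; with a tolerance it is correctly treated as rank 1, which is the behavior the new rank.pl below aims for.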

So I coded a new rank.pl based on @dpvc's suggestions, with additional changes so it can handle non-square matrices. As it uses fuzzy comparisons it is not 100% accurate, but it seems pretty reasonable in my testing so far.

Using the rank() function, I could redesign basis_checker_columns() to be more reliable (in my testing).

Below are

  • rank.pl,
  • CustomBasisChecker1.pl
    • (with both the original basis_checker_columns() and my proposed basis_checker_columns_tani() in one file, so they can be easily compared by just changing the value of checker), and
  • a test problem with some discussion of what problems I encountered with the original basis_checker_columns() and some of the limitations of the proposed alternative: test_CustomBasisChecker1.pg.

Some additional testing of the rank function from rank.pl would be helpful, as well as a review of basis_checker_columns_tani(). The checker could either replace the old one or be offered alongside it.

If and when the proposed alternative is considered good enough, similar changes can be made to the other checkers in pg/macros/MatrixCheckers.pl and provided either as replacements or alternatives to the existing ones.


rank.pl follows:

# Coded by Nathan Wallach, May 2019
# based on Davide Cervone's recommendation from
# http://webwork.maa.org/moodle/mod/forum/discuss.php?d=3194
# as the current $Matrix->order_LR does not use fuzzy comparisons
# so does not give good results.

sub rank { # It assumes that a MathObject matrix object is sent
  my $MM1 = shift;

  if ( $MM1->class ne "Matrix") {
    return -1; # Not a matrix
  }

  # For the case it is square
  my $MM = $MM1;

  my ($Rrows,$Rcols) = $MM1->dimensions;

  if ( ( $Rrows <= 0 ) || ( $Rcols <= 0 ) ) {
    return -1; # Not a matrix
  }

  if ( $Rrows < $Rcols ) {
    # pad to make it square
    my @rows = ();
    my $i = 1;
    for ( $i = 1 ; $i <= $Rrows ; $i++ ) {
      push( @rows, $MM1->row($i) );
    }
    while ( $i <= $Rcols ) {
      # pad with zero rows
      push( @rows, ( 0 * $MM1->row(1) ) );
      $i++;
    }
    $MM = Matrix( @rows );
  } elsif ( $Rrows > $Rcols ) {
    return( rank( $MM1->transpose ) );
  }

  # Davide's approach from http://webwork.maa.org/moodle/mod/forum/discuss.php?d=3194
  my $tempR = $MM->R;
  ($Rrows,$Rcols) = $tempR->dimensions;
  my $rank;

  for ( $rank = $Rrows ; $rank >= 1; $rank-- ) {
        last if ( $tempR->element($rank,$rank) != Real("0") );
  }
  return( $rank );
}

1;

CustomBasisChecker1.pl follows:

# Proposed redesign of basis_checker_columns() from pg/macros/MatrixCheckers.pl
# to overcome problems with the behavior of that function.

# The original code of pg/macros/MatrixCheckers.pl is 
# by Paul Pearson, Hope College, Department of Mathematics
# and was the original basis and the inspiration for much
# of that is in the proposed new checker.

# For now the proposed new checker is called basis_checker_columns_tani()
# and the local version of basis_checker_columns() has a bit of debugging
# code added.

sub _MatrixCheckers_init {}; # don't reload this file

loadMacros("MathObjects.pl");

# ============================================================

sub concatenate_columns_into_matrix {

  my @c = @_;
  my @temp = ();
  for my $column (@c) {
    push(@temp,Matrix($column)->transpose->row(1));
  }
  return Matrix(\@temp)->transpose;

}

# The original code more directly based on Paul Pearson's original code 
# was having some difficulty handling certain incorrect answers.

# Space where I had problems had correct basis as (1,0,3,4) and (0,1,-1,-1)
# The answer (1,2,1,2) and (sqrt(253),2sqrt(253),sqrt(253),2sqrt(253)+t).
#    The second of these vectors has a shift "t" in the last coordinate
#    from being sqrt(253) times the first vector.
# For t=0 and t=0.00000001 the original code saw the vectors as dependent.
# For t=0.1,0.01 the original code marked the answer as incorrect (it could 
#    tell that the second vector was not in the space.)
# However, for t in { 0.001, 0.0001, 0.00001, 0.000001, 0.0000001 }
#    the answer was being accepted as correct when it is NOT.
# The issues seem to be a result of too much "tolerance" in some
#    calculations.

# The new version does not accept the incorrect answers accepted by the prior
# version of the code. For t in { 0.1, 0.01, ..., 0.0000000001 } the second
# vector is recognized as not in the space. However, for very small values of
# t, such as t=0.00000000001, the new code sees the 2 vectors in the answer as
# dependent (as if t were really 0).

sub basis_checker_columns_tani {

      my ( $correct, $student, $self, $answerHash ) = @_;
      my @c = @{$correct};
      my @s = @{$student};

      my $dimSpace = scalar( @c );
      my $numStudV = scalar( @s );

      # Most of the answer checking is done on integers 
      # or on decimals like 0.24381729, so we will set the
      # tolerance accordingly in a local context.

      # the tolerance was set to be much smaller than in the old code

      my $context = Context()->copy;
      $context->flags->set(
        tolerance => 0.0000001,
        tolType => "absolute",
      );

      return 0 if ( $numStudV < $dimSpace ); # count the number of vector inputs

      my $C = concatenate_columns_into_matrix(@c);
      my $S = concatenate_columns_into_matrix(@s);

      # Put $C and $S into the local context so that
      # all of the computations that follow will also be in
      # the local context.
      $C = Matrix($context,$C);
      $S = Matrix($context,$S);

      # $self->{ans}[0]->{ans_message} .= "C = $C$BR";
      # $self->{ans}[0]->{ans_message} .= "S = $S$BR";

      my $rankC = rank($C);
      my $rankS = rank($S);

      #  Check that the professor's vectors are, in fact, linearly independent.
      # The original approach based on
      #     Theorem: A^T A is invertible if and only if A has linearly independent columns.
      # was more likely to ignore small shifts, as the determinant would end up very small
      # We now use the improved rank.pl to test this.

      warn "Correct answer is a linearly dependent set." if ( $rankC < $dimSpace );

      my @notInSpaceMessage = (  );
      my @wasInSpace = ();
      my $notInSpace = 0;

      # Check each student vector to see if it is in the required space.
      my $j;
      for( $j = 0 ; $j < $numStudV ; $j++ ) {
        my @c1 = ( @{$correct} ); 
        push( @c1, $s[$j] );

        my $C1 = concatenate_columns_into_matrix(@c1);

        if ( rank( $C1 ) > $dimSpace ) {
          my $tmp1 = $j + 1;
	  $notInSpace++;
          push( @notInSpaceMessage, "Vector number $tmp1 is not in the space.$BR" );
        } elsif ( ! $s[$j]->isZero ) {
          push( @wasInSpace, $s[$j] );
        }
      }

      # How many independent were in the space
      my $goodCount = 0;
      my $secondaryDependenceTest1 = 0;
      if ( @wasInSpace ) { 
        my $C1 = concatenate_columns_into_matrix( @wasInSpace );
        $goodCount = rank( $C1 );

        # Add a second test for a linear dependence of this part of the students answers, 
        # in case the rank code misbehaves. This is a revised version of the test originally used.

        # It was needed for the case t=0.00000000002 in the test example discussed above
        my $dd = (($C1->transpose) * $C1)->det;
        if ( ( $dd == Real(0) ) && ($goodCount == scalar( @wasInSpace ) ) ) {
          $secondaryDependenceTest1 = 1;
          # warn "secondaryDependenceTest1 turned on";
        }
      }

      if ( ( $goodCount == $dimSpace ) && ( $goodCount == $numStudV ) && ($secondaryDependenceTest1 == 0 ) ) {
        # There are the correct number of independent vectors from the required space, and no others
        return 1;
      }

      my $depWarn = "";
      if ( $secondaryDependenceTest1 == 1 ) {
        # The value of $goodCount was WRONG. Decrease it by one, and add a warning if the result is still > 1
        if ( ( --$goodCount ) > 1 ) {
          $depWarn = "The software may have an incorrect count of the number of independent vectors from the space in your answer.$BR";
        }
      }

      if ( $goodCount == 1 ) {
        unshift( @notInSpaceMessage, "Your answer contains only one independent vector from the space.$BR$depWarn" );
      } elsif ( $goodCount > 1 ) {
        unshift( @notInSpaceMessage, "Your answer contains $goodCount independent vectors from the space.$BR$depWarn" );
      }

      # Add a second test for a linear dependence of ALL of the students answers,
      # in case the rank code misbehaves. This is a revised version of the test originally used.
      my $secondaryDependenceTest2 = 0;

      my $dd = (($S->transpose) * $S)->det;
      # To debug
      # warn "The determinant tested against zero to check linearly dependence had the value $dd";

      if ( $dd == Real(0) ) {
        $secondaryDependenceTest2 = 1;
        # warn "secondaryDependenceTest2 turned on";
      }

      #  Check that the student's vectors are linearly independent
      if ( ( $rankS < $numStudV ) || ( ( $secondaryDependenceTest1 + $secondaryDependenceTest2 ) > 0 ) ) {
	# There is a linear dependence among the students answers.
	# Sometimes the detection of linear dependence conflicts with that of a vector not in the space.
        # So give the message only if nothing was found to be outside the space.
        if ( $notInSpace == 0 ) {
          unshift( @notInSpaceMessage, "Your vectors are linearly dependent.$BR");
        }
      } 
      # else {
      #    The students vectors are linearly independent.
      # }

      $self->{ans}[0]->{ans_message} = join("", @notInSpaceMessage );

      my $score = ( $goodCount / $dimSpace ) - ( ( $notInSpace / $dimSpace / 4 ) ) ;
      $score = 0 if ( $score < 0 ); # in case penalties exceed credit for good vectors
      # $self->{ans}[0]->{ans_message} .= "$BR$BR score = $score";

      # Scale due to MultiAnswer issue with fractional scores
      $score *= ( $dimSpace )/( -1 + $dimSpace );

      return $score;
}

##########################################

sub basis_checker_columns {

      my ( $correct, $student, $answerHash ) = @_;
      my @c = @{$correct};
      my @s = @{$student};

      # Most of the answer checking is done on integers
      # or on decimals like 0.24381729, so we will set the
      # tolerance accordingly in a local context.
      my $context = Context()->copy;
      $context->flags->set(
        tolerance => 0.001,
        tolType => "absolute",
      );

      return 0 if scalar(@s) < scalar(@c);  # count the number of vector inputs

      my $C = concatenate_columns_into_matrix(@c);
      my $S = concatenate_columns_into_matrix(@s);

      # Put $C and $S into the local context so that
      # all of the computations that follow will also be in
      # the local context.
      $C = Matrix($context,$C);
      $S = Matrix($context,$S);

      #  Theorem: A^T A is invertible if and only if A has linearly independent columns.

      #  Check that the professor's vectors are, in fact, linearly independent.
      $CTC = ($C->transpose) * $C;
      warn "Correct answer is a linearly dependent set." if ($CTC->det == 0);

      #  Check that the student's vectors are linearly independent
      if ( (($S->transpose) * $S)->det == 0) {
        Value->Error("Your vectors are linearly dependent");
        return 0;
      }

      # Next 2 lines added for local testing... 
      my $dd = (($S->transpose) * $S)->det;
      # warn "The determinant tested against zero to check linearly dependence had the value $dd";

      # S = student, C = correct, X = change of basis matrix
      # Solve S = CX for X using (C^T C)^{-1} C^T S = X.
      $X = ($CTC->inverse) * (($C->transpose) * $S);
      return $S == $C * $X;

}


#############################################


1;

test_CustomBasisChecker1.pg follows:

# Test problem to test/debug issues I had with basis_checker_columns()
# from pg/macros/MatrixCheckers.pl. 

DOCUMENT();        # This should be the first executable line in the problem.

loadMacros(
  "PGstandard.pl",
  "MathObjects.pl",
  "parserMultiAnswer.pl",
  #"MatrixCheckers.pl",
  "rank.pl",
  "CustomBasisChecker1.pl",	# includes a slightly modified version of basis_checker_columns 
				# and a redesigned basis_checker_columns_tani which depends on
				# the new rank.pl macro
);

TEXT(beginproblem());

$showPartialCorrectAnswers = 1;
$showPartialCredit = 1;

Context('Matrix');

$vec1 = Matrix([[ 1,0,3,4 ]])->transpose;
$vec2 = Matrix([[ 0,1,-1,-1 ]])->transpose;

$vec3 = ($vec1 + 2*$vec2)->transpose;

$shifta = Matrix([[ 0,0,0,0.00000000001 ]]);
$vec4a = ( sqrt(253) * $vec3 ) + $shifta ;

$r34a = rank( Matrix( [
  $vec3->row(1),
  $vec4a->row(1)
]));

$shiftb = Matrix([[ 0,0,0,0.0000000001 ]]);
$vec4b = ( sqrt(253) * $vec3 ) + $shiftb;

$r34b = rank( Matrix( [
  $vec3->row(1),
  $vec4b->row(1)
]));

$multians1 = MultiAnswer($vec1, $vec2)->with(
  singleResult => 1,
  separator => ',',
  tex_separator => ',',
  allowBlankAnswers=>0,
#  checker => ~~&basis_checker_columns_tani,
  checker => ~~&basis_checker_columns, # Had issues fixed by the _tani version
);

Context()->texStrings;
BEGIN_TEXT

Try this problem with both of the options for the checker function.
$PAR

Test answers where the first element is \( $vec3 \) and the second one is \( \sqrt{253} $vec3 \)
plus a small change to one coordinate. I did testing with changes to the last coordinate.
$PAR

$HR

Find a basis \( \mathcal{B} \) of \( \text{Span}\left\lbrace  $vec1, $vec2 \right\rbrace \).
$BR


$BCENTER
\( \mathcal{B} = \left\lbrace \;\; \rule{0pt}{50pt} \right. \;\; b_1 = \)
\{ $multians1->ans_array(1) \}
\( , \;\; b_2 = \)
\{ $multians1->ans_array(25) \}
\( \left. \;\; \rule{0pt}{50pt} \;\; \right\rbrace \)
$ECENTER
$PAR
$HR

When the original basis_checker_columns grader is being used:
$BR
When \( 0.0000001 \) or even \( 0.0022 \) is added to the last coordinate, the original basis_checker_columns
will ${BBOLD}incorrectly${EBOLD} treat the answer as ${BBOLD}correct${EBOLD}.
$PAR
When \( 0.0000002 \) is added to the last coordinate, the vectors are reported as being linearly dependent.
$PAR

$HR

When the proposed replacement basis_checker_columns_tani grader is being used:
$BR
When a shift of \( 0.0000000001 \) or more is added to the last coordinate, the new code
will detect that the second vector is not in the space. It then gives partial credit for the
good vector less a penalty for the bad one.$BR

When a shift of \( 0.00000000001 \) is added to the last coordinate, the new code
will ignore the small change and treat the second vector as being a multiple of the
first vector and report there being a linear dependence among the vectors of the answer.
$BR

The code will not report a linear dependence among the vectors of the student's answer
when it has detected a vector not being in the space. If both messages had been permitted,
sometimes both would have been given in a contradictory way. An addition of \( 0.0000000001 \)
to the last coordinate was triggering such behavior.

$HR

The new code depends on a new rank function, and that function uses fuzzy comparisons of the
diagonal entries in the R from the LR decomposition; this leads it to behave reasonably
well but not perfectly.
$BR
The rank of the matrix whose rows are \( $vec3 \) and
\( (\sqrt{253} $vec3 ) + $shiftb \) will be computed to be \( $r34b\)  (comes out \(2\) on my PC).
$BR
The rank of the matrix whose rows are \( $vec3 \) and
\( (\sqrt{253} $vec3 ) + $shifta \) will be computed to be \( $r34a\)  (comes out \(1\) on my PC).


END_TEXT
Context()->normalStrings;

ANS( $multians1->cmp() );

ENDDOCUMENT();        # This should be the last executable line in the problem.

vec_dot not working

In PGmatrixmacros.pl it seems to me that an `=cut` is missing, and so vec_dot doesn't work (or at least it's not working for some reason). Yes, I know it's deprecated, so probably not a huge deal, but existing problems do use it.

GatewayQuiz doesn't handle PopUp questions

When I try to use a popUp function in a GatewayQuiz I get a lot of errors from PG and it won't register the correct answer. My test problem and the errors follow

loadMacros("parserPopUp.pl",);
Context("Numeric");
$P = PopUp(["?", "one", "two", "three"], "three");
BEGIN_TEXT
\{$P->ans_array\}
END_TEXT
ANS($P->cmp);
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1203
    Use of uninitialized value in scalar assignment at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1452
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1207
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1208
    Use of uninitialized value $ans_name in concatenation (.) or string at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1225
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1245
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1203
    Use of uninitialized value in scalar assignment at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1452
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1207
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1208
    Use of uninitialized value $ans_name in concatenation (.) or string at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1225
    Use of uninitialized value $ans_name in hash element at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1245
    Use of uninitialized value $_ in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 1788.
    Use of uninitialized value $_ in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 1788.
    Use of uninitialized value $_ in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 323.
    Use of uninitialized value $_ in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 323.
    Use of uninitialized value $name in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 368.
    Use of uninitialized value $name in hash element at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 368.
    Use of uninitialized value in join or string at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 484.
    Use of uninitialized value in join or string at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/GatewayQuiz.pm line 484.

Changes to parserMultiAnswer breaks problems

Previously, @dpvc helped me write this modification to parserMultiAnswer.pl in order to pull a GeoGebra answerBox into a multiAnswer object. The update to the multi-answer library has broken all my problems written with this code.

The following code used to initialize a MultiAnswer object with the first answer name matching that of the GeoGebra applet (answerBox); after that, it would create answer names according to the usual approach.

Can the old functionality be replicated under the modified version of parserMultiAnswer.pl?

#######################
### ggbApplet MultiAns
#######################
#
#  Make a subclass of MultiAnswer
#
package myMultiAnswer;
our @ISA = ('MultiAnswer');

sub new {
  my $self = shift;
  my $ma = $self->SUPER::new(@_);
  $ma->{part} = 1;
  $ma->{answerName} = 'answerBox';
  $ma->{id} = $MultiAnswer::answerPrefix.$ma->{answerName};
  $ma->{singleResult} = 1;
  $ma->{namedRules}  = 1;
  return $ma;
}

sub ANS_NAME {
  my $self = shift;
  my $i = shift;
  return ($i == 0 ? $self->{answerName} : $self->{id}.'_'.$i);
}

package main;

For one, it seems that answerName is no longer an appropriate hash key. Instead, the key is now answerNames, which maps to an array?

Also, it seems that ANS_NAME has changed as well, and this replacement will no longer suffice.

The PG error message I'm seeing is:
---- main (eval 3679) 489 ------ Error in NAMED_ANSWER_RULE_EXTENSION: every call to this subroutine needs to have $options{answer_group_name} defined. For a single answer blank this is usually the same as the answer blank name. Answer blank name: MuLtIaNsWeR_answerBox_1

For testing purposes, an example problem that uses this method can be found here:

Contrib/CUNY/CityTech/Calculus/setExplore_-_Intermediate_Value_Theorem/geogebra-IVT.pg

Possible issue in how num_points and test_at are handled in lib/Value/Formula.pm

According to the documentation at https://webwork.maa.org/wiki/Context_flags the value of num_points should be

The number of random test points to use during function comparisons. This is in addition to any test_at points, but if test_points are given, no random points are given.

The generation of the test points seems to be in lib/Value/Formula.pm. In particular, the code in sub compare uses:

  my $points  = $l->{test_points} || $l->createRandomPoints(undef,$l->{test_at});

and sub createRandomPoints of lib/Value/Formula.pm seems to add the points from test_at into the array of points

    push(@{$points},@{$include});

before it runs a loop to add the random points, whose primary end condition is scalar(@{$points}) < $num_points+$num_undef.

As such, it seems that the points provided by test_at reduce the number of random test points used, unlike what was apparently intended.

It seems to me that to obtain the intended behavior, the value of $num_points should be increased by scalar(@{$include}) before starting the while loop.
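The interaction described above can be sketched in Python (a hypothetical reduction of the Perl loop in createRandomPoints, with stand-in point values rather than real random points):

```python
# Hypothetical Python reduction of the loop in createRandomPoints: the
# test_at points are pushed onto the list first, and the loop's end
# condition counts them toward num_points, so they displace random points.
def create_points(num_points, test_at):
    points = list(test_at)            # push(@{$points}, @{$include});
    seed = 0.5
    while len(points) < num_points:   # scalar(@{$points}) < $num_points
        points.append(seed)           # stand-in for one random test point
        seed += 1.0
    return points

# With num_points = 5 and two test_at points, only 3 random points are added:
current = create_points(5, [1.0, 2.0])
# The proposed fix bumps num_points by len(test_at), restoring 5 random points:
fixed = create_points(5 + 2, [1.0, 2.0])
```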

@dpvc - Can you take a look?

Context("Fraction") does not allow approximate answers.

According to the documentation in contextFraction.pl, students should be able
to submit decimal answers when using Context("Fraction"). However, in this
context decimal answers are required to be exactly equal, which makes it
impossible to enter fractions with repeating decimal expansions. I did find a
problem in the wild that uses contextFraction.pl in this way. In
Library/Rochester/setIntegrals14Substitution/csuf_in_14_1.pg with seed 1247,
the answer is -20/7, which has a repeating decimal expansion, and the problem
does not ask for a fraction answer. I don't know what side effects there would
be (if any) of adding a tolerance to the fraction comparator.

Test code:

DOCUMENT();      

loadMacros(
    'PGstandard.pl',
    'PGML.pl',
    'contextFraction.pl',
);

TEXT(beginproblem());

Context('Fraction');

$A = Fraction(1,3)->reduce;
$B = Fraction(1,2)->reduce;

BEGIN_PGML
Enter .3333333333 :
[`` [$A] = ``] [______________________]{$A} 

Enter .5 :
[`` [$B] = ``] [______________________]{$B} 
END_PGML

ENDDOCUMENT();
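For reference, the kind of tolerant comparison the issue asks about might look like this (a Python sketch, not the actual contextFraction.pl comparator):

```python
from fractions import Fraction

# Sketch of a tolerance-based fraction comparator (hypothetical; the real
# contextFraction.pl checker currently requires exact equality for decimals).
def fraction_matches(correct, student, tol=1e-6):
    return abs(float(correct) - float(student)) < tol

# -20/7 has a repeating decimal expansion, so a student can never type it
# exactly; a tolerance accepts a sufficiently precise truncation.
ok = fraction_matches(Fraction(-20, 7), -2.857142857)
too_coarse = fraction_matches(Fraction(-20, 7), -2.857)
```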

weightedGrader.pl and parserMultiAnswer.pl problems when singleResult => 0

The current weightedGrader.pl (currently in the OPL, but I have suggested it be added to pg - see #411) only partially works with parserMultiAnswer.pl based on my testing.

When a MultiAnswer object uses singleResult => 1 things seem to work.

However, when a MultiAnswer object uses singleResult => 0 things do not seem to work properly.

Sample problems demonstrating this will be added here in the future.

Fixing this would be nice. Until then, it would be nice if a comment about the issue were added to the relevant page of Wiki documentation.

bug with MathObjects reduction

We've hit a bug with a reduction rule in MathObjects. Something like -5/(-2x) is incorrectly reducing to 5/2*x, i.e. (5/2)x, when the correct reduction is 5/(2x). Here is a MWE.

DOCUMENT();

loadMacros(
  "PGstandard.pl",
  "MathObjects.pl",
  "PGML.pl",
);

TEXT(beginproblem());

$answer = Formula("-5/(-2 x)");
$reduced = $answer->reduce;

BEGIN_PGML

The answer is [$answer], which reduces to [$reduced].

END_PGML

ENDDOCUMENT();

(I'm having trouble logging in to the forum while the transition is underway, or else I might have posted this there.)

Accessibility: proposal to consider changing the HTML output of $BBOLD be <strong>

As far as I understand, WCAG 2.0 accessibility compliance recommends/requires using the HTML <strong> tag to generate "strong emphasis" by a screen reader when necessary, while the older <b> tag for bold does not provide any screen-reader hint about emphasis.

Although it is not 100% true, much author use of bold will be to emphasize visually with an implicit "semantic" meaning.

As such, I would like to recommend considering whether the existing $BBOLD (and the corresponding $EBOLD) be changed to use <strong> instead of <b> by default.

Alternately, we could add $BSTRONG and $ESTRONG and recommend that problem authors make changes on their own.

If HTML strong does become the default interpretation of $BBOLD we could add a $BJUSTBOLD or the like for when plain bold (<B>) is the "right thing" in HTML output.

To some extent, the same question may apply to $BITALIC giving HTML <em> instead of <i>, but I suspect that much "mathematical use" of italics is not for emphasis but to set off special terms, so that change might be less clear than that for bold.

@Alex-Jordan - you seem to be the local accessibility expert or have access to accessibility experts - what do you/they say?



AskSage is partially broken

Hi,

AskSage seems to have stopped working — at least when it is using the fancier WEBWORK user variable return method.

I think the code needs to be revisited; in particular, I suspect there are changes on the Sage end that will make the
communication over JSON simpler. I'm hoping someone will have some suggestions.

All of the relevant WeBWorK code is at https://github.com/openwebwork/pg/blob/master/lib/WeBWorK/PG/IO.pm
the function query_sage_server https://github.com/openwebwork/pg/blob/master/lib/WeBWorK/PG/IO.pm#L223
and most importantly the function AskSage https://github.com/openwebwork/pg/blob/master/lib/WeBWorK/PG/IO.pm#L259

The formatting being done in lines 273–296 returns output that looks like this:

IO::askSage: We have some kind of value |{"user_variables": {"WEBWORK": {"status": "ok", "data": {"text/plain": "{'diff': [4_u_v^2_cos(2_u^2_v^2) + 2_v^2_cos(2_u_v^2) 4_u^2_v_cos(2_u^2_v^2) + 4_u_v_cos(2_u_v^2)]\n [ 4/u 4/v]\n [ 14_u_v^2 14_u^2_v],\n 'function': (sin(2_u^2_v^2) + sin(2_u_v^2), log(9_u^2_v^2) + log(6_u^2_v^2), 7_u^2_v^2)}"}, "metadata": {}}}, "success": true, "execute_reply": {"status": "ok", "execution_count": 1, "payload": [], "user_expressions": {}, "user_variables": {"WEBWORK": {"status": "ok", "data": {"text/plain": "{'diff': [4_u_v^2_cos(2_u^2_v^2) + 2_v^2_cos(2_u_v^2) 4_u^2_v_cos(2_u^2_v^2) + 4_u_v_cos(2_u_v^2)]\n [ 4/u 4/v]\n [ 14_u_v^2 14_u^2_v],\n 'function': (sin(2_u^2_v^2) + sin(2_u_v^2), log(9_u^2_v^2) + log(6_u^2_v^2), 7_u^2_v^2)}"}, "metadata": {}}}}}| returned from sage at [PG]/lib/WeBWorK/PG/IO.pm line 306
This is relatively complicated and the only part we are using is:
my $sage_WEBWORK_data = $decoded->{execute_reply}{user_variables}{WEBWORK}{data}{'application/json'};
sage_WEBWORK_data: {'diff': [4*u*v^2*cos(2*u^2*v^2) + 2*v^2*cos(2*u*v^2) 4*u^2*v*cos(2*u^2*v^2) + 4*u*v*cos(2*u*v^2)] [ 4/u 4/v] [ 14*u*v^2 14*u^2*v], 'function': (sin(2*u^2*v^2) + sin(2*u*v^2), log(9*u^2*v^2) + log(6*u^2*v^2), 7*u^2*v^2)} at [PG]/lib/WeBWorK/PG/IO.pm line 333
There is an error in this output: the value part of the key/value pairs in JSON should be
enclosed in quotes. Trying to decode $sage_WEBWORK_data doesn't work because decode_json
doesn't recognize the value parts properly. Changing the value of sage_WEBWORK_data to
the following (with properly quoted string values) allows it to decode:

sage_WEBWORK_data: {"function":"(sin(2*u^2*v^2) + sin(2*u*v^2), log(9*u^2*v^2) + log(6*u^2*v^2), 7*u^2*v^2)","diff":"[4*u*v^2*cos(2*u^2*v^2) + 2*v^2*cos(2*u*v^2) 4*u^2*v*cos(2*u^2*v^2) + 4*u*v*cos(2*u*v^2)]\n [ 4/u 4/v]\n [ 14*u*v^2 14*u^2*v]"}

My suspicion is that the formatting done in lines 273–296 is being done with old-fashioned code that doesn't measure up to today's stricter standards for JSON. I also suspect that there are now sage tools
that make that formatting code much simpler.

Can someone with more sage and python experience help me out with this?
A few pointers to a more recent way to encode application/json output in python/sage might
be sufficient.
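For what it's worth, the modern Python way to produce properly quoted application/json output is simply json.dumps on a dict of strings. This is a sketch under the assumption that the Sage-side code builds the WEBWORK variable as a dict; the keys and expressions are illustrative only:

```python
import json

# Sketch: serialize the WEBWORK results dict on the Sage side with json.dumps,
# so every value arrives as a properly quoted JSON string that decode_json on
# the Perl side can parse.
webwork = {
    "function": "(sin(2*u^2*v^2) + sin(2*u*v^2), 7*u^2*v^2)",
    "diff": "[4*u*v^2*cos(2*u^2*v^2) + 2*v^2*cos(2*u*v^2)]",
}
payload = json.dumps(webwork)

# Round-trips cleanly, unlike the hand-formatted output quoted above:
decoded = json.loads(payload)
```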

A functioning (actually non-functioning) problem you can test with is at

https://hosted2.webwork.rochester.edu/webwork2/2014_07_UR_demo/askSage/4/?effectiveUser=profa&key=av8WVLauceIMqcnOeBQEo7AOftk8p2dW&user=profa

you can use the usual profa/profa login. The other problems in that homework set work.

I also have a more extensive testing homework set for AskSage if you need it.

Take care,

Mike

Round() function

PGauxiliaryFunctions.pl has the function Round(x,n). It doesn't always work as you'd want it to. This is understandable, given rounding issues with floating point.

When floating point rounding errors accumulate in just the right way, they can move something like 0.5 to 0.4999...9 or similar. I am seeing that as an issue with:

$x = 3.3625/0.025;
$rounded = Round($x,0);
TEXT("When you round $x you get $rounded.");

with seed 4098 (if that even matters) which outputs

When you round 134.5 you get 134

In defaults.conf, there is numZeroLevelDefault => 1E-14. Would there be any objection to me redefining Round() to make use of this? One way to characterize what I would do is to add this to a number before rounding. So if it were internally 0.499999999999992, then it would become 0.500000000000002 and then get rounded to 1. It would be an approach that assumes something like 0.499999999999992 is really supposed to be 0.5, but rounding error crept in.

Everything I said would be adjusted if not rounding to a whole number.

Alternatively I could make a new subroutine with a different name.
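The proposed adjustment can be sketched in Python (a half-away-from-zero round stands in for PG's Round, and the eps value mirrors numZeroLevelDefault; this is an illustration, not the actual PGauxiliaryFunctions.pl code):

```python
import math

# Proposed idea, sketched: nudge the scaled value by a tiny relative epsilon
# before rounding, so accumulated float error like 134.49999999999997
# (really "134.5") rounds up as intended.
def round_fuzzy(x, n=0, eps=1e-14):
    scaled = x * 10**n
    # shift away from zero, scaled to the magnitude of the value
    scaled += math.copysign(eps * max(1.0, abs(scaled)), scaled)
    # round half away from zero, then undo the scaling
    return math.copysign(math.floor(abs(scaled) + 0.5), scaled) / 10**n
```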

Changes to Numeric

Occasionally some limitation has been grandfathered in, and it hinders making helpful improvements. Right now I am thinking of how the Numeric context does not understand things like root(n,x) or log(b,x). There is parserRoot.pl, and maybe something similar for logarithms, but of course most problems out there are not loading this, so you cannot promote root as a generally available tool to students. Typically a problem that uses that needs to have extra instructions "you can type root...".

Not having root can cause issues with the palette tools under development, like WIRIS and MathQuill. Currently they assume that root(3,x) is the same as x^(1/3), but that causes problems when x is negative.
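A quick illustration of the disagreement, in Python (the underlying float behavior is the same in any IEEE-754 language):

```python
import math

# The real cube root of -8 is -2, but a fractional power of a negative
# base is not a real number, so root(3, x) and x^(1/3) cannot agree
# for x < 0. (Illustrative sketch only.)
x = -8
power_form = x ** (1 / 3)                        # Python yields a complex number
real_root = math.copysign(abs(x) ** (1 / 3), x)  # the real cube root, about -2.0
```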

Now, you could add root to the default Numeric context, except then you change the behavior of tens of thousands of problems that maybe were assuming there was no available root function.

So I have been thinking about making something like a Numeric2 context that could be explicitly loaded, but more importantly there would be a configuration option where you could basically tell WeBWorK "I want all references to Numeric to actually load Numeric2", as long as you are aware of the consequences.

Another thing that I might put into "Numeric2" would be a third tolType: tolType=>sigfig, and sigfig would be the default with tolerance=>4.

I wanted to discuss this, including alternative ideas to address the basic issue with root. I wasn't sure where to do that, but an issue here seems like maybe an OK place.

Differentiation for parserRoot.pl

This is a feature request. Can we make it so that the D method applies to a Formula that has the root() function in it (once enabled by parserRoot.pl)?

Reducing perl warnings from existing problems

We see a lot of perl warning output in our Apache logs from problem rendering. To reduce this, I created a script that can headlessly render problems and collect their error output, if any. The script is available at:
https://github.com/bldewolf/webwork-error-scanner

The scanner runs using checkouts of the webwork repos and only takes an hour or two to "render" every problem.

I've fixed a large number of issues in these two branches:
https://github.com/bldewolf/pg/tree/fix-perl-warnings
https://github.com/bldewolf/webwork-open-problem-library/tree/fix-perl-warnings

There are still plenty of problems producing output but I haven't had time to work on it and I don't want the fixes to simply languish on my machine. Should I make pull requests for what I have fixed so far? The OPL branch has at least one very large commit, as I fixed a single class of problem in a lot of files.

Some of the errors are more complicated than simple perl style issues. For example, NUM_CMP is misused in lots of places, with missing or incorrect arguments. Fixing the problems to use NUM_CMP correctly is impractical, but fixing NUM_CMP to behave as it always has but without warnings is also a tricky task.

Anyway, this issue is more or less to make the team aware that this script exists and to ask for feedback on where to go from here.

No student answer preview generated in "limited" contexts

If a student submits a response that violates the "rules" of the context, no answer preview is generated, even when clicking "preview answer". This makes it difficult for the student to ascertain whether or not they have a syntax error in entering their response, or if they need to "simplify" their answer in order to meet the requirements of the context.

I've seen this happen in the LimitedPolynomial and RationalFunction contexts - I'm sure this is not an exhaustive list.

Is there a way to prioritize the parsing of the student response (in order to generate an answer preview) before passing to the context for interpretation? Particularly in the case when a student is only attempting to preview their answer?

Attempting to preview an answer when working in a LimitedPolynomial context:
[screenshot: the error message is thrown before any preview is generated]

Add feature to provide additional attributes to form elements

It would be desirable to add a feature for providing additional attributes to form elements, such as input boxes, for multiple purposes:

  1. Better "aria-label" values for accessibility purposes to override the default automatically set values.
    - See the discussion thread at http://webwork.maa.org/moodle/mod/forum/discuss.php?d=4694
  2. Providing CSS styling code to be attached (to achieve results such as nice formatting of limits of integration, without depending on the element id).
    - See http://webwork.maa.org/moodle/mod/forum/discuss.php?d=4462

answerHints option checkTypes does nothing

There is a typo on line 126 of the answerHints.pl file that makes the checkTypes option completely useless. The line is:
next if $options{checkType} && $correct->type ne $student->type;
and should be
next if $options{checkTypes} && $correct->type ne $student->type;

UTF-8 trouble with parserPopup.pl in certain settings (may apply to other MathObjects with UTF-8 string answers)

When parserPopUp.pl is used with non-English UTF-8 answer strings, trouble occurs in some settings:

  1. "images" display mode fails to work properly: when answers are submitted, the rendering of the answers table fails and the page fails to display.
  2. html2xml fails to render problems. The problem seems to be that the <methodResponse> data includes data like
<member><name>correct_ans_latex_string</name><value><string>{\verb+JUNK DELETED+}</string></value></member>

where I replaced the contents of the string block, as the generated data inside \verb is mangled.

A sample response when trying to submit an answer to such a problem with images equation display mode in use is:

WeBWorK error
An error occured while processing your request. For help, please send mail to this site's webmaster ([email protected]), including all of the following information as well as what what you were doing when the error occured.

Thu Dec 12 13:08:07 2019

Warning messages
Error messages
Wide character in subroutine entry at /opt/webwork/pg/lib/WeBWorK/EquationCache.pm line 89.
Call stack
The information below can help locate the source of the problem.

in WeBWorK::EquationCache::lookup called at line 271 of /opt/webwork/pg/lib/WeBWorK/PG/ImageGenerator.pm
in WeBWorK::PG::ImageGenerator::add called at line 352 of /opt/webwork/webwork2/lib/WeBWorK/Utils/AttemptsTable.pm
in WeBWorK::Utils::AttemptsTable::previewAnswer called at line 250 of /opt/webwork/webwork2/lib/WeBWorK/Utils/AttemptsTable.pm
in WeBWorK::Utils::AttemptsTable::formatAnswerRow called at line 320 of /opt/webwork/webwork2/lib/WeBWorK/Utils/AttemptsTable.pm
in WeBWorK::Utils::AttemptsTable::answerTemplate called at line 350 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Problem.pm
in WeBWorK::ContentGenerator::Problem::attemptResults called at line 1935 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Problem.pm
in WeBWorK::ContentGenerator::Problem::output_summary called at line 155 of /opt/webwork/webwork2/lib/WeBWorK/Template.pm
in WeBWorK::Template::template called at line 610 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator.pm
in WeBWorK::ContentGenerator::content called at line 374 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Problem.pm
in WeBWorK::ContentGenerator::Problem::content called at line 233 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator.pm
in WeBWorK::ContentGenerator::go called at line 386 of /opt/webwork/webwork2/lib/WeBWorK.pm
Request information

An abbreviated and redacted version of the HTML response received from an html2xml page which hits this error is:

<h2>WebworkClient Errors</h2><p>Errors: <br /> <blockquote style="color:red"><code>
not well-formed (invalid token) at line 2, column 1454, byte 1623 at /usr/lib/x86_64-linux-gnu/perl5/5.26/XML/Parser.pm line 187.
<?xml version="1.0" encoding="UTF-8"?><methodResponse>

LOTS OF XML REMOVED

</methodResponse> at /opt/webwork/webwork2/lib/WebworkClient.pm line 302.
</code></blockquote> <br /> End Errors</p>
<!DOCTYPE html>
<html lang="he" dir="rtl">
<head>
<meta charset='utf-8'>
<base href="https://URL_REMOVED">
<link rel="shortcut icon" href="/webwork2_files/images/favicon.ico"/>

LOTS OF LINES REMOVED

</head>
<body>
<div class="container-fluid">
<div class="row-fluid">
<div class="span12 problem">

        <form id="problemMainForm" class="problem-main-form" name="problemMainForm" action="https://URL_REMOVED/webwork2/html2xml" method="post">
<div id="problem_body" class="problem-content" lang="en" dir="ltr" >
                        Unable to decode problem text<br/>
xmlrpcCall to renderProblem returned no result for PATH_TO_PG_FILE_REMOVED/q02.pg



</div>

LOTS OF LINES REMOVED
</form>
</div>
</div></div>

<div id="footer">
WeBWorK &copy; 1996-2019 | host: URL_REMOVED | course: COURSE_REMOVED | format: simple | theme: math4
</div>


</body>
</html>

parserPopUp and Hardcopies

Right now parserPopUp just prints out a box with a question mark in hardcopies. It would probably be better if it printed the list of popup entries.

release/2.8.1a

I have created a new release/2.8.1a off the current marker for master.

Let's add the newer commits (already added to 2.8.1) to this branch and see if we can make the tree look less complicated.

It's possible to change the names -- so eventually we will delete release/2.8.1 (or rename it) and rename release/2.8.1a to release/2.8.1. This can't be done from the github GUI however (AFAIK) but requires working locally and pushing the changes.

What I have done here is the simplest change. The next step is to try to add the most recent changes to release/2.8.1a branch in a way which keeps the network graph simple.

postFilter Issues -- doesn't respect ambient Context?

postFilter issues in develop
by Alex Jordan - Thursday, 24 September 2015, 09:22 PM
Since pulling to the develop branches, some problems of ours are broken, and it appears to me that the cause is consistently this: a postFilter is applied to the answer checker which uses some kind of MathObject creator command (like Compute or Formula), and within the postFilter, it's not respecting the ambient Context from the problem. I assume it's using the default Numeric instead.

Unfortunately the examples are complicated by their nature. Below is code for one such problem that is breaking. The intent is to give two points, and get the student to write a line equation in point-slope form. parserImplicitPlane is used for this, and the special answer checking makes sure that it is in the real point-slope form. I should note that the underlying issue is in other problems of ours too that do not use parserImplicitPlane, since I know that has been a troublesome macro in the past. I don't think it has anything to do with the issue.

In the problem below, with seed 1234, when answers are submitted, the error message is:
Error in Translator.pm::process_answers: Answer AnSwEr0001: |y-10=2/3(x-1)|
Variable 'y' is not defined in this context; see position 1 of formula at line 94 of (eval 2110)
Died within main::Formula called at line 94 of (eval 2110)
Error in Translator.pm::process_answers: Answer AnSwEr0001:
Answer evaluators must return a hash or an AnswerHash type, not type || at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1241
Error in Translator.pm::process_answers: Answer AnSwEr0002: |y-13=2/3(x-2)|
You can see in the postFilter code below, that it uses Formula several times. And this error message is objecting to the 'y' in that Formula call. But 'y' is definitely a variable in the ambient ImplicitPlane context.

I'm posting this here, because I'm starting to think that something may have changed in a bad way with postFilters. Normally I assume it's my shoddy hack-y code that wasn't written well enough to survive improvements at deeper levels. But this may be a bug with develop/2.11.

DOCUMENT();

loadMacros(
"PGstandard.pl",
"MathObjects.pl",
"PGML.pl",
"parserImplicitPlane.pl",
"PGcourse.pl",
);

##############################################

Context("Numeric");
Context()->variables->add(y=>'Real');
Context()->noreduce('(-x)-y','(-x)+y');
Context()->flags->set(showExtraParens=>0);

$m=random(2,5,1);
$b=random(1,10,1);

$x1=random(1,5,1);
$y1=$m*$x1+$b;
$x2=random(1,5,1);
while ($x2==$x1) {$x2=random(1,5,1);}
$y2=$m*$x2+$b;

Context()->texStrings;
$ansPSstringTeX1 = "y-$y1=$m(x-$x1)"; 
Context()->normalStrings;
$ansPSstring1 = "y-$y1=$m(x-$x1)";

Context()->texStrings;
$ansPSstringTeX2 = "y-$y2=$m(x-$x2)"; 
Context()->normalStrings;
$ansPSstring2 = "y-$y2=$m(x-$x2)";

Context("ImplicitPlane");
Context()->variables->are(x=>'Real',y=>'Real');
Context()->flags->set(showExtraParens=>0);
Context()->flags->set(showExtraParens=>0);
$ansPS1 = ImplicitPlane("$ansPSstring1");
$ansPS2 = ImplicitPlane("$ansPSstring2");

##############################################

TEXT(beginproblem());

BEGIN_PGML

A line passes through the points [`([$x1],[$y1])`] and [`([$x2],[$y2])`]. Find this line's equation in point-slope form.

Using the point [`([$x1],[$y1])`], this line's point-slope form equation is [___________________].

Using the point [`([$x2],[$y2])`], this line's point-slope form equation is [___________________].

END_PGML

##############################################

Context()->flags->set(reduceConstants=>0);
Context()->flags->set(reduceConstantFormulas=>0);

ANS($ansPS1->cmp(correct_ans_latex_string => $ansPSstringTeX1
) -> withPostFilter(sub {
my $ansHash = shift;
my $student = $ansHash->{original_student_ans};
my @sides = split('=',"$student");

#if it's an implicit plane object, reset how student's answer is displayed:
if ($ansHash->{student_formula}->cmp_class eq "an Implicit line") {
$ansHash->{preview_text_string} = "$sides[0]=$sides[1]";
my $leftTex = Formula("$sides[0]")->TeX;
my $rightTex = Formula("$sides[1]")->TeX;
$ansHash->{preview_latex_string} = "$leftTex=$rightTex";
$ansHash->{student_ans} = $ansHash->{original_student_ans};
}

#if they have the line correct, then check each side to see if the sides are correct
if ($ansHash->{score}) {
if (Formula("y-$y1") != Formula("$sides[0]") and Formula("y-$y1") != Formula("$sides[1]")) {
$ansHash->{score} = 0; 
$ansHash->{ans_message} = "This is an equation for the line, but it is not the point-slope equation that uses the given point"; 
} 
}
return $ansHash;
}));

ANS($ansPS2->cmp(correct_ans_latex_string => $ansPSstringTeX2
) -> withPostFilter(sub {
my $ansHash = shift;
my $student = $ansHash->{original_student_ans};
my @sides = split('=',"$student");

#if it's an implicit plane object, reset how student's answer is displayed:
if ($ansHash->{student_formula}->cmp_class eq "an Implicit line") {
$ansHash->{preview_text_string} = "$sides[0]=$sides[1]";
my $leftTex = Formula("$sides[0]")->TeX;
my $rightTex = Formula("$sides[1]")->TeX;
$ansHash->{preview_latex_string} = "$leftTex=$rightTex";
$ansHash->{student_ans} = $ansHash->{original_student_ans};
}

#if they have the line correct, then check each side to see if the sides are correct
if ($ansHash->{score}) {
if (Formula("y-$y2") != Formula("$sides[0]") and Formula("y-$y2") != Formula("$sides[1]")) {
$ansHash->{score} = 0; 
$ansHash->{ans_message} = "This is an equation for the line, but it is not the point-slope equation that uses the given point"; 
} 
}
return $ansHash;
}));

$s1=$y2-$y1;
$s2=$x2-$x1;
$s3=$m*$x1;
$s4=-$s3;

$outputy1 = $y1<0 ? "($y1)" : $y1;
$outputx1 = $x1<0 ? "($x1)" : $x1;

BEGIN_PGML_SOLUTION

A line's equation in point-slope form looks like [`y-y_{1}=m(x-x_{1})`] where [`m`] is the slope of the line and [`(x_{1},y_{1})`] is a point that the line passes through. We first need to find the line's slope.

To find a line's slope, we can use the slope formula:

[``\text{slope}=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}``]

We mark which number corresponds to which variable in the formula:

[`([$x1],[$y1]) \longrightarrow (x_{1},y_{1})`]

[`([$x2],[$y2]) \longrightarrow (x_{2},y_{2})`]

Now we substitute these values into the corresponding variables in the slope formula:

[``\begin{aligned}\text{slope}&=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}\\&=\frac{[$y2]-[$outputy1]}{[$x2]-[$outputx1]}\\&=\frac{[$s1]}{[$s2]}\\&=[$m]\end{aligned}``]

Now we have [`y-y_{1}=[$m](x-x_{1})`]. The next step is to use a point that we know the line passes through.

If we choose to use the point [`([$x1],[$y1])`], we have:

[`` \begin{aligned} y-y_{1} &= m(x-x_{1}) \\ y-[$y1] &= [$m](x-[$x1]) \end{aligned} ``]

If we choose to use the point [`([$x2],[$y2])`], we have:

[`` \begin{aligned} y-y_{1} &= m(x-x_{1}) \\ y-[$y2] &= [$m](x-[$x2]) \end{aligned} ``]

Note that these two equations are equivalent. You will see why once you change both equations to slope-intercept form. This is left as an exercise.

END_PGML_SOLUTION

ENDDOCUMENT();

Loss of significant digits when using Formula

Note: This may be a known issue. (It's entirely possible that I've run across this issue before and just don't remember.) If that's the case, feel free to close this thread.

Anyway, because of the way the default number format works with Context(), if you use a real-valued number as part of a formula, you end up losing significant digits. For example:

DOCUMENT();  

loadMacros("PG.pl",
           "PGbasicmacros.pl",
           "MathObjects.pl");

$real = exp(Real(3));
$formula = Formula("$real");
$diff = abs($real - $formula);

BEGIN_TEXT
$diff
END_TEXT

ENDDOCUMENT();  

The difference $diff should be zero, but it's not, because $real is only printed to 5 or so significant digits when the string interpolation occurs for Formula, and the truncated version of $real is used to build the formula. This is a silly example, but it actually causes students to get wrong answers in Library/WHFreeman/Rogawski_Calculus_Early_Transcendentals_Second_Edition/9_Introduction_to_Differential_Equations/9.3_Graphical_and_Numerical_Methods/9.3.14.pg because the loss of significant digits is so bad that it falls foul of WeBWorK's own accuracy requirements.

There are a couple of existing solutions to this issue. The Freeman problem can be fixed by making sure that only integers are interpolated into the Formula strings. In general you could set

Context()->{format}{number} = "%.10f#";

before you do any of your math and then

Context()->{format}{number} = "%g";

before you do your printing. On the other hand, this whole thing is pretty subtle and is the kind of unpleasant surprise that can make higher-level problems hard to write. It might be worth thinking about whether there is any sort of systematic solution so that the interpolation of MathObjects works differently depending on where it's being used.
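The effect is easy to reproduce outside PG. Here is a Python sketch of what a %g-style interpolation does to exp(3) (the exact default format PG uses is an assumption here):

```python
import math

# exp(3) interpolated through a %g-style format keeps only 6 significant
# digits; rebuilding a value from that string loses accuracy on the order
# of 4e-5, far above a typical 1e-6 answer tolerance.
real = math.exp(3)          # 20.085536923187668
shown = "%g" % real         # the truncated string form
rebuilt = float(shown)
diff = abs(real - rebuilt)
```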

answer fields as argument for customize LaTeX subroutines

Several subroutines for LaTeX customization of some common mathematical objects, which I proposed for the Open Problem Library some time ago (openwebwork/webwork-open-problem-library#335 openwebwork/webwork-open-problem-library#336 openwebwork/webwork-open-problem-library#338 openwebwork/webwork-open-problem-library#340 openwebwork/webwork-open-problem-library#390 openwebwork/webwork-open-problem-library#391), were recently added to pg in the file pg/macros/customizeLaTeX.pl
(see #326). In pull request #339 I propose further subroutines, in particular span, which takes a set of vectors as an argument. Some problems ask students to provide a basis for vector spaces, as in the following example:

https://github.com/openwebwork/webwork-open-problem-library/blob/88d0e0acfaf419935b8abc1a78659fbffdd793c2/OpenProblemLibrary/Hope/Multi1/04-02-Kernel-image/Ker_im_01.pg#L88

It would be nice if one could replace

\( \mathrm{Kernel}(f) = \mathrm{span} \Big\lbrace \) \{ ans_rule(30) \} \( \Big\rbrace. \)

by something like

\( \mathrm{Kernel}(f) =  \{ span(ans_rule(30)) \}. \)

Of course this does not work, because the output of ans_rule(30) appears within math mode. Any hints on how this could be properly implemented?

Bug with context currency

If Currency() takes a MathObject Real as input, it actually turns that Real itself into a Currency object, mutating the original. The following prints the word Currency when it should print the word Real.

DOCUMENT();

loadMacros(
  "PGstandard.pl",
  "MathObjects.pl",
  "contextCurrency.pl",
);

Context("Numeric");
$real = Real(2);

Context("Currency");
$currency = Currency($real);

BEGIN_TEXT
\{$real->class\}
END_TEXT

ENDDOCUMENT();

Gateways and Labelled Answers

I'm getting the following error when viewing the problem pasted below on gateways, but not on standard homeworks.

There is no answer evaluator for the question labeled first_answer at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1218
Error in Translator.pm::process_answers: Answer first_answer:<br/>
Unrecognized evaluator type || at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1242
Error in Translator.pm::process_answers: Answer first_answer:<br/>

from the following problem

DOCUMENT();
loadMacros(
        "PGbasicmacros.pl",
        "PGchoicemacros.pl",
        "PGanswermacros.pl"
);
TEXT(beginproblem(), $BR,$BBOLD, "Conditional questions example", $EBOLD, $BR,$BR);
$showPartialCorrectAnswers = 1;

$a1 = random(3,25,1);
$b1 = random(2,27,1);
$x1 = random(-11,11,1);
$a2 = $a1+5;

BEGIN_TEXT
If \( f(x) = $a1 x + $b1  \), find \( f'( $x1 ) \).
$BR $BR \{NAMED_ANS_RULE('first_answer',10) \}
$BR
END_TEXT



$ans_eval1 = num_cmp($a1);       
NAMED_ANS(first_answer => $ans_eval1);                                  

# Using named answers allows for more control.  Any unique label can be
# used for an answer.    
# (see http://webwork.math.rochester.edu/docs/docs/pglanguage/pgreference/managinganswers.html
# for more details on answer evaluator formats and on naming answers
# so that you can  refer to them later.  Look also at the pod documentation in
# PG.pl and PGbasicmacros.pl  which you can also reach at
# http://webwork.math.rochester.edu/docs/techdescription/pglanguage/index.html)

# Check to see that the first answer was answered correctly.  If it was then we
# will ask further questions.
$first_Answer = $inputs_ref->{first_answer};  # We need to know what the answer
                                                                                          # was named.
$rh_ans_hash = $ans_eval1->evaluate($first_Answer);

# warn pretty_print($rh_ans_hash);  # this is useful technique for finding errors.
                                    # When uncommented it prints out the contents of 
                                    # the ans_hash for debugging

# The output of each answer evaluator consists of a single %ans_hash with (at
# least) these entries:
#       $ans_hash{score}        -- a number between 0 and 1
#       $ans_hash{correct_ans}  -- The correct answer, as supplied by the instructor
#       $ans_hash{student_ans}  -- This is the student's answer
#       $ans_hash{ans_message}  -- Any error message, or hint provided by
#                                                          the answer evaluator.
#       $ans_hash{type}   -- A string indicating the type of answer evaluator.
#                                         -- Some examples:
#                                               'number_with_units'
#                                               'function'
#                                               'frac_number'
#                                               'arith_number'
# For more details see
# http://webwork.math.rochester.edu/docs/docs/pglanguage/pgreference/answerhashdataype.html

# If they get the first answer right, then we'll ask a second part to the
# question ...
if (1 == $rh_ans_hash->{score} ) {

        # WATCH OUT!!:  BEGIN_TEXT and END_TEXT have to be on lines by
        # themselves and left justified!!!   This means you can't indent
        # this section as you might want to. The placement of BEGIN_TEXT
        # and END_TEXT is one of the very few formatting requirements in
        # the PG language.

BEGIN_TEXT
                $PAR Right! Now
                try the second part of the problem: $PAR $HR
                If \( f(x) = $a2 x + \{$b1+5\}  \), find \( f'( x) \).
                $BR $BR \{ NAMED_ANS_RULE('SecondAnSwEr',10) \}
                $BR
END_TEXT

$ans_eval2 = num_cmp($a2);

        NAMED_ANS(SecondAnSwEr => $ans_eval2); 

}  
ENDDOCUMENT();

verbatim delimiter

Once upon a time (actually still in 2.14), in lib/Value/String.pm, there was this code:

#
#  Mark a string to be display verbatim
#
sub verb {shift; return "\\verb".chr(0x85).(shift).chr(0x85)}

The idea is the \verb LaTeX command is going to be used on a string answer, and it needs a delimiter character. Character 0x85, ASCII 133, was chosen because it would be crazy for a student to have that as part of a string answer they "typed".

Then in 539406c, the character changed to 0x1F, ASCII 31, the "unit separator character". This brought it down into 7 bit ASCII, and Geoff's comment in the commit suggests this has something to do with the utf8 conversion.

So now we have string answers that use character 0x1F in their display. This is causing an issue with PreTeXt. When WW processes a problem with "PTX" display mode, it makes XML. For each answer of the problem, it makes a single XML element, with lots of attributes and values that correspond to the Perl answer hash's keys and values. For an example, see:

https://webwork-ptx.aimath.org/webwork2/html2xml?courseID=anonymous&amp;userID=anonymous&amp;password=anonymous&amp;course_password=anonymous&amp;answersSubmitted=0&amp;displayMode=PTX&amp;outputformat=ptx&amp;problemSeed=8435&amp;sourceFilePath=Library/PCC/BasicAlgebra/Geometry/CylinderVolume10.pg

and view source, since your web browser is likely to try to read the XML as HTML.

So you can see how a string like \verb<0x1F>foo<0x1F> could end up inside a value for an attribute of one of these XML elements. The problem is that XML does not allow this character in an attribute value. (Well, there are varying standards for what is allowed, but even when this one is allowed, its use is discouraged, and anyway, its presence causes the python validator we use to declare this to be invalid XML.)

So. We want a character that a student will not be able to type with normal use of the keyboard. So nothing in ASCII 32--127. And we want a character that is valid for XML in an attribute value. So nothing in ASCII 00--31. So we have to leave 7-bit ASCII to meet both conditions. Is it possible to do this? Can a character be chosen somewhere else in utf8 and that be compatible with the utf8 conversion happening now?
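A quick check with Python's standard library (an illustration, not part of PG) confirms the constraint: XML 1.0 forbids the C0 control characters below U+0020 (except tab, LF, and CR) anywhere in a document, while U+0085 (the old 0x85 delimiter) is a legal XML character.

```python
import xml.etree.ElementTree as ET

def attr_value_is_well_formed(ch):
    """Return True if an attribute value containing ch parses as XML."""
    try:
        ET.fromstring(f'<a v="{ch}"/>')
        return True
    except ET.ParseError:
        return False

print(attr_value_is_well_formed("\x1f"))  # False: 0x1F breaks the PTX XML
print(attr_value_is_well_formed("\x85"))  # True: NEL is valid in XML 1.0
```

So going back above 7-bit ASCII (e.g. to the original 0x85, or some other hard-to-type character) would satisfy the XML side; whether it survives the utf8 conversion is the remaining question.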

AskSage results to json

As I am relatively new to WeBWorK and wanted to use AskSage, I found that the wiki contained only one example. So I read IO.pm to get a better understanding, and a few questions came to mind.
They concern this bit of the code:

if isinstance(o, sage.rings.integer.Integer):
    json_obj = int(o)
elif isinstance(o, (sage.rings.real_mpfr.RealLiteral, sage.rings.real_mpfr.RealNumber)):
    json_obj = float(o)
elif sage.modules.free_module_element.is_FreeModuleElement(o):
    json_obj = list(o)
elif sage.matrix.matrix.is_Matrix(o):
    json_obj = [list(i) for i in o.rows()]
elif isinstance(o, SageObject):
    json_obj = repr(o)

I cannot see why these cases could not all be reduced to the last one.
If we have an integer o in Sage, encoding it as a JSON integer and then decoding it into a Perl scalar should give the same result as encoding repr(o) as JSON and then decoding that into a Perl scalar.
The same seems true for a real number, apart from the number of digits; but since the MathObject Real seems to keep fewer digits than either of those representations, that does not seem to matter much either.
If o is a vector, both list(o) and repr(o) can later be transformed into a MathObject Vector.
When my Sage code returns a matrix, the result cannot be turned into a MathObject Matrix directly: as in the Wiki example, I have to turn the matrix into its rows and print them, or fix up the result of
decode_json() decoding [list(i) for i in o.rows()] with commas, before it can be turned into a MathObject Matrix.

I found this highly confusing: I wanted to write more detailed documentation on how to use AskSage, and I assumed that, since several JSON types are in use, the results returned by AskSage would have to be handled differently (e.g. wrapped in an array) depending on their type.

Can somebody explain to me what this is used for? Or might this code be redundant? In that case I would suggest removing it to avoid future confusion.
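For what it's worth, the branches do change what reaches Perl: they control the JSON *type* that decode_json sees. A plain-Python illustration (no Sage needed; native Python values stand in for the Sage objects here):

```python
import json

n = 2                       # stands in for a Sage Integer
print(json.dumps(n))        # 2 -- a JSON number; decode_json gives a scalar
print(json.dumps(repr(n)))  # "2" -- a JSON string

v = [1, 2, 3]               # stands in for a Sage vector
print(json.dumps(v))        # [1, 2, 3] -- a JSON array -> Perl arrayref
print(json.dumps(repr(v)))  # "[1, 2, 3]" -- one string to re-parse by hand
```

For scalars the difference is mostly cosmetic on the Perl side, but for vectors and matrices the typed branches hand Perl a ready-made array structure instead of a single string.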

Possible misnaming of answer blank subroutines

For example, in parserPopUp.pl there is:

#
#  Answer rule is the menu list
#
sub ans_rule {shift->MENU(0,'',@_)}
sub named_ans_rule {shift->MENU(0,@_)}
sub named_ans_rule_extension {shift->MENU(1,@_)}

It seems to me that here, ans_rule is overriding the ans_rule from PGbasicmacros.pl. OK.

But what are named_ans_rule and named_ans_rule_extension doing? There are no subroutines by the same name in PGbasicmacros.pl. Instead, there are labeled_ans_rule, NAMED_ANS_RULE, and NAMED_ANS_RULE_EXTENSION.

In all of pg/macros, named_ans_rule only makes an appearance where it is defined here in parserPopUp.pl, where it is similarly defined in parserRadioButtons.pl, where it is similarly defined in parserWordCompletion.pl, and once where it is called upon in parserMultiAnswer.pl (but is not defined). The same for named_ans_rule_extension.

All of this is raising my suspicion that these instances were supposed to be the capital letter spelling, or something like that. Or do I misunderstand what is going on here?

NumberWithUnits and arithmetic operations

When forming products and quotients of two MathObjects created with the NumberWithUnits subroutine from parserNumberWithUnits.pl, the "number part" is computed correctly, but the "unit part" is not: the unit of a product or quotient appears to be simply the unit of the left operand. The example below illustrates the problem. A workaround is to work with Perl numbers first and only attach the units after all computations are done. I think, however, that it would be nice if an example like this worked too.

DOCUMENT();
loadMacros("PGbasicmacros.pl",
           "PGML.pl",  
           "parserNumberWithUnits.pl",
);
$F = NumberWithUnits("1 N");
$s = NumberWithUnits("2 m");
$W = $F / $s;
$t = NumberWithUnits("2 s");
$P = $W/$t;
TEXT(beginproblem());
BEGIN_PGML
[` [$W]=[$F] \cdot [$s] `]

[` [$P]=[$W] / [$t] `]
END_PGML
ENDDOCUMENT();
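The workaround mentioned above might be sketched like this (untested; it assumes the NumberWithUnits($number, $units) calling form, and the combined unit is written out by the author rather than derived):

# Untested sketch of the workaround: do the arithmetic on plain Perl
# numbers, and attach hand-chosen units only at the end.
$Fval = 1;  # newtons
$sval = 2;  # meters
$tval = 2;  # seconds
$W = NumberWithUnits($Fval / $sval, "N/m");           # unit supplied by the author
$P = NumberWithUnits($Fval / ($sval * $tval), "N/(m*s)");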

Installation change needed for Chromatic

When a PG file uses the chromatic number routines, WeBWorK has the web server try to compile color.c on the fly. It wants to write the result to /opt/webwork/pg/lib/chromatic, so the permissions for this directory need to be set during installation.

Maybe there should be a script in webwork2/bin which sets all of the special permissions during installation.
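Until such a script exists, the manual fix might look like the following (hypothetical commands: the web server user is assumed to be www-data, which varies by distribution and installation):

```shell
# Hypothetical: give the web server user write access to the directory
# where color.c is compiled on the fly (user/group vary by installation).
sudo chown -R www-data:www-data /opt/webwork/pg/lib/chromatic
sudo chmod -R u+rwX,g+rwX /opt/webwork/pg/lib/chromatic
```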

Internationalization: alternative scripts/languages and ALPHABET

When supporting foreign languages, there may be a desire to label items in a list using the letters of a foreign language, and not capital English (Latin) letters.

It would be nice to find a way to change the ALPHABET array in a "global" manner, either via a course-level setting or via PGcourse.pl (or a similar local macro), and have that affect all the code depending on the "alphabet". At present, doing so seems somewhat complicated, as there are several places where the Latin alphabet is hard-wired in.

Several places in PG use ALPHABET (an array of Latin upper-case letters, ('A'..'ZZ'), and a subroutine of the same name) to select the item labels:

  • macros/PGbasicmacros.pl defines the main ALPHABET subroutine and array, but overriding the value of the @ALPHABET array would not actually change the output of the subroutine, as it does not use the @ALPHABET array but a fixed array ('A'..'ZZ').
    • macros/PGchoicemacros.pl depends on the $main::ALPHABET array (which I think is the desired behavior).
    • macros/parserRadioButtons.pl depends on the $main::ALPHABET array (which I think is the desired behavior).
    • macros/PGbasicmacros.pl also has an OL subroutine which internally hard-codes @alpha = ('A'..'Z', 'AA'..'ZZ'); rather than depending on the value of the @ALPHABET array.
  • lib/ChoiceList.pm and lib/List.pm provide their own ALPHABET subroutines, and those subroutines both hard-code ('A'..'ZZ') rather than depending on $main::ALPHABET array.
    • lib/Multiple.pm and lib/Match.pm use the &ChoiceList::ALPHABET subroutine.
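One possible direction, sketched here (untested, and covering only the PGbasicmacros.pl piece): have the ALPHABET subroutine consult the @ALPHABET array instead of a fixed range, so that a course-level macro such as PGcourse.pl could simply reassign the array:

# Untested sketch: make @ALPHABET the single source of truth.
@ALPHABET = ('A' .. 'ZZ');          # default; PGcourse.pl could reassign this
sub ALPHABET {
    return @ALPHABET unless @_;     # whole list
    return @ALPHABET[@_];           # selected labels, e.g. ALPHABET(0..3)
}

The hard-coded copies in lib/ChoiceList.pm, lib/List.pm, and the OL subroutine would still need the same treatment for the override to take effect everywhere.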

Context Fraction and non-terminating decimal reals

Using Context("Fraction") with only the default context settings, I have found that decimal approximations entered in an answer blank whose correct answer is a Fraction object never produce a valid result when the decimal form is non-terminating (regardless of how many significant figures are provided, and despite the default tolerance).

In such a case, no error is provided by default - but using the AnswerHashInfo option, an error appears in the student_formula portion of the answer hash as follows:
Unable to determine stringify for this item Can't locate object method "string" via package "context::Fraction::Real" at /opt/WeBWorK/pg/lib/Parser/Number.pm line 66

I have attempted to add a variety of string methods to the context::Fraction::Real segment of contextFraction.pl, to no avail. There's still something I'm not grasping about Perl inheritance, perhaps...

@dpvc any ideas?

Context("Fraction");
$ans = Fraction(1,3);
BEGIN_PGML
Try a decimal approximation of [`\frac{1}{3}`]: [_______]{$ans}
END_PGML
