omsmith / ims-lti
A node.js library implementing the IMS LTI tool providers' standards
License: Other
I am trying to use the ToolProvider node plugin to verify the OAuth signature and have been stuck on one issue for a long time.
The plugin works fine and generates the correct signature for the request payload when I submit a POST request from http://ltiapps.net/test or from canvas.instructure.com during the course assignments tutorial. But the signature does not match when I test from the demo site at acme.instructure.com (https://officehours.acme.instructure.com/). Any idea what could be going wrong here?
...from within provider.validate_request? It might make more sense to call it from within there, i.e. just inside the passed next function for this.nonceStore.isNew in Provider.prototype._valid_oauth.
If we're expected to call it from within our own implementation of isNew itself, there's little value in documenting that it must be exposed externally; instead, part of the contract of isNew should be that on subsequent calls for the same value it returns false.
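To illustrate the contract being proposed, here is a minimal sketch of a nonce store whose isNew records the nonce itself, so callers never need a separate setUsed step. The class and method names are illustrative, not the library's actual implementation; only the isNew(nonce, timestamp, callback) shape matches the stores shipped with ims-lti.

```javascript
// Hypothetical sketch: a nonce store where recording happens inside
// isNew(), so a second call with the same value returns false.
class SelfRecordingNonceStore {
  constructor() {
    this.seen = new Set();
  }

  // Reports true the first time a nonce is offered, false afterwards.
  isNew(nonce, timestamp, callback) {
    if (this.seen.has(nonce)) {
      return callback(new Error('Nonce already seen'), false);
    }
    this.seen.add(nonce); // the store marks the nonce used itself
    callback(null, true);
  }
}
```

With this contract, provider code would only ever call isNew and could drop the explicit setUsed call entirely.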
To debug problems with LTI launches it is essential to be able to compare base strings; the differing signature values alone carry no information. It would be nice to return the base string when the provider fails. Something like:
[ { message: 'Invalid Signature' , base: "GET&http..."}, false ]
Would be a nice, clean extension.
Hi everyone,
I've had a go at implementing the Consumer class in JavaScript, reusing as much code as I could from the published JavaScript Provider class. I'm short on time right now to bring this to the proper format for a PR, especially because I'm not versed in CoffeeScript, but I'd really appreciate comments on the approach.
Any help bringing this to good enough condition for a pull request is also welcome.
Implementation:
var lti = require( 'ims-lti' );
var consumer = new lti.Consumer( 'oauth_key', 'oauth_secret' );
var requestBody = {
lti_version : 'LTI-1p0',
lti_message_type: 'basic-lti-launch-request',
resource_link_id: '0',
resource_link_title: 'My title',
context_id: '1234',
context_title: "My Test",
context_label: "No label",
roles: 'Student'
};
var ltiUrl = 'https://lti.source.com/launch';
consumer.validate_request( requestBody, function( err, valid ) {
    if ( err || !valid ) next( err || new Error( "Invalid LTI request" ) );
    else {
        consumer.send( { url: ltiUrl }, function( err, msg, response ) {
            res.send( response );
        } );
    }
} );
AFAIK, ims-lti relies on req values being x-forwarded-* aware; with express that means setting 'trust proxy' to a truthy value.
That works for the https proxy case, but it won't affect the host value. express will set req.hostname, but that doesn't include the port, and ims-lti uses req.headers.host to sign the request.
If ims-lti has to use header values, there should be an option to look up the x-forwarded-* values instead.
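As a stopgap under the constraints described above, one workaround is to rewrite req.headers.host from X-Forwarded-Host in a middleware before the request reaches ims-lti's validation. This is a hypothetical helper sketch, not part of the library; it assumes the proxy sets X-Forwarded-Host to the externally visible host (including the port, if non-default).

```javascript
// Hypothetical workaround: restore the externally visible host on
// req.headers.host, since ims-lti signs with that value and does not
// consult x-forwarded-* headers itself.
function restoreForwardedHost(req) {
  const forwarded = req.headers['x-forwarded-host'];
  if (forwarded) {
    // X-Forwarded-Host may be a comma-separated chain; take the first.
    req.headers.host = forwarded.split(',')[0].trim();
  }
  return req;
}
```

In an Express app this would run as middleware (with 'trust proxy' enabled for the protocol side) ahead of the route that calls provider.valid_request.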
When calling send_read_result() for a student that doesn't have any grade yet, some LMSs (specifically Moodle) send back an XML response with a body value of "".
Here's an extract of the XML sent back by a Moodle instance where the student did not have any grade for the activity:
<resultScore>
<language>en</language>
<textString/>
</resultScore>
As you can see, the value of textString is "".
This causes the lib to raise an error.
Having followed the chain of methods, I've pinpointed the line that, in my view, causes the issue:
ims-lti/src/extensions/outcomes.coffee, line 138 in 4df2936
parseFloat and navigateXml on the same line yield a score value of NaN (which is normal).
But the following check of isNaN(score) returns an error as if this were an error case (which it is not):
ims-lti/src/extensions/outcomes.coffee, line 141 in 4df2936
I would expect a different behavior between a case where there is no grade yet and a case when the LMS sends back an actual invalid value.
I previously thought about returning a 0 value when there is no grade yet, but that might be confused with the case where the student has received an actual grade of 0? Dunno.
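The distinction suggested above could look like the following sketch: an empty <textString/> maps to null ("no grade yet") and only present-but-non-numeric values raise an error. This is a hypothetical helper, not the library's code; the function name and return shape are assumptions for illustration.

```javascript
// Hypothetical sketch: distinguish "no grade yet" (empty textString)
// from an actually invalid score value sent by the LMS.
function parseResultScore(textString) {
  if (textString === undefined || textString === null || textString === '') {
    return { score: null, error: null }; // no grade recorded yet
  }
  const score = parseFloat(textString);
  if (isNaN(score)) {
    // A value was present but is not a number: genuine error case.
    return { score: null, error: new Error('Invalid score value: ' + textString) };
  }
  return { score: score, error: null };
}
```

Returning null instead of 0 avoids the ambiguity with a real grade of 0 raised above.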
If a user opens multiple resources at once, how do you use provider.outcome_service for a specific resourceId then?
When I use the standard approach, the provider sets outcomes only for the user's last resource.
In PHP you seem to be able to do it like this
...reason I ask is that it is never used:
https://github.com/omsmith/ims-lti/blob/master/src/redis-nonce-store.coffee#L8
I'm having trouble understanding what type of request objects are expected. I'm using express, and the body-parser middleware, but my signatures are being signed incorrectly. It would be nice if the expected request object format were documented. Are these just basic node request objects?
The OAuth 1.0 spec states that the consumer secret and token secret need to be parameter-encoded before being passed to HMAC-SHA1 for signing:
[...] the key is the concatenated values (each first encoded per Parameter Encoding) of the Consumer Secret and Token Secret, separated by an '&' character (ASCII code 38) even if empty.
In provider.coffee and hmac-sha1.coffee, however, the consumer secret is passed directly to the signing algorithm without encoding it first.
This means that if the shared secret includes characters that should be encoded (e.g., "secret!key"), the signature test will fail for a correctly signed message.
I'm new to LTIs and have an implementation question; I hope I can find some direction here.
I'm using meteor, express (to handle POST request) and React(client)/React Router, and experimenting with an LTI that allows instructors to send a URL link of some search result content back to the course page, after launching the LTI from the editor button.
After clicking on the editor button in Canvas, on the server backend I have successfully been able to take Express's req and res objects and create a valid LTI provider, after validating the POST request within a middleware function where I have access to the request object.
Next, my search app comes up, and at this point, I would like to send a URL hyperlink back to be included in the course content. The only thing is, I only know how to do that on the server using:
provider.ext_content.send_url(res, hyperlink_url, text, title_attribute, target_attribute)
because it seems like the only place I can create the lti.provider is within Express, where I have access to the res and req.
Is there a way to send the consumer a URL link using the
provider.ext_content.send_url(res, hyperlink_url, text, title_attribute, target_attribute)
outside of express, particularly from the client?
Any help is welcomed, thank you.
This is definitely a very long term issue, but a few final drafts of the spec have been published (most notably LTI 2.0). It would be cool to implement them. The main issue I foresee here is that very very few platforms actually use any other version than 1.0 as of now, so there's not very many things to test against.
Hello Owen,
I discovered a little typo in:
https://github.com/omsmith/ims-lti/blob/master/src/extensions/outcomes.coffee
Line 42 is now: @body.ele('sourceGUID').ele('sourcedId', source_did)
But has to be: @body.ele('sourcedGUID').ele('sourcedId', source_did)
Now I get a success message back from http://ltiapps.net/test/tc.php when I post an outcome.
Thanks for the software,
Ronald Pannekoek
The _send_request method seems to be broken if HTTPS is used. It gives the following error message:
[Error: Protocol "http:" not supported. Expected "https:".]
I'm not sure the current way of selecting the protocol works as expected. This happens when lis_outcome_service_url contains https.
Is there an example of a Mongo Nonce Store?
Thanks a lot!
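I'm not aware of one in the repo, but here is a rough sketch under the assumption that ims-lti only needs an object exposing isNew(nonce, timestamp, callback) and setUsed(nonce, timestamp, callback), matching the shipped memory and redis stores. The class name and document shape are illustrative; it takes any MongoDB collection with callback-style findOne/insertOne. A TTL index, e.g. `collection.createIndex({ createdAt: 1 }, { expireAfterSeconds: 300 })`, would let Mongo expire old nonces automatically.

```javascript
// Hypothetical Mongo-backed nonce store; the injected `collection`
// is assumed to expose callback-style findOne/insertOne.
class MongoNonceStore {
  constructor(collection) {
    this.collection = collection;
  }

  // A nonce is new iff no document with that _id exists yet.
  isNew(nonce, timestamp, callback) {
    this.collection.findOne({ _id: nonce }, (err, doc) => {
      if (err) return callback(err, false);
      if (doc) return callback(new Error('Nonce already seen'), false);
      this.setUsed(nonce, timestamp, callback);
    });
  }

  // Record the nonce; createdAt feeds the TTL index for cleanup.
  setUsed(nonce, timestamp, callback) {
    this.collection.insertOne(
      { _id: nonce, createdAt: new Date() },
      err => callback(err, !err)
    );
  }
}
```

The store would then be passed to the Provider constructor the same way the Redis store is.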
I am implementing my own Consumer.js. I can access all the objects except HMAC.
It would be good if a reference to hmac-sha1 were included in src/ims-lti.js (all other files in the library are included except hmac-sha1.js):
exports = module.exports = {
  version: '0.0.0',
  Provider: require('./provider'),
  Consumer: require('./consumer'),
  Errors: require('./errors'),
  HmacSha1: require('./hmac-sha1'),
  Stores: {
    RedisStore: require('./redis-nonce-store'),
    MemoryStore: require('./memory-nonce-store'),
    NonceStore: require('./nonce-store')
  },
  supported_versions: ['LTI-1p0']
};
So that in consumer.js one can do:
var lti = require('ims-lti');
var signer = new lti.HmacSha1();
Hi, I am using this library to go through the Canvas LTI tutorial. However, there is a difference between the oauth_signature generated by the library and the one passed by Canvas. I used an https link in Canvas, and the req_url that the library uses for the same request is http. I tried hard-coding the link to https in the library; I get a different signature, but it still doesn't match the signature sent from Canvas. There is no oauth_token passed from Canvas. Is that normal?
Was this library tested with Canvas before? Any tips on debugging the issue will be highly helpful.
Thanks!
The only delta between the latest release and master is the correct validation of a ContentItemSelectionRequest
message. Any chance of a point release please to bring this functionality into play?
I'll take a look at this one myself, basically if a query string is present in the request body it is not being correctly accounted for per the OAuth spec. Shouldn't be that difficult of a fix.
For our application the request can ONLY happen over SSL (we implement no other connection options). So I'm trying to determine if there is any purpose in verifying the oauth_nonce. I believe that the purpose of the nonce is entirely to prevent replay attacks which is already a feature of SSL.
Storing the nonce values will cost money and waste time for each user so I only want to do it if it has some value. Is there value in storing nonces and rejecting any duplicate requests when the request is made over SSL?
I asked this question on StackOverflow as well, but thought someone here might have more specific information:
https://stackoverflow.com/q/44469654/796999
When I launch to a normal http endpoint, the authentication works fine. When I launch the same endpoint but from https, the validation breaks. The signatures are different. I haven't looked into it too much, but I think it's how the headers are handled. There are different headers set (at least in the version of Chrome that I'm using) when performing the https post than when performing the http post.
It looks like the error classes don't behave like a Javascript Error:
it.only 'should return false if nonce already seen', (done) =>
req =
url: '/test'
method: 'POST'
connection:
encrypted: undefined
headers:
host: 'localhost'
body:
lti_message_type: 'basic-lti-launch-request'
lti_version: 'LTI-1p0'
resource_link_id: 'http://link-to-resource.com/resource'
oauth_customer_key: 'key'
oauth_signature_method: 'HMAC-SHA1'
oauth_timestamp: Math.round(Date.now()/1000)
oauth_nonce: Date.now()+Math.random()*100
#sign the fake request
signature = @provider.signer.build_signature(req, req.body, 'secret')
req.body.oauth_signature = signature
@provider.valid_request req, (err, valid) =>
should.not.exist err
valid.should.equal true
@provider.valid_request req, (err, valid) ->
should.exist err
console.log('Message: ', err.message);
console.log('Stack: ', err.stack);
err.should.be.instanceof(lti.Errors.NonceError)
valid.should.equal false
done()
which will print out:
Message: Expired nonce
Stack: undefined
vs changing this line to:
callback new Error('Expired nonce'), false
which will print out:
Message: Expired nonce
Stack: Error: Expired nonce
at /Users/jason/oss/ims-lti/lib/provider.js:204:27
at MemoryNonceStore.isNew (/Users/jason/oss/ims-lti/lib/memory-nonce-store.js:82:16)
at Provider._valid_oauth (/Users/jason/oss/ims-lti/lib/provider.js:200:30)
at Provider.valid_request (/Users/jason/oss/ims-lti/lib/provider.js:169:19)
at Provider.valid_request (/Users/jason/oss/ims-lti/lib/provider.js:88:59)
at /Users/jason/oss/ims-lti/test/Provider.coffee:326:19
at /Users/jason/oss/ims-lti/lib/provider.js:207:18
at /Users/jason/oss/ims-lti/lib/memory-nonce-store.js:98:20
at MemoryNonceStore.setUsed (/Users/jason/oss/ims-lti/lib/memory-nonce-store.js:117:14)
at MemoryNonceStore.isNew (/Users/jason/oss/ims-lti/lib/memory-nonce-store.js:85:19)
at Provider._valid_oauth (/Users/jason/oss/ims-lti/lib/provider.js:200:30)
at Provider.valid_request (/Users/jason/oss/ims-lti/lib/provider.js:169:19)
at Provider.valid_request (/Users/jason/oss/ims-lti/lib/provider.js:88:59)
at Context.<anonymous> (/Users/jason/oss/ims-lti/test/Provider.coffee:323:17)
at callFnAsync (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runnable.js:349:8)
at Test.Runnable.run (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runnable.js:301:7)
at Runner.runTest (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:422:10)
at /Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:528:12
at next (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:342:14)
at /Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:352:7
at next (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:284:14)
at Immediate.<anonymous> (/Users/jason/oss/ims-lti/node_modules/mocha/lib/runner.js:320:5)
at runCallback (timers.js:651:20)
at tryOnImmediate (timers.js:624:5)
at processImmediate [as _immediateCallback] (timers.js:596:5)
This can cause issues if calling code is expecting to find a useful stack trace or is using something like nested error stacks. There might be other subtle differences, too, but the lack of a stack trace is the one that puzzled me today.
It seems like the class inheritance isn't working as expected? It could be related to this CoffeeScript bug, but I know literally nothing about CoffeeScript, so it's hard to tell.
Getting this when installing:
npm WARN deprecated [email protected]: This package is discontinued. Use lodash@^4.0.0.
Looks like this is coming from an old version of xmlbuilder-js that this package depends on. The latest release is 9.0.4.
Content and result data are extensions written on top of the base LTI spec. They aren't required but highly recommended.
I got some broken links in README.md.
"Read the LTI documentation on OAuth" links to http://www.imsglobal.org/LTI/v1p1pd/ltiIMGv1p1pd.html#_Toc309649687
"LTI launch data" links to http://www.imsglobal.org/lti/v1p1pd/ltiIMGv1p1pd.html#_Toc309649684
"LTI security model" links to http://www.imsglobal.org/lti/v1p1pd/ltiIMGv1p1pd.html#_Toc309649685
Hey @omsmith!
Thanks for putting together an awesome project!
It looks like things have been a bit quiet around here for a while now.
Would having additional maintainer(s) on the project be helpful?
If it would be helpful, I'd be happy to help maintain the project.
Getting this when installing ims-lti:
npm WARN deprecated [email protected]: Use uuid module instead
I think it's coming from here.
const lti = require('ims-lti')
const outcome = new lti.OutcomeService({
consumer_key: 'key',
consumer_secret: 'secret',
// Note: localhost:3000 is a server that simply responds with a status 200, empty body, to any request
service_url: 'http://localhost:3000',
source_did: 'sourcedid'
})
outcome.send_replace_result(1.0, (error, result) => {
if (error) {
console.error('Encountered an error while sending result:')
console.error(error)
}
})
$ node test.js
Encountered an error while sending result:
{ [Error] message: undefined }
Maybe a more helpful error message?
After debugging the program, it looks like the XML parser does not throw an error, but the code from the response is undefined (since the XML is empty), so the library assumes there was a non-success code and tries to throw an error using the message in the XML. But, again, there is nothing in the XML, so msg === undefined:
ims-lti/src/extensions/outcomes.coffee, line 213 in 4df2936
As of right now the errors just consist of a simple string which we could match against, but that's probably not ideal if the text changes. We should probably attach codes to the errors thrown so they can be verified via some sort of constant.
As an example, fs.readFile will return an error code of ENOENT if the file or directory does not exist.
I will look into providing an implementation via a pull request, since I'm looking to use this functionality while getting a provider app verified.
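The proposal could be as simple as the following sketch: an error class carrying a machine-readable `code` property, analogous to ENOENT on fs errors. The class name and the code string are hypothetical placeholders, not part of the library today.

```javascript
// Sketch: errors with a stable code so callers can switch on
// err.code instead of matching message strings.
class LtiError extends Error {
  constructor(message, code) {
    super(message);
    this.name = 'LtiError';
    this.code = code; // e.g. a constant like 'EEXPIREDNONCE'
  }
}

// Illustrative usage inside validation code:
function rejectExpiredNonce(callback) {
  callback(new LtiError('Expired nonce', 'EEXPIREDNONCE'), false);
}
```

Calling code could then do `if (err.code === 'EEXPIREDNONCE') ...` and keep working even if the human-readable message is reworded.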
We are using this library with an LTI application that interfaces with the aNewSpring LMS. Using the same key, secret, and launch URL (with the exception of http: and https:) results in the HTTP launch being valid, while the HTTPS launch is invalid.
The same result can be reproduced using the official IMS Consumer Emulator.
The "request" module is declared as a dev dependency in "package.json", but is required at runtime by "ims-lti.coffee", though it seems to be unused there.
Thus, if my project doesn't explicitly require "request" on its own, the ims-lti code breaks on requiring a nonexistent module.
Hi there,
May I know whether there is any way to persist the provider for later use (such as saving it in a database) so that I can use the outcome service to submit the grade a few hours later?
By the way, will the Express request object be detached from signature verification, and what alternative can be used?
Thanks
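One approach, grounded in the OutcomeService usage shown elsewhere in these issues, is to persist not the provider object but the few launch fields the outcomes call needs, and rebuild an OutcomeService from them later. The helper name and field names below are the standard LTI launch parameters; treat the overall shape as a sketch rather than a documented workflow.

```javascript
// Sketch: pull the outcome-related fields out of a validated launch
// body so they can be stored (e.g. in a database) for later grading.
function extractOutcomeParams(launchBody) {
  return {
    service_url: launchBody.lis_outcome_service_url,
    source_did: launchBody.lis_result_sourcedid
  };
}

// Hours later, after loading those fields back from the database,
// an OutcomeService can be constructed without the original request:
//   const lti = require('ims-lti');
//   const outcome = new lti.OutcomeService({
//     consumer_key: key,
//     consumer_secret: secret,
//     service_url: saved.service_url,
//     source_did: saved.source_did
//   });
//   outcome.send_replace_result(0.9, callback);
```

This sidesteps persisting the provider entirely; the signature verification on the original Express request only matters at launch time.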
Although the module for using Redis exists, in smaller environments MemoryNonceStore is enough. However, it leaks memory because used nonces are never removed.
If used nonces were saved with their timestamps, the check for new nonces could also remove expired nonces with some probability.
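The probabilistic cleanup suggested above could be sketched like this: each nonce is stored with its insertion time, and a small fraction of isNew calls trigger a sweep of entries older than the expiry window. All names here are illustrative, not the library's MemoryNonceStore implementation.

```javascript
// Sketch: a memory nonce store that opportunistically evicts
// expired entries instead of growing without bound.
class ExpiringMemoryNonceStore {
  constructor(ttlMs = 5 * 60 * 1000, sweepProbability = 0.05) {
    this.ttlMs = ttlMs;
    this.sweepProbability = sweepProbability;
    this.seen = new Map(); // nonce -> insertion time (ms)
  }

  isNew(nonce, timestamp, callback) {
    // Amortized cleanup: sweep on a small fraction of calls.
    if (Math.random() < this.sweepProbability) this.sweep();
    if (this.seen.has(nonce)) {
      return callback(new Error('Nonce already seen'), false);
    }
    this.seen.set(nonce, Date.now());
    callback(null, true);
  }

  // Drop every nonce older than the TTL window.
  sweep() {
    const cutoff = Date.now() - this.ttlMs;
    for (const [nonce, insertedAt] of this.seen) {
      if (insertedAt < cutoff) this.seen.delete(nonce);
    }
  }
}
```

Since expired nonces would be rejected by the timestamp check anyway, sweeping them out changes memory usage but not correctness.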
The README should be updated to note that this lib requires Express. The lib requires an HTTP req object that is unique to Express; I tried using Hapi and plain Connect, but neither worked with this library.
It looks to me like the signature verification doesn't take query parameters into account (e.g. http://example.com/somepath?queryparam=value) when computing signatures, so lti.Provider.valid_request returns an "Invalid Signature" error in these cases.
The build_signature function uses raw request headers obtained from Node HTTP to get the host. Direct interaction with the raw request object is not recommended by hapijs, and headers are not defined on this object running [email protected].
TypeError: Cannot read properties of undefined (reading 'host')
at HMAC_SHA1.build_signature (PROJECT_DIR/node_modules/ims-lti/lib/hmac-sha1.js:71:47)
at Provider._valid_oauth (PROJECT_DIR/node_modules/ims-lti/lib/provider.js:67:31)
at Provider.valid_request (PROJECT_DIR/node_modules/ims-lti/lib/provider.js:51:19)
at Provider.valid_request (PROJECT_DIR/node_modules/ims-lti/lib/provider.js:4:59)
Hi
It is not clear from the documentation how to specify the ext_content value. I want to return an iframe in response to an LTI request, but I'm not sure how to access the LTI provider's ext_content to send the iframe.
Thanks.
Calling code may want to know when calls to @redis.get or @redis.set fail. For example, in this code, err should be handled when @redis.get fails, perhaps because the redis client can't connect to the redis instance. And then, despite this comment, I think you actually want to know when that write failed; otherwise, you're vulnerable to replay attacks.
Additionally, this code currently treats all nonceStore errors from @nonceStore.isNew in _valid_oauth as an "Expired nonce," which may not be the case if it can error out for other reasons (e.g. a call to @redis.get failed). It looks like you already have a StoreError, so you could use that in RedisNonceStore.isNew and then switch on the error type in the @nonceStore.isNew callback to determine which error to send to the next callback and ultimately up to the calling code.
I'm happy to submit a PR for this; I just want to make sure you're OK with the architectural ideas above. I guess, technically, it's kind of a breaking change if people are relying on "Expired nonce" even when it's not?
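The error handling being proposed could be sketched as follows: redis failures surface as a distinct StoreError-like type instead of being folded into "Expired nonce", and a failed write is reported rather than ignored. The function and class names are illustrative, and the injected client is assumed to expose the classic callback-style get/set.

```javascript
// Sketch: distinguish store failures from nonce reuse.
class StoreError extends Error {
  constructor(message) {
    super(message);
    this.name = 'StoreError';
  }
}

// Hypothetical isNew for a redis-backed store; `redis` is any client
// with callback-style get(key, cb) and set(key, value, cb).
function redisIsNew(redis, nonce, callback) {
  redis.get(nonce, (err, seen) => {
    if (err) return callback(new StoreError(err.message), false);
    if (seen) return callback(new Error('Nonce already seen'), false);
    redis.set(nonce, '1', setErr => {
      // Surface the write failure too; a silently dropped write
      // would leave the door open to replay attacks.
      if (setErr) return callback(new StoreError(setErr.message), false);
      callback(null, true);
    });
  });
}
```

The provider's isNew callback could then check `err instanceof StoreError` (or `err.name`) and report a store failure instead of "Expired nonce".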