skeydan / deep-learning-and-scientific-computing-with-r-torch

Deep Learning and Scientific Computing with R torch

Home Page: https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/


deep-learning-and-scientific-computing-with-r-torch's Introduction

This is the repository for the book Deep Learning and Scientific Computing with R torch, written by Sigrid Keydana and published by CRC Press.

A list of errata may be found in errata.md.

To cite using BibTeX, you could use:

@book{keydana2023deep,
  title={Deep Learning and Scientific Computing with R torch},
  author={Keydana, Sigrid},
  year={2023},
  publisher={CRC Press}
}

Please note that this work is written under a Contributor Code of Conduct, and the online version is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. By participating in this project (for example, by submitting an issue with suggestions or edits) you agree to abide by its terms.

deep-learning-and-scientific-computing-with-r-torch's People

Contributors

skeydan


deep-learning-and-scientific-computing-with-r-torch's Issues

Time series forecast ahead

Working with a time series using the chapter 21 "tutorial", I am unable to find a way to forecast 52 weeks ahead on a dataset structured as weekly demands spanning 11 years. Prediction per se worked well; the plot, however, displays only up to the end of the dataset and does not include the 52-week future forecast. Could you point me to the section of the tutorial that handles this? Thanks.
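
For what it's worth, the standard trick for multi-step forecasts with a one-step-ahead model is to iterate: predict one step, append the prediction to the input window, and repeat. A minimal sketch (my own code, not from the book; it assumes a model in the style of chapter 21 that maps a (1, n_timesteps, 1) window to a single next value):

library(torch)

forecast_ahead <- function(model, last_window, horizon = 52) {
  model$eval()
  n_timesteps <- last_window$shape[2]  # last_window: (1, n_timesteps, 1)
  preds <- numeric(horizon)
  with_no_grad({
    for (i in seq_len(horizon)) {
      p <- model(last_window)          # one-step-ahead prediction
      preds[i] <- as.numeric(p)
      # drop the oldest time step, append the prediction as the newest
      last_window <- torch_cat(
        list(last_window[, 2:n_timesteps, ], p$view(c(1, 1, 1))),
        dim = 2
      )
    }
  })
  preds
}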

tensor dataset

The package modeldata no longer has the dataset "okc" used in the book's Tensors chapter.

Understanding convnet – simpler way to make the magic go away

In subsection 15.2.2.1, you wrote:

If this sounds like magic, there is a simple means to make the magic go away. Namely, a simple way to find out about tensor shapes at any stage in a network is to comment all subsequent actions in forward(), and call the modified model.

I'm thinking of a simpler solution. Print the model first:

img <- torch_randn(1, 1, 64, 64)
model <- convnet()
model
An `nn_module` containing 30,211 parameters.

── Modules ────────────────────────────────
• conv1: <nn_conv2d> #160 parameters
• conv2: <nn_conv2d> #4,640 parameters
• conv3: <nn_conv2d> #18,496 parameters
• output: <nn_linear> #6,915 parameters

Then, if it were possible to cut out a submodel, we could run just that part:

submodel <- model['conv2']
submodel(img)$size()
#> 1 32 29 29

I'm making this code up, of course, but maybe there is a real way to express the line submodel <- model['conv2'].
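
For what it's worth, a partial workaround along these lines (my own sketch; it assumes the layers are stored as self$conv1, self$conv2, ... as in the book's convnet(), and it skips whatever activations or pooling forward() applies in between, so the shapes can differ from those of the full network):

library(torch)

img <- torch_randn(1, 1, 64, 64)
model <- convnet()  # the module defined in section 15.2.2.1

# call the first two convolutions directly, bypassing forward()
out <- model$conv2(model$conv1(img))
out$size()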

Add "how to cite"

Hi Sigrid, hope you are well. I would like to request a copy-and-pasteable version of how to cite this book.

Thank you for this book, by the way; I'm learning a lot from it.

reach out for help

Hi Keydana, first of all, thanks a lot for writing such an amazing book on deep learning. It's my first deep learning book and I'm still reading it. When I got to the luz chapter, a question came to mind: how do I use the L-BFGS optimizer with luz? Could you give me a simple example? Thanks a lot.
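
(A pattern that appears to work, judging from the L-BFGS issue further down this page: pass optim_lbfgs to setup(), and override the module's step() so that, during training, ctx$opt$step() is handed a closure that recomputes the loss; the complete script is reproduced in that issue.)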

Should the reference to a 3-shaped tensor in 3.6 Broadcasting rather be to a 5-shaped tensor?

In the third-to-last paragraph of the Broadcasting section, it says:

To a tensor of shape 3 x 5, we were able to add both a tensor of shape 3 and a tensor of shape 1 x 5.

However, the code

t3$shape

returns 5, so the use of 3 in the text is a bit confusing. Could it be that you meant to say shape 5 instead of shape 3?

If so, the second item of the final list in the section should also be updated.

If not, perhaps you can elaborate a bit on why you refer to it as shape 3 and not as shape 5?
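
For reference, a quick check in torch (my own snippet, not from the book) of why shape 5 lines up with a 3 x 5 tensor while shape 3 does not: broadcasting compares sizes starting from the last dimension.

library(torch)

t <- torch_ones(3, 5)

t5 <- torch_tensor(c(1, 2, 3, 4, 5))  # shape 5
(t + t5)$shape                        # 3 5 -- broadcasts fine

t3 <- torch_tensor(c(1, 2, 3))        # shape 3
# t + t3  # errors: last dimensions are compared first, and 3 != 5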

Minor inconsistency in 3.2.1 - [ CPUFloatType{5} ] should read [ CUDALongType{5} ]

Thank you for writing and sharing this great book.

I noticed a minor inconsistency in 3.2.1 Tensors from values. Where it currently reads:

Analogously, the default device is the CPU; but we can also create a tensor that, right from the outset, is located on the GPU:

torch_tensor(1:5, device = "cuda")

torch_tensor
1
2
3
4
5
[ CPUFloatType{5} ]

the output should read: [ CUDALongType{5} ].
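
A quick way to verify the reasoning (my own snippet): R integers map to torch's long dtype, so the printout for a tensor created from 1:5 on the GPU should indeed end in CUDALongType, not CPUFloatType.

library(torch)

t <- torch_tensor(1:5)
t$dtype  # torch_Long
# with device = "cuda", the same values display as [ CUDALongType{5} ]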

! R type not handled error in the example 14.3.1 with the optimizer replaced by `optim_lbfgs`

Thanks for sharing the example code in issue #3.

The example inspired me to change the code in section 14.3.1 (A more realistic scenario) to use the optim_lbfgs optimizer.

The code at the end of this issue works, but after uncommenting the line

metrics = list(luz_metric_mae())

fitting produces several errors, which are reproduced below.

Epoch 1/20
Error in `FUN()`:
! Error while calling callback with class <metrics_callback/LuzCallback/R6> at
  on_train_batch_end.
Caused by error in `FUN()`:
! Error when evaluating update for metric with abbrev "MAE" and class <LuzMetric/R6>
ℹ The error happened at iter 1 of epoch 1.
ℹ The model was in training mode.
Caused by error in `torch_tensor_cpp()`:
! R type not handled

Would it be possible to resolve this error? Thanks.

#! /usr/bin/env Rscript

library(torch)
library(luz)

n <- 1000
d_in <- 3

x <- torch_randn(n, d_in)
coefs <- c(0.2, -1.3, -0.5)
y <- x$matmul(coefs)$unsqueeze(2) + torch_randn(n, 1)

ds <- tensor_dataset(x, y)
dl <- dataloader(ds, batch_size = 100, shuffle = TRUE)

d_hidden <- 32
d_out <- 1

net <- nn_module(
  initialize = function(d_in, d_hidden, d_out) {
    self$net <- nn_sequential(
      nn_linear(d_in, d_hidden),
      nn_relu(),
      nn_linear(d_hidden, d_out)
    )
  },
  forward = function(x) {
    self$net(x)
  },
  step = function() {
    if (ctx$training) {
      # L-BFGS evaluates the objective several times per update, so the
      # optimizer is handed a closure that recomputes loss and gradients
      closure <- function() {
        pred <- ctx$model(ctx$input)
        loss <- ctx$loss_fn(pred, ctx$target)
        loss$backward()
        loss
      }
      ctx$loss <- ctx$opt$step(closure)
    } else {
      # validation: a single forward pass suffices
      pred <- ctx$model(ctx$input)
      ctx$loss <- ctx$loss_fn(pred, ctx$target)
    }
  }
)

train_ids <- sample(
  1:length(ds),
  size = 0.6 * length(ds))

valid_ids <- sample(
  setdiff(1:length(ds), train_ids),
  size = 0.2 * length(ds)
)

test_ids <- setdiff(1:length(ds), union(train_ids, valid_ids))

train_ds <- dataset_subset(ds, indices = train_ids)
valid_ds <- dataset_subset(ds, indices = valid_ids)
test_ds <-  dataset_subset(ds, indices = test_ids)

train_dl <- dataloader(train_ds, batch_size = 100, shuffle = TRUE)
valid_dl <- dataloader(valid_ds, batch_size = 100)
test_dl <-  dataloader(test_ds, batch_size = 100)

fitted <- net |>
  setup(
    loss = nn_mse_loss(),
    optimizer = optim_lbfgs,
    # metrics = list(luz_metric_mae())
  ) |>
  set_hparams(d_in = d_in, d_hidden = d_hidden, d_out = d_out) |>
  set_opt_hparams(line_search_fn = "strong_wolfe") |>
  fit(train_dl, epochs = 20, valid_data = valid_dl)
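
A guess at the cause (mine, not verified): luz metrics read ctx$pred, which this custom step() never sets, so the MAE update receives NULL and torch_tensor_cpp() fails with "R type not handled". Recording the prediction inside step() might fix it, e.g. in the training closure (and analogously in the validation branch):

closure <- function() {
  pred <- ctx$model(ctx$input)
  ctx$pred <- pred  # hypothetical fix: expose predictions to metric callbacks
  loss <- ctx$loss_fn(pred, ctx$target)
  loss$backward()
  loss
}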

okc dataset is no longer in modeldata package

Hi,

Firstly, thanks for a very nice book and sharing it for free.

In subsection 3.2.3 Tensors from datasets, you used okc dataset from modeldata. Of note, the dataset is no longer in the latest version of modeldata (version 1.2.0). Maybe you want to consider changing the dataset.

Best regards
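
A possible workaround (my suggestion, not from the book): any data frame with numeric columns converts the same way, so another modeldata dataset such as ames could stand in:

library(torch)
library(modeldata)

data(ames)
# two numeric columns -> float tensor, as in section 3.2.3
t <- torch_tensor(as.matrix(ames[, c("Lot_Area", "Gr_Liv_Area")]))
t$shape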

1:num_iterations range in for loops

The book contains several loops where I noticed the following pattern:

for (i in 1:num_iterations) {
  #...
  if (i %% 100 == 0) {
    cat("Value is: ", as.numeric(value), "\n")
  }
  #...
}

The first message is printed only after 100 iterations, when i == 100; nothing is printed on the very first iteration.

I propose to change the range to 0:num_iterations:

for (i in 0:num_iterations) {
  #...
  if (i %% 100 == 0) {
    cat("Value is: ", as.numeric(value), "\n")
  }
  #...
}
I think that the most interesting things usually happen at the beginning.

--Włodek Bzyl

`t1$to(device = "cuda")` Error in (function (self, device, dtype, non_blocking, copy, memory_format) : PyTorch is not linked with support for cuda devices

Hello,

I installed CUDA using conda install cuda -c nvidia:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

How should I solve this problem?

t2 <- t1$to(device = "cuda")
Error in (function (self, device, dtype, non_blocking, copy, memory_format)  : 
  PyTorch is not linked with support for cuda devices
Exception raised from getDeviceGuardImpl at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10/core/impl/DeviceGuardImplInterface.h:319 (most recent call first):
00007FFD35C8D242 00007FFD35C8D1E0 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]
00007FFD35C8CE8A 00007FFD35C8CE30 c10.dll!c10::detail::torchCheckFail [<unknown file> @ <unknown line number>]
00007FFC74692106 00007FFC74691F60 torch_cpu.dll!at::Context::getDeviceFromPtr [<unknown file> @ <unknown line number>]
00007FFC74B80F29 00007FFC74B80ED0 torch_cpu.dll!at::native::to [<unknown file> @ <unknown line number>]
00007FFC758C0A23 00007FFC758BAAC0 torch_cpu.dll!at::compositeimplicitautograd::where [<unknown file> @ <unknown line number>]
00007FFC758A8248 00007FFC75860E20 torch_cpu.dll!at::compositeimplicitautograd::broadcast_to_symint [<unknown file> @ <unknown line number>]
00007FFC750DA369 00007FFC750DA190 torch_cpu.dll!at::_ops::to_devic
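
For context, a quick diagnostic (my suggestion): the message "PyTorch is not linked with support for cuda devices" indicates that the installed libtorch build is CPU-only; installing the CUDA toolkit via conda does not by itself give torch GPU support. Whether torch sees a GPU can be checked directly:

library(torch)
cuda_is_available()  # FALSE here would confirm a CPU-only torch build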

any resources about NLP from you?

Hi, when I read chapter 17, it says this book won't cover NLP. I wanted to ask: do you have any plans to write a book, or a series of blog posts, about NLP with torch for R? Haha, really looking forward to it :)

Error reproducing last example in Training for Luz

I am having a hard time reproducing the last lines of the training-with-luz chapter. I get the error Error in nn_mse_loss(output, target) : unused argument (target), but I'm not sure what to do with that information.

> for (epoch in 1:num_epochs) {
+   model$train()
+   train_loss <- c()
+ 
+   # use coro::loop() for stability and performance
+   coro::loop(for (b in train_dl) {
+     loss <- train_batch(b)
+     train_loss <- c(train_loss, loss)
+   })
+ 
+   cat(sprintf(
+     "\nEpoch %d, training: loss: %3.5f \n",
+     epoch, mean(train_loss)
+   ))
+ 
+   model$eval()
+   valid_loss <- c()
+ 
+   # disable gradient tracking to reduce memory usage
+   with_no_grad({ 
+     coro::loop(for (b in valid_dl) {
+       loss <- valid_batch(b)
+       valid_loss <- c(valid_loss, loss)
+     })  
+   })
+   
+   cat(sprintf(
+     "\nEpoch %d, validation: loss: %3.5f \n",
+     epoch, mean(valid_loss)
+   ))
+ }
Error in nn_mse_loss(output, target) : unused argument (target)
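
A note on the likely cause (my inference from the error message, not confirmed in the thread): nn_mse_loss() is a module constructor and takes no (input, target) arguments itself. Either instantiate the module once and call the resulting object, or use the functional form:

library(torch)

output <- torch_randn(8, 1)
target <- torch_randn(8, 1)

# instantiate the module, then call it ...
loss_fn <- nn_mse_loss()
loss_fn(output, target)

# ... or use the functional form directly
nnf_mse_loss(output, target)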
