music-suite / music-score

This repo has been merged into: https://github.com/music-suite/music-suite

Home Page: http://music-suite.github.io

License: BSD 3-Clause "New" or "Revised" License

Haskell 99.86% LilyPond 0.14%

music-score's Introduction


Music Suite

Music Suite is a language for describing music, based on Haskell.


Build Music Suite

Development environment

There are two ways of setting up the development environment:

  1. Using Nix (recommended on Linux)
  2. Manually (recommended on Windows and OS X)

Nix setup

Install the Nix package manager. We recommend version 2.3.1 or later.

Enter environment using:

nix-shell --pure

All build commands should be run in the Nix shell. You can exit the Nix shell using Ctrl-D.

Manual setup

Install lilypond, timidity and ghcup.

Make sure that lilypond, timidity and ghcup are available in your shell environment (e.g. by adding them to PATH).

Use ghcup to install GHC:

ghcup install 8.10.4

Build the library and examples

$ cabal update
$ cabal build

Testing

Run the test suite:

To run all tests except doctests (see below):

$ cabal test --test-show-details=streaming --test-options=--color=always

To run individual tests:

$ cabal run TEST_NAME -- TEST_ARGS...

e.g.

$ cabal run music-suite-test-xml-parser

Expected output (regression tests)

Some tests have expected output, stored in test/regression.

If the output of these tests has changed, the diff should be inspected manually to ensure that the new output is in fact correct. This may be because:

  1. The output is in fact expected to change.
  2. The output has changed in a way that is invisible to the end user, such as a change to an *.ly output file that does not affect the appearance of the printed music.

To identify the latter it may be necessary to run lilypond or timidity on both the old and new versions of the output file. To do this manually after a failure:

  1. (optional) Re-run regression tests to see that it fails: cabal test music-suite-test-regression --test-options=--color=always
  2. For each failing file, run lilypond (for *.ly files) or timidity (for *.mid files).
  3. Regenerate the expected files: cabal test music-suite-test-regression --test-options=--color=always --test-options=--accept
  4. For each changed file, run lilypond or timidity as before.
  5. Inspect the old/new version side by side.
  6. If all are correct, commit the changes to test/regression.

Note: after running --accept, you can use git diff --name-only test/regression to get a list of changed files, assuming the repo was clean before.

Doctests

Music Suite makes use of doctests.

You can pass any file or directory. For example, to test src/Music/Pitch:

$ cabal build music-suite && cabal exec doctester --package music-suite -- src/Music/Pitch

To test a single file:

$ cabal build music-suite && cabal exec doctester --package music-suite -- src/Music/Score/Meta.hs

To run all doctests use (Nix only):

$ doctests

Development shell

$ cabal build music-suite && cabal exec --package music-suite ghci

or

$ cabal repl

Build the documentation

User Guide

See these instructions.

The output appears in docs/build. You can point an HTTP server at this directory.

API docs

$ cabal haddock

Run example

$ cabal exec runhaskell -- examples/chopin.hs -f ly -o t.ly

Continuous Integration

To replicate all steps run by the CI (Nix only), run:

$ ci

How to upgrade the compiler

We use Nix to pin the version of GHC, and Cabal freeze files to pin the versions of all Haskell dependencies. This section describes how to upgrade GHC.

Because GHC pins a version of the Haskell base library, GHC and the Cabal dependencies need to be upgraded together. This is the recommended workflow:

  1. Update the commit/URL and hash in default.nix
  2. Use $ nix-prefetch-url --unpack <url> to obtain the hash (and verify)
  3. Enter new Nix shell (may take a while)
  4. Update the ghc-version field in cabal.project to whatever is printed by ghc --version
  5. Comment out reject-unconstrained-dependencies in cabal.project
  6. Update index-state in Cabal config to a recent time
  7. Run cabal update
  8. Run rm cabal.project.freeze
  9. Run cabal freeze
  10. Run cabal test to check that compiling/testing works (and fix errors)
  11. Restore reject-unconstrained-dependencies
  12. Commit your changes.

How to add a new dependency

  1. Comment out reject-unconstrained-dependencies in cabal.project
  2. Add the dependency to music-suite.cabal
  3. Update index-state in Cabal config to a recent time
  4. Run cabal freeze
  5. Run cabal test to check that compiling/testing works (and fix errors)
  6. Restore reject-unconstrained-dependencies
  7. Commit your changes.

Developer notes

Module hierarchy

  • The high-level DSL:

  • The notation DSL:

  • Import & Export:

  • Utility

    • Control.*: miscellaneous algorithms and utilities
    • Data.*: miscellaneous data structures

music-score's People

Contributors

armlesshobo, hanshoglund, ryanglscott, scrambledeggsontoast, sindikat


music-score's Issues

Change score semantics to [(Time, Duration, a)], i.e. remove Maybe

I think we should instead let the user decide if they want rests in their score transformer stack. We could say type RestT = Maybe.

Accordingly change type of

rest :: Default a => Score a

This would remove the need for perform and possibly simplify other things too. However there might be consequences.
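
The proposal can be sketched as follows. This is a hypothetical standalone model, not the library's actual API: the score carries plain events, and rests become an opt-in Maybe layer.

```haskell
type Time     = Rational
type Duration = Rational

-- Proposed semantics: no built-in Maybe; every event carries a value.
newtype Score a = Score [(Time, Duration, a)]
  deriving (Show, Eq)

-- Rests become a user-chosen layer in the transformer stack.
type RestT = Maybe

-- A rest is then just an event carrying Nothing (duration 1 assumed here).
rest :: Score (RestT a)
rest = Score [(0, 1, Nothing)]

-- Dropping the rest layer recovers a plain Score.
removeRests :: Score (RestT a) -> Score a
removeRests (Score xs) = Score [(t, d, a) | (t, d, Just a) <- xs]
```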

Sibelius input

Specifically, copy-paste style music import. Probably do a separate package sibelius.

Voice.moveToPart semantics.

This combinator can be used as follows.

moveToPart fl e <> moveToPart cl c

However, it will not move previously existing voices.

moveToPart ob (moveToPart cl c)

Move combinators from music-imitator

In particular: sample, gate, take, rep, group, groupWith, repTimes, repWith

Renaming (?)

sampleS, gateS, takeS, repeatS, groupS, groupWithS, repeatTimesS, repeatWithS

Swap names of Voice and Part

That is:

  • Voice becomes the name of the type [(Duration, a)]
  • Part becomes the name of the Score.Part module, the transformer PartT etc.

Homophonic representation

Maybe a new transformer newtype ChordT = [].

Then a function to extract simultaneous notes Score a -> Score (ChordT a) and one to flatten (HasChord a, ChordNote a ~ b) => Score a -> Score b.

Would be useful for certain export formats.
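
A minimal sketch of the idea, using a simplified standalone Score type (one simplifying assumption: each chord takes its duration from its first note):

```haskell
import Data.Function (on)
import Data.List (groupBy, sortOn)

type Time     = Rational
type Duration = Rational

newtype Score a  = Score [(Time, Duration, a)] deriving (Show, Eq)
newtype ChordT a = ChordT [a]                  deriving (Show, Eq)

-- Group simultaneous notes (same onset) into chords.
chordify :: Score a -> Score (ChordT a)
chordify (Score xs) =
  Score [ (t, d, ChordT [a | (_, _, a) <- grp])
        | grp@((t, d, _) : _) <- groupBy ((==) `on` onsetOf) (sortOn onsetOf xs) ]
  where
    onsetOf (t, _, _) = t

-- Flatten chords back into individual notes.
flatten :: Score (ChordT a) -> Score a
flatten (Score xs) = Score [ (t, d, a) | (t, d, ChordT as) <- xs, a <- as ]
```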

Tie splitting

Basically

class Tiable a where
    -- | Split an element into beginning and end, adding a tie.
    --   Begin properties go to the first tied note, and end properties to the second.
    toTie :: a -> (a, a)

-- | Split all notes that cross a barline into a pair of tied notes.
splitTies :: Tiable a => Score a -> Score a
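
A minimal executable sketch of this interface, assuming a simplified Score type, a fixed bar length of 1, and at most one barline crossing per note (all simplifications over the real engine):

```haskell
type Time     = Rational
type Duration = Rational

newtype Score a = Score [(Time, Duration, a)] deriving (Show, Eq)

class Tiable a where
  toTie :: a -> (a, a)

-- Trivial instance for demonstration: both halves keep the value.
instance Tiable Char where
  toTie a = (a, a)

-- Any note crossing the next barline is split into two tied halves.
splitTies :: Tiable a => Score a -> Score a
splitTies (Score xs) = Score (concatMap split xs)
  where
    split (t, d, a)
      | t + d > bar = let (b, e) = toTie a
                          d1     = bar - t
                      in  [(t, d1, b), (bar, d - d1, e)]
      | otherwise   = [(t, d, a)]
      where
        bar = fromIntegral (floor t + 1 :: Integer)
```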

Clarify the role of part vs. score vs single part score

We often use single-part scores in the implementation, that is, scores which carry an informal guarantee of having no overlapping events.

  • We usually want to manipulate things which have both offset and duration (i.e. scores and voices). Often we need to work on single voices.
  • Voice is great because it enforces no overlaps in its type
  • However it is inconvenient not to have the offsets around, forcing us to do accumulation over time.
  • Maybe add a newtype wrapper for score with the static guarantee, that is, a hidden implementation and safe constructors Part (Maybe a) -> SingleVoiceScore a, SingleVoiceScore a -> Voice (Maybe a).

Factor out separate type for Score without rests?

We now have the following semantics:

    type Track a = [(Time, a)]
    type Part a  = [(Duration, a)]
    type Score a = [(Time, Duration, Maybe a)]

I have a feeling that the Maybe may not be necessary for anything but implementing rest. Indeed, most Score instances seem to be juggling around it. On the other hand, a score without rests does not really seem like a musical score to me.

Note that we can remove all rests from a score and still get an unaffected score as far as the notes are concerned. However, without rests a note-less score has no duration or offset, so we can not do

    c |> d |> rest^*2

Onset, offset, duration

This is related to the onset/offset/duration mixup. I am unsure what onset/offset/duration should mean for the different types.

The duration laws are a starting point:

    duration a = offset a - onset a
    duration a >= 0

accordingly

    offset a   >= onset a

Intuition says that a |> b should place the onset of b at the offset of a, i.e.

    a |> b =  a <|> startAt (offset a) b
           =  a <|> delay (offset a - onset b) b

In the original library, onset was always zero, so we could simply do

    a |> b  =  a <|> delay (duration a) b
            =  a <|> delay (offset a - onset a) b    -- as per the duration law
            =  a <|> delay (offset a - 0) b          -- as onset a = 0
            =  a <|> delay (offset a - onset b) b    -- as onset b = 0
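
The derivation can be checked with a tiny numeric model, using a hypothetical Span type (just an onset/offset pair) as a stand-in for scores:

```haskell
type Time = Rational

data Span = Span { onset :: Time, offset :: Time } deriving (Show, Eq)

-- The duration law.
duration :: Span -> Time
duration a = offset a - onset a

delay :: Time -> Span -> Span
delay t (Span on off) = Span (on + t) (off + t)

-- Parallel composition: the union of the two spans.
(<|>) :: Span -> Span -> Span
Span on1 off1 <|> Span on2 off2 = Span (min on1 on2) (max off1 off2)

-- Sequential composition: place the onset of b at the offset of a.
(|>) :: Span -> Span -> Span
a |> b = a <|> delay (offset a - onset b) b
```

With onset a == 0 and onset b == 0 the two formulations of |> coincide, as in the derivation.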

Instances

Track and Score have more or less identical onset and offset instances:

    instance HasOnset (Track a) where
        onset x  = minimum (map on x)    where on  (t,x) = t
        offset x = maximum (map off x)   where off (t,x) = t

    instance HasOnset (Score a) where
        onset x  = minimum (map on x)    where on  (t,d,x) = t
        offset x = maximum (map off x)   where off (t,d,x) = t + d

This also gives us duration instances for Track and Score, per the duration laws.

    instance HasOnset a => HasDuration a where
        duration x = offset x - onset x

Note that in a track/score with onset zero (let us call that a normalized track/score), we still have duration = offset.

Part is different: duration is sum and there is no onset/offset.

    instance HasDuration (Part a) where
        duration x = sum (map duration x)

Note that while tracks, scores and parts all have scale, only tracks and scores have delay. Put differently, parts are completely relative in time (like vectors), while scores and tracks are absolute (like points). Maybe we should rename parts to reflect this?

If a score is notes with possibly empty space around them, and onset, offset and duration are determined by the note occurrences, then a score without values can not have a duration. Or rather, it is error "empty list". Disambiguate as follows:

    duration mempty = 0
    onset mempty = 0
    offset mempty = 0

We want to have a function rest :: Score a, analogous to note :: a -> Score a. The purpose of a rest is simply to allow sequential catenation (juxtaposition) without values (similar to strut in diagrams). We can do rests either by wrapping score elements in Maybe, or by maintaining a separate duration used when there are no elements. I choose the Maybe option as it is clearer.

In the original library, I used an implementation like delay t x = rest^*t |> x. This does not make sense now, as rests are themselves elements (i.e. this implementation of delay would affect duration but not onset, instead of the other way around).

Summary

Think of it like this:

  • Rests (Nothing) are not a way to encode empty space (as they are in a classical score).
  • Rather, each score (or note, track etc.) has a nominal onset and offset. There may be sound outside these values (see pre-onset etc. below); they are defined as the logical start (attack action) and stop (damp action) times.
  • Redefine sequential and parallel composition to align so that offset a == onset b, or onset a == onset b, respectively. For instantaneous things, sequential composition is (of course) not defined.
  • Rests are simply empty scores for padding purposes. We remove them with removeRests.

Side notes

Diagrams

Compare this to (===) and (|||) in diagrams. Normally, all diagrams have an intuitive bounding box. However, functions like strut allow us to create empty diagrams with bounding boxes for juxtaposition purposes. I want something similar, and not just for Score (Maybe a), but the real thing.

Prepared notes

Think of something like an ADSR envelope. Logically, the onset of the note is the start of the attack phase. However, under parallel catenation we want the maximum level (in between A and D) to be the point of synchronization. It makes sense to think of an ADSR as having five interesting events: pre-onset <= onset <= post-onset <= offset <= post-offset. Given onset and offset (or onset and duration), the pre and post events are determined by the qualities of the instrument: how long it takes to excite, stabilize and tranquilize. We can thus amend our duration laws:

    excite      a  =  onset a      - preOnset a
    stabilize   a  =  postOnset a  - onset a
    tranquilize a  =  postOffset a - offset a

    excite, stabilize, tranquilize > 0
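
These laws can be written down directly against a hypothetical five-event record (names here are illustrative, not existing library API):

```haskell
type Time = Rational

-- The five interesting events of an enveloped note.
data Envelope = Envelope
  { preOnset, onset, postOnset, offset, postOffset :: Time }

-- The amended duration laws as derived quantities.
excite, stabilize, tranquilize :: Envelope -> Time
excite      a = onset a      - preOnset a
stabilize   a = postOnset a  - onset a
tranquilize a = postOffset a - offset a
```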

Clef selection

Currently, all notation output uses the default clefs. We should support auto-selection of clefs based on tessitura, as well as explicit selection.

The steps to this are roughly:

  • Add a clef transformer
  • Add a clef hint combinator
  • Add a pass that uses tessitura etc as well as hints to replace hints with actual clef directions.
  • Interpret directions in notation output.

See #61.

Quantization/splitting: Bounded rhythms not properly splitted in notation output

See example:

(screenshot omitted)

Cause

The basic problem is that the rhythm representation (see below) is too weak.

data Rhythm a
    = Beat       Duration a                    -- d is divisible by 2
    | Group      [Rhythm a]                    -- normal note sequence
    | Dotted     Int (Rhythm a)                -- n > 0
    | Tuplet     Duration (Rhythm a)           -- d is an element of 'tupletMods'
    | Bound      Duration (Rhythm a)           -- tied from duration

The semantics of Beat, Group, Dotted and Tuplet are clear and compositional. However, Bound is ambiguous: it can be notated by adding the duration as an extra beat before or after the sub-rhythm encapsulated by the Bound constructor.

Problem 1

For example, in Group [Dotted 1 (Beat (1/4) a), Bound (1/2) (Beat (1/8) b)] the Bound constructor should be expanded as eighth tied to half (bound duration after), while in Group [Bound (1/2) (Beat (1/8) b), Dotted 1 (Beat (1/4) a)] it should be half tied to eighth (bound duration before). However, this can not be determined by looking at the Bound constructor alone; it depends on its place in the group context.

Problem 2

It is also unclear how to use it for representing ties between subdivisions. Consider Group [Tuplet (2/3) $ Group [...], Tuplet (4/5) $ Group [...]], with a tied note between the two groups. If the tied note is represented using Bound, either the first or the second group will have one note less than expected. Even worse, the bound constructor will have a "raw" duration (in this case 1/4 or 1/5) with no clear indication on how it was quantized.

Note: The quantization engine can not parse such rhythms at the moment. The representation should allow them just as well.

Solution 1

  • Problem 1 can be solved by disambiguating Bound, either by adding an extra Bool field or by making it two separate constructors. However, this does not solve problem 2.

Solution 2

  • Remove Bound and add a constructor Tied BeginEnd (Rhythm a) to indicate a tied note. Then we can do Group [Tuplet (2/3) $ Group [... Tied Begin a], Tuplet (4/5) $ Group [Tied End a ...]] or similar.
  • This is problematic as it requires a tie split. Proper tie splitting (w.r.t. dynamics etc.) requires a Tiable instance. That was one of the reasons we used Bound (which does not split, but simply stores an extra duration) in the first place.

Solution 3 (best)

  • Remove Bound and fall back on Tiable. We need to add a Tiable constraint to the quantization engine.
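
Under Solution 3, the rhythm tree might look as follows (a sketch, not the actual implementation). Note that total duration becomes fully compositional once the ambiguous Bound case is gone:

```haskell
type Duration = Rational

-- The rhythm tree with Bound removed; ties are produced via the
-- Tiable class instead of being stored in the tree.
data Rhythm a
  = Beat   Duration a           -- duration divisible by 2
  | Group  [Rhythm a]           -- normal note sequence
  | Dotted Int (Rhythm a)       -- number of dots, n > 0
  | Tuplet Duration (Rhythm a)  -- duration drawn from tupletMods

class Tiable a where
  toTie :: a -> (a, a)

-- Every case is now unambiguous, so duration is a simple fold:
-- n dots multiply the duration by (2 - (1/2)^n).
rhythmDuration :: Rhythm a -> Duration
rhythmDuration (Beat d _)   = d
rhythmDuration (Group rs)   = sum (map rhythmDuration rs)
rhythmDuration (Dotted n r) = rhythmDuration r * (2 - (1/2) ^ n)
rhythmDuration (Tuplet d r) = d * rhythmDuration r
```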

Key selection

Currently, all notation output uses the default key signature. We should support auto-selection of key based on harmonic material, as well as explicit selection.

The steps to this are roughly:

  • Add a key transformer
  • Add a key hint combinator
  • Add a pass that uses pitches etc. as well as hints to replace hints with actual key directions.
  • Interpret directions in notation output.

See #61.

Factor out Articulation, Dynamics and Ornament representations to associated types

At least dynamics should be factored out.

Spontaneously, I think articulation does not belong in music-score (it is very CMT-centric). On the other hand, what is the generalization? There are not so many ways of talking about separation/accents.

We have some generic pitch transformations, which depend on the pitch type being an instance of AffineSpace and so on. We could easily add similar functions for dynamics. What about articulation and ornaments?

Related to/depends on #73

Move Time and Rhythm to separate package

This should be grown and modularized and will outgrow this package. Maybe music-rhythm.

Requires splitting time into music-time. A worse problem is that it also requires us to split Tiable.

Better default pitch type

We need a nice default pitch type. Current projects all use Integer or Double at the bottom of the note stack, which is enharmonically ambiguous.

This leads to strange spelling. For example,

openXml $ score $ melody [c,cs,db]

spells as c, ds, ds instead of c, cs, db.
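
One possible shape for such a type (hypothetical, for illustration only): a diatonic name plus an accidental, so that c sharp and d flat remain distinct even though they share a semitone number.

```haskell
data Name = C | D | E | F | G | A | B deriving (Show, Eq, Enum)

-- A spelled pitch: diatonic name plus accidental (-1 = flat, 1 = sharp).
data Pitch = Pitch { name :: Name, accidental :: Int }
  deriving (Show, Eq)

-- Collapsing to semitone numbers is where the ambiguity arises.
semitones :: Pitch -> Int
semitones (Pitch n acc) = base n + acc
  where base m = [0, 2, 4, 5, 7, 9, 11] !! fromEnum m

cs, db :: Pitch
cs = Pitch C 1
db = Pitch D (-1)
```

With this representation, cs and db map to the same semitone but stay distinct values, so the spelling survives export.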
