music-suite / music-score
This repo has been merged into: https://github.com/music-suite/music-suite
Home Page: http://music-suite.github.io
License: BSD 3-Clause "New" or "Revised" License
Depends on #324 (as we don't want to have to rewrite yet another backend)
(Retain semantics and use TCMs to assure correct implementation.)
That is:
- Voice becomes the name of the type [(Duration, a)]
- Part becomes the name of the Score.Part module, the transformer PartT, etc.

I.e. prefer small tuplets (possibly bound) to big ones.
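The Voice/Part renaming described above might be sketched as follows. This is only an illustration: Duration as Rational, the PartT shape, and voiceDuration are assumptions, not the final API.

```haskell
type Duration = Rational

-- Old: type Part a = [(Duration, a)]; after the rename this becomes:
type Voice a = [(Duration, a)]

-- The name Part is freed up for the Score.Part module; the part
-- transformer keeps the name PartT (shape assumed here):
newtype PartT p a = PartT { getPartT :: (p, a) }
  deriving (Eq, Show)

-- Duration of a voice is the sum of its note durations.
voiceDuration :: Voice a -> Duration
voiceDuration = sum . map fst
```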
Currently, all notation output uses the default key signature. We should support auto-selection of the key signature based on harmonic material, as well as explicit selection.
The steps to get there are roughly:
See #61.
This should be grown and modularized, and will outgrow this package. Maybe music-rhythm.
Requires splitting time into music-time. A worse problem is that it also requires us to split Tiable, which goes between tie and type.
For notating bounded rhythms such as 5/8+3/8 etc., requiring in-bar ties.
Maybe a new transformer newtype ChordT a = ChordT [a].
Then a function to extract simultaneous notes Score a -> Score (ChordT a), and one to flatten (HasChord a, ChordNote a ~ b) => Score a -> Score b.
Would be useful for certain export formats.
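A hypothetical sketch of the two functions, with Score simplified to a plain list of (time, duration, value) triples for illustration (the real representation differs):

```haskell
import Data.Function (on)
import Data.List (groupBy, sortOn)

type Time     = Rational
type Duration = Rational
type Score a  = [(Time, Duration, a)]

newtype ChordT a = ChordT { getChordT :: [a] }
  deriving (Eq, Show)

-- Group simultaneous notes (same onset) into chords; the chord takes
-- the duration of its first note (a simplification).
extractChords :: Score a -> Score (ChordT a)
extractChords xs =
  [ (t, d, ChordT [a | (_, _, a) <- g])
  | g@((t, d, _) : _) <- groupBy ((==) `on` onsetOf) (sortOn onsetOf xs) ]
  where onsetOf (t, _, _) = t

-- Flatten chords back into individual notes.
flattenChords :: Score (ChordT a) -> Score a
flattenChords xs = [ (t, d, a) | (t, d, ChordT as) <- xs, a <- as ]
```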
I.e. remove assumptions of 4/4.
See the example below.
The basic problem is that the rhythm representation (see below) is too weak.
data Rhythm a
    = Beat Duration a            -- d is divisible by 2
    | Group [Rhythm a]           -- normal note sequence
    | Dotted Int (Rhythm a)      -- n > 0
    | Tuplet Duration (Rhythm a) -- d is an element of 'tupletMods'
    | Bound Duration (Rhythm a)  -- tied from duration
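For reference, the notated duration of a Rhythm can be computed compositionally for every constructor; for Bound the extra duration is simply added on. This is a sketch only: the dot factor (2 - 2^-n) and the treatment of Bound are assumptions about the intended semantics, not the library's code.

```haskell
type Duration = Rational

data Rhythm a
  = Beat Duration a
  | Group [Rhythm a]
  | Dotted Int (Rhythm a)
  | Tuplet Duration (Rhythm a)
  | Bound Duration (Rhythm a)

-- Total duration of a rhythm (assumed semantics, see above).
realDuration :: Rhythm a -> Duration
realDuration (Beat d _)   = d
realDuration (Group rs)   = sum (map realDuration rs)
realDuration (Dotted n r) = realDuration r * (2 - (1/2)^n)
realDuration (Tuplet d r) = d * realDuration r
realDuration (Bound d r)  = d + realDuration r
```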
The semantics of Beat, Group, Dotted and Tuplet are clear and compositional. However, Bound is ambiguous: it can be notated by adding the duration as an extra beat either before or after the sub-rhythm encapsulated by the Bound constructor.
For example, in Group [Dotted 1 (Beat (1/4) a), Bound (1/2) (Beat (1/8) b)] the bound constructor should be expanded as eighth-to-half (bound duration after), while in Group [Bound (1/2) (Beat (1/8) b), Dotted 1 (Beat (1/4) a)] it should be half-to-eighth (bound duration before). However, this cannot be determined by looking at the bound constructor alone: it depends on its place in the group context.
It is also unclear how to use it for representing ties between subdivisions. Consider Group [Tuplet (2/3) $ Group [...], Tuplet (4/5) $ Group [...]], with a tied note between the two groups. If the tied note is represented using Bound, either the first or the second group will have one note less than expected. Even worse, the bound constructor will have a "raw" duration (in this case 1/4 or 1/5) with no clear indication of how it was quantized.
Note: The quantization engine can not parse such rhythms at the moment. The representation should allow them just as well.
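One of the alternatives discussed here replaces Bound with an explicit Tied constructor. A sketch of that variant (constructor names taken from the proposal; tieMarks is a hypothetical helper for illustration):

```haskell
type Duration = Rational

data BeginEnd = Begin | End
  deriving (Eq, Show)

-- Rhythm variant where ties are marked in place rather than carried
-- as a raw extra duration.
data Rhythm a
  = Beat Duration a
  | Group [Rhythm a]
  | Dotted Int (Rhythm a)
  | Tuplet Duration (Rhythm a)
  | Tied BeginEnd (Rhythm a)   -- replaces Bound

-- The problematic cross-tuplet tie can now be written with the note
-- present in both groups, each half carrying its quantized duration:
example :: Rhythm Char
example = Group
  [ Tuplet (2/3) (Group [Beat (1/4) 'a', Tied Begin (Beat (1/4) 'x')])
  , Tuplet (4/5) (Group [Tied End (Beat (1/5) 'x'), Beat (1/5) 'b'])
  ]

-- Collect tie marks in traversal order (illustration only).
tieMarks :: Rhythm a -> [BeginEnd]
tieMarks (Beat _ _)   = []
tieMarks (Group rs)   = concatMap tieMarks rs
tieMarks (Dotted _ r) = tieMarks r
tieMarks (Tuplet _ r) = tieMarks r
tieMarks (Tied be r)  = be : tieMarks r
```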
Possible solutions:
1. Disambiguate Bound, by adding an extra Bool field or making it two separate constructors. However, this does not solve problem 2.
2. Remove Bound and add a constructor Tied BeginEnd (Rhythm a) to indicate a tied note. Then we can do Group [Tuplet (2/3) $ Group [... Tied Begin a], Tuplet (4/5) $ Group [Tied End a ...]] or similar. This requires splitting notes, i.e. a Tiable instance. That was one of the reasons we used Bound (which does not split, but simply stores an extra duration) in the first place.
3. Keep Bound and fall back on Tiable. We need to add a Tiable constraint to the quantization engine.

Move types to separate packages (music-articulation and music-dynamics)
At least dynamics should be factored out.
Spontaneously, I think articulation does not belong in music-score (it is very CMT-centric). On the other hand, what is the generalization? There are not so many ways of talking about separation/accents.
We have some generic pitch transformation, which depend on the pitch type being an instance of AffineSpace
and so on. We could easily add similar functions for dynamics. What about articulation and ornaments?
Related to/depends on #73
Seems to happen when run in GHCi, especially when reloading the module a lot. Reproduce better?
In particular: sample, gate, take, rep, group, groupWith, repTimes, repWith,
as well as sampleS, gateS, takeS, repeatS, groupS, groupWithS, repeatTimesS, repeatWithS.
I think scoreToTrack et al. have strange semantics.
This combinator can be used as follows.
moveToPart fl e <> moveToPart cl c
However, it will not move previously existing voices.
moveToPart ob (moveToPart cl c)
We often use single-part scores in the implementation, that is, scores which have an informal guarantee of no overlapping events.
Part (Maybe a) -> SingleVoiceScore a, SingleVoiceScore a -> Voice (Maybe a).

We now have the following semantics:
type Track a = [(Time, a)]
type Part a = [(Duration, a)]
type Score a = [(Time, Duration, Maybe a)]
I have a feeling that the Maybe may not be necessary for anything but implementing rest
. Indeed, most Score
instances seem to be juggling around it. On the other hand, a score without rests does not really seem like a musical score to me.
Note that we can remove all rests from a score and still get an unaffected score as far as the notes are concerned. However, without rests a note-less score does not have a duration or offset, so we cannot do
c |> d |> rest^*2
This is related to the onset/offset/duration mixup. I am unsure what onset/offset/duration should mean for the different types.
The duration laws are a starting point:
duration a = offset a - onset a
duration a >= 0
accordingly
offset a >= onset a
Intuition says that a |> b
should place the onset of b
at the offset of a
, i.e.
a |> b = a <|> startAt (offset a) b
= a <|> delay (offset a - onset b) b
In the original library, onset was always zero, so we could simply do
a |> b = a <|> delay (duration a) b
       = a <|> delay (offset a - onset a) b  -- per the duration law
       = a <|> delay (offset a - 0) b        -- onset a = 0
       = a <|> delay (offset a - onset b) b  -- onset b = 0
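The law can be exercised concretely on the list representation. A sketch: Rational for Time/Duration and the use of (++) for merging are simplifications for illustration, not the library's definitions.

```haskell
type Time     = Rational
type Duration = Rational
type Score a  = [(Time, Duration, Maybe a)]

onset, offset :: Score a -> Time
onset  = minimum . map (\(t, _, _) -> t)
offset = maximum . map (\(t, d, _) -> t + d)

-- Shift every event forward in time.
delay :: Time -> Score a -> Score a
delay t = map (\(t', d, x) -> (t' + t, d, x))

-- Sequential composition: place the onset of b at the offset of a.
(|>) :: Score a -> Score a -> Score a
a |> b = a ++ delay (offset a - onset b) b
```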
Track and Score have more or less identical onset and offset instances:
instance HasOnset (Track a) where
    onset  x = minimum (map on x)  where on  (t,a) = t
    offset x = maximum (map off x) where off (t,a) = t

instance HasOnset (Score a) where
    onset  x = minimum (map on x)  where on  (t,d,a) = t
    offset x = maximum (map off x) where off (t,d,a) = t + d
This also gives us duration instances for Track and Score, per the duration laws.
instance HasOnset a => HasDuration a where
    duration x = offset x - onset x
Note that in a track/score with onset zero (let us call that a normalized track/score), we still have duration = offset.
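The instances above can be checked against a minimal list model (a sketch; Rational for Time/Duration and the function names are assumptions for illustration):

```haskell
type Time     = Rational
type Duration = Rational
type Track a  = [(Time, a)]
type Score a  = [(Time, Duration, Maybe a)]

trackOnset, trackOffset :: Track a -> Time
trackOnset  = minimum . map fst
trackOffset = maximum . map fst

scoreOnset, scoreOffset :: Score a -> Time
scoreOnset  = minimum . map (\(t, _, _) -> t)
scoreOffset = maximum . map (\(t, d, _) -> t + d)

-- Per the duration law: duration = offset - onset; in a normalized
-- score (onset zero), duration coincides with offset.
scoreDuration :: Score a -> Duration
scoreDuration x = scoreOffset x - scoreOnset x
```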
Part is different: duration is the sum of the durations, and there is no onset/offset.
instance HasDuration (Part a) where
    duration x = sum (map duration x)
Note that while tracks, scores and parts all have scale, only tracks and scores have delay. Put differently, parts are completely relative in time (like vectors), while scores and tracks are absolute (like points). Maybe we should rename parts to reflect this?
If a score is notes with possibly empty space around them, and onset, offset and duration are determined by the note occurrences, then a score without values cannot have a duration. Or rather, it is error "empty list". Disambiguate as follows:
duration mempty = 0
onset mempty = 0
offset mempty = 0
We want to have a function rest :: Score a, analogous to note :: a -> Score a. The purpose of a rest is simply to allow sequential catenation (juxtaposition) without values (similar to strut in diagrams). We can do rests either by wrapping score elements in Maybe, or by maintaining a separate duration field valid only if there are no elements. I choose the Maybe option as it is clearer.
In the original library, I used an implementation like delay t x = rest^*t |> x. This does not make sense now, as rests are themselves elements (i.e. this implementation of delay would affect duration but not onset, instead of the other way around).
Think of it like this:
- A rest (i.e. Nothing) is not a way to encode empty space (as it is in a classical score).
- Notes have an onset and an offset. There may be sound outside these values (see pre-onset etc. below); they are defined as the logical start (attack action) and stop (damp action) times.
- Parallel and sequential composition mean onset a == onset b, or offset a == onset b, respectively. For instantaneous things, sequential composition is (of course) not defined.
- Rests can be stripped from a score without affecting the notes, using removeRests.
.Compare this to (===)
and (|||)
in diagrams. Normally, all diagrams have an intuitive bounding box. However, we can also use functions like strut
allow us to create empty diagrams with bounding boxes for juxtaposition purposes. I feel want something similar, and not just for Score (Maybe a)
, but the real thing.
Think of something like an ADSR envelope. Logically, the onset of the note is the start of the attack phase. However, under parallel catenation we want the maximum level (in between A and D) to be the concurring point. It makes sense to think of an ADSR as having five interesting events: pre-onset <= onset <= post-onset <= offset <= post-offset. Given onset and offset (or onset and duration), the pre and post events are determined by the qualities of the instrument: how long it takes to excite, stabilize and tranquilize. We can thus amend our duration laws:
excite a = onset a - preOnset a
stabilize a = postOnset a - onset a
tranquilize a = postOffset a - offset a
excite, stabilize, tranquilize >= 0
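A record sketch of the five event times and the amended laws. All names, the Rational representation and the wellFormed helper are assumptions for illustration:

```haskell
-- Hypothetical record of the five interesting event times of an
-- ADSR-like note; the laws require them to be non-decreasing.
data Event = Event
  { preOnset   :: Rational
  , onset      :: Rational
  , postOnset  :: Rational
  , offset     :: Rational
  , postOffset :: Rational
  } deriving (Eq, Show)

excite, stabilize, sustain, tranquilize :: Event -> Rational
excite      e = onset e      - preOnset e
stabilize   e = postOnset e  - onset e
sustain     e = offset e     - postOnset e
tranquilize e = postOffset e - offset e

-- All four spans are >= 0 exactly when
-- preOnset <= onset <= postOnset <= offset <= postOffset.
wellFormed :: Event -> Bool
wellFormed e = all (>= 0) [excite e, stabilize e, sustain e, tranquilize e]
```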
Specifically, copy-paste style music import. Probably do a separate package sibelius.
We need a nice default pitch type. Current projects all use Integer or Double at the bottom of the note stack, which is enharmonically ambiguous.
This leads to strange spelling. For example,
openXml $ score $ melody [c,cs,db]
spells as c, ds, ds
instead of c, cs, db
.
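An enharmonically unambiguous pitch type would keep letter name and accidental apart, instead of collapsing them into a bare Integer/Double. A hypothetical sketch (names and representation are assumptions, not the library's API):

```haskell
data Name = C | D | E | F | G | A | B
  deriving (Eq, Show, Enum)

-- Letter name plus accidental: -1 = flat, 0 = natural, +1 = sharp.
data Pitch = Pitch { name :: Name, accidental :: Int }
  deriving (Eq, Show)

-- Chromatic pitch class; this is where spelling information is lost.
semitones :: Pitch -> Int
semitones (Pitch n acc) = base n + acc
  where base m = [0, 2, 4, 5, 7, 9, 11] !! fromEnum m

cs, db :: Pitch
cs = Pitch C 1
db = Pitch D (-1)
-- cs and db sound the same, yet are distinct values, so spelling
-- can survive round-trips through notation output.
```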
Now duplicated in Articulation, Dynamics, Ornaments.
Make part of public API?
Crashes on input such as
open $ c |> rest^*3 |> c
the problem being that quantize
emits dotted rests, which are then passed to Xml.rest
.
Currently, all notation output uses the default clefs. We should support auto-selection of clefs based on tessitura, as well as explicit selection.
The steps to get there are roughly:
See #61.
Named music-time?
Basically
class Tiable a where
    -- | Split an element into beginning and end, adding a tie.
    --   Begin properties go to the first tied note, end properties to the latter.
    toTie :: a -> (a, a)

-- | Split all notes that cross a barline into a pair of tied notes.
splitTies :: Tiable a => Score a -> Score a
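A minimal instance might look like this. TieT and its fields are hypothetical, for illustration: the first half of a split note carries the new tie-begin mark (plus any inherited tie-end), the second half the tie-end mark (plus any inherited tie-begin).

```haskell
class Tiable a where
  toTie :: a -> (a, a)

-- Hypothetical wrapper: tieBegin = this note ties to the next,
-- tieEnd = this note ends a tie from the previous.
data TieT a = TieT { tieBegin :: Bool, tieEnd :: Bool, getTieT :: a }
  deriving (Eq, Show)

instance Tiable (TieT a) where
  toTie (TieT b e x) = (TieT True e x, TieT b True x)
```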
I think we should instead let the user decide whether they want rests in their score transformer stack. We could say type RestT = Maybe.
Accordingly, change the type of
rest :: Default a => Score a
This would remove the need for perform
and possibly simplify other things too. However there might be consequences.