mark-jensen commented on June 6, 2024

@Mttbnchtt Thanks for catching the typo.

An algorithm does not prescribe inputs or outputs

Not sure I completely understand your concern. I agree it doesn't prescribe the specifics of the inputs and outputs. But, at the minimum, doesn’t an algorithm generically prescribe the number, order, and formatting of inputs and outputs?


swartik commented on June 6, 2024

@mark-jensen I think I agree with @Mttbnchtt. Here's a dictionary definition of algorithm (from the OED):

A procedure or set of rules used in calculation and problem-solving; (in later use spec.) a precisely defined set of mathematical or logical operations for the performance of a particular task.

By these lights, CCO's definition seems awfully specific. What is the motivation for referencing mathematical functions? In practice, only a small fraction of algorithms derive from mathematical functions. In particular, mathematical functions are typically considered stateless, whereas algorithms can modify state.

One can infer an algorithm's inputs and outputs from its set of operations, but to my mind the operations are the fundamental aspect of the concept. CCO, with its "as well as workflow" description, demotes that aspect.


Mttbnchtt commented on June 6, 2024

Hi @mark-jensen. I still think that an algorithm does not prescribe inputs or outputs. The algorithm is a finite list of rules. These rules will typically be applicable to some inputs and not to others, but that is not stated in the algorithm. The number of inputs that a function takes is also not necessarily mentioned in an algorithm. We could have a function f(x, y) that takes two inputs, and the algorithm that describes how to compute the result may simply be: copy x to the output slot and halt (there is no mention of the second input). An algorithmically specified function may not have an output (for example, the algorithm may cause the computing agent to loop forever).
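
To make both points concrete, here is a minimal Python sketch (purely illustrative; the names f and g are mine, not anything from CCO):

    def f(x, y):
        # The algorithm: "copy x to the output slot and halt."
        # y is declared as an input, but no instruction ever mentions it.
        return x

    def g():
        # An algorithmically specified procedure with no output:
        # the prescribed process loops forever, so no result is ever produced.
        while True:
            pass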

It is important to notice that an algorithm is not any list of rules. The list must be finite, the rules (individually but not necessarily collectively) must be finitely implementable, and the process must be deterministic. An additional condition that I forgot before (but it is usually required) is that the list must be consistent: i.e. the algorithm cannot ask the agent to do different things in the same situation.

Finally, this is not required to solve our issue, but it is better to say that explicitly: in classical mathematics, not every function is algorithmically specifiable (there are so-called constructive views of mathematics in which every function is algorithmically specifiable, but they are not mainstream math).


mark-jensen commented on June 6, 2024

Thanks @Mttbnchtt @swartik for your feedback. I agree that use of mathematical function seems too restrictive. Let's work here to improve the definition.

Note: the CCO term in its current state derives from IAO. A search of their tracker finds several stale issues working to refine their definition with no results. Moreover, there is a clear history of struggle amongst experts in computational theory and philosophy to reach agreement on a universal definition for 'algorithm'. Thus, we should be modest in our goal and aim for stepwise improvement.

First draft at revision:
algorithm = "A Directive Information Content Entity that prescribes a finite sequence of deterministic operations that can be automated to achieve an objective."

Please offer suggestions for improvement


Mttbnchtt commented on June 6, 2024

Hi @mark-jensen. I think the draft is good. I have three observations:

  1. Where can I find the definition of "directive information content entity"?
  2. Leaving aside more contentious issues (as you rightly suggest), one part of the draft that still needs attention is "to achieve an objective", where "objective" is defined as follows: A Directive Information Content Entity that prescribes some projected state that some Agent intends to achieve.
    In the context of an algorithm, I think most naturally of an objective as being the output of the process prescribed by the algorithm given a specific input. However, the process prescribed by an algorithm may not have an output (e.g.: write 1, delete 1, repeat).
    The definition of objective is more general than output, and one may think that the carrying out of the process on a given input is the objective, regardless of whether the process terminates or not. This is possible, given the above definition of "objective", although unnatural and, in the end, irrelevant to the definition of algorithm.
  3. An algorithm is a finite sequence of instructions. However, the sequence of operations that it prescribes may not be finite (see example above).

I am not familiar with the selected vocabulary of CCO. Is there a way to re-state the following definition within CCO:
algorithm = a finite sequence of instructions, each of which can be automated?


swartik commented on June 6, 2024

@Mttbnchtt, CCO defines "Directive Information Content Entity" as the superclass of Algorithm and Objective, among other classes. Its definition is:

An Information Content Entity that consists of a set of propositions or images (as in the case of a blueprint) that prescribe some Entity.

@mark-jensen, I'm trying to think of how to rewrite the definition to use as many CCO terms as possible. Unfortunately, CCO doesn't include either "instruction" or "operation". In the sense you intend, they are themselves Directive Information Content Entities.

I think "deterministic" is too strict. "Sort the array" is a legitimate step in an algorithm, but it doesn't tell me whether to use a stable sorting algorithm.

So with those concerns in mind, I propose the following revision:

algorithm = "A Directive Information Content Entity that prescribes a finite sequence of operations that can be automated to achieve an Objective."

This should be accompanied by an elucidation explaining what "finite sequence of operations" means.

If you agree I'll be happy to work on the elucidation. (For the record, I'm still pondering whether I prefer "operation" or "instruction".)


Mttbnchtt commented on June 6, 2024

Thanks @swartik for the clarification. About the definition of algorithm, there are two major problems with the revision that is proposed: "A Directive Information Content Entity that prescribes a finite sequence of operations that can be automated to achieve an Objective".

  1. What is finite in an algorithm is the list of prescriptions, not of operations (or, at least, it is ambiguous to say that it is). Under one possible reading, the sequence of operations may be infinite, as long as I can specify it using only finitely many orders. Here is an example: (a) add 1, (b) repeat. The list of orders is finite (there are two orders). The list of operations that it prescribes is infinite, because one would keep adding 1 forever (see the sketch after this list).

  2. Implementing an algorithm may not lead to an output. For example, implementing the algorithm above, one keeps adding 1 and never gives a result. In this sense, it is not clear that the algorithm prescribes something "to achieve an Objective".

  3. The notion of instruction (order, command, etc.) is really central to the notion of algorithm. I fear that we will not be able to define algorithm properly if we do not use the notion of instruction. Is it that complicated to add it to the ontology?
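
A tiny Python sketch of point 1 (illustrative only; the function name is mine):

    def add_one_forever():
        # Two orders only: (a) add 1, (b) repeat.
        # The text of the algorithm is finite; the sequence of operations it
        # prescribes is not, since the loop never terminates.
        n = 0
        while True:   # (b) repeat
            n += 1    # (a) add 1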


swartik commented on June 6, 2024

@Mttbnchtt,

I think I agree with you about "operation" vs. "instruction". However, @mark-jensen proposed "operation", and I would like him to weigh in so I can understand his motivation before I commit.

As for whether repeatedly adding 1 fails to achieve an objective, I'm not so sure. You are right that it has no output. I could therefore say that your algorithm achieves the objective of demonstrating that an algorithm need not have an output. You as the algorithm's creator must have had an objective in mind when you stated the algorithm. That objective has nothing to do with the instructions' effects.

Alternatively, CCO might argue, and quite reasonably I think, that "algorithm" in its context is limited to instructions combined to achieve an objective. Of course, that would have to be stated clearly and explicitly.


Mttbnchtt commented on June 6, 2024

Thanks @swartik. I see that it could be reasonable here to concentrate on algorithms given to produce an objective. As for operation vs. instruction, I agree that we should hear from @mark-jensen first.


mark-jensen commented on June 6, 2024

@swartik @Mttbnchtt Thanks for the thoughtful feedback. Apologies for the delay in responding.

A Directive ICE is (in a sense) an instruction, or composed of instructions, or commands. What it prescribes are operations, i.e., processes. We can drop the use of 'operations' to make it clearer. The question is: do we want to restrict algorithms to prescribe only computer processes? I think so.

Iterating:

cco:Algorithm =def "A Directive Information Content Entity that prescribes some Computer Program Execution process and contains a finite sequence of unambiguous instructions in order to achieve some Objective."

And yes, having an objective doesn't necessarily imply an output or termination.


swartik commented on June 6, 2024

@mark-jensen Your definition definitely improves consistency with CCO.

Is Computer Program Execution a proposed class? I can't find it in either the master branch or bfo2020.



mark-jensen commented on June 6, 2024

@swartik My mistake. ComputerProgramExecution isn't in Mid-level CCO. I am tempted to amend the def. to be more general, so as not to restrict it only to operations performed by a CPU, but to include the potential for other types of processes being prescribed by an algorithm.

cco:Algorithm =def "A Directive Information Content Entity that prescribes some Process and contains a finite sequence of unambiguous instructions in order to achieve some Objective."

@Mttbnchtt I think generalizing to a BFO process would also address your concern about an algorithm prescribing a non-terminating process.



alanruttenberg commented on June 6, 2024

@mark-jensen offers:

algorithm = "A Directive Information Content Entity that prescribes a finite sequence of deterministic operations that can be automated to achieve an objective."

There may be problems with the 'finite' and 'deterministic' wording. I'm also not clear whether the intended sense is limited to computer programs. The Wikipedia article uses the phrase "perform a computation". I'd go with that.

Finite: There are algorithms that don't quite match this. For example, FOL reasoning algorithms may land up in an infinite loop, which is typically cut off based on resources, for instance how long it takes or how much memory is allowed. The specification of those resource limitations may or may not be included in the algorithm, since usually those are specific to a platform. In other words, the number of steps in the computation that the algorithm prescribes may not be finite. I think this might be resolved by putting 'finite' on the other side of 'prescribes'. Or, it could be left out without harm.
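
As a rough Python sketch of what I mean (the names and the budget mechanism are mine, not taken from any actual reasoner): the saturation loop is what the algorithm prescribes, while the step budget is a platform-specific cutoff added on top.

    def saturate(clauses, derive_new, max_steps):
        # The algorithm proper: keep deriving consequences until nothing new appears.
        # max_steps is a resource limit supplied by the platform, not by the algorithm.
        known = set(clauses)
        for _ in range(max_steps):
            new = derive_new(known) - known
            if not new:
                return known                      # saturation reached
            known |= new
        raise RuntimeError("resource budget exhausted before saturation")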

Deterministic: Some algorithms are not by their nature deterministic, for example the class of "probably approximately correct" algorithms, algorithms which reduce to probabilistic Turing machines, or ones that deal with the unpredictable in the real world.

What do we know about algorithms? They aren't dependent on what programming language they are implemented in, but can be expressed in different programming languages. They describe computations. They are designed so that when an implementation is run it achieves an objective.

I'm not sure exactly how to express all this, at the moment. The IAO definition has some of this, but it's not perfect either. The following at least omits the parts that are suspect.

"A Directive Information Content Entity that prescribes a sequence of computations that achieve an objective"

or this, which captures the idea that there is an implementation intermediate:

"A Directive Information Content Entity that prescribes computer programs which when executed achieve an objective"



alanruttenberg commented on June 6, 2024

@Mttbnchtt, there are two things about 'finite'. I wrote something about this in my comments, saying that the finite requirement made more sense on the left-hand side (the side of the directive information entity) than on the other side, the operations. As I re-read that part I realize it's more ambiguous than how I, at first, understood it. The thing is that 'operation' in CS connotes what the computer is doing, e.g. MFLOPS or IOPS. So I read 'operation' as what the computer was doing, and my comments follow that interpretation. So, what I meant by putting it on the left-hand side is that it is the algorithm that has a specific number of parts.

However, a second aspect that I didn't touch on is that in realist ontologies, which is what we are making in following BFO, there's no such thing as infinite. Our universe is finite, time until the heat death of the universe is finite, the number of particles in the universe is finite. Correspondingly, anything in the universe, and anything that happens in it, is finite. So "finite" doesn't really add anything in a definition.

Closer to the intent is whether there's a known limit or not. That's captured, for instance, by the idea of decidability in computational complexity theory. That said, I don't think that saying something about this here is useful. Ask what the difference is in the extension (members) of the class of algorithms given a definition with and without the condition. Can we find an example where only not satisfying the 'finite' condition is what rules the example out as a member of the class? If not, then discard the condition, since it has no utility in discriminating.

As to directive information entity, I don't think CCO's IAO captures the story as well as OBO's IAO does. With information entities, which are generically dependent continuants, we talk about concretizations of the information entity. For a novel, the information entity is the content and the concretizations are the print in each physical book. The book is the information bearer. The information entity is what is the same about all the copies.

Usually, concretizations are qualities (the pattern of ink in the book or the pattern of pixels on a screen), the book or screen being the information bearer, i.e. the bearer of the concretization. For a directive information entity we say that the concretization can also be a realizable entity. Realizable entities are what you have that explains how you act in some situation. In IAO, a plan specification is like a written version of what should happen. The realizable entity is the thing you have that has a direct relation (realizes) to acts. The realizable entity is the internalized plan, so to speak. So, that there's a specified path between a directive information entity and the act that the directive is about is what distinguishes them from the rest of the information entities.

Performance: It is exactly that: it means it is what happens. It's a performance not because it's art but because performing is a process and that process is guided by an information artifact, a program. It's colloquial to talk about performing an operation, and in that sense there's no connotation that art is involved. In art it might be a music score or a script that is the information artifact. That said, for my purposes, 'executing' is synonymous. So, I don't mind that as an alternative. I'm not sure what you mean by mechanical.

I think the shape of my second definition better captures the intent. It makes it clear that there are at least three things: the algorithm (the thing that is the same in all implementations), the implementation (the program), and the execution of the program. It also removes the ambiguity of "operation".

Regarding your proposed definition:

A FINITE Directive Information Content Entity that UNAMBIGUOUSLY prescribes a sequence of operations that, IN PRINCIPLE, can be automated

I like the "in principle" part. It capture the aspect of realizable entities in that they may or may not be realized. You may be the designated driver (a role) but it may happen that the car you were going to use doesn't start and the gang lands up taking an Uber, and so the role is not realized. So, being a realizable entity captures possibility. I've already said why I think 'finite' is unnecessary. I'm not sure if 'ambiguous' helps. Can you give me an example of something that's almost an algorithm, except for the fact that it is ambiguous?

There's a sense in which an algorithm is actually deliberately ambiguous, in that an algorithm doesn't specify exactly how some steps are done. For instance, if an algorithm says that a multiplication is done, the multiplication can be done in different ways. In typical CPUs there's a hardware implementation and you just issue the instruction. But a cheap microcontroller might not have a multiplication unit, and the multiplication instead has to be implemented in software. Any correct algorithm for multiplication would be fine to use as a basis for that software. So the algorithm is at least unspecific, and it's not far to go from unspecific to ambiguous.
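
For instance, here is a Python sketch of one software realization of the single step "multiply a by b" (illustrative only; a microcontroller would do this in assembly, but the idea is the same):

    def multiply_shift_add(a, b):
        # One possible software realization of the single step "multiply a by b"
        # on hardware with no multiply unit: repeated shifts and adds.
        assert a >= 0 and b >= 0
        result = 0
        while b:
            if b & 1:          # if the lowest bit of b is set,
                result += a    # add the current shifted a
            a <<= 1            # shift a left (a * 2)
            b >>= 1            # shift b right (drop the processed bit)
        return result

    assert multiply_shift_add(6, 7) == 6 * 7   # same step as the CPU's multiply instruction, different realization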


mark-jensen commented on June 6, 2024

mark-jensen offers:
algorithm = "A Directive Information Content Entity that prescribes a finite sequence of deterministic operations that can be automated to achieve an objective."

@alanruttenberg This definition was an early iteration in attempting to improve the original. See the comments from last week; currently it is:

cco:Algorithm =def "A Directive Information Content Entity that prescribes some Process and contains a finite sequence of unambiguous instructions in order to achieve some Objective."

We removed the use of 'deterministic' for the reasons you describe. The use of 'finite' and 'unambiguous' applies to the instructions themselves, not the particular implementation or process that carries them out.

Admittedly, use of "in order to achieve some Objective" is redundant, and not really needed, as all Directives prescribe in order to achieve an objective.

You suggest that the entity that is prescribed is:

a sequence of computations

Are you suggesting using 'computation' as a primitive and perhaps only elucidating it? Or rather, attempting to define something akin to a computational process?

I'm amenable to including something about computation and automation, but I think offering a formal def. for computation might lead us down a rabbit hole.

Iterating:

cco:Algorithm =def "A Directive Information Content Entity that contains a finite sequence of unambiguous instructions and prescribes the computations to be carried out in some Mechanical Process, which in principle can be automated."

  • elucidation: We use 'computation' here in the sense of ... [careful wording that doesn't invite Pandora]

Note we have a term in CCO-mid, Mechanical Process, and a subclass curated in the Cyber Ontology, Computer Program Execution. If needed, we can refactor these accordingly depending on the outcome of this discussion, but ignoring their existence would hinder progress at this point.

cco:ComputerProgramExecution = A Mechanical Process that is comprised of a sequence of simple operations performed by a Central Processing Unit or some Computer Program according to the instructions of some Algorithm.

I think it best for our definition of Algorithm not to address the nature of computation in a way that goes beyond the closed-system and automated processes that artifacts are capable of. Ideally it would also not rule out other types of computation that organic or quantum systems exhibit. That is a tall order though, so we should remain humble in our attempt.


alanruttenberg commented on June 6, 2024

This comment is criticism. Later I'll write something constructive.

cco:Algorithm =def "A Directive Information Content Entity that contains a finite sequence of unambiguous instructions and prescribes the computations to be carried out in some Mechanical Process, which in principle can be automated."

Mechanical process: A Process that is the realization of some Disposition of an Artifact

ComputerProgramExecution = A Mechanical Process that is comprised of a sequence of simple operations performed by a Central Processing Unit or some Computer Program according to the instructions of some Algorithm.

Quick comments:

  • See my comments above re:finite and unambiguous
  • Some things the mechanical process def seems to allow:
    -- The changing color of a ph strip
    -- A glass breaking when it falls down
    -- The outgassing of a new foam mattress
    -- The sandpaper I'm using wearing out
    -- The burning of a match
    -- My digital thermometer displaying a temperature
    -- A fingerprint reader determining whether there's a match
  • "according to the instructions of some algorithm" -> "prescribed by some algorithm" (the computer program is what's prescribed).

Are all those intended? For the most part, they don't seem to me to fit into the common sense of mechanical.

ComputerProgramExecution

  • Doesn't work for quantum computers, which are programmed.
  • It is the instructions that are simple, not the consequences of the instructions.
  • "Central Processing Unit or some Computer Program" -> the CPU (or quantum thingy) does the operations, not the program.

I'm not sure I agree with the goal "Ideally it would also not rule out other types of computation that organic or quantum systems exhibit"

DNA is not an information content entity. It is a different kind of GDC. ICEs are artifactual. In the case of organic or natural quantum systems 'computing', I'm of the opinion that such characterizations are concepts - ways to understand natural phenomena in terms of systems we've built and understand. That doesn't mean they are the same sort of things as those systems.

Quantum computers do computations. Natural quantum processes (pretty much anything that happens) do not all fall under computation and do not execute algorithms, which are human constructed things. Non-intentionally built things do not prescribe.

I concur on the goal of being humble in our attempts. To me that means sticking to a narrower definition of a term we understand now, and then building generalizations when and if we understand more. If it's thought that there's some larger sense of algorithm, then define 'computer algorithm' for now and reserve 'algorithm' for when we're smart enough to figure out the generalization.


Mttbnchtt commented on June 6, 2024

I will try to give a comment that hopefully contributes to converging on a solution. There are at least two senses of "algorithm". In one sense, we may accept that some implementational minutiae are not specified (as Alan said). In another sense, given an algorithm and an input, the complexity of the procedure is determined. Therefore, all implementational minutiae are specified as well. It is not the task of an ontology to capture all the senses in which we use a word (otherwise BFO could not take objects to always be material entities). Let us pick one sense of algorithm. I suggest choosing the most studied one in mathematics, i.e. the second one.

Given that, Mark's last attempt seems almost ok to me (also, we accept that we are not writing the definitive definition of algorithm, since scholars keep discussing it, as one can see by reading the entries on computation in the Stanford Encyclopedia of Philosophy). My main worry is that, if indeed realist ontologies bound mechanical processes by the existence span or energy (or similar) of the universe, then it is wrong to say that a computation "can be carried out in some mechanical process", because every mechanical process is going to be finitely bounded. This would make the decimal expansion of the number π not computable, despite agreement among mathematicians that it is computable (see e.g. Alan Turing, On Computable Numbers, 1936). If we can tinker with the definition so that it does not imply that the implementation of an algorithm is limited by the existence span or energy (or similar features) of the universe, then I think we are done.
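
To make the point concrete, here is an illustrative Python sketch (Machin's formula with scaled integer arithmetic; the function names are mine, and this is a sketch rather than a production routine): for any requested n it yields, in principle, the first n decimal digits of π, even though any particular physical run is finitely bounded.

    def arctan_recip(x, scale):
        # arctan(1/x), scaled by `scale`, via the alternating Gregory series.
        power = scale // x
        total = power
        x2, n, sign = x * x, 3, -1
        while power:
            power //= x2
            total += sign * (power // n)
            n += 2
            sign = -sign
        return total

    def pi_digits(n):
        # "3" followed by the first n decimal digits of pi, via Machin's formula:
        # pi = 16*arctan(1/5) - 4*arctan(1/239); 10 guard digits absorb truncation error.
        scale = 10 ** (n + 10)
        pi_scaled = 16 * arctan_recip(5, scale) - 4 * arctan_recip(239, scale)
        return str(pi_scaled // 10 ** 10)

    print(pi_digits(30))   # 3141592653589793238462643383279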


alanruttenberg commented on June 6, 2024

the complexity of the procedure is determined. Therefore, all implementational minutiae are specified as well.

This doesn't follow. The minutiae may vary in the way I mentioned in the earlier message. The only constraint from computational complexity is that the complexity of the implementation of a step is the same as (or better than) what it is in the algorithm.

As I understand the Turing paper, the distinction being made is between computable and non-computable numbers, and is related to the notion of decidability. The mentions of infinity should be understood as in-principle mentions and stand in contrast to the uncomputable numbers, a great example of which is Chaitin's constant. That number isn't even in principle computable. Turing can't be used as an authoritative voice on whether the universe can accommodate the infinite, as even now that is not known. When mathematicians talk about the infinite it is not (usually) in the physical sense, and access to it is not given by physics. Rather, it is arrived at by things like limits and induction.

As an aside, for an entertaining take on finiteness and numbers, check out Greg Egan's story Dark Integers.

Inclusion of the finiteness criterion doesn't hurt (much) but it doesn't help either because, as far as I know, there's no sensible alternative to algorithm that differs only on the finiteness criterion. So, it doesn't make a difference/isn't suitable as a differentia. There's an implicature that conditions that are mentioned in a definition are meaningful. When they are not, including them is a tax on understanding.

My earlier comment raises a number of issues with the definition, only one of which is the use of "finite". I still don't really understand the term mechanical process and I list a number of processes that I'm asking about in order to understand it. Hopefully we can get back to addressing those in the interest of coming up with a suitable definition.

You are right that we aren't writing the authoritative definition. To me that means we should create a definition that's narrowly scoped, has an operational way of evaluating whether a given instance of information content entity is an algorithm, and where the meaning of common use of the label doesn't excessively differ from the definition.

To be clear, this isn't a personal criticism of the initial efforts to define the term, nor an attempt to be overly philosophical. I look at all definitions in the same way - as a practical way of determining whether an instance is inside or outside the class. I test that by thinking of examples that are close to what I think the boundaries of the definition are and then checking with the author as to what they intended. If a definition doesn't let me do that, or if parts of the definition are not germane to that task, then I aim to get it fixed.

Definitions can't always do all the work. That's why it's a good practice, in such cases, to give examples of usage as well as cases that seem close to the edge but are on the wrong side.


Mttbnchtt commented on June 6, 2024

Alan, in the informal sense, an algorithm does not specify every detail. In the formal sense studied by computability and complexity theory it does. Saying "add n to m" is not an instruction in a formally specified algorithm, even if in more relaxed contexts it could count as such. In a formal setting, "add n to m" would be said to refer to a function (addition) that can be specified by different algorithms (each algorithm giving instructions that leave no room for alternative moves, once the inputs are given). If not, the complexity of a procedure would indeed not be determined by the algorithm, given the inputs. That said, if we want to define the informal notion of algorithm, that is ok with me as long as we clarify the objective.

As for the second point: I mentioned the computability of the number π to indicate that we cannot limit computations to what machines can do in the actual universe. The reason is that we cannot provide a fixed finite upper bound on the resources (time, space, energy, etc.) that a computation may take. If computations were limited by the existence span of the universe, then π would be uncomputable, but it is computable. I raised this point because some comment above made me worry that, given that our ontology is a realist one, mentioning mechanical processes may imply a fixed finite upper bound on the resources available to the mechanical process. If that is not the case, we can ignore this observation.

There still seems to be some confusion about the role of finiteness in an algorithm. It is essential for each instruction to be finite and for the list of instructions to be finite. This is a necessary condition, not an idle one: otherwise, every real number would be computable. Our definition should either say that explicitly or imply it (implicature is not good enough when capturing a formal notion). Moreover, every computation that provides an output uses only a finite amount of resources. However, there is no fixed finite upper bound on the size of the input, the size of the output, the length of an instruction, the length of the list, or the resources used by the computation. That is all we need to consider regarding the issue of finite vs. infinite in this context. The "in principle" part of the definition is meant to indicate the absence of a fixed finite upper bound of this type.

Concerning Alan's issue with mechanical processes, I suppose that Mark is better informed than me. I will just observe that "mechanical" in computability is a term of art meaning, more or less, "in principle can be carried out by a machine" or "in principle can be carried out by a human with no insight or freedom or tools except basic ones such as paper and pencil". It is not the sense of "mechanical" that expressions such as "Newtonian mechanics" evoke.


swartik commented on June 6, 2024

Matteo, "add m to n" is the COBOL syntax for incrementing n by m. I mention this detail not to call attention to my being an old geezer, but to ask whether the point of your first paragraph should deal with ambiguity rather than the particular set of tokens used to express addition. If an instruction has an unambiguous interpretation under some syntactic interpretation, it's possible to create a programming language incorporating that syntax. If there is no unambiguous interpretation (something like "increment n by the correct amount") I agree with your first paragraph.


Mttbnchtt commented on June 6, 2024

@swartik : I was taking "add n to m" to be an English sentence. If in COBOL this string of symbols constrains each move of every implementer of it, then it can be used in an algorithm (in the formal sense of algorithm), as you say. In English that string of symbols does not have an unambiguous interpretation (my Italian teacher taught me a method that, albeit slightly, differs from what my son's American teacher is teaching him; of course both methods lead to the same result and define the same function).


alanruttenberg commented on June 6, 2024

In a formal setting, "add n to m" would be said to refer to a function (addition) that can be specified by different algorithms (each algorithm giving instructions that leave no room for alternative moves, once the inputs are given). If not, the complexity of a procedure would indeed not be determined by the algorithm, given the inputs. That said, if we want to define the informal notion of algorithm, that is ok with me as long as we clarify the objective.

While there are, theoretically, algorithms for the individual steps, a given algorithm (formal or informal) typically does not mention them. I'm not sure why my assertion is controversial. Implementation happens at multiple levels in computer systems. There's the level of the computer program, the steps that it is compiled to, the particular implementation of a CPU. Instructions may be directly implemented in hardware, may be specified by microcode, may be coalesced, such as when a CPU does a fused multiply-and-add instruction instead of the two-step multiply and add, may be reordered, and an instruction may even be implemented in different ways depending on the state of the machine.

I think we've beaten this to death. If we can't agree on it at this point then we will have to agree to disagree. There are a host of other issues I've raised regarding the definition and that of mechanical process on which it is based. In the interest of moving forward, perhaps we can turn to those other issues.


Mttbnchtt commented on June 6, 2024

@alanruttenberg : a computer program like a Python script is not an algorithm in the sense in which computability and complexity theory consider it. It is an algorithm only in an informal sense (assuming we call formal the mathematically exact sense of algorithm studied in computability and complexity theory). The reason is what you say: a Python script does not specify every detail of the behavior of a computer. If we want to define an algorithm as a computer script (where a computer is, for example, what Apple produces), I am ok with that. However, as I tried to explain, I am considering the mathematical definition of an algorithm (where a computer is also a theoretical concept; see, e.g., the infinite tape of Turing machines). I think we have to agree on what we discuss, not so much on the fact that computer programs are not algorithms in the formal sense. My insistence on non-ambiguity is standard in computability theory, not my choice (see Rogers, Theory of Recursive Functions, §1.1, and Soare, Turing Computability, §1.2). You need it to prove, e.g., Gödel's first incompleteness theorem.

I am not competent to discuss your questions about mechanical process as defined in this ontology. I defer to you and Mark, except noting, as above, that "mechanical" in computability theory is not what "mechanical" means in physics.


alanruttenberg commented on June 6, 2024

I thought of a counterexample to all the definitions offered: algorithms that employ an oracle. In determining the complexity of such algorithms, the oracle can give a one-step answer to a problem that is unsolvable, i.e. where no implementation is possible. In those cases there are no minutiae to be had, nor can a program be written that accomplishes the goal of the algorithm.

So, it's back to the drawing board. One approach would be to avoid the problem by using a narrower definition that excludes such algorithms. Having a definition that captures all the senses in which a term is used in language is desirable but sometimes not advisable. In the end it's the definition that matters. The label and documentation should be written for the purpose of best communicating the class.

The other approach is to try to make a definition that works for the oracle case. I'm thinking that a principal difference between an algorithm and a program is the intent in creating it. Algorithms can be used both for determining complexity and as a guide in writing programs. Programs, for the most part, are written for the purpose of controlling what a computer does. There may be other goals, such as writing a program as an entry to a competition where the winner is whoever writes the shortest program that works.


Mttbnchtt commented on June 6, 2024

Thanks, Alan. I think it is a good idea to define algorithm as one does in computability theory and define programs to capture what computer scientists write (also called algorithms in a less formal sense).

I do not think that the oracle computation changes the definition of algorithm though, which, in the formal sense, is still a finite list etc. When one introduces oracle computation in computability theory, all that changes is that instructions of a new type (more or less "ask the oracle") are allowed. There is no change concerning the unambiguity of each instruction. What we should consider is that, in the sense of algorithm used in computability, given the inputs and the oracle, the complexity of the computation is given.


alanruttenberg commented on June 6, 2024

@Mttbnchtt, Mark's definition proposal was:

A Directive Information Content Entity that contains a finite sequence of unambiguous instructions and prescribes the computations to be carried out in some Mechanical Process, which in principle can be automated.

An algorithm using an oracle answering, say, whether some program halts can't prescribe the computations to be carried out, because there is no process that can compute whether a program halts.


alanruttenberg commented on June 6, 2024

I'm leaning toward the idea of, instead of defining algorithm, defining something like "realizable computing algorithm" with a definition that is specific to computer science/programming and is limited to algorithms that can actually be implemented. Or, it could be slightly more restrictive by not including any algorithms using oracles. A recipe is not a realizable computer algorithm in this sense, nor are instructions in a repair manual, though some people might call those algorithms. We could have "algorithm" as an alternative label, if desired.


Mttbnchtt commented on June 6, 2024

Alan, an oracle is not part of an algorithm. It is information stored somewhere. In a Turing machine it is on a tape that the head can read. The tape could contain the decimal expansion of an incomputable number but that is not relevant. We do not require the information on the oracle tape to be generated by a computation or algorithmically: it is given (like an oracle delivering a message from a god, hence the name). The algorithm is something separate from the oracle.

The algorithm of a machine with an oracle would still prescribe a computation. The computation could include actions like retrieving information from the oracle. Nothing changes in the (formal) definition of what an algorithm is by allowing actions of this type to be prescribed.
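
A small Python sketch of that separation (the names are mine; halts stands for the oracle, which is given rather than computed):

    def halts_on_first_ten_inputs(program_source, halts):
        # An oracle algorithm: each instruction is unambiguous, and the run is
        # well defined relative to `halts`, even though `halts` (a halting
        # oracle) cannot itself be implemented by any program.
        for n in range(10):
            if not halts(program_source, str(n)):   # "ask the oracle" is just one more instruction
                return False
        return True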

I do not oppose suggestions, like yours, to refocus the effort. I will let the CCO owners decide what they want the ontology to contain.
