Computer programming
Overview
Within software engineering, programming (the implementation) is regarded as one phase in a software development process. There is an on-going debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline.[3] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole.
History
Some of the earliest computer programmers were women.
Programming paradigm
A programming paradigm is a fundamental style of computer programming, a way of building the structure and elements of computer programs. Capabilities and styles of various programming languages are defined by their supported programming paradigms; some programming languages are designed to follow only one paradigm, while others support multiple paradigms. There are six main programming paradigms: imperative, declarative, functional, object-oriented, logic and symbolic programming.[1][2][3]
Overview
In object-oriented programming, programmers can think of a program as a collection of interacting objects, while in functional programming a program can be thought of as a sequence of stateless function evaluations.
When programming computers or systems with many processors, process-oriented programming allows programmers to think about applications as sets of concurrent processes acting upon logically shared data structures.
Object-oriented programming
Overview
Rather than structure programs as code and data, an object-oriented system integrates the two using the concept of an "object". An object has state (data) and behavior (code). Objects correspond to things found in the real world. So, for example, a graphics program will have objects such as circle, square, and menu.
An online shopping system will have objects such as shopping cart, customer, and product. The goals of object-oriented programming are: increased understanding, ease of maintenance, and ease of evolution. The overall understanding of the system is increased because the semantic gap—the distance between the language spoken by developers and that spoken by users—is lessened. Object-orientation takes this a step further. In addition to providing ease of maintenance, encapsulation and information hiding provide ease of evolution as well.
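To make the shopping example concrete, here is a minimal sketch in Python; the class names, attributes, and prices are invented for illustration and are not taken from any particular system. Each object bundles state (data) with behavior (code).

    class Product:
        def __init__(self, name, price):
            self.name = name          # state
            self.price = price

    class ShoppingCart:
        def __init__(self):
            self.items = []           # state: the products added so far

        def add(self, product):       # behavior operating on that state
            self.items.append(product)

        def total(self):
            return sum(p.price for p in self.items)

    cart = ShoppingCart()
    cart.add(Product("tea", 3.50))
    cart.add(Product("mug", 8.00))
    print(cart.total())               # 11.5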
Logic programming
Logic programming is a programming paradigm based on formal logic. Programs written in a logical programming language are sets of logical sentences, expressing facts and rules about some problem domain. Together with an inference algorithm, they form a program. Major logic programming languages include Prolog and Datalog. A form of logical sentence commonly found in logic programming, but not exclusively, is the Horn clause. An example is:
p(X, Y) if q(X) and r(Y)
Logical sentences can be understood purely declaratively. The programmer can use the declarative reading of logic programs to verify their correctness.
History
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s.
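Returning to the Horn clause p(X, Y) if q(X) and r(Y) shown above, its declarative reading can be mimicked in plain Python to make it concrete. This is only an illustrative sketch, not a logic programming engine; the facts for q and r are invented.

    # Facts: q(alice). q(bob). r(tea). r(coffee).
    q_facts = {"alice", "bob"}
    r_facts = {"tea", "coffee"}

    def p():
        # Declarative reading of the rule: p(X, Y) holds for exactly
        # those pairs where q(X) and r(Y) both hold.
        return {(x, y) for x in q_facts for y in r_facts}

    print(sorted(p()))   # every (X, Y) pair derivable from the facts and the rule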
In 1997, the Association for Logic Programming bestowed on fifteen recognized researchers in logic programming the title Founders of Logic Programming, recognizing them as pioneers in the field.[1]
Imperative programming
The term is used in opposition to declarative programming, which expresses what the program should accomplish without prescribing how to do it in terms of sequences of actions to be taken. Functional and logic programming are examples of a more declarative approach.
Imperative, procedural, and declarative programming
Procedural programming could be considered a step towards declarative programming.
A programmer can often tell, simply by looking at the names, arguments and return types of procedures (and related comments), what a particular procedure is supposed to do, without necessarily looking at the details of how it achieves its result. At the same time, a complete program is still imperative since it 'fixes' the statements to be executed and their order of execution to a large extent. Declarative programming is a non-imperative style of programming in which programs describe their desired results without explicitly listing commands or steps that must be performed.
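The contrast can be illustrated in a few lines of Python; the task (summing the even numbers in a list) is chosen arbitrarily. The first version spells out the statements and their order of execution, while the second only describes the desired result.

    numbers = [3, 8, 2, 7, 10]

    # Imperative: explicit steps and explicit order of execution.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    print(total)   # 20

    # More declarative: state what is wanted, not how to loop and accumulate.
    print(sum(n for n in numbers if n % 2 == 0))   # 20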
Flowchart
Figure: A simple flowchart representing a process for dealing with a non-functioning lamp.
A flowchart is a type of diagram that represents an algorithm, workflow or process, showing the steps as boxes of various kinds, and their order by connecting them with arrows. This diagrammatic representation illustrates a solution to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.[1]
Overview
Flowcharts are used in designing and documenting complex processes or programs.
The most common building blocks are a processing step, usually called an activity and denoted as a rectangular box, and a decision, usually denoted as a diamond. A flowchart is described as "cross-functional" when the page is divided into different swimlanes describing the control of different organizational units. Flowcharts depict certain aspects of processes and they are usually complemented by other types of diagram.
Algorithm
Figure: Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b).
Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ AL-gə-ri-dhəm) is a step-by-step procedure for calculations.
Informal definition
While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations."
Pseudocode
Pseudocode is an informal high-level description of the operating principle of a computer program or other algorithm. It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading. Pseudocode typically omits details that are not essential for human understanding of the algorithm, such as variable declarations, system-specific code and some subroutines.
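The subtraction-based Euclid's algorithm described in the flowchart caption above also makes a convenient point of comparison: written out as runnable Python rather than pseudocode, it spells out details a pseudocode version would normally leave implicit. The function name below is invented, and both inputs are assumed to be positive integers.

    def gcd_by_subtraction(a, b):
        # Follows the flowchart: while B is not 0, replace the larger of the
        # two numbers by their difference; the g.c.d. is what remains in A.
        while b != 0:
            if b >= a:
                b = b - a
            else:
                a = a - b
        return a

    print(gcd_by_subtraction(1599, 650))   # 13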
The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and that it is an efficient and environment-independent description of the key principles of an algorithm.
Programming language
The earliest programming languages preceded the invention of the digital computer and were used to direct the behavior of machines such as Jacquard looms and player pianos.[1] Thousands of different programming languages have been created, mainly in the computer field, and many more are still being created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages use other forms of program specification, such as the declarative form (i.e., the desired result is specified, not how to achieve it).
Definitions
A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[2] Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms.[2][3] Traits often considered important for what constitutes a programming language include its function and target, and the abstractions it provides.
Compiler
Figure: A diagram of the operation of a typical multi-language, multi-target compiler.
A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code).[1] The most common reason for wanting to transform source code is to create an executable program.
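As a toy illustration of translating one language into another, the sketch below turns a tiny arithmetic expression into instructions for an imaginary stack machine. The instruction names (PUSH, ADD, MUL, SUB) and the use of Python's ast module as a ready-made parser are choices made here for brevity; they are not taken from any real compiler.

    import ast

    # "Compile" a small arithmetic source string into stack-machine instructions.
    def compile_expr(source):
        ops = {ast.Add: "ADD", ast.Mult: "MUL", ast.Sub: "SUB"}

        def emit(node):
            if isinstance(node, ast.Constant):
                return [("PUSH", node.value)]
            if isinstance(node, ast.BinOp):
                # Post-order: code for both operands, then the operator.
                return emit(node.left) + emit(node.right) + [(ops[type(node.op)],)]
            raise ValueError("unsupported construct")

        return emit(ast.parse(source, mode="eval").body)

    print(compile_expr("1 + 2 * 3"))
    # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('MUL',), ('ADD',)]

A real compiler performs the same kind of translation at far larger scale, with separate lexing, parsing, analysis and optimization stages.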
Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementors invest significant effort to ensure compiler correctness. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser.
History
Software for early computers was primarily written in assembly language. Towards the end of the 1950s, machine-independent programming languages were first proposed.
Interpreter (computing)
An interpreter generally uses one of the following strategies for program execution: parse the source code and perform its behavior directly; translate source code into some efficient intermediate representation and immediately execute this; or explicitly execute stored precompiled code[1] made by a compiler which is part of the interpreter system. While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers.
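For comparison, a minimal sketch of the first strategy mentioned above: parse the source and perform its behavior directly, producing no intermediate code. It handles the same tiny arithmetic language as the compiler sketch; nothing here is taken from a real interpreter.

    import ast
    import operator

    # Interpret a small arithmetic expression by walking the parse tree directly.
    def interpret_expr(source):
        ops = {ast.Add: operator.add, ast.Mult: operator.mul, ast.Sub: operator.sub}

        def eval_node(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp):
                # Evaluate the operands, then apply the operator immediately.
                return ops[type(node.op)](eval_node(node.left), eval_node(node.right))
            raise ValueError("unsupported construct")

        return eval_node(ast.parse(source, mode="eval").body)

    print(interpret_expr("1 + 2 * 3"))   # 7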
The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations.
Figure: An illustration of the linking process. Object files and static libraries are assembled into a new library or executable.
A compiler converts source code into binary instructions for a specific processor's architecture, thus making it less portable.
Semantics
The formal study of semantics intersects with many other fields of inquiry, including lexicology, syntax, pragmatics, etymology and others. Independently, semantics is also a well-defined field in its own right, often with synthetic properties.[4] In the philosophy of language, semantics and reference are closely connected.
Further related fields include philology, communication, and semiotics. The formal study of semantics can therefore be manifold and complex. Semantics contrasts with syntax, the study of the combinatorics of units of a language (without reference to their meaning), and pragmatics, the study of the relationships between the symbols of a language, their meaning, and the users of the language.[5] Semantics as a field of study also has significant ties to various representational theories of meaning including truth theories of meaning, coherence theories of meaning, and correspondence theories of meaning.
Syntax (programming languages)
In computer science, the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be a correctly structured document or fragment in that language. This applies both to programming languages, where the document represents source code, and markup languages, where the document represents data. The syntax of a language defines its surface form.[1] Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical).
Documents that are syntactically invalid are said to have a syntax error. Computer language syntax is generally distinguished into three levels:
Words – the lexical level, determining how characters form tokens;
Phrases – the grammar level, narrowly speaking, determining how tokens form phrases;
Context – determining what objects or variable names refer to, whether types are valid, etc. (for example, whether expressions such as 'a' + 1 or a + b are well-formed can depend on the types of the names involved).
List of programming languages
The aim of this list of programming languages is to include all notable programming languages in existence, both those in current use and historical ones, in alphabetical order, except for dialects of BASIC and esoteric programming languages. Note: Dialects of BASIC have been moved to the separate List of BASIC dialects.
Note: This page does not list esoteric programming languages.
See also: List of programming languages by type; Timeline of programming languages.
Perl
Though Perl is not officially an acronym,[5] there are various backronyms in use, such as: Practical Extraction and Reporting Language.[6] Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier.[7] Since then, it has undergone many changes and revisions.
The latest major stable revision of Perl 5 is 5.18, released in May 2013. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.
History
Early versions
Wall began work on Perl in 1987, while working as a programmer at Unisys,[9] and released version 1.0 to the comp.sources.misc newsgroup on December 18, 1987.[14] The language expanded rapidly over the next few years.
Perl 2, released in 1988, featured a better regular expression engine.