What is the current consensus on the origins of the Brahmi script?

There seem to be two hypotheses about the origin of India's Brahmi script: It developed either from the Aramaic script or the Indus Valley script.

Is there any scholarly consensus regarding which of these hypotheses is correct?

There is a gap of roughly fifteen centuries between the demise of the Indus script and the earliest appearance of the Brahmi script. Moreover, the Indus Valley script remains undeciphered, whereas Brahmi is well understood through a large corpus of literature.

On the other hand, there are substantial and irreconcilable differences between Kharosthi, which was based on Aramaic, and Brahmi. The current consensus, according to Amalia E. Gnanadesikan in her book "The Writing Revolution: Cuneiform to the Internet", is that Brahmi is a result of stimulus diffusion: the idea of an alphasyllabary, transmitted from the Middle East by way of Iran, influenced the creation of Kharosthi directly and Brahmi indirectly, Brahmi being created largely from scratch to serve as a more suitable vehicle for Prakrit than any of the other contemporary writing systems. This is a common way for writing systems to come into existence; the most recent example in widespread use is Inuktitut.

As per my knowledge, the Brahmi script developed from the Indus Valley script. The facts supporting this point are:

1. The earliest known writing is found on pottery remains from Harappa and across various parts of the world, dating to around 1000-500 BCE, and these inscriptions resemble the Tamil language of that time (Tamil being held to be part of the Indus language family).

2. Iravatham Mahadevan, one of the prominent research scholars in this field, has offered various proofs that Brahmi originated from Tamil.

3. Research by Richard Salomon also suggests that Brahmi most likely originated from Tamil.

4. Archaeological evidence found at Kodumanal and Chennimalai near Erode (500 BCE), the Porunthal site near Palani (500 BCE), Tissamaharama in Sri Lanka (200 BCE), Tirupparankundram hill in Madurai (1st century BCE) and Quseir al-Qadim in Egypt (1st century BCE) suggests that the script is the Tamil-Brahmi script.

Based on the above facts, my belief is that Brahmi is a form of Tamil that was widely in use around 500 BCE. This leads me to the conclusion that Brahmi developed from the Indus Valley script.

Sources / Further Reading

1. Corpus of Tamil-Brahmi Inscriptions by Iravatham Mahadevan
2. Akam and Puram: 'Address' Signs of the Indus Script by Iravatham Mahadevan
3. http://www.hindu.com/2007/11/21/stories/2007112158412400.htm
4. Indian Epigraphy: A Guide to the Study of Inscriptions in Sanskrit, Prakrit, and the Other Indo-Aryan Languages by Richard Salomon
5. Tamil Literature by Kamil Zvelebil

[Image: Pillar Edict in Brahmi script, Lumbini]

List of Ancient Indian Scripts

1. Indus Script

It refers to the script used by the people of the Indus Valley civilisation. It has not been deciphered yet. Some scholars have argued that this script was the predecessor of the Brahmi script. It is an example of the boustrophedon style: one line is written from left to right and the next from right to left.

2. Brahmi Script

Brahmi is the originator of most present-day Indian scripts, including Devanagari, Bengali, Tamil and Malayalam. It developed into two broad varieties in northern and southern India, the northern one being more angular and the southern one more rounded. It was deciphered in 1837 by James Prinsep. Its best examples are found in the rock-cut edicts of Asoka.

3. Kharosthi Script

It is the sister script and contemporary of Brahmi. It was written from right to left. It was used in the Gandhara culture of north-western India and is sometimes also called the Gandhari script. Its inscriptions have been found in the form of Buddhist texts from present-day Afghanistan and Pakistan.

4. Gupta Script

It is also known as the Late Brahmi script. It was used for writing Sanskrit in the Gupta period. It gave rise to the Nagari, Sarada and Siddham scripts, which in turn gave rise to the most important scripts of India, such as Devanagari and Bengali.

5. Sarada Script

It was a Western variant of the Gupta script. It evolved into Kashmiri and Gurmukhi (now used for writing Punjabi) scripts. It was also used for writing Sanskrit. It is now rarely used.

6. Nagari Script

It was an Eastern variant of the Gupta script and an early form of the Devanagari script. It branched off into many other scripts, such as Devanagari, Bengali and Tibetan. It was used to write both Prakrit and Sanskrit.

7. Devanagari Script

It is the main script at present for writing standard Hindi, Marathi and Nepali, as well as Santhali, Konkani and many other Indian languages. It is also used to write Sanskrit and is one of the most widely used writing systems in the world. The name is composed of deva (god) and nagari (city), implying a script that is both religious and urbane or sophisticated.

8. Kalinga Script

Kalinga was the ancient name of Odisha, and this script was used to write an ancient form of Oriya. It is visually close to the original Brahmi. The Oriya language presently uses a different script, derived from the Bengali script.

9. Grantha Script

It is one of the earliest Southern scripts to originate from Brahmi. It branched off into the Tamil and Malayalam scripts, which are still used to write those languages. It is also the predecessor of the Sinhala script used in Sri Lanka. A variant of Grantha called Pallava was taken by Indian merchants to Indonesia, where it led to the development of many South-East Asian scripts. It was used in Tamil Nadu to write the Sanskrit granthas (treatises) and hence was named Grantha.

10. Vatteluttu Script

It was a script derived from Brahmi and used in the southern part of India to write Tamil and Malayalam. It dropped those Brahmi signs that were not needed for writing the southern languages. Presently, both Tamil and Malayalam have moved on to their own Grantha-derived scripts.

11. Kadamba Script

It is a descendant of Brahmi and marks the birth of the dedicated Kannada script. It led to the development of modern Kannada and Telugu scripts. It was used to write Sanskrit, Konkani, Kannada and Marathi.

12. Tamil Script

It is the script used to write the Tamil language in India and Sri Lanka. It evolved from Grantha, the southern form of Brahmi. It is a syllabic (abugida) script, not an alphabetic one, and is written from left to right.

According to epigraphers, all Indian scripts are derived from Brahmi. There are three main families of scripts:

1. Devanagari, which is the basis of the scripts of the languages of northern and western India: Hindi, Gujarati, Bengali, Marathi, Dogri, Punjabi, etc.

2. Dravidian, which is the basis of the Telugu and Kannada scripts.

3. Grantha, a subsection of the Dravidian family used for languages such as Tamil and Malayalam, but not as important as the other two.


The antiquity of writing in India stretches back to the period of the Indus civilization which lasted for about a thousand years from 2500 to 1500 B.C. After a gap of over a thousand years we come across inscriptions of Asoka in the Greek, Aramaic, Kharosthi and Brahmi scripts. Brahmi was the most common script used by Asoka who ruled from 269 to 232 B.C. Brahmi inscriptions which belong to the period of Asoka have been found in Sri Lanka in rock-shelters. The language used in the Brahmi inscriptions of Ceylon and those of Asoka is Prakrit, a colloquial form of Sanskrit. Inscriptions using Brahmi characters have also been discovered in Tamil Nadu in rock-shelters and potsherds of different types, and the language used is Tamil with a mixture of Prakrit words. The earliest writings so far discovered in Tamil are written in characters which closely resemble Asokan Brahmi inscriptions. These inscriptions are said to be written in the Tamil-Brahmi script to denote the fact that it is a script closely resembling Brahmi and used for writing the Tamil language. The language of these inscriptions is a peculiar kind of Tamil and not really the classical Tamil of the Sangam poetry. Both the modern Tamil script 1 and the Vatteluthu script evolved from this parent script.

No other script earlier than the Tamil-Brahmi (also called the Dhamili or Tamil) script has so far been discovered in Tamil Nadu.

The Asokan Brahmi is the parent script from which all the modern Indian scripts evolved over many centuries. However there is no unanimity of opinion about the origin of this parent script. It is either an indigenous script or a script borrowed from outside the country, and some of the earlier theories were based on the similarities in shape between the Brahmi and some west Asian scripts. Some letters were similar between the scripts but there were many letters which were not similar and the sound values were often different.

If the script were to be of indigenous origin then it could have developed from the Indus signs and some of these signs which resemble Brahmi characters have formed the basis for the theory of indigenous development of the Brahmi script. There is another logical possibility, viz., that the script could be indigenous but need not have evolved over centuries from a different set of signs such as the ones found in the Indus system.

It is this logical possibility of the script having been designed and perfected during a fairly short period which we would like to examine. We assume that the script of the Asokan Brahmi inscriptions had not undergone any drastic change in the shape of the characters. When we put forward such a thesis a few years ago 2 it was rather new, but now there are scholars who are willing to admit this possibility. 3

When was the Asokan Brahmi script designed? Megasthenes the Greek envoy who visited the imperial Mauryan court around 300 B.C. is said to have observed that Indians did not have written books, implying that they did not use a writing system or writing was not widely prevalent. Archaeological excavations in pre-Asokan sites have not brought to light earlier forms of Brahmi and Brahmi itself appears rather suddenly on the scene as an elegantly designed script. It is quite likely that the northern Brahmi came into usage some decades before Asokan rule -- i.e. the beginning of the 3rd century B. C. From where did the inventor get hold of so many different signs? We had proposed earlier that he possibly made use of geometrical patterns like a square with a cross on it (or a large square made of four small squares), and a circle with a vertical straight line (like the Greek letter phi), and these geometrical designs are found along with Tamil-Brahmi inscriptions. Many of the Asokan-Brahmi and the Tamil-Brahmi signs can be fitted into these basic designs.

Once we accept the hypothesis that Asokan Brahmi was invented over a comparatively short period then we have to face the question of the relation between the Asokan-Brahmi and the Tamil-Brahmi scripts. They could not have been designed independently by two different groups unknown to one another and come out with more or less the same set of symbols for the same set of sounds. Either one is directly influenced by the other or there was a common source from which both Tamil in the south and Prakrit in the north borrowed. Since we do not have any evidence at the moment for such a third source we shall confine our attention to the relation between the Asokan and the Tamil-Brahmi scripts only.

It is generally accepted that the influence of Asoka was felt all over India and Sri Lanka, and Asokan edicts are spread all over India except in Tamil Nadu and Kerala. It is therefore held that Tamil had no script of its own prior to Asokan period, and that either during Asoka's reign or some time later Tamil was committed to writing for the first time using a modification of the Asokan-Brahmi script. However what we wish to examine in this paper is the possibility of the Asokan Brahmi script itself having been developed from the Tamil script. In other words the first Mauryan scribe of Brahmi knew about the Tamil script and added new symbols to suit the requirements of writing Prakrit. We shall examine the hypothesis that the Tamil-Brahmi script directly influenced the designing of the Asokan script.

The generally accepted theory that the Tamil-Brahmi system is an adaptation of the Asokan Brahmi system has a number of difficulties, as any theory would have. With the addition of new symbols for the special Tamil letters ḻa, ḷa, ṟa and ṉa, Asokan Brahmi must have been fairly adequate to write the Tamil language. The difference between the long and the short medial e and medial o can be made out from the context, and so also the pure consonants from a consonant-vowel combination. If the existing Tamil-Brahmi inscriptions were in fact written during the period of Asoka, why is it that they do not strictly follow the Asokan system? It is widely known that the common Tamil word MA KA N was erroneously read as MAA KAA NA for decades by professional epigraphists who relied on the Asokan system to read the Tamil-Brahmi inscriptions. In other words, a Buddhist monk from the Asokan capital would not have been able to read correctly the proper names and other words written in the Tamil-Brahmi script of his period. There are in fact a number of Prakrit words in early Tamil inscriptions; why then did the scribes follow a system of writing which unnecessarily reduced the mutual intelligibility between the people of the north and the people of Tamil Nadu?

Alternatively, one may suggest that writing was introduced into Tamil Nadu only a hundred years after Asoka. This suggestion also leads to difficulties. It would mean that during Asoka's time, and for a hundred years after him, his neighbouring kingdoms of the Cholas, Keralaputras, Satyaputras and Pandyas did not make use of writing even though writing had spread to Ceylon in the south and Mysore in the north. The Brahmi script of Asoka had become such a standard script for Ceylon and the southern part of India that there is no obvious reason why Tamil should have followed a system of writing different from the Asokan one. We refer again to the example of the Tamil word MA KA N, which would be read wrongly as MAA KAA NA if one used the Asokan notational system to read it. Furthermore, many of the archaic-looking Tamil-Brahmi inscriptions do not follow the Asokan notational system.

The Mahavamsa, 4 the Buddhist chronicle of Sri Lanka, speaks of a monarch, King Vijaya of the fifth century B.C., having sought a matrimonial alliance with a Pandya ruler. The ancient Pandya ruler is said to have sent a letter to Vijaya along with his daughter. This reference to a letter could indicate that writing existed in Tamil Nadu for many centuries before Christ.

A commentator of Tolkappiyam 5 speaks of the letters of the Tamil alphabet having been derived from forms such as the square, the circle, etc. This shows that the Tamil grammarians had a theory of the origins and development of writing based on simple geometrical forms. Since it is not found in the original text, there is no way of claiming much antiquity for this theory except that it could be based on an early tradition.

We shall now discuss the different orthographic systems that were prevalent about 2000 years ago. For the sake of convenience we shall refer to the different systems of writing used in Tamil Nadu as Tamil-Brahmi I, Tamil-Brahmi II and the Tamil Pulli systems. We shall compare these with the Asokan Brahmi system and the Bhattiprolu relic-casket inscriptions from Krishna district of Andhra Pradesh.

The existence of two orthographic systems of writing Tamil-Brahmi was first demonstrated by Mahadevan. 6 In the system that we call Tamil-Brahmi I, the letter NA is distinguished from the letter N by adding a horizontal stroke at the top of the letter N (Fig. 1). This vowel-marker unambiguously distinguishes the consonant-vowel combination from the corresponding pure consonant. In the word NAVAMANI, for example, the letters NA, VA and MA have the medial vowel a inherent in them. In Fig. 1, all three letters carry the vowel-marker, denoted by the same short horizontal stroke at the top right-hand side. The letter NAA is written as two letters, NA and A, and this system unambiguously distinguishes the letters with the medial long a from the letters with the medial short a. In Tamil, pure consonants occur frequently at the end of a word as well as in the middle of a word. Letters with short medial a are more frequent than letters with long medial a. A short horizontal stroke on the left side indicates the short medial e, and two horizontal strokes, one on the left and the other on the right, indicate the short medial o. The short medial i is indicated by a short vertical stroke at the top and the medial u by a short stroke at the bottom. The sign for ai is a double horizontal stroke on the left. The lengthening of the vowels could be indicated by adding a pure vowel, as in the case of NAA in Fig. 1.

In Tamil-Brahmi II, the same sign is used to denote a pure consonant as well as the consonant with inherent a. For example, the letter N as well as the letter NA would be denoted by the same symbol. In contrast, in the first system these letters would be distinguished by adding a horizontal stroke to the letter NA at the top right-hand side. In the second system, the addition of the stroke indicates the letter NAA, i.e. N with the long medial a. Here the Tamil-Brahmi II system closely resembles the Asokan Brahmi system.

In the Asokan Brahmi inscriptions which are in Prakrit, pure consonants do not occur frequently. There is a special sign called the anusvara for the letter m which is represented by a small circle, and it represents the pure consonant m. All other pure consonants are denoted by a special device. If a pure consonant like K were to be followed by a letter like YA, then Y is written below K and a compound letter KYA is formed indicating that the top sign denotes a pure consonant. There is no way of unambiguously representing a pure consonant when it occurs at the end of a word. One way of getting over the difficulty is to write the consonant as a consonant with short medial a and the correct value may be given depending on the context. This practice is not followed in any of the Tamil-Brahmi systems.

The third system of writing in Tamil-Brahmi inscriptions is the Tamil Pulli system. A pure consonant is distinguished from a consonant with inherent a by a small circular dot, or pulli, placed near the symbol. Tolkappiyam states that the pure consonant will have a dot or pulli, and that the sound M could also be in the form of a pulli. That Tolkappiyam here refers to the anusvara is not universally accepted. The Tamil Pulli system unambiguously denotes the pure consonants, the letters with short medial vowels and those with long medial vowels. The short e is distinguished from the long e by the addition of a pulli to the sign for e.

The Bhattiprolu system is similar to the Tamil-Brahmi I system. The long medial a is denoted by a longer stroke at the top right hand side which bends down (Fig.1). A sign without any extra marking would denote a pure consonant, one with a short horizontal sign at the top right hand side would denote a short medial a sign and a sign with a long horizontal stroke bent down, would denote a long medial a sign. This system can be viewed as an improvement over the Tamil-Brahmi I system. The Asokan Brahmi system is closer to the Tamil-Brahmi II system.

We wish to put forward the hypothesis that Asokan Brahmi is a close adaptation of the Tamil-Brahmi script. This hypothesis is not startlingly new. In 1954 T. N. Subramaniam proposed that Brahmi was originally meant for a language like Tamil. 7 However we shall not follow the main lines of his arguments here.

We wish to propose that the Tamil-Brahmi system I belongs to the pre-Asokan period and the Asokan Brahmi is an adaptation and elaboration of that system.

Is there any evidence to support such a hypothesis? First we note that there are fewer symbols in the Tamil-Brahmi script compared to the Asokan Brahmi script. It is quite plausible that signs other than the ones used by Tamil grammarians could have been used along with the letters of the Tamil-Brahmi script from the earliest times. There do exist some Asokan Brahmi symbols which could be thought of as resulting from an elaboration of certain Tamil letters. For instance, the letter PHA is obviously an elaboration of the letter PA, obtained by adding a curl to the PA sign. The fish-shaped form of MA used in Asokan Brahmi inscriptions could be treated as a form evolved from the Tamil form for MA (Fig. 2). Other things being equal, a more elaborate script is a later development of a less elaborate one.

As we have described earlier, there exist three systems of writing followed in Tamil-Brahmi inscriptions. The Asokan Brahmi inscriptions, however, follow a single orthographic system which is different from the Tamil systems as well as the Bhattiprolu system. Here we wish to make use of a principle followed in the life sciences for fixing the original home of a plant or an animal which occurs over a vast area. When the same kind of plant is found all over the world, certain criteria are used to fix the original site from which the plant eventually spread. The first objective criterion is this: if many related species are found in the wild in one region of the world, then that region is reckoned to be the original home of the plant even though it may be found extensively in other parts of the world. If we apply this principle to the area of ancient scripts, then Tamil Nadu would be the original home of the Brahmi script, since a variety of systems were in use for writing Tamil. One may wonder whether this methodology is applicable to the area of scripts. We can verify it by looking at the Grantha script used in Thailand. 8 The original home of the Grantha script is southern India, and many varieties of that script are found there. We conclude that the objective scientific principle is also applicable to ancient scripts.

Another objective criterion used in the Life Sciences is to assign an area as the original home of a plant if that area harbours a more primitive and wild variety of the plant in question. For instance, chillies were introduced into India less than about 500 years ago. Wild chillies are found in the American continent, and South America is the original home of the chilli plant. Let us apply this objective principle to the field of epigraphy. Let us once again take the example of the Grantha script from Thailand. The original home of the script is Andhra Pradesh and Tamil Nadu wherein we find the earlier forms of that script. Let us apply this principle to the study of the Brahmi script. We shall show that the Tamil-Brahmi I system is more primitive than the Asokan system. In the former system every consonant sign stands for a pure consonant. If a vowel were to follow a pure consonant it is represented either by printing an initial vowel sign next to the consonant or by modifying the consonant sign by the addition of a short stroke or two which represent an appropriate medial vowel sign. For example consider the case of the consonant K which is written in the form of a cross. A cross with the addition of a horizontal stroke at the top right hand side will represent KA, a stroke at the bottom right hand side will represent KU, a stroke on the top left hand side will represent KE and so on. The same principle is used for all the vowels and vowel a does not get any special priority over other vowels. This system is more logical and more basic than the Asokan system. We recall that in the Asokan Brahmi system, a cross sign stands for KA or K+A. A vertical stroke on the top right hand side makes it KI or (K+A)-A+I. The Asokan system, therefore, is less primitive and less logical than the Tamil-Brahmi I notational system. 
The fact that there exists a system of writing in Tamil which is more logical and more basic than the Asokan system supports our hypothesis that Tamil-Brahmi is a little more ancient than Asokan Brahmi.

The dating of the Tamil-Brahmi inscriptions is full of uncertainties. The only dated Brahmi inscriptions of significance are the Asokan inscriptions of the third century B.C., the Arikkamedu graffiti on potsherds 9 of the first and second centuries A.D. and a silver coin 10 of Vashishtiputra Sri Satakarni of the second century A.D. The Arikkamedu dating is based on archaeological methods which make use of Roman pottery which can be dated to within a few decades. Some inscribed potsherds have also been reported from Uraiyur 11 and these are assigned to the end of the first century or the beginning of the second century A.D. The Arikkamedu inscriptions follow the Tamil-Brahmi II system whereas the Uraiyur writings follow the Tamil-Brahmi I system. The Satakarni coin follows the Pulli system. In some inscriptions there is some kind of mix-up of more than one system. All the three systems seem to have existed side by side.

Many scholars tend to date the Tamil Brahmi inscriptions on the basis of Paleography or the study of the shape of (ancient) letters. A reliable paleographical scheme for the study of Tamil Brahmi inscriptions is yet to be established. In the Ceylon inscriptions it has been found that there is no uniformity of script in the early inscriptions and it has not been possible to formulate a paleographical scheme to be used for dating inscriptions of unknown authorship. 12 As a matter of fact paleographical methods are not dependable enough for determining the dates of early Tamil-Brahmi inscriptions. Even in Asokan inscriptions (Fig. 2) there is a lot of variation 13 in the shapes of letters inscribed by different individual scribes.

In Fig. 2 we see a variety of shapes in which the letters are depicted in the inscriptions of Asoka. The symbol used for initial i is often a group of three dots. Tamil-Brahmi inscriptions use a group of three horizontal strokes which, I believe, is an earlier form of the letter which can be derived from the geometrical pattern of a square with a cross. The triangle sign for initial e is also found in Tamil-Brahmi inscriptions. In a cavern at Muthupatti 14 the pyramidal form appears, and it could be the earliest form. The sign for k is written in more than one way. The various shapes of the letter ta (ta as in Tamil) include many shapes which are found in Tamil-Brahmi inscriptions. The Asokan symbol for ma appears to be derived from the corresponding typical Tamil form. The different forms of ya are also found in Tamil-Brahmi inscriptions. In Asokan inscriptions the medial i is represented in more than one way. Such a wide range of variation in the shapes of letters in the inscriptions, and the similarity between some of the Asokan letters and the Tamil-Brahmi letters, should lead scholars to take a cautious approach towards dating early Tamil inscriptions on the basis of paleography.

The significant changes in the shapes of Tamil-Brahmi letters have yet to be successfully identified. The difference between the period characteristics and the scribal characteristics has yet to be established. The sign for medial i which is a short vertical stroke at the top of a sign, evolved over centuries into the form of a cap. However, even in some Asokan inscriptions the pure shape of a vertical line attached by a horizontal stroke is not strictly maintained. If such a feature were to be used to date a Tamil-Brahmi inscription it could lead to erroneous conclusions.

The recent discovery of an inscription of Adiyaman Nedumananji at Jambai village brings home the difficulty. The first part of the inscription is very similar to Asokan inscriptions. From literary sources we know that Atiyan was a contemporary of Ceraman Peruncheral Irumborai. An inscription of Ceraman's son is known to exist at Pugalur 15 near Karur and this inscription must be only a few years or decades later than the Jambai inscriptions. However if one were to depend upon paleographical methods, the Pugalur inscription will appear to be more than a century later than the Jambai inscription. Atiyaman is believed to have lived in the later part of the second century A. D. but his inscription is more similar to the Asokan inscriptions of the 3rd century B. C. than to the legend on the Satakarni coin of the second century A. D. The Atiyaman inscription has brought to the forefront the difficulties and uncertainties in dating the Tamil-Brahmi inscriptions.

One is therefore left mainly with literary sources to fix the dates of inscriptions and the literary sources themselves are not always free from interpolations. Silapathikaram has a couple of references to king Gajabahu of Sri Lanka. One appears in the introductory passage where many kings are described as offering sacrifices to Kannagi who had already been deified as a goddess. The second reference is towards the end of Silapathikaram where Gajabahu is described in the company of Senguttuvan. Mahavamsa is silent about Gajabahu's visit to India but some scholars have accepted the authenticity of this account in Silapathikaram for establishing Gajabahu-Senguttuvan synchronism. These and perhaps other materials are used to date the period of Karikala Chola and Atiyaman Nedumananji to the second century A. D. However the dates proposed for the sangam period on the basis of earlier historical studies are often at variance with the dates proposed by those who rely heavily on Tamil language and literature.

It seems to me that the Senguttuvan-Gajabahu synchronism is not based on strong corroborative evidence and once this synchronism is given up, I wonder why Atiyaman Nedumananji cannot be treated as a contemporary of Asoka or at least placed in the first or second centuries before Christ.

With the available data at our disposal it is therefore difficult to get a clear picture of the origin of writing in Tamil Nadu. We shall however propose a tentative scheme of development as a hypothetical alternative to the already existing theories.

In the first stage, the Tamil-Brahmi script is invented and used for writing in Tamil Nadu during the pre-Asokan days. The script could alternatively be called Dhamili or Tamil. All the letters might have been invented in toto, or some more Tamil letters such as ḻa, ḷa, ṟa and ṉa and some non-Tamil letters could have been invented towards the end of the first stage. The first orthographic system would have been in use.

In the second stage the Tamil letters reach the courts of the Mauryan kings and the Tamil script is adapted to write Prakrit language. New signs are added and these could have been devised using the same principle as the one used by the original inventors so that there is not much difference between the new and the old letters. Special Tamil characters which are not of use for writing Prakrit are dropped. During the reign of Emperor Asoka, the Brahmi script is spread far and wide up to Sri Lanka. The already existing Tamil system gets influenced by the Asokan system and the Tamil-Brahmi II system which is akin to the Asokan system is slowly accepted in Tamil Nadu. Both the systems operate side by side. Due to the mixing of the two systems, consistency of systems is not strictly maintained in many inscriptions.

The Pulli system can be envisaged as an improvement over the second system in which a pulli is used to mark pure consonants. The Pulli system coexisted with the other two systems and the Anaimalai inscription [16,17] is an example of the Pulli system. This scheme assumes that the pulli system came after the second system in order to accommodate the theory that Asokan writings influenced the Tamil notational system. It is quite conceivable, however, that Tamil was not very much influenced by the northern Brahmi inscriptions and the Pulli system could have preceded the second system as an alternative independent notational system. The Tamil-Brahmi II system, in that case, would be a later development of the Pulli system and the pulli could have gone out of use as it happened in the medieval period.

The Pulli system is described in Tolkappiyam as the norm. In Tolkappiyam every vowel is treated as uyir or soul and every consonant as mey or body. The same soul could enter different bodies and form uyir mey or body with soul. A single soul or uyir could exist by itself but a body or pure consonant could not. Thus a theory for the letters of the Tamil alphabet existed at the time of Tolkappiyar and it reflected the contemporary metaphysical system which included the belief in the transmigration of souls.

The Bhattiprolu system can be viewed as an improvement over the Tamil-Brahmi I system and it also existed side by side with the other systems.

We are not in a position now to establish with certainty the nature of the origin and development of writing in Tamil Nadu, but the picture may become clearer in the future when new inscriptions are discovered. What is clear, however, is that two systems of Brahmi as well as a third Tamil Pulli system existed side by side during the early period. A large proportion of the inscriptions discovered so far is in the Tamil-Brahmi II system. The earlier theory that writing in Tamil Nadu developed after Asoka does not fit in well with the available data, and therefore we have proposed an alternative theory which needs to be examined further for its validity.

I wish to thank Dr M. Lockwood for fruitful discussions and for going through an earlier draft of this paper, and Mr. S. Govindaraju for preparing the diagrams.

Breaking Rules That Don’t Hold Water

The currently most popular hypothesis maintains that the seals were inscribed with Proto-Dravidian or Proto-Indo-European ‘names of the seal-owners’, but this, according to the researcher, simply ‘does not hold water.’ Many scholars, according to the author, assume the Indus script is ‘logo-syllabic’, where one symbol can be used as a word sign at one time and as a syllable sign in another instance.

This method, where a word-symbol is also used for its sound value, is called the ‘rebus principle.’ The paper offers the analogy of combining ‘pictures of a honey bee and a leaf’ to signify the word “belief” (bee+leaf), and according to Ms. Mukhopadhyay, while many ancient scripts use rebus methods to generate new words, the inscriptions found on the Indus seals and tablets ‘have not used rebus as the mechanism to convey meaning.’
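The rebus mechanism itself is simple to model: each picture-symbol is read purely for its sound value, and the sounds are concatenated into an unrelated word. A toy sketch (the symbol-to-sound mapping is invented for this illustration only):

```javascript
// Toy model of the rebus principle: picture-symbols are read for their
// sound values only, and the sounds are joined into a new word.
// The mapping below is invented purely for this example.
var soundValue = {
  bee: "be",
  leaf: "lief"
};

function rebus(symbols) {
  return symbols.map(function (s) {
    return soundValue[s];
  }).join("");
}

console.log(rebus(["bee", "leaf"])); // "belief"
```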

Tamil language

Tamil language, member of the Dravidian language family, spoken primarily in India. It is the official language of the Indian state of Tamil Nadu and the union territory of Puducherry (Pondicherry). It is also an official language in Sri Lanka and Singapore and has significant numbers of speakers in Malaysia, Mauritius, Fiji, and South Africa. In 2004 Tamil was declared a classical language of India, meaning that it met three criteria: its origins are ancient, it has an independent tradition, and it possesses a considerable body of ancient literature. In the early 21st century more than 66 million people were Tamil speakers.

The earliest Tamil writing is attested in inscriptions and potsherds from the 5th century BCE. Three periods have been distinguished through analyses of grammatical and lexical changes: Old Tamil (from about 450 BCE to 700 CE), Middle Tamil (700–1600), and Modern Tamil (from 1600). The Tamil writing system evolved from the Brahmi script. The shape of the letters changed enormously over time, eventually stabilizing when printing was introduced in the 16th century CE. The major addition to the alphabet was the incorporation of Grantha letters to write unassimilated Sanskrit words, although a few letters with irregular shapes were standardized during the modern period. A script known as Vatteluttu (“Round Script”) is also in common use.

Spoken Tamil has changed substantially over time, including changes in the phonological structure of words. This has created diglossia—a system in which there are distinct differences between colloquial forms of a language and those that are used in formal and written contexts. The major regional variation is between the form spoken in India and that spoken in Jaffna (Sri Lanka), capital of a former Tamil city-state, and its surrounds. Within Tamil Nadu there are phonological differences between the northern, western, and southern speech. Regional varieties of the language intersect with varieties that are based on social class or caste.

Like the other Dravidian languages, Tamil is characterized by a series of retroflex consonants (/ḍ/, /ṇ/, and /ṭ/) made by curling the tip of the tongue back to the roof of the mouth. Structurally, Tamil is a verb-final language that allows flexibility regarding the order of the subject and the object in a sentence. Adjectives and relative, adverbial, and infinitive clauses normally precede the term they modify, while inflections such as those for tense, number, person, and case are indicated with suffixes.

Notable features

  • Possibly pre-dates Sumerian Cuneiform writing - if this is true, the Ancient Egyptian script is the oldest known writing system. Another possibility is that the two scripts developed at more or less the same time.
  • The direction of writing in the hieroglyphic script varied - it could be written in horizontal lines running either from left to right or from right to left, or in vertical columns running from top to bottom. You can tell the direction of any piece of writing by looking at the way the animals and people are facing - they look towards the beginning of the line.
  • The arrangement of glyphs was based partly on artistic considerations.
  • A fairly consistent core of 700 glyphs was used to write Classical or Middle Egyptian (ca. 2000-1650 BC), though during the Greco-Roman eras (332 BC - ca. 400 AD) over 5,000 glyphs were in use.
  • The glyphs have both semantic and phonetic values. For example, the glyph for crocodile is a picture of a crocodile and also represents the sound "msh". When writing the word for crocodile, the Ancient Egyptians combined a picture of a crocodile with the glyphs which spell out "msh". Similarly the hieroglyphs for cat, miw, combine the glyphs for m, i and w with a picture of a cat.

Discovering the Meaning Behind the Vauxhall Logo & Name

As one of the oldest established vehicle manufacturers and distribution companies in the UK, Vauxhall has been around for much longer than you’d think. Alexander Wilson founded the company in the Vauxhall district of London in 1857. It was originally named Vauxhall Iron Works before settling on its current name. When Vauxhall designed its original logo in 1857, it chose, as a nod to its local heritage, the image of a griffin driving a “V” flag into the ground. After its founder left the company and it began producing cars, the name and logo were retained to pay homage to its roots.

The griffin, a mythical creature with the body of a lion and the head/wings of an eagle, reflects the coat of arms of Sir Falkes de Breauté, a mercenary soldier who was given the Manor of Luton by King John in the thirteenth century. His mansion, Fulk’s Hall, became known eventually as Vauxhall.

The logo underwent changes over the years, streamlining details and going from square to round.

Evolution of the Vauxhall logo over the years

In 2008, Vauxhall released its most up-to-date logo design, cropping out most of the animal’s body to focus on its head. Concerning the design, Vauxhall’s Managing Director Bill Parfitt stated, “While the new-look Griffin pays homage to our 100 year-plus manufacturing heritage in the UK, it also encapsulates Vauxhall’s fresh design philosophy, first showcased in the current Astra, and set to continue with Insignia.”

A wyvern (dragon) on the White Dragon Flag
Photo: Wikimedia

There’s actually an ongoing argument over the identity of the animal. Some historians and representatives claim that the animal is, or at some point was, actually a wyvern, a mythical dragon with legs and a tail. Its barbed head does bear similarities to the feathered eagle head and features horn-like ears, but the consensus by and large is that the Vauxhall creature is a griffin.

Now, with the sale of Vauxhall to the PSA Group, the big question is: will the Vauxhall logo be revamped to usher in this new chapter in the company’s history? While there hasn’t been official word on this yet (as of today, March 13th), it will depend on whether the PSA Group wants to sustain the brand’s current image without drawing attention to the change in ownership, or whether a noticeable makeover would be welcomed. Considering the 2008 makeover was made to improve perception of the brand, an upcoming redesign wouldn’t be a surprise.



ECMAScript: JavaScript as a standard

The first big change for JavaScript after its public release came in the form of ECMA standardization. ECMA is an industry association formed in 1961 concerned solely with standardization of information and communications systems.

Work on the standard for JavaScript was started in November 1996. The identification for the standard was ECMA-262 and the committee in charge was TC-39. By that time, JavaScript was already a popular element in many pages. This press release from 1996 puts the number of JavaScript pages at 300,000.

JavaScript and Java are cornerstone technologies of the Netscape ONE platform for developing Internet and Intranet applications. In the short time since their introduction last year, the new languages have seen rapid developer acceptance with more than 175,000 Java applets and more than 300,000 JavaScript-enabled pages on the Internet today according to www.hotbot.com. - Netscape Press Release

Standardization was a bold step for such a young language, but a great call nonetheless. It opened up JavaScript to a wider audience and gave other potential implementors a voice in the evolution of the language. It also served the purpose of keeping implementors in check: back then, it was feared Microsoft or others would stray too far from the default implementation and cause fragmentation.

For trademark reasons, the ECMA committee was not able to use JavaScript as the name. The alternatives were not liked by many either, so after some discussion it was decided that the language described by the standard would be called ECMAScript. Today, JavaScript is just the commercial name for ECMAScript.

ECMAScript 1 & 2: On The Road to Standardization

The first ECMAScript standard was based on the version of JavaScript released with Netscape Navigator 4 and still lacked important features such as regular expressions, JSON, exceptions, and important methods for built-in objects. In the browser, however, JavaScript was working better and better. Version 1 was released in June 1997.

Notice how our simple test of prototypes and functions now works correctly. A lot of work had gone on under the hood in Netscape 4, and JavaScript benefited tremendously from it. Our example now runs essentially identically to any current browser. This is a great state to be in for its first release as a standard.

The second version of the standard, ECMAScript 2, was released to fix inconsistencies between ECMA and the ISO standard for JavaScript (ISO/IEC 16262), so no changes to the language were part of it. It was released in June 1998.

An interesting quirk of this version of JavaScript is that errors not caught at compile time (which are in general left unspecified) are left to the whim of the interpreter. This is because exceptions were not yet part of the language.

ECMAScript 3: The First Big Changes

Work continued past ECMAScript 2 and the first big changes to the language saw the light. This version brought in:

  • Regular expressions
  • The do-while block
  • Exceptions and the try/catch blocks
  • More built-in functions for strings and arrays
  • Formatting for numeric output
  • The in and instanceof operators
  • Much better error handling

ECMAScript 3 was released in December 1999.
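Several of the additions above can be illustrated together in a short snippet, written here in the era's function-style idiom:

```javascript
// ECMAScript 3 features in action: regular expressions, the do-while
// block, try/catch exceptions, and the in/instanceof operators.
var text = "released in December 1999";
var match = /\d{4}/.exec(text);          // regular expressions
var year = match ? Number(match[0]) : 0;

var count = 0;
do {                                      // do-while block
  count++;
} while (count < 3);

var caughtError;
try {                                     // exceptions and try/catch
  throw new Error("boom");
} catch (e) {
  caughtError = e instanceof Error;       // instanceof operator
}

var obj = { name: "ES3" };
var hasName = "name" in obj;              // in operator

console.log(year, count, caughtError, hasName); // 1999 3 true true
```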

This version of ECMAScript spread far and wide. It was supported by all major browsers at the time, and continued to be supported many years later. Even today, some transpilers can target this version of ECMAScript when producing output. This made ECMAScript 3 the baseline target for many libraries, even when later versions of the standard were released.

Although JavaScript was more in use than ever, it was still primarily a client-side language. Many of its new features brought it closer to breaking out of that cage.

Netscape Navigator 6, released in November 2000 and a major change from past versions, supported ECMAScript 3. Almost a year and a half later, Firefox, a lean browser based on the codebase for Netscape Navigator, was released supporting ECMAScript 3 as well. These browsers, alongside Internet Explorer continued pushing JavaScript growth.

The birth of AJAX

AJAX, asynchronous JavaScript and XML, was a technique that was born in the years of ECMAScript 3. Although it was not part of the standard, Microsoft implemented certain extensions to JavaScript for its Internet Explorer 5 browser. One of them was the XMLHttpRequest function (in the form of the XMLHTTP ActiveX control). This function allowed a browser to perform an asynchronous HTTP request against a server, thus allowing pages to be updated on-the-fly. Although the term AJAX was not coined until years later, this technique was pretty much in place.
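The classic pattern for obtaining such a request object had to account for Internet Explorer's ActiveX control alongside the native object in other browsers. A sketch (the endpoint and element names in the commented usage are illustrative):

```javascript
// Create an XMLHttpRequest in a cross-browser way: the native object
// where available, the ActiveX control on Internet Explorer 5/6.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                    // native implementation
  }
  if (typeof ActiveXObject !== "undefined") {
    return new ActiveXObject("Microsoft.XMLHTTP");  // IE5/6 ActiveX control
  }
  throw new Error("AJAX is not supported in this environment");
}

// Typical asynchronous usage (browser only; URL and element id invented):
// var req = createRequest();
// req.open("GET", "/latest-news", true);
// req.onreadystatechange = function () {
//   if (req.readyState === 4 && req.status === 200) {
//     document.getElementById("news").innerHTML = req.responseText;
//   }
// };
// req.send(null);
```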

The term AJAX was coined by Jesse James Garrett, co-founder of Adaptive Path, in this iconic blog post.

XMLHttpRequest proved to be a success, and years later it was integrated into a separate standard (as part of the WHATWG and the W3C groups).

This evolution of features, an implementor bringing something interesting to the language and implementing it in its browser, is still the way JavaScript and associated web standards such as HTML and CSS continue to evolve. At the time, however, there was much less communication between parties, which resulted in delays and fragmentation. To be fair, JavaScript development today is much more organized, with procedures for presenting proposals by any interested parties.

Playing with Netscape Navigator 6

This release supports exceptions, the main showstopper previous versions suffered when trying to access Google. Incredibly, trying to access Google in this version results in a viewable, working page, even today. For contrast we attempted to access Google using Netscape Navigator 4, and we got hit by the lack of exceptions, incomplete rendering, and bad layout. Things were moving fast for the web, even back then.

Playing with Internet Explorer 5

Internet Explorer 5 was capable of rendering the current version of Google as well. It is well known, however, there were many differences in the implementation of certain features between Internet Explorer and other browsers. These differences plagued the web for many years, and were the source of frustration for web developers for a long time, who usually had to implement special cases for Internet Explorer users.

In fact, to access the XMLHttpRequest object in Internet Explorer 5 and 6, it was necessary to resort to ActiveX. Other browsers implemented it as a native object.

Arguably, it was Internet Explorer 5 that brought the idea to the table first. It was not until version 7 that Microsoft started to follow standards and consensus more closely. Some outdated corporate sites still require old versions of Internet Explorer to run correctly.

ECMAScript 3.1 and 4: The Years of Struggle

Unfortunately, the following years were not good for JavaScript development. As soon as work on ECMAScript 4 started, strong differences in the committee started to appear. There was a group of people that thought JavaScript needed features to become a stronger language for large-scale application development. This group proposed many features that were big in scope and in changes. Others thought this was not the appropriate course for JavaScript. The lack of consensus, and the complexity of some of the proposed features, pushed the release of ECMAScript 4 further and further away.

Work on ECMAScript 4 had begun as soon as version 3 came out the door in 1999. Many interesting features were discussed internally at Netscape. However, interest in implementing them dwindled, and work on a new version of ECMAScript stopped after a while in the year 2003. An interim report was released and some implementors, such as Adobe (ActionScript) and Microsoft (JScript.NET), used it as a basis for their engines. In 2005, the impact of AJAX and XMLHttpRequest sparked renewed interest in a new version of JavaScript, and TC-39 resumed work. Years passed and the set of features grew bigger and bigger. At the peak of development, ECMAScript 4 had features such as:

  • Classes
  • Interfaces
  • Namespaces
  • Packages
  • Optional type annotations
  • Optional static type checking
  • Structural types
  • Type definitions
  • Multimethods
  • Parameterized types
  • Proper tail calls
  • Iterators
  • Generators
  • Introspection
  • Type discriminating exception handlers
  • Constant bindings
  • Proper block scoping
  • Destructuring
  • Succinct function expressions
  • Array comprehensions

The ECMAScript 4 draft describes this new version as intended for programming in the large. If you are already familiar with ECMAScript 6/2015 you will notice that many features from ECMAScript 4 were reintroduced in it.

Though flexible and formally powerful, the abstraction facilities of ES3 are often inadequate in practice for the development of large software systems. ECMAScript programs are becoming larger and more complex with the adoption of Ajax programming on the web and the extensive use of ECMAScript as an extension and scripting language in applications. The development of large programs can benefit substantially from facilities like static type checking, name hiding, early binding and other optimization hooks, and direct support for object-oriented programming, all of which are absent from ES3. - ECMAScript 4 draft

An interesting piece of history is the following Google Docs spreadsheet, which displays the state of implementation of several JavaScript engines and the discussion of the parties involved in that.

The committee that was developing ECMAScript 4 was formed by Adobe, Mozilla, Opera (in an unofficial capacity) and Microsoft. Yahoo entered the game when most of the standard and its features were already decided. Doug Crockford, an influential JavaScript developer, was the person sent by Yahoo. He voiced strong opposition to many of the changes proposed for ECMAScript 4, and he got strong support from the Microsoft representative. In the words of Crockford himself:

But it turned out that the Microsoft member had similar concerns — he also thought the language was getting too big and was out of control. He had not said anything prior to my joining the group because he was concerned that, if Microsoft tried to get in the way of this thing, it would be accused of anti-competitive behavior. Based on Microsoft's past performance, there were maybe some good reasons for them to be concerned about that — and it turned out, those concerns were well-founded, because that happened. But I convinced him that Microsoft should do the right thing, and to his credit, he decided that he should, and was able to convince Microsoft that it should. So Microsoft changed their position on ES4. - Douglas Crockford — The State and Future of JavaScript

What started as doubts soon became a strong stance against ECMAScript 4. Microsoft refused to accept any part of ECMAScript 4 and was ready to take every necessary action to stop the standard from getting approved (even legal action). Fortunately, people in the committee managed to prevent a legal struggle. However, the lack of consensus effectively prevented ECMAScript 4 from advancing.

Some of the people at Microsoft wanted to play hardball on this thing, they wanted to start setting up paper trails, beginning grievance procedures, wanting to do these extra legal things. I didn't want any part of that. My disagreement with ES4 was strictly technical and I wanted to keep it strictly technical; I didn't want to make it nastier than it had to be. I just wanted to try to figure out what the right thing to do was, so I managed to moderate it a little bit. But Microsoft still took an extreme position, saying that they refused to accept any part of ES4. So the thing got polarized, but I think it was polarized as a consequence of the ES4 team refusing to consider any other opinions. At that moment the committee was not in consensus, which was a bad thing because a standards group needs to be in consensus. A standard should not be controversial. - Douglas Crockford — The State and Future of JavaScript

Crockford pushed forward the idea of coming up with a simpler, reduced set of features for the new standard, something all could agree on: no new syntax, only practical improvements born out of the experience of using the language. This proposal came to be known as ECMAScript 3.1.

For a time, both standards coexisted, and two informal committees were set in place. ECMAScript 4, however, was too complex to be finished in the face of discordance. ECMAScript 3.1 was much simpler, and, in spite of the struggle at ECMA, was completed.

The end for ECMAScript 4 came in the year 2008, when Eich sent an email with the executive summary of a meeting in Oslo which detailed the way forward for ECMAScript and the future of versions 3.1 and 4.

The conclusions from that meeting were to:

  1. Focus work on ES3.1 with full collaboration of all parties, and target two interoperable implementations by early next year.
  2. Collaborate on the next step beyond ES3.1, which will include syntactic extensions but which will be more modest than ES4 in both semantic and syntactic innovation.
  3. Some ES4 proposals have been deemed unsound for the Web, and are off the table for good: packages, namespaces and early binding. This conclusion is key to Harmony.
  4. Other goals and ideas from ES4 are being rephrased to keep consensus in the committee; these include a notion of classes based on existing ES3 concepts combined with proposed ES3.1 extensions.

All in all, ECMAScript 4 took almost 8 years of development and was finally scrapped. A hard lesson for all who were involved.

The word "Harmony" appears in the conclusions above. This was the name given to the project for future extensions to JavaScript. Harmony would be the alternative that everyone could agree on. After the release of ECMAScript 3.1 (in the form of version 5, as we'll see below), ECMAScript Harmony became the place where all new ideas for JavaScript would be discussed.


ActionScript was a programming language based on an early draft of ECMAScript 4. Adobe implemented it as part of its Flash suite of applications, where it was the sole supported scripting language. This made Adobe take a strong stance in favor of ECMAScript 4, even going as far as releasing their engine as open source (Tamarin) in hopes of speeding up ECMAScript 4 adoption. An interesting take on the matter was exposed by Mike Chambers, an Adobe employee:

ActionScript 3 is not going away, and we are not removing anything from it based on the recent decisions. We will continue to track the ECMAScript specifications, but as we always have, we will innovate and push the web forward when possible (just as we have done in the past). - Mike Chambers' blog

It was the hope of ActionScript developers that innovation in ActionScript would drive features in ECMAScript. Unfortunately this was never the case, and what later came to ECMAScript 2015 was in many ways incompatible with ActionScript.

Some saw this move as an attempt by Microsoft to remain in control of the language and its implementation. The only viable engine for ECMAScript 4 at the time was Tamarin, so Microsoft, which had 80% browser market share at the moment, could continue using its own engine (and extensions) without paying the cost of switching to a competitor's alternative or taking time to implement everything in-house. Others simply say Microsoft's objections were merely technical, like those from Yahoo; Microsoft's engine, JScript, at this point had many differences from other implementations.

ActionScript remains today the language for Flash, which, with the advent of HTML5, has slowly faded in popularity.

ActionScript remains the closest look to what ECMAScript 4 could have been if it had been implemented by popular JavaScript engines:
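For a flavor of it, here is a small ActionScript 3 class using the optional type annotations that ECMAScript 4 also proposed (illustrative only; the class is invented for this example and this syntax is not valid in standard JavaScript engines):

```actionscript
package {
  public class Point {
    public var x:Number;   // optional static type annotations,
    public var y:Number;   // as ECMAScript 4 also proposed

    public function Point(x:Number, y:Number) {
      this.x = x;
      this.y = y;
    }

    public function length():Number {
      return Math.sqrt(x * x + y * y);
    }
  }
}
```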

E4X? What is E4X?

E4X was the name given to an extension of ECMAScript. It was released during the years of ECMAScript 4 development (2004), so the moniker E4X was adopted. Its actual name is ECMAScript for XML, and it was standardized as ECMA-357. E4X extends ECMAScript to support native processing and parsing of XML content. XML is treated as a native data type in E4X. It saw initial adoption by major JavaScript engines, such as SpiderMonkey, but was later dropped due to lack of use. It was removed from Firefox in version 21.

Other than the number "4" in its name, E4X has little to do with ECMAScript 4.

A sample of what E4X used to bring to the table:
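Since the original sample is not preserved in this copy, here is a hedged sketch of E4X's signature feature, XML literals (the data is invented, and this no longer runs in current engines):

```
// E4X: XML is a native data type, written as a literal...
var order = <order id="123">
    <customer>
      <name>John</name>
    </customer>
    <item>
      <description>Big Screen Television</description>
      <price>1299.99</price>
    </item>
  </order>;

// ...and navigated with ordinary property access.
var name  = order.customer.name;   // the <name> element
var price = order.item.price;      // the <price> element
```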

Arguably, other data formats (such as JSON) have gained wider acceptance in the JavaScript community, so E4X came and went without much ado.

ECMAScript 5: The Rebirth Of JavaScript

After the long struggle over ECMAScript 4, from 2008 onwards the community focused on ECMAScript 3.1, and ECMAScript 4 was scrapped. In the year 2009 ECMAScript 3.1 was completed and signed off by all involved parties. ECMAScript 4 was already recognized as a specific variant of ECMAScript even without any proper release, so the committee decided to rename ECMAScript 3.1 to ECMAScript 5 to avoid confusion.

ECMAScript 5 became one of the most supported versions of JavaScript, and also became the compiling target of many transpilers. ECMAScript 5 was wholly supported by Firefox 4 (2011), Chrome 19 (2012), Safari 6 (2012), Opera 12.10 (2012) and Internet Explorer 10 (2012).

ECMAScript 5 was a rather modest update to ECMAScript 3. It included:

  • Getters/setters
  • Trailing commas in array and object literals
  • Reserved words as property names
  • New Object methods (create, defineProperty, keys, seal, freeze, getOwnPropertyNames, etc.)
  • New Array methods (isArray, indexOf, every, some, map, filter, reduce, etc.)
  • String.prototype.trim and property access for strings
  • New Date methods (toISOString, now, toJSON)
  • Function.prototype.bind
  • JSON
  • Immutable global objects (undefined, NaN, Infinity)
  • Strict mode
  • Other minor changes (parseInt ignores leading zeroes, thrown functions have proper this values, etc.)

None of the changes required syntactic changes. Getters and setters were already unofficially supported by various browsers at the time. The new Object methods improve "programming in the large" by giving programmers more tools to ensure certain invariants are enforced (Object.seal, Object.freeze, Object.defineProperty). Strict mode also became a strong tool in this area by preventing many common sources of errors. The additional Array methods improve certain functional patterns (map, reduce, filter, every, some). The other big change is JSON: a JavaScript-inspired data format that is now natively supported through JSON.stringify and JSON.parse. Other changes make small improvements in several areas based on practical experience. All in all, ECMAScript 5 was a modest improvement that helped JavaScript become a more usable language, both for small scripts and for bigger projects. Still, there were many good ideas from ECMAScript 4 that got scrapped and would see a return through the ECMAScript Harmony proposal.
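A compact sketch of several of these additions working together:

```javascript
"use strict"; // strict mode prevents many common sources of errors

// Enforcing invariants with the new Object methods.
var config = Object.freeze({ retries: 3, verbose: false });
// config.retries = 10;  // would throw a TypeError in strict mode

// Functional patterns with the new Array methods.
var total = [1, 2, 3, 4]
  .filter(function (n) { return n % 2 === 0; })  // [2, 4]
  .map(function (n) { return n * 10; })          // [20, 40]
  .reduce(function (a, b) { return a + b; }, 0); // 60

// Native JSON support.
var json = JSON.stringify({ total: total });
var back = JSON.parse(json);

console.log(Object.isFrozen(config), total, back.total); // true 60 60
```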

ECMAScript 5 saw another iteration in the year 2011 in the form of ECMAScript 5.1. This release clarified some ambiguous points in the standard but didn't provide any new features. All new features were slated for the next big release of ECMAScript.

ECMAScript 6 (2015) & 7 (2016): a General Purpose Language

The ECMAScript Harmony proposal became a hub for future improvements to JavaScript. Many ideas from ECMAScript 4 were cancelled for good, but others were rehashed with a new mindset. ECMAScript 6, later renamed to ECMAScript 2015, was slated to bring big changes. Almost every change that required syntactic changes was pushed back to this version. This time, however, the committee achieved unity and ECMAScript 6 was finally released in the year 2015. Many browser vendors were already working on implementing its features, but with a big changelog things took some time. Even today, not all browsers have complete coverage of ECMAScript 2015 (although they are very close).

The release of ECMAScript 2015 caused a big jump in the use of transpilers such as Babel or Traceur. Even before the release, as these transpilers tracked the progress of the technical committee, people were already experiencing many of the benefits of ECMAScript 2015.

Some of the big features of ECMAScript 4 were implemented in this version of ECMAScript. However, they were implemented with a different mindset. For instance, classes in ECMAScript 2015 are little more than syntactic sugar on top of prototypes. This mindset eases the transition and the development of new features.

We did an extensive overview of the new features of ECMAScript 2015 in our A Rundown of JavaScript 2015 features article. You can also take a look at the ECMAScript compatibility table to get a sense of where we stand right now in terms of implementation.

A short summary of the new features follows:

  • Let (lexical) and const (unrebindable) bindings
  • Arrow functions (shorter anonymous functions) and lexical this (enclosing scope this)
  • Classes (syntactic sugar on top of prototypes)
  • Object literal improvements (computed keys, shorter method definitions, etc.)
  • Template strings
  • Promises
  • Generators, iterables, iterators and for...of
  • Default arguments for functions and the rest operator
  • Spread syntax
  • Destructuring
  • Module syntax
  • New collections (Set, Map, WeakSet, WeakMap)
  • Proxies and Reflection
  • Symbols
  • Typed arrays
  • Support for subclassing built-ins
  • Guaranteed tail-call optimization
  • Simpler Unicode support
  • Binary and octal literals

Classes, let, const, promises, generators, iterators, modules, etc. These are all features meant to take JavaScript to a bigger audience, and to aid in programming in the large.
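Many of these features combine naturally, as in this short sketch:

```javascript
// ECMAScript 2015 in a nutshell: classes, default arguments, template
// strings, arrow functions, destructuring, and let/const bindings.
class Point {
  constructor(x = 0, y = 0) {        // default arguments
    this.x = x;
    this.y = y;
  }
  toString() {
    return `(${this.x}, ${this.y})`; // template string
  }
}

const points = [new Point(1, 2), new Point(3, 4)];
const labels = points.map(p => p.toString()); // arrow function

const [first, second] = labels;               // destructuring
let summary = `${first} then ${second}`;

console.log(summary); // "(1, 2) then (3, 4)"
```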

It may come as a surprise that so many features could get past the standardization process when ECMAScript 4 failed. In this sense, it is important to remark that many of the most invasive features of ECMAScript 4 were not reconsidered (namespaces, optional typing), while others were rethought in a way that could get past previous objections (making classes syntactic sugar on top of prototypes). Still, ECMAScript 2015 was hard work and took almost 6 years to complete (and more to fully implement). However, the fact that such an arduous task could be completed by the ECMAScript technical committee was seen as a good sign of things to come.

A small revision to ECMAScript was released in the year 2016. This small revision was the consequence of a new release process implemented by TC-39. All new proposals must go through a four stage process. Every proposal that reaches stage 4 has a strong chance of getting included in the next version of ECMAScript (though the committee may still opt to push back its inclusion). This way proposals are developed almost on their own (though interaction with other proposals must be taken into account). Proposals do not stop the development of ECMAScript. If a proposal is ready for inclusion, and enough proposals have reached stage 4, a new ECMAScript version can be released.

The version released in year 2016 was a rather small one. It included:

  • The exponentiation operator (**)
  • Array.prototype.includes
  • A few minor corrections (generators can't be used with new, etc.)
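The two headline ECMAScript 2016 additions are small enough to show in a couple of lines:

```javascript
// Exponentiation operator, equivalent to Math.pow(2, 10):
const squared = 2 ** 10; // 1024

// Array.prototype.includes; unlike indexOf, it finds NaN:
const hasNaN = [1, 2, NaN].includes(NaN); // true
```

The `NaN` behavior is the main practical difference from the older `indexOf(x) !== -1` idiom.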

However, certain interesting proposals have already reached stage 4 in 2016, so what lies ahead for ECMAScript?

The Future and Beyond: ECMAScript 2017 and later

Perhaps the most important stage 4 proposal currently in the works is async/await. Async/await is a syntactic extension to JavaScript that makes working with promises much more palatable. For instance, take the following ECMAScript 2015 code:
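As an illustrative sketch (the `fetchJson` helper and all names here are invented for this example), promise-based ECMAScript 2015 code has this shape:

```javascript
// Hypothetical helper (assumed for illustration): resolves to parsed data.
function fetchJson(url) {
  return Promise.resolve({ url, user: 'alice' });
}

// ECMAScript 2015 style: explicit .then()/.catch() chaining.
function getUserName(url) {
  return fetchJson(url)
    .then(data => data.user)
    .catch(err => {
      console.error('request failed:', err);
      throw err;
    });
}
```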

And compare it to the following async/await enabled code:
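Again as a sketch under the same invented names, the async/await version of the same logic reads almost like synchronous code:

```javascript
// Hypothetical helper (assumed for illustration): resolves to parsed data.
function fetchJson(url) {
  return Promise.resolve({ url, user: 'alice' });
}

// The same logic with async/await: control flow reads top to bottom,
// and ordinary try/catch replaces the .catch() handler.
async function getUserName(url) {
  try {
    const data = await fetchJson(url);
    return data.user;
  } catch (err) {
    console.error('request failed:', err);
    throw err;
  }
}
```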

Other stage 4 proposals are minor in scope:

  • Object . values and Object . entries
  • String padding
  • Object . getOwnPropertyDescriptors
  • Trailing commas in function parameter lists
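These minor proposals are each a one-liner to demonstrate (the object and function names are invented for illustration):

```javascript
const obj = { a: 1, b: 2 };

const vals = Object.values(obj);     // [1, 2]
const entries = Object.entries(obj); // [['a', 1], ['b', 2]]

const padded = '5'.padStart(3, '0'); // string padding: '005'

// All own property descriptors at once:
const descs = Object.getOwnPropertyDescriptors(obj);

function sum(
  a,
  b, // trailing comma in the parameter list is now allowed
) {
  return a + b;
}
```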

These proposals are all slated for release in the year 2017, though the committee may choose to push them back at its discretion. Just having async/await would be an exciting change, however.

But the future does not end there! We can take a look at some of the other proposals to get a sense of what lies further ahead. Some interesting ones are:

  • Asynchronous iteration (async/await + iteration)
  • Generator arrow functions
  • 64-bit integer operations
  • Realms (state separation/isolation)
  • Shared memory and atomics

JavaScript is looking more and more like a general purpose language. But there is one more big thing in JavaScript's future that will make a big difference.

WebAssembly
If you have not heard about WebAssembly, you should read about it. The explosion of libraries, frameworks and general development that has taken place since ECMAScript 5 was released has made JavaScript an interesting target for other languages. For big codebases, interoperability is key. Take games for instance. The lingua franca for game development is still C++, and it is portable to many architectures. Porting a Windows or console game to the browser was long seen as an insurmountable task. However, the incredible performance of current JIT JavaScript virtual machines made this possible. Thus things like Emscripten, an LLVM-to-JavaScript compiler, were born.

Mozilla saw this and started working on making JavaScript a suitable target for compilers. Asm.js was born. Asm.js is a strict subset of JavaScript that is ideal as a target for compilers. JavaScript virtual machines can be optimized to recognize this subset and produce even better code than is currently possible in normal JavaScript code. The browser is slowly becoming a whole new target for compiling apps, and JavaScript is at the center of it.
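A minimal asm.js-style sketch illustrates what this subset looks like (simplified: a full asm.js module also takes stdlib, foreign-function, and heap parameters, omitted here):

```javascript
// The "use asm" pragma and the |0 coercions mark the code as belonging
// to the strict subset, letting the VM infer that everything here is
// 32-bit integer arithmetic and compile it ahead of time.
function AsmAdder() {
  "use asm";
  function add(a, b) {
    a = a | 0; // coerce argument to int32
    b = b | 0;
    return (a + b) | 0; // result is also int32
  }
  return { add: add };
}

const adder = AsmAdder();
```

Because asm.js is a subset of JavaScript, this code also runs unchanged in engines that do no special asm.js optimization at all, which is what made it practical as a compilation target.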

However, there are certain limitations that not even Asm.js can resolve, and lifting them would require changes to JavaScript that have nothing to do with its purpose. To make the web a proper target for other languages something different is needed, and that is exactly what WebAssembly is. WebAssembly is a bytecode for the web. Any program with a suitable compiler can be compiled to WebAssembly and run on a suitable virtual machine (JavaScript virtual machines can provide the necessary semantics). In fact, the first version of WebAssembly aims at one-to-one compatibility with the Asm.js specification. WebAssembly not only brings the promise of faster load times (bytecode can be parsed faster than text), but also optimizations not currently possible in Asm.js. Imagine a web of perfect interoperability between JavaScript and your existing code.

At first sight, this might appear to compromise the growth of JavaScript, but in fact it is quite the contrary. By making it easier for other languages and frameworks to be interoperable with JavaScript, JavaScript can continue its growth as a general purpose language. And WebAssembly is the necessary tool for that.

At the moment, development versions of Chrome, Firefox and Microsoft Edge support a draft of the WebAssembly specification and are capable of running demo apps.

Future directions

The development of the international classification systems appears to reflect a growing consensus regarding the clinical entity of ADHD. Evidence has been presented (Faraone 2005) to show that ADHD meets the criteria established by Robins and Guze (1970) for the validation of psychiatric diagnoses. Patients with ADHD show a characteristic pattern of hyperactivity, inattention, and impulsivity that lead to adverse outcomes. ADHD can be distinguished from other psychiatric disorders including those with which it is frequently comorbid. Longitudinal studies have demonstrated that ADHD is invariably chronic and not an episodic disorder. Twin studies show that ADHD is a highly heritable disorder. Molecular genetic studies have found genes that explain some of the disorder’s genetic transmission. Neuroimaging studies show that ADHD patients have abnormalities in frontal-subcortical-cerebellar systems involved in the regulation of attention, motor behavior, and inhibition. Many individuals with ADHD show a therapeutic response to medications that block the dopamine or noradrenaline transporter. This evidence as reviewed by Faraone (2005) supports the hypothesis of ADHD being a clinical entity and fulfilling the Robins and Guze (1970) validity criteria.

However, there has been considerable debate about this issue. Critics have described ADHD as a diagnosis used to label difficult children who are not ill but whose behavior is at the extreme end of the normal range. Concerns have been raised that “ADHD is not a disease per se but rather a group of symptoms representing a final common behavioral pathway for a gamut of emotional, psychological, and/or learning problems” (Furman 2005). Most of the research studies available rely on clinically referred cases, i.e. severely ill or narrowly diagnosed patients. The generalization of the research findings to non-referred cases in the community is therefore not necessarily valid.

In summary, the cardinal ADHD symptoms of inattention, hyperactivity, and impulsivity are not unique to ADHD. In addition, there is a remarkable overlap of these ADHD symptoms with those of comorbid mental health conditions or learning problems. A consistent genetic marker has not been found, and neuroimaging studies have been unable to identify a distinctive etiology for ADHD. The lack of evidence of a unique genetic, biological, or neurological pathology hinders the general acceptance of ADHD as a neurobehavioral disease entity. In addition, the ratings of school children with ADHD by parents and teachers are frequently discrepant and do not appear to provide an objective diagnostic basis. The issue of the clinical entity of ADHD remains therefore an open question and requires further investigation.