Baby sign language (science or hype?)


Recap: What is “baby sign language”?

I made an attempt at defining this concept in the last post, which I’ll reproduce here: baby sign language is signing between (hearing) parents/caregivers and young children, where the signs either come from a real sign language like ASL, are idiosyncratic inventions of the family using them, or are some combination of the two. Even if the signs come from a legitimate sign language, they’re often simplified, and the fuller grammar is not usually taught to/learned by parents or their children.

In this post, I’ll talk about baby sign timelines and tips, and then we’ll look at the supposed benefits versus the science.

Timelines

Taking into account your baby’s development, signing with them might roughly follow this path:

  • 6-9 months: Introduce basic, highly relevant signs
    • At this age, babies really start associating language (verbal or signed) with their world
    • Their long-term memory is now primed to start retaining the language heard around them
    • Their motor skills and hand-eye coordination are growing more precise
  • 7-12 months: Baby is likely to start signing back to you
    • Their first signs will be physical approximations – they may be less detailed and only roughly resemble the sign’s actual shape or location – and they may also be semantic approximations – for example, where the meaning of the sign is broader than the standard/adult meaning (a phenomenon called overextension).
      • My son Ryden’s first sign was more. He started using it in the same context in which we modeled it – during meals to indicate “I want more food” – but then began using it in many different contexts, all the time. From these various contexts, we guessed his meaning was something like “do / keep doing the thing that I like / that makes me happy”. His usage was still related to the concept of more, but it also communicated things that would be captured by a larger variety of expressions in standard ASL. 
  • 12+ months: Introduce slightly more complex (and maybe abstract, but still relevant) signs
  • ~2 years: Child may start stringing signs together, and combining them with speech

Still, like with spoken language, an infant will grasp a sign’s meaning before they’re able (or willing) to produce the sign themselves.

I started signing with Ryden when he was about 5 months old. Signs I used daily were milk, drink, sleep, food/eat, more, read, book, mommy, water, change, diaper, and bath; I also combined some of these into the two-word phrases drink milk, read book, more food, and change diaper. Signs I used a bit less frequently, but still repeatedly, were daddy, hungry, cereal, sing/song/music, outside, I love you, play, all done, and up.

Ryden always seemed to pay extra attention when I used a sign (he grew quiet and still, and looked intently at my hands and face), and I could tell by the way he responded at 10-11 months that he recognized all of them. But except for adopting the “more” sign around 11 months, he didn’t produce any others. By then he had begun voicing some semi-intelligible words anyway, so I kind of dropped off signing to him.[1]

Tips

Here are some common-sensical tips for signing with your baby:

  1. Be consistent and contextual.
    • Use the same sign for the same thing. Sign at the time of the event.
  2. Be open and encouraging to any efforts.
    • As mentioned, your baby’s first attempts will be approximations. They may even make up a totally different “sign”.
      • Before Ryden began using the more sign, he would just open his mouth really wide and whine impatiently in between spoonfuls (he’s been a voracious eater since the start of solid food). Even though he wasn’t signing, his desire was very clearly (if not pleasantly) communicated.
      • When he began signing more, he would kind of slap his hands together (like a clap, but with his fingers slightly curled) – definitely an approximation of the precise adult sign.
  3. Make signing fun.
    • Use it in games; be excited; don’t scold if your baby doesn’t sign back. 
  4. Be multimodal.
    • Say the word when you are signing it.
  5. Be repetitive.
    • Incorporate the signs into your daily routines.
  6. Use signs for tangible objects and actions over more general and abstract words/concepts, and choose signs for things that interest your baby.
    • Try milk, dog, and crawl before yes or please.

Proposed benefits (versus the science)

Infants are little sponges. They start soaking up the language around them immediately – as soon as they’re born (and newer research points even earlier, to in utero). At 6 months old, they comprehend considerably more than one might imagine. Language production, however, takes a while longer. The earliest that babies produce their first spoken words is 8-10 months; more often it’s closer to one year. (Ryden’s first word, cup [“kah!”], came at around 11 months.) Proponents of baby sign language claim that babies can produce signs before speech, the reasoning being that an infant’s control of their hands and arms matures before their articulatory system does. Thus one of the main motivations for teaching a preverbal child some basic signs is to enable them to communicate sooner.

The babysignlanguage.com website groups the benefits of baby signing into three types, and most of the other material I’ve seen proposes benefits that fall into these categories.

Practical

If babies have the tools to communicate what they want, they will use those tools instead of crying and tantrum-ing. Parents will be able to understand when their child wants Cheerios, or has a poopy diaper, or is tired, and respond appropriately. There is less frustration all around.

Emotional

Increased communication between parents and their preverbal children tightens the bonds between them. Parents report feeling closer to their babies, better able to anticipate and understand their needs. And their babies, secure in this close connection, are generally less fussy.

Cognitive

Advocates assert that signing with your baby improves their long-term cognitive development. Per babysignlanguage.com, this means a +12 IQ point advantage, a larger speaking vocabulary, earlier reading and higher reading comprehension, and better grades. 

Bill White and Kathleen Harper of “Happy Baby Signs” also state that signing with babies:

  • “Accelerates language acquisition.” Babies that sign usually speak earlier and have larger vocabularies;
  • Employs more diverse brain pathways like the visual and kinesthetic (in addition to the auditory) to process language[2]; and (again)
  • “May actually improve a child’s IQ.” Research that tracked signing babies as they grew found that those children, at eight years old, scored an average of 12 points higher in IQ testing than a non-signing control group (after accounting for socioeconomic differences). 

This sounds awesome. I want a sweeter, brainier, more communicative baby! So let’s delve a bit further.

The practical and emotional benefits seem obvious. However, they both hinge on the assumption that infants really can and do learn to sign before they start communicating verbally – or even with gestures (like pointing[3]). It’s actually unclear whether this is true. I recommend reading this article on the subject, but here are the basic points:

  • It’s difficult to determine what counts as a baby’s first real spoken words (versus the syllables they’ve been babbling up to that point), because their early attempts are necessarily imperfect – approximations.
    • When your baby says “dada” for daddy, should that be considered a real word? If you observe that your little one says “dada” whenever he/she sees dad, it makes sense to count “dada” as their legitimate word for daddy. But at this stage, “dada” could also be meaningless babble. Thus “researchers who study the emergence of speech must find ways to sift those instances out. They need to establish a set of objective criteria for recognizing an utterance as a spoken word.”
    • Similarly, (as discussed above) a baby’s first signs are going to be approximations.
    • Comparing the onset of infant signing to infant speaking requires using the same criteria for what counts as a word.
  • If infants really can learn to sign before they learn to speak, we should see evidence that deaf babies and babies of deaf parents sign earlier than hearing babies. But there aren’t any solid studies (with large sample sizes) demonstrating this.

As for the long-term cognitive benefits… again, the research is weak. The most frequently cited work in this area is by Linda P. Acredolo and Susan W. Goodwyn (1988, 1998, 2000)[4]. Their studies followed children from infancy to 36 months and found that the ones who were taught to sign had slightly larger receptive vocabularies than babies in a control group – but only at a couple of points in the middle of the study. By the end, differences between the two groups were not statistically significant.

More recent and well-controlled studies[5] have also failed to uncover longer-term language gains for sign-taught babies.

On IQ, Dr. Gwen Dewar (author of the article referenced above) states: “the relevant research has yet to appear in any peer-reviewed journal. On this question, it’s safe to say that the jury is still out.”

Wikipedia also underlines how questionable the internet’s claims about baby sign language benefits are, citing two studies that examined websites making such claims. In the first study, over 90% of the information on those sites consisted of opinion pieces or marketing material, with no research backing. The second study found only 10 articles out of 1,747 (!) that presented research on the developmental effects of baby signing – and the consensus among those 10 articles was that baby signing doesn’t improve linguistic production or caregiver-child relationships. (That said, neither is there evidence that teaching babies to sign could be detrimental.)

My take

My brief takeaway is – sign with your baby if it interests you and seems fun. I enjoyed learning a bit of ASL just for myself. But temper your expectations. Don’t expect your child to sign back quickly / much / at all before they start on verbal language (unless maybe you AND other caregivers sign to them constantly). And don’t expect that it will later transform them into a linguistic super-genius.

Resources / Further reading


[1] All babies are different, but my main guess as to why he didn’t produce any other signs before starting speech is that he needed more input – more than the few words I regularly used, and from more people (his father, our nanny, etc.).

[2] From White, W.P. and Harper, K.A. (2017). Signs of a Happy Baby: The Baby Sign Language Book. United States: Morgan James Publishing. [Google Books link] (pp. 11-12):

“When you say the word ‘milk’, babies hear the word […] That auditory stimulus travels from the ears to the language center of the brain to be stored as the word ‘milk’. However, […] when babies watch their parents sign milk and hear them say the word, […] [they] also have the wonderful opportunity to see what the word looks like. This additional visual information travels from the eyes back to the occipital cortex, which is commonly called the visual center of the brain. […] Your baby also knows what the word ‘milk’ feels like. When babies start signing back using their hands, that is what is called kinesthetic information. The motion of their hands and arms is being controlled by a third part of the brain called the motor cortex.”

[3] Do not underestimate the communicative powers of pointing! As soon as Ryden started pointing (between 11-12 months), it became much easier to intuit what he wanted or was interested in – because he simply pointed at it.

[4]
(1) Acredolo, L.P. and Goodwyn, S.W. (1988). Symbolic gesturing in normal infants. Child Development 59: 450-466.
(2) Acredolo, L.P. and Goodwyn, S.W. (1998). Baby Signs. Chicago: Contemporary Books.
(3) Goodwyn, S.W., Acredolo, L.P., and Brown, C. (2000). Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior 24: 81-103.

[5]
(1) Johnston, J.C., Durieux-Smith, A., and Bloom, K. (2005). Teaching gestural signs to infants to advance child development: A review of the evidence. First Language 25(2): 235-251.
(2) Kirk, E., Howlett, N., Pine, K.J., and Fletcher, B.C. (2013). To Sign or Not to Sign? The Impact of Encouraging Infants to Gesture on Infant Language and Maternal Mind-Mindedness. Child Development 84(2): 574-590.
(3) Seal, B.C. and DePaolis, R.A. (2014). Manual Activity and Onset of First Words in Babies Exposed and Not Exposed to Baby Signing. Sign Language Studies 14(4): 444-465.

Sign language, ASL, and baby signing

September was National Deaf Awareness Month. I tried to post this piece before the month ended, but alas! Better late than never. I’d like to discuss and dispel some of the many misconceptions around signed languages. Here are a few of the most common:

  • Sign language is universal – there is only one
  • Sign languages are not “real” languages
    • They’re simpler and easier to learn than spoken languages; they’re just gestures, or body language, or pantomime
    • They’re not as complex as spoken languages; they don’t have true grammars, or large vocabularies, or the ability to express abstract concepts
    • They were “invented”; they didn’t evolve naturally among communities over time
    • They have to be explicitly taught; they cannot be acquired naturally by children through exposure as with spoken language
  • Sign languages are the visual equivalent of spoken languages – for example, American Sign Language is the visual equivalent of English

I’ll also spend some time discussing “baby sign language” (which is of personal import due to last year’s arrival of my very own teacup human).

Sign languages

Sign languages are natural languages whose modality is visual and kinesthetic instead of speech- and sound-based. They exhibit complexity parallel to that of spoken languages, with rich grammars and lexicons. Sign languages developed and are used among communities of deaf people, but can also be used by hearing individuals. These languages are not composed solely of hand movements. A good deal of their prosody, grammar (e.g. syntax, morphology), modification (adjectives and adverbials), and other features are expressed through head movements, facial expressions, and body postures.

American Sign Language (ASL)

American Sign Language (ASL) is the main language of Deaf communities in the U.S. and Canada. Contrary to what many assume, ASL is not grammatically related to English. From Wikipedia:

“On the whole […] sign languages are independent of spoken languages and follow their own paths of development. For example, British Sign Language (BSL) and American Sign Language (ASL) are quite different and mutually unintelligible, even though the hearing people of the United Kingdom and the United States share the same spoken language. The grammars of sign languages do not usually resemble those of spoken languages used in the same geographical area; in fact, in terms of syntax, ASL shares more with spoken Japanese than it does with English.”

ASL emerged in the early 1800s at the American School for the Deaf in Connecticut, from a mix of Old French Sign Language, village sign languages, and home signs. ASL and French Sign Language (LSF – Langue des Signes Française) still have some overlap, but are not mutually intelligible.

One element of ASL that I find particularly neat is its reduplication (repetition of a morpheme or word to serve a particular grammatical function, like plurality)[1]. Reduplication is a common process in many languages, and it performs several important jobs in ASL. It does things like pluralize nouns, convey intensity, create nouns from verbs (e.g. the noun chair is a repeated, slightly altered version of the verb to sit), and represent verbal aspects such as duration (e.g. VERB + for a long time).

Baby sign language

What is “baby sign language”? I haven’t found a very precise definition. The term seems to basically describe signing between (hearing) parents/caregivers and young children, but whether the signs come from a legitimate sign language like ASL, or are invented idiosyncratically by the family using them (and are maybe more iconic[2]), or some combination of the two, varies from source to source.

Anthropologist-psychologist Gwen Dewar, on her blog parentingscience.com, says:

“The term is a bit misleading, since it doesn’t refer to a genuine language. A true language has syntax, a grammatical structure. It has native speakers who converse fluently with each other. By contrast, baby sign language […] usually refers to the act of communicating with babies using a modest number of symbolic gestures.”

When Dr. Dewar mentions symbolic gestures, she is describing things like pointing or other hand motions that accompany speech and make communication with preverbal infants a little easier. Most of the baby sign language resources I’ve come across endorse using ASL as a base, however, so it’s not just “symbolic gestures”. At the same time, the ASL signs are often simplified (both by baby sign teachers and the parents learning), and the fuller grammar is not usually taught to/learned by parents or their children.

In the following post, I’m going to delve further into baby sign language – its supposed benefits, tips, timelines, resources, and my personal experience so far. We’ll look at proponents’ claims versus the scientific research. (Spoiler: Your offspring won’t be the next Einstein just because you taught them how to sign ‘more’ and ‘milk’.)

 

*Photo attribution: “Learn sign language at the playground”


[1] More on this process in “I heart hangry bagel droids (or: How new words form)” – see #9.

[2] I’ll discuss iconicity in the next post.

Career interviews: Computational linguist for a virtual assistant


Wugs go to work

After much delay (eek! just realized it’s been a year!), I have another interview with a career linguist for your reading pleasure. [See the first interview here.] Even though I still get the “I’ve never met a real-live linguist” reaction when telling folks what I do, these days there are indeed people working full-time, earning living wages, as these specialized language nuts – and not all as professors in academia, or as translators/interpreters for the UN.

* * * * *

Just like with my last interviewee, I met Allan at Samsung Research America, where we worked together on Bixby, Samsung’s virtual voice assistant. On the Bixby linguist team, we worked with engineers, Quality Assurance (QA) testers, and others to develop a personal assistant that would carry out thousands of different spoken user commands. Also like with my last interviewee, Allan is no longer at the job I interviewed him about. (He’s now a Language Engineer on Amazon’s Alexa!) I’m keeping questions and answers in present tense, however, because I feel like it.

Allan Schwade, a graduate student in linguistics, won the Humanities Division Dean's Award for his poster on the adaptation of Russian words by English speakers

  1. What kind of work do you do?

I’m a computational linguist, which means I create solutions for natural language processing problems using computers. More specifically, I work on the systems and machine learning models that enable your smart devices to understand you when you say “set an alarm for 7am” or “tell me the weather in Chicago”.

  2. Describe a typical day at your job.

I usually start the day by meeting with my manager. The lab I work in supports products in production and conducts research and development for smart devices. If there is an issue with a product in production, I’ll work with the team to solve the problem. Usually this involves curating the training data for the machine learning models – removing aberrant data from training sets or generating new data to support missing patterns. If nothing is on fire, there are usually several projects I’ll be working on at any given time. Projects generally start out with me doing a lot of reading on the state of the art; then I’ll reach a point where I’m confident enough to build a proof of concept (POC). While I’m creating the POC, the linguists will generate data for the models. Once the code and data are ready, I’ll build the models and keep iterating until performance is satisfactory. The only really dependable things in my schedule are lunch and a mid-afternoon coffee break with colleagues, both of which are indispensable.

  3. How does your linguistics background inform your current work?

My degree in linguistics is crucial for my current line of work. When building machine learning models, so much rests on the data you feed into your models. If your data set is diverse and representative of the problem, your model will be robust.

Having a linguistics background also gives me quick insight into data sets and how to balance them. Understanding the latent structures in the data allows me to engineer informative feature vectors for my models (feature vectors are derived from the utterances collected and are the true inputs to the machine learning model).
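
(A quick aside from me, the blog author, to make “feature vectors” a bit more concrete: below is a minimal, purely illustrative sketch – not Allan’s or Samsung’s actual pipeline – of turning a few utterances into bag-of-words feature vectors and training a toy intent classifier with scikit-learn. The utterances and intent labels are invented for the example.)

```python
# Toy illustration only: utterances -> feature vectors -> intent classifier.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances with made-up intent labels
utterances = [
    "set an alarm for 7am",
    "wake me up at six thirty",
    "tell me the weather in Chicago",
    "what's the forecast for tomorrow",
]
intents = ["set_alarm", "set_alarm", "get_weather", "get_weather"]

# CountVectorizer turns each utterance into a sparse vector of word counts
# (the "feature vector"); the classifier then learns which features are
# informative for which intent.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["set an alarm for noon"]))  # expected: ['set_alarm']
```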

  4. What do you enjoy most and/or least about the job?

I really enjoy getting to see differences between human and machine learning. We have a pretty good idea of the types of things humans will attend to when learning language, but sometimes those things aren’t informative for machines. It can be frustrating when something I’d call “obvious” is useless in a model, and even more frustrating when something “marginal” is highly informative. But I never tire of the challenge; the satisfaction I feel at the end of a project is worth it.

The thing I enjoy least is data annotation. The process of doing it is indispensable because you become intimately familiar with the problem, but after a couple of hours of it my mind goes numb.

  5. What did you study in college and/or grad school?

I got my BA from Rutgers University and my MS from the University of California, Santa Cruz. Both degrees were in linguistics and both schools specialized in generative linguistic theory. I enjoyed a lot about the programs but they did a better job of preparing people for careers in academia than industry. Learning programming or common annotation tools and schemas before graduating would have made industry life easier for me.

  6. What is your favorite linguistic phenomenon?

Loanword adaptation! I wrote my master’s thesis on it. Seeing how unfamiliar phonemes are digested by speakers never fails to pique my interest. In general, I love it when stable systems are forced to reconcile things outside their realm of experience.

  7. (If you had the time) what language would you learn, and why?

As a phonetician I’d love to learn Georgian for its consonant clusters, Turkish for its morpho-phonology, Hmong for its tones, or ASL because it’s a completely different modality than what I specialized in. As a subjective entity who does things for personal enjoyment, I’d love to learn Japanese.

  8. Do you have any advice for young people looking to pursue a career in linguistics?

If you want to go into industry doing natural language processing, I cannot stress enough how important the ability to code is. It’s true that for annotation work you won’t usually need it, but if you want to be an annotation lead, the ability to write utility scripts will save you a lot of time. Also, my transition from annotator to computational linguist came about because I showed basic coding competency – the engineers were too busy to work on some projects, so they threw the smaller ones my way. This brings me to my next piece of advice: always voice your interest to the people who have the potential to get you involved. Telling your co-worker you really want to work on a cool project will do next to nothing, but telling your manager or the project lead that you are interested in a project may get you involved.

Frame Semantics and FrameNet


I’d like to discuss a theory in cognitive linguistics which is very near to my heart[1]: frame semantics. I’ll also present FrameNet, a database built using frame semantic theory, which has been and continues to be an excellent resource in the fields of natural language processing (NLP) and machine learning (ML).

Why is frame semantics cool? Why should you want to learn about it? Just this: the theory is an intuitive and comprehensive way to categorize the meaning of any scenario you could possibly dream up and express via language. Unlike many other semantic and syntactic theories, the core concepts are quickly understandable to the non-linguist. What’s more, frame semantics can apply to language meaning at many different levels (from the tiniest morpheme to entire swaths of discourse), and it works equally well for any particular language – be it English, Mandarin, Arabic, or Xhosa. I’ll try to demonstrate the theory’s accessibility and applicability with some details.

American linguist Charles Fillmore developed the frame semantics research program in the 1980s, using the central idea of a frame: a cognitive scene or situation which is based on a person’s prototypical understanding of real-world (social, cultural, biological) experiences. A frame is ‘evoked’ by language – this can be a single word (called a lexical unit), a clause, a sentence, or even longer discourse. Each frame contains various participants and props, called frame elements (FEs). If you’ve studied syntax/semantics (the generative grammar kind), FEs are somewhat analogous to traditional theta roles.

FrameNet is a corpus-based lexicographic and relational database (sort of a complex dictionary) of English frames, the lexical units evoking them, annotated sentences containing those lexical units, and a hierarchy of frame-to-frame relations. It was built and continues to grow at the International Computer Science Institute (ICSI), a nonprofit research center affiliated with UC Berkeley. FrameNets have also been developed in other languages, such as Spanish, Brazilian Portuguese, Japanese, Swedish, French, Chinese, Italian, and Hebrew.

Each frame entry includes a definition, example sentences, frame elements, lexical units, and annotation that illustrates the various fillers (words) of the FEs as well as their syntactic patterns. Let’s unpack all of this!

We’ll take a look at the Motion frame in FrameNet. Some screenshots of the frame entry follow.

framenet_motion1

The Motion frame is first defined. Its definition includes the frame elements that belong to the frame (the text with color highlighting):

“Some entity (Theme) starts out in one place (Source) and ends up in some other place (Goal), having covered some space between the two (Path). Alternatively, the Area or Direction in which the Theme moves or the Distance of the movement may be mentioned.”

After the definition come example sentences, featuring lexical units that evoke the frame (the black-backgrounded text) such as move, drift, float, roll, go.

Further down is the list of frame elements with their definitions and examples.

framenet_motion2

Here, the Theme FE is “the entity that changes location,” while the Goal FE is “the location the Theme ends up in.” In order for language to evoke this Motion frame, it must have some words or phrases which instantiate the Theme, the Goal, and the other FEs listed. In the examples above, me is a Theme in The explosion made [me] MOVE in a hurry; and into the slow lane is a Goal in The car MOVED [into the slow lane].

At the bottom of the entry is a list of lexical units that belong to or evoke the frame, as well as links to annotation of sentences from real data that contain those words.

framenet_motion3

Verbs like come, glide, roll, travel, and zigzag all evoke, quite sensibly, the Motion frame.

Once you click on the “Annotation” link for a particular lexical item, you’re taken to a page that looks like this:

framenet_motion4

Natural language sentences pulled from online corpora (texts from newspapers, magazines, books, tv transcripts, scholarly articles, etc.) are annotated for their Motion FEs. Annotation for the lexical item glide gives us an idea of the types of “entities” (the purple-backgrounded text, or Theme FEs) that “change location” (i.e. that glide) – boats, pink clouds, men, cars, planes, gondolas, and so on.
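
If you’d rather poke at this data programmatically than through the web interface, NLTK ships a FrameNet corpus reader. Here’s a minimal sketch, assuming you have NLTK installed and have downloaded its framenet_v17 data; the attribute names follow my recollection of that API, so double-check against the NLTK documentation:

```python
# Minimal sketch of querying FrameNet via NLTK's corpus reader.
# Assumes: pip install nltk, then nltk.download('framenet_v17')
from nltk.corpus import framenet as fn

motion = fn.frame('Motion')       # look up the Motion frame by name
print(motion.definition)          # the frame definition quoted above

# Frame elements (Theme, Source, Goal, Path, ...) with their definitions
for fe_name, fe in motion.FE.items():
    print(fe_name, '-', fe.definition[:60])

# Lexical units that evoke the frame ('move.v', 'glide.v', 'roll.v', ...)
print(sorted(motion.lexUnit.keys()))
```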

* * * * *

After this mini FrameNet dive, you may be wondering how the database is used in a concrete sense. To illustrate, let’s compare two sentences:

  1. The boat GLIDED into the harbor.
  2. The dinghy DRIFTED away from the harbor.

The entities differ (boat vs. dinghy), the verbs differ (glide vs. drift), and the prepositions differ (into vs. [away] from). Yet at a higher level, both of these sentences describe a Theme which “changes location” – either moving towards a Goal in (1), or from a Source in (2). They both indicate motion. Because FrameNet helps machines “learn” that sentences with a variety of nouns, verbs, prepositions, and syntactic patterns can basically point to the same scenario, it’s a useful tool for many applications in the computational realm.
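
To make that “same scenario, different surface forms” point concrete, here is a toy, hand-written representation (my own simplification, not FrameNet’s actual annotation format) of how the two sentences might be annotated against the Motion frame:

```python
# Hand-written toy annotations (not FrameNet's real XML format): different
# nouns, verbs, and prepositions, but the same frame and frame-element structure.
annotations = [
    {
        "sentence": "The boat glided into the harbor.",
        "frame": "Motion",
        "target": "glided",            # lexical unit: glide.v
        "frame_elements": {"Theme": "The boat", "Goal": "into the harbor"},
    },
    {
        "sentence": "The dinghy drifted away from the harbor.",
        "frame": "Motion",
        "target": "drifted",           # lexical unit: drift.v
        "frame_elements": {"Theme": "The dinghy", "Source": "away from the harbor"},
    },
]

# Both sentences evoke the same frame, so a system trained on this kind of
# data can treat them as descriptions of the same basic scenario.
assert {a["frame"] for a in annotations} == {"Motion"}
```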

These days computers do all kinds of language-y things for us: answer questions, paraphrase texts, extract relevant information from text (and then maybe organize it thematically – for instance, around people, places, or events), and even generate new texts. These feats require that a computer parse natural language into accurate semantic chunks. FrameNet’s semantically- and syntactically-annotated data can be used as training input for machine models that “learn” how to analyze such meaning chunks, enabling our electronic devices to respond, paraphrase, or extract information appropriately.

To peruse a (very long) list of the projects which have used FrameNet data (organized by requester/researcher), check out the FrameNet Downloaders page.

So – on the off-chance that you find yourself stuck at home and bored out of your mind (?!?!)… you might perhaps enjoy a little investigation of frame-semantic characterization of scenes that involve applying heat, intoxication, or temporal collocation. 🙂

 

[1] Why am I so fond of frame semantics? A terrific professor of mine during grad school introduced the theory, and it resonated with me immediately. I used it in my master’s thesis, then presented the paper at the International Conference on Construction Grammar in 2014. Eventually, I had the privilege of working at FrameNet, where I came to know the brilliant lexicographers/semanticists/cognitive linguists who have dedicated decades of their lives to the theory and the project. Sadly, I never met the legendary Chuck Fillmore, as he passed away the year before I joined the FrameNet team.