Phonetic Annotation/Content

This is the phonetic counterpart to text content (<t>): it allows associating a phonetic transcription with any structural element, and is typically used in a speech context. Note that for actual segmentation into phonemes, FoLiA has another related type: Phonological Annotation

Specification

Annotation Category:
 

Content Annotation

Declaration:

<phon-annotation set="..."> (note: set is optional for this annotation type; if you declare this annotation type to be setless, you cannot assign classes)

Version History:
 

Since v0.12

Element:

<ph>

API Class:

PhonContent (FoLiApy API Reference)

Required Attributes:
 
Optional Attributes:
 
  • set – The set of the element, ideally a URI linking to a set definition (see Set Definitions (Vocabulary)) or otherwise a uniquely identifying string. The set must be referred to also in the Annotation Declarations for this annotation type.
  • class – The class of the annotation, i.e. the annotation tag in the vocabulary defined by set.
  • processor – This refers to the ID of a processor in the Provenance Data. The processor in turn defines exactly who or what was the annotator of the annotation.
  • annotator – This is an older alternative to the processor attribute, without support for full provenance. The annotator attribute simply refers to the name or ID of the system or human annotator that made the annotation.
  • annotatortype – This is an older alternative to the processor attribute, without support for full provenance. It is used together with annotator and specifies the type of the annotator: either manual for human annotators or auto for automated systems.
  • confidence – A floating point value between zero and one; expresses the confidence the annotator places in the annotation.
  • datetime – The date and time when this annotation was recorded; the format is YYYY-MM-DDThh:mm:ss (note the literal T in the middle, separating date from time), as per the XSD datetime data type.
Accepted Data:

<comment> (Comment Annotation), <desc> (Description Annotation)

Valid Context:

<current> (Correction Annotation), <def> (Definition Annotation), <div> (Division Annotation), <event> (Event Annotation), <ex> (Example Annotation), <head> (Head Annotation), <hiddenw> (Hidden Token Annotation), <list> (List Annotation), <morpheme> (Morphological Annotation), <new> (Correction Annotation), <note> (Note Annotation), <original> (Correction Annotation), <p> (Paragraph Annotation), <part> (Part Annotation), <phoneme> (Phonological Annotation), <ref> (Reference Annotation), <s> (Sentence Annotation), <str> (String Annotation), <suggestion> (Correction Annotation), <term> (Term Annotation), <utt> (Utterance Annotation), <w> (Token Annotation)

Explanation

Written text is always contained in the text content element (<t>, see Text Annotation), for phonology there is a similar counterpart that behaves almost identically: <ph>. This element holds a phonetic or phonological transcription. It is used in a very similar fashion:

<utt src="helloworld.mp3" begintime="..." endtime="...">
    <ph>helˈoʊ wɝːld</ph>
    <w xml:id="example.utt.1.w.1" begintime="..." endtime="...">
        <ph>helˈoʊ</ph>
    </w>
    <w xml:id="example.utt.1.w.2" begintime="..." endtime="...">
        <ph>wɝːld</ph>
    </w>
</utt>

Like Text Annotation, the <ph> element supports the offset attribute, referring to the offset of this transcription within the phonetic transcription of the parent structure; the first index is zero. It also supports multiple classes (analogous to text classes), the implicit default and predefined class being current. You could imagine using this for different notation systems (IPA, SAMPA, pinyin, etc.).
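To illustrate the multiple-classes idea, the sketch below stores two notation systems side by side and selects one by class. The class name sampa and the SAMPA rendering shown are illustrative assumptions, not prescribed by FoLiA; the FoLiA namespace is omitted for brevity:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: the same word with two phonetic classes.
# "current" is FoLiA's implicit default class; "sampa" and its content
# are made-up examples of a second notation system.
WORD = """
<w id="example.w.1">
    <ph class="current">helˈoʊ</ph>
    <ph class="sampa">he"loU</ph>
</w>
"""

def phon(word, cls="current"):
    """Return the phonetic content of the requested class.

    An absent class attribute counts as "current", matching FoLiA's
    implicit default.
    """
    for ph in word.findall("ph"):
        if ph.get("class", "current") == cls:
            return ph.text
    return None

w = ET.fromstring(WORD)
print(phon(w))            # helˈoʊ
print(phon(w, "sampa"))   # he"loU
```

A real application would resolve class names against the set declared for phon-annotation rather than hard-coding them.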

Phonetic transcription and text content can also go together without problem:

<utt>
    <ph>helˈoʊ wɝːld</ph>
    <t>hello world</t>
    <w xml:id="example.utt.1.w.1">
        <ph offset="0">helˈoʊ</ph>
        <t offset="0">hello</t>
    </w>
    <w xml:id="example.utt.1.w.2">
        <ph offset="7">wɝːld</ph>
        <t offset="6">world</t>
    </w>
</utt>
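The offset semantics above can be checked mechanically. The following sketch (plain ElementTree, FoLiA namespace omitted for brevity, and a plain id attribute standing in for xml:id) computes the zero-based offset of each word-level transcription within the utterance-level one:

```python
import xml.etree.ElementTree as ET

# Trimmed version of the example above, without namespace declarations.
FRAGMENT = """
<utt>
    <ph>helˈoʊ wɝːld</ph>
    <w id="w.1"><ph>helˈoʊ</ph></w>
    <w id="w.2"><ph>wɝːld</ph></w>
</utt>
"""

utt = ET.fromstring(FRAGMENT)
full = utt.find("ph").text  # utterance-level phonetic transcription

# For each word, locate its phonetic content inside the parent's
# transcription; str.find returns the zero-based character index,
# which is exactly what the offset attribute expresses.
offsets = {w.get("id"): full.find(w.find("ph").text)
           for w in utt.findall("w")}
print(offsets)  # {'w.1': 0, 'w.2': 7}
```

Note that offsets count Unicode characters, so IPA modifier and length marks (ˈ, ː) each contribute one position.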

Note

You should still use the normal Text Annotation for a normal textual transcription of the speech. This annotation type is reserved for phonetic/phonological transcriptions.

See also

If you want to actually do segmentation into phonemes, see Phonological Annotation.

Example

A simple example document:

<?xml version="1.0" encoding="utf-8"?>
<FoLiA xmlns="http://ilk.uvt.nl/folia" version="2.0" xml:id="example">
  <metadata>
      <annotations>
          <phon-annotation>
             <annotator processor="p1" />
          </phon-annotation>
          <utterance-annotation>
             <annotator processor="p1" />
          </utterance-annotation>
          <token-annotation>
             <annotator processor="p1" />
          </token-annotation>
      </annotations>
      <provenance>
         <processor xml:id="p1" name="proycon" type="manual" />
      </provenance>
  </metadata>
  <speech xml:id="example.speech">
    <utt xml:id="example.utt.1" src="helloworld.mp3" begintime="00:00:01.000" endtime="00:00:02.000">
        <ph>helˈoʊ wɝːld</ph>
        <w xml:id="example.utt.1.w.1" begintime="00:00:00.000" endtime="00:00:01.000">
            <ph>helˈoʊ</ph>
        </w>
        <w xml:id="example.utt.1.w.2" begintime="00:00:01.000" endtime="00:00:02.000">
            <ph>wɝːld</ph>
        </w>
    </utt>
  </speech>
</FoLiA>
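Reading such a document with a plain XML parser requires handling the FoLiA namespace. The sketch below (stdlib ElementTree only; DOC is a trimmed version of the example above, with the metadata block omitted) collects the phonetic content of every word:

```python
import xml.etree.ElementTree as ET

# Trimmed FoLiA document, as a string for self-containedness.
DOC = """<?xml version="1.0" encoding="utf-8"?>
<FoLiA xmlns="http://ilk.uvt.nl/folia" version="2.0" xml:id="example">
  <speech xml:id="example.speech">
    <utt xml:id="example.utt.1" src="helloworld.mp3">
        <ph>helˈoʊ wɝːld</ph>
        <w xml:id="example.utt.1.w.1"><ph>helˈoʊ</ph></w>
        <w xml:id="example.utt.1.w.2"><ph>wɝːld</ph></w>
    </utt>
  </speech>
</FoLiA>
"""

NS = {"f": "http://ilk.uvt.nl/folia"}
# xml:id lives in the reserved XML namespace.
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

root = ET.fromstring(DOC)
# Map each word's xml:id to its phonetic content.
phons = {w.get(XML_ID): w.find("f:ph", NS).text
         for w in root.iter("{http://ilk.uvt.nl/folia}w")}
print(phons)
# {'example.utt.1.w.1': 'helˈoʊ', 'example.utt.1.w.2': 'wɝːld'}
```

In practice the FoLiApy library referenced above offers a higher-level interface (the PhonContent class) than raw ElementTree traversal.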