Phonological Annotation

The smallest unit of annotatable speech in FoLiA is the phoneme level. The phoneme element is a form of subtoken annotation used for phonemes. Much like morphology, it is embedded within a phonology layer, which in turn can be embedded within words/tokens.

Specification

Annotation Category:
 

Subtoken Annotation

Declaration:

<phonological-annotation set="..."> (note: ``set`` is optional for this annotation type)
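
A minimal sketch of where this declaration goes, namely in the annotations block of the document metadata; the set URL below is hypothetical and, as noted, the set attribute may be omitted:

<metadata>
  <annotations>
    <!-- hypothetical set URL; omit the set attribute if no set is used -->
    <phonological-annotation set="https://example.org/phoneme-set.foliaset.ttl"/>
  </annotations>
  ...
</metadata>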

Version History:
 

since v0.12

Element:

<phoneme>

API Class:

Phoneme

Required Attributes:
 
Optional Attributes:
 
  • xml:id – The ID of the element; this has to be unique in the entire document or collection of documents (corpus). All identifiers in FoLiA are of the XML NCName datatype, which roughly means it is a unique string that has to start with a letter (not a number or symbol), may contain numbers, but may never contain colons or spaces. FoLiA does not define any naming convention for IDs.
  • set – The set of the element, ideally a URI linking to a set definition (see Set Definitions (Vocabulary)) or otherwise a uniquely identifying string. The set must also be declared in the Annotation Declarations for this annotation type.
  • class – The class of the annotation, i.e. the annotation tag in the vocabulary defined by set.
  • processor – This refers to the ID of a processor in the Provenance Data. The processor in turn defines exactly who or what was the annotator of the annotation.
  • annotator – This is an older alternative to the processor attribute, without support for full provenance. The annotator attribute simply refers to the name or ID of the system or human annotator that made the annotation.
  • annotatortype – This is an older alternative to the processor attribute, without support for full provenance. It is used together with annotator and specifies the type of the annotator, either manual for human annotators or auto for automated systems.
  • confidence – A floating point value between zero and one; expresses the confidence the annotator places in his annotation.
  • datetime – The date and time when this annotation was recorded, the format is YYYY-MM-DDThh:mm:ss (note the literal T in the middle to separate date from time), as per the XSD Datetime data type.
  • n – A number in a sequence, corresponding to a number in the original document, for example chapter numbers, section numbers, list item numbers. This does not have to be an actual number; other sequence identifiers are also possible (think alphanumeric characters or roman numerals).
  • src – Points to a file or full URL of a sound or video file. This attribute is inheritable.
  • begintime – A timestamp in HH:MM:SS.MMM format, indicating the begin time of the speech. If a sound clip is specified (src), the timestamp refers to a location in the sound clip.
  • endtime – A timestamp in HH:MM:SS.MMM format, indicating the end time of the speech. If a sound clip is specified (src), the timestamp refers to a location in the sound clip.
  • speaker – A string identifying the speaker. This attribute is inheritable. Multiple speakers are not allowed, simply do not specify a speaker on a certain level if you are unable to link the speech to a specific (single) speaker.
Accepted Data:

<alt> (Alternative Annotation), <altlayers> (Alternative Annotation), <comment> (Comment Annotation), <correction> (Correction Annotation), <desc> (Description Annotation), <metric> (Metric Annotation), <part> (Part Annotation), <ph> (Phonetic Annotation/Content), <phoneme> (Phonological Annotation), <relation> (Relation Annotation), <str> (String Annotation), <t> (Text Annotation)

Valid Context:

<phoneme> (Phonological Annotation), <phonology> (Phonological Annotation)

Feature subsets (extra attributes):
 
  • function

Explanation & Example

The smallest unit of annotatable speech in FoLiA is the phoneme level. The <phoneme> element is a form of subtoken annotation used for phonemes.

Very much like morphology, it is embedded within a <phonology> layer, which can be used within word/token elements (<w>) or directly within higher structure such as utterances (<utt>) if no words are distinguished:

<utt>
  <w xml:id="word" src="book.wav">
    <t>book</t>
    <ph>bʊk</ph>
    <phonology>
      <phoneme begintime="..." endtime="...">
          <ph>b</ph>
      </phoneme>
      <phoneme begintime="..." endtime="...">
          <ph>ʊ</ph>
      </phoneme>
      <phoneme begintime="..." endtime="...">
          <ph>k</ph>
      </phoneme>
    </phonology>
  </w>
</utt>
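
The generic attributes and the function feature subset listed above can be combined on <phoneme> as well. The sketch below assumes a declared phoneme set providing the class and function value shown (the ID, class and value are hypothetical), and uses the attribute shortcut for the function subset:

<phonology>
  <phoneme xml:id="word.phoneme.1" class="b" function="onset"
           confidence="0.87" begintime="00:00:01.000" endtime="00:00:01.250">
    <ph>b</ph>
  </phoneme>
  ...
</phonology>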