This hypervideo is intended as a demonstration of the capabilities of the current Advene software and framework (March 2005).
The objective of the Advene approach is to help people design and exchange hypervideos (hypermedia documents composed of audiovisual material plus annotation structures). A hypervideo is composed of several views, some of which (static views) can be experienced in a standard web browser, such as the page you are currently reading, while others (dynamic views) result from augmenting the stream with text, popups, links, and so on. Several other views can be used, such as a timeline or a transcription aligned with the stream. For more details about the Advene project, see the website or read the paper currently submitted to the HT'05 conference if you can access it.
The Advene prototype allows you to define your annotation structure (annotation and relation types), to annotate a video, to design queries on the annotations, and to create both static and dynamic views. All of this information can then be saved as packages that can be freely shared amongst users, while the video itself has to be retrieved separately, depending on the rights attached to it. A freely distributed video can be shared without difficulty, while copyrighted film material would have to be bought by both the producer and the consumer of the package. That is why Advene means: Annotate Digital Video, Exchange on the NEt! (or Annotate DVd's, Exchange on the NEt!).
The hypervideo you are currently viewing was composed with material coming from one video file and an annotation structure contained in the package you have loaded in the Advene software.
This video contains Ted Nelson's presentation at an HT'03 conference panel. It lasts approximately 8 minutes and is in MPEG4 format. The first thing you should do now is download the video, save it somewhere on your disk, and link the package to it. To do so, use the "Select movie file..." button in the Advene main window, select the appropriate video file, and save the package. You can click here to check whether the movie launches. If not, check that the path to the video file is set (Edit Package Properties).
Each of the panelists had to submit their views on Common-use hypertext for the next decade. As quoted on the panel website:
The panel itself will be presented in an imaginary corporate setting. Panel members have been requested to submit a 5 minute proposal to a new multi-billion dollar international consortium, Next Big Thing Inc., that has decided to launch itself as the information nexus of the future. They want an infrastructure to enable them to do this. They are prepared to either use current technology as it is, augment it, or completely start from scratch, and have the resources to build whatever they accept. Each panelist will have 5 minutes to make their case, then will be examined by the shareholders (the audience) who will vote on the final technology to adopt at the end.
The proposal should include a background to the technology they propose, the steps needed to implement their vision, and the result in 10 years time of adopting their proposal.
Panelists are, of course, allowed (expected? :) to sabotage each other's bids, with commercial espionage, bribery, subterfuge, and of course, old-fashioned heckling all fair game - this is the funding opportunity of a lifetime!
As announced on the website, the panel chair was Wendy Hall, and panelists were Paul de Bra, Ted Nelson and Peter Nuernberg.
As reported in personal notes from Matt Webb, the panelists were in fact Paul de Bra, Peter Murray-Rust, Ted Nelson, Peter Nuernberg, and Cathy Marshall.
They all appear on the video, and we have some images of their presence, corresponding to annotation beginning times:
Maybe you only see red rectangles instead of snapshots taken from the video stream. If so, it means that the images have not been extracted from the video yet (no image from the video is delivered with the package), and that you should let the player capture them during video playback. For the time being, snapshots are taken while the player plays the video (and not very precisely at that), so if you have not yet played the moment you want a snapshot of, you will not have it. To speed things up, you can use the "Player/Save image cache" facility to save images for later reuse.
You can, for instance, play the first two minutes of the video and reload this page to check whether the snapshots have been updated. You can also check whether they really correspond to the right timecodes. These little inconveniences reflect the alpha status of the prototype and the difficulty of correctly and precisely handling multimedia data without much engineering force.
The nelson.xml package contains a People annotation type (belonging to the Summary schema). Its MIME-type is application/x-advene-structured, which means that annotations of type People can have a simple structured text content such as:
name=Peter Murray Rust
URL=http://www.ch.cam.ac.uk/CUCL/staff/pm.html
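This key=value format is straightforward to parse. As an illustration only (this is a minimal sketch, not Advene's actual parser), structured content of this kind could be turned into a dictionary like so:

```python
# Minimal sketch: parse application/x-advene-structured content
# ("key=value" lines) into a dictionary. Not Advene's real parser.
def parse_structured(content: str) -> dict:
    data = {}
    for line in content.splitlines():
        line = line.strip()
        if "=" in line:
            # Split only on the first "=", so URLs with "=" survive.
            key, _, value = line.partition("=")
            data[key.strip()] = value.strip()
    return data

content = """name=Peter Murray Rust
URL=http://www.ch.cam.ac.uk/CUCL/staff/pm.html"""
print(parse_structured(content)["name"])  # Peter Murray Rust
```

The parsed attributes (name, URL) are then reachable from views through paths such as a/content/parsed/name, as shown below.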
You can browse the content of the package using the Advene "tree view". This view is called an ad hoc view because it is programmed: the user can only use it as-is (changing it would require programming skills).
Another ad hoc view is the "timeline view", which presents all the annotations defined in the package on lines representing annotation types. Right-clicking on elements (annotations and annotation types) pops up a menu from which it is possible to perform various actions on the element.
The "transcription view" can also be used to textually present annotations of the same type, temporally connected to the video (text/video alignment interfaces, with temporal jumps allowed at any level). The context menu of an annotation is also available by right-clicking in the text.
Also called UTBV (User-Time Based Views), these views are merely standard web documents that use the package elements (mostly annotations). They take advantage of a template attribute language developed in the context of the Zope application server platform, called ZPT (Zope Page Templates) [Tutorial]. A static view therefore consists of standard XML elements with special TAL attributes that express simple commands, such as tal:condition. The final construction of the page is done by the ZPT engine and served by the web server embedded in the Advene application when the view is accessed.
The present document contains standard XHTML with several TAL expressions, for instance for looping over the six People annotations and presenting six list items, each with a name (i.e. the value of the "name" attribute) made into a web link (whose destination is the value of the "URL" attribute), and an image whose source is the snapshot corresponding to the beginning of the fragment related to each annotation. Here is, for instance, the corresponding part of the current view:
<ul>
  <li tal:repeat="a here/annotationTypes/People/annotations">
    <a tal:attributes="href a/content/parsed/URL"
       tal:content="a/content/parsed/name">name</a>:
    <br />
    <img tal:attributes="alt a/content/parsed/name; src a/snapshot_url" />
  </li>
</ul>
The main means of locating and accessing elements of a package is TALES (Template Attribute Language Expression Syntax). TALES expressions have the form of a path, with identifiers separated by slashes. For example, here/annotationTypes/People/annotations gives access to all the annotations of type People contained in here (here representing the current package), and a/content/parsed/name identifies the value of the name attribute of the content of the annotation a.
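To make the path idea concrete, here is an illustrative sketch (the toy package structure and the resolve function are assumptions for the example, far simpler than Advene's actual TALES resolver) of how such slash-separated paths can be walked over nested data:

```python
# Illustrative sketch of TALES-style path resolution: walk a path like
# "annotationTypes/People/annotations" step by step over nested data.
# Advene's real resolver is much richer than this.
def resolve(root, path):
    obj = root
    for step in path.split("/"):
        if isinstance(obj, dict):
            obj = obj[step]       # dict lookup for mapping-like nodes
        else:
            obj = getattr(obj, step)  # attribute lookup otherwise
    return obj

# Toy stand-in for a loaded package (names are illustrative).
package = {
    "annotationTypes": {
        "People": {
            "annotations": [
                {"content": {"parsed": {"name": "Ted Nelson"}}},
            ]
        }
    }
}

annotations = resolve(package, "annotationTypes/People/annotations")
print(resolve(annotations[0], "content/parsed/name"))  # Ted Nelson
```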
The package browser is an ad hoc view that allows you to explore the package using TALES expressions, in the manner of a NeXTSTEP file browser. The first column represents the element it was launched on, or the package root.
Given their path nature, TALES expressions can easily be used in URLs, which is what we implemented in the Advene webserver. For instance, the present view was called using the http://localhost:1234/packages/nelson URL. Other parts of the application can also be controlled through URLs; for instance, the player can be launched through a similar URL.
Some static views can be applied to packages, others to annotations, others to schemas, etc. Some views (mainly administrative views) can be applied to any package; others apply only to a single package (e.g. by referring to annotation ids directly, or by containing textual information that is pertinent only in its context); others can be generic yet limited to packages that contain certain schemas and related annotations. Views can be exchanged amongst users just like annotations or schemas.
Just as on the web, where this was a very important factor of spread and use, users are expected to be able to easily design the static views they need through a mix of copy-paste and a general understanding of the principles. We are planning to integrate a WYSIWYG editor such as Kupu in order to achieve this goal.
Also called STBV (Stream-Time Based Views), these views are video-based, i.e. they consist of the video player augmented with several possibilities: adding text over the video, adding SVG drawings over the video (currently only available on Linux), navigating to other parts of the stream or to URLs (external or package-related), displaying information popups, activating another dynamic view, etc.
The general approach currently implemented in Advene is based on the ECA (Event-Condition-Action) paradigm, which we consider easy to grasp, since anybody applies its principles nowadays when filtering their mail. A dynamic view is then a set of rules. Each rule is activated by an event (e.g. the beginning of an annotation) and, if a condition applies (e.g. the annotation type is People), triggers an action (e.g. captioning the name attribute from the content of the annotation).
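The rule mechanism just described can be sketched in a few lines. This is a minimal illustration under assumed names (Rule, fire, the "AnnotationBegin" event string and the dictionary-shaped annotations are inventions for the example, not Advene's actual rule engine):

```python
# Minimal ECA (Event-Condition-Action) rule engine sketch.
# Names and data shapes are illustrative, not Advene's actual API.
class Rule:
    def __init__(self, event, condition, action):
        self.event = event          # event name, e.g. "AnnotationBegin"
        self.condition = condition  # predicate tested on the annotation
        self.action = action        # callback run when the rule fires

def fire(rules, event, annotation):
    """Run the action of every rule whose event and condition match."""
    for rule in rules:
        if rule.event == event and rule.condition(annotation):
            rule.action(annotation)

captions = []
rules = [
    Rule(
        event="AnnotationBegin",
        condition=lambda a: a["type"] == "People",
        action=lambda a: captions.append(a["content"]["name"]),
    )
]

fire(rules, "AnnotationBegin",
     {"type": "People", "content": {"name": "Ted Nelson"}})
print(captions)  # ['Ted Nelson']
```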
Several dynamic views have been defined in the package you are currently consulting. You can set the current dynamic view in the Advene main interface. But a view change is itself an action you can trigger from another view (be it static or dynamic). Feel free to test for instance the D_People view.
You can find a complete description of the schemas and views designed for this package in the packageConception view. These views demonstrate the hypermedia capabilities of the Advene prototype. Together, they form a hypervideo generator.
The summary of the video, divided into "Parts of speech": S_Summary
This view, named index, is a static Advene view. Notice that, as such, you can also edit it directly in your browser. Any Advene static or dynamic view definition can also be edited in the browser (but be prepared to hand-edit XML in the latter case).
The transcription of the discourse of Ted Nelson: S_Transcription
A view on the assistants Ted Nelson talks about: assistants
A reportNoises view sums up the Laughs and Applauses annotations by counting them, and presents other noises that can be heard on the video.
Notice that most of the static views propose links to dynamic views and control of the player.
A special view is the _package-view view, an administrative view that allows you to visualize all of the package's elements. Note that a static view can always be visualized in two modes: raw (the default) and navigation. The navigation mode provides navigation and test facilities to administer the package content directly from a web browser. You can either see only the package-view in navigation mode, or switch to navigation mode for the rest of your session. If you want to return to raw mode, visit the Server Administration page accessible at the top of the document.
A view that allows navigating between the different parts of the speech (using navigation popup + URL stack actions): D_Summary
A view that simply captions the utterances as subtitles and proposes links to follow inside the discourse whenever Ted Nelson reuses a term he previously used or defined: D_Utterance.
A generic view that proposes to follow any relation in the annotations, whatever it may be: D_relation. Note that the D_Utterance view is, in effect, a specialization of this view.
A view that provides links (in the URL stack) to websites for concepts Ted Nelson employs in his discourse (3 dummy assistants, 3 operating systems): outsideURLs.
The D_People view, already discussed, which captions the names of the panelists and proposes their web page URLs on the stack for further consultation.
A view that dynamically stops the video player and loads an external URL in a browser when it encounters an annotation with a URL attribute (cf. the preceding view): stopAndBrowse.
The simple D_assistantsLoop view, that just loops on a part of the speech.
Two virtual editing views:
Note that several dynamic views propose hyperlinks to navigate the video, hyperlinks to static views, or hyperlinks to the outside web. Also note that the hyperlinks proposed above change the current dynamic view and play the stream from its beginning. You can also just change the view and play from the current position (D_Summary, D_Utterance, outsideURLs, D_People, stopAndBrowse, D_abstract, D_laughsTour). The D_assistantsLoop view has a fixed begin time.
Now you can enjoy the different views, explore Advene, give it a spin, etc.
Two parts of this hypervideo may be of interest to you: