The following are my questions about the planned upgrade:
1. I want to clarify whether the "5 published assignments" must be the same kind of task as the review task to be taken. Experience with 5 English subtitle assignments shouldn't qualify a person as a reviewer for Japanese subtitles, and experience with Japanese transcription shouldn't really qualify a person as a reviewer for Japanese translation.
2. I can't tell from the screen images whether the new messaging system will support 3-way communication (the approver sends a message to both the translator and the reviewer, and each person's reply is also delivered to the other two).
3. Amara staff kept saying that the workflow will change from a task-based model to a collaboration-based model, and I got the impression that the translator and the reviewer will be able to access the editor at the same time. Will that be possible?
4. It's nice that we'll have a form to correct credits. My question is whether the process is automated and the change is made instantly, or whether the correction is still done by a human and takes days to be applied. If it is not immediate, I would like a credit confirmation dialog at approval time, to avoid showing the wrong credit to translators. That is really embarrassing (at least in our culture).
5. I'd like to know whether there will be any changes to the Amara API following these changes.
6. Here are some items that I would like added to the API (I asked for these a year ago).
a. Make a task's last-update time available, and make task data retrievable in order of last-update time. (Currently, apps that track tasks need to periodically re-retrieve all open task data just to check for changes in the tasks' assignment status, which consumes a lot of time and resources.)
b. Add activity types for these events: 'Assign Task', 'Decline (Unassign) Task', 'Task Expired', and 'Translation (or Subtitle) Task Completed'. These would help us observe translator behavior better. Note that the number of these events is small compared to 'Add Version' (save draft).
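To make the motivation behind item 6a concrete, here is a minimal sketch of the two polling strategies. All field and function names here are my own assumptions for illustration, not the real Amara API: the point is only that an "updated since" query returns a small sorted delta instead of the full open-task set on every cycle.

```python
# Hypothetical sketch; field names ("open", "last_updated") are assumptions,
# not the actual Amara API schema.

def fetch_open_tasks(tasks):
    """Current approach: re-fetch ALL open tasks on every polling cycle."""
    return [t for t in tasks if t["open"]]

def fetch_tasks_updated_since(tasks, since):
    """Requested approach: fetch only tasks changed after `since`,
    ordered by their last-update time."""
    changed = [t for t in tasks if t["last_updated"] > since]
    return sorted(changed, key=lambda t: t["last_updated"])

# Toy data standing in for API responses.
tasks = [
    {"id": 1, "open": True,  "last_updated": 100},
    {"id": 2, "open": True,  "last_updated": 250},
    {"id": 3, "open": False, "last_updated": 300},
]

print(len(fetch_open_tasks(tasks)))                               # 2
print([t["id"] for t in fetch_tasks_updated_since(tasks, 200)])   # [2, 3]
```

With the second call, a tracker only needs to remember the newest `last_updated` value it has seen and pass it as `since` on the next poll.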
7. I'd like to know the details of the new crediting algorithm. There are cases where the crediting judgment is difficult even for a human (e.g., two translators contributed about the same amount), and we LCs apologize to contributors when we need to ask them to give up the credit. I would like the algorithm to indicate "I'm not so sure about this" in such cases, rather than simply giving credit to an arbitrary person. (It would be even better if it asked the approver who should be credited when it is not sure.)
8. What is the unit of comparison in the diff algorithm: a word or a character? If it is a word, how do you define a word? (I care about this because my language, Japanese, doesn't use spaces as word delimiters.)
9. The image of the revision comparison (diff) page says "Changes | 15 Subtitles Changed | 10% Timing changed | 7% Text changed". How are these percentages defined? Does "7% text" mean 7% of characters, 7% of words, or 7% of captions? And what about "10% timing"?
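A small example of why the unit matters for items 8 and 9, using Python's `difflib` as a stand-in (this is not necessarily Amara's diff algorithm, just an illustration): for a space-free Japanese caption, character-level comparison detects a partial edit, while naive whitespace tokenization sees the whole line as one "word" and reports a total change.

```python
import difflib

old = "これはテストです"   # "This is a test."
new = "これは試験です"     # Same sentence with one word replaced.

# Character-level similarity: shared prefix/suffix still count.
char_ratio = difflib.SequenceMatcher(None, old, new).ratio()

# "Word"-level similarity via whitespace split: Japanese has no spaces,
# so each caption collapses to a single token and nothing matches.
word_ratio = difflib.SequenceMatcher(None, old.split(), new.split()).ratio()

print(round(char_ratio, 2))  # 0.67
print(word_ratio)            # 0.0
```

So a "7% text changed" figure could differ wildly for Japanese depending on whether the unit is the character, a tokenizer-defined word, or the whole caption.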
10. Will the 'sync history' page (for checking sync status and resyncing failed subtitles) also support TEDx/TED-Ed YouTube videos, not only TEDTalks on TED.com?