Hi,
I have a double value describing the rotation angle of a text box object. I need to convert it to a quaternion. I have searched the web, and all I can find are converters that convert Euler angles to quaternions.
I would appreciate any formulae or algorithm.
And this double value is the only information I have for rotation.
Regards,
Shoaib
Its never over !
shaibee wrote: I have a double value describing the rotation angle of a text box object. I need to convert it to a quaternion. I have searched the web, and all I can find are converters that convert Euler angles to quaternions.
I would appreciate any formulae or algorithm.
And this double value is the only information I have for rotation.
You want to convert an axis-angle rotation, for example? A rotation about an axis in space can be converted to a quaternion Q(w, x, y, z). If the rotation is given by a unit vector (ax, ay, az) and the angle theta is in radians, then the conversion is:
w = cos(theta/2)
x = ax * sin(theta/2)
y = ay * sin(theta/2)
z = az * sin(theta/2)
Make sure the axis is normalized before the conversion. If there is no rotation, the result is the identity quaternion (1, 0, 0, 0).
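In C++, the formulas above can be sketched as follows (the `Quaternion` struct and the name `FromAxisAngle` are my own, just for illustration, not from any particular library):

```cpp
#include <cmath>

struct Quaternion { double w, x, y, z; };

// Convert a rotation of `theta` radians about the axis (ax, ay, az)
// to a quaternion. The axis is normalized here; a zero-length axis
// yields the identity quaternion (no rotation).
Quaternion FromAxisAngle(double ax, double ay, double az, double theta)
{
    double len = std::sqrt(ax * ax + ay * ay + az * az);
    if (len == 0.0)
        return Quaternion{1.0, 0.0, 0.0, 0.0};   // identity: no rotation
    double s = std::sin(theta / 2.0) / len;      // fold normalization into the sin term
    return Quaternion{std::cos(theta / 2.0), ax * s, ay * s, az * s};
}
```

For a planar rotation of theta radians (axis (0, 0, 1)), this reduces to w = cos(theta/2), z = sin(theta/2), with x = y = 0.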
73Zeppelin wrote: rotation is given by a unit vector (ax, ay, az) and the angle theta is in radians
I believe I have theta, but I don't see any vector provided in the information object that I have. All it says is "angle in radians rotated from +ve world x-axis". Does this indicate a default vector that I can use for this conversion?
Its never over !
An angle measured from the +ve world x-axis means the box rotates within the x-y plane, i.e. about the z-axis. The rotations about the other axes can be taken as zero, so your rotation axis is simply:
(ax, ay, az) = (0, 0, 1)
This is already a unit vector, so no normalization is needed, and the conversion reduces to w = cos(theta/2), z = sin(theta/2), with x = y = 0.
Semantics:
write_item(X): the operation of writing the value of record X in the database
read_item(X): the operation of reading the value of record X in the database
TS(T): the timestamp of transaction T; each transaction may include many operations before it is either committed or rolled back, e.g. write_item(X), read_item(Z), write_item(W)
write_TS(X): the TS(T) of the transaction T that last successfully wrote record X
read_TS(X): the largest TS(T) among the transactions that have successfully read record X
Strict timestamp ordering
// read_item(X) by T
read_item(X)
{
    while( true )
    {
        if( write_TS(X) > TS(T) )
        {
            abort(T); // break or return
        }
        else if( write_TS(X) < TS(T) )
        {
            wait until the transaction T', where TS(T) > TS(T'), has
            either committed or aborted; // write_TS(X) == TS(T')
            continue;
        }
        else
        {
            read_item(X); // perform the actual read
            read_TS(X) = max{ read_TS(X), TS(T) };
            break; // or return
        }
    }
}
// write_item(X) by T
write_item(X)
{
    while( true )
    {
        if( write_TS(X) > TS(T) || read_TS(X) > TS(T) )
        {
            abort(T); // break or return
        }
        else if( write_TS(X) < TS(T) )
        {
            wait until the transaction T', where TS(T) > TS(T'), has
            either committed or aborted; // write_TS(X) == TS(T')
            continue;
        }
        else
        {
            write_item(X); // perform the actual write
            write_TS(X) = TS(T);
            break; // or return
        }
    }
}
<div class="ForumMod">modified on Wednesday, December 17, 2008 4:16 PM</div>
What do you want help with? You didn't actually ask a question.
I apologize for not providing the question.
The question is:
Is this pseudocode correct with regard to the strict timestamp ordering protocol?
( The strict TS protocol is used to ensure conflict serializability, recoverability and cascadeless rollbacks. )
Some background knowledge (I hope this clarifies things):
1 ) Conflict serializability -> This is ensured by the basic timestamp protocol.
2 ) But the basic TS protocol does not ensure (1) recoverability and (2) cascadeless rollbacks.
3 ) This is where "strict" steps in and ensures the aforementioned 2 features.
e.g. Transactions with TS(T1) < TS(T2) < TS(T3)
Assume that T2 has performed a write_item(X) and has committed.
Additionally, let us assume that T3 performed the following 2 operations after that commit:
Y = Y + X
write_item(Y)
but has not committed yet.
Then transaction T1 attempts to perform a write_item(X).
The basic timestamp protocol will force transaction T1 to abort (roll back), because TS(T1) < write_TS(X), where write_TS(X) == TS(T2).
However, since transaction T2 has already committed, it cannot be recovered through a rollback anymore.
Furthermore, transaction T3 has to be rolled back as well, and if another transaction T4 had based its writes on T3, it would also have to be rolled back. This is where cascading rollbacks happen.
Thanks for taking the time to answer my question.
// write_item(X) by T
write_item(X)
{
    while( true )
    {
        if( write_TS(X) > TS(T) || read_TS(X) > TS(T) )   // <-- the line in question
        {
            abort(T);
        }
        else if( write_TS(X) < TS(T) )
        {
            wait until the transaction T', where TS(T) > TS(T'), has
            either committed or aborted;
            continue;
        }
        else
        {
            write_item(X);
            write_TS(X) = TS(T);
            break;
        }
    }
}
Hmmm, on the indicated line above, shouldn't that be AND and not OR? Also, you might want to consider what would happen if the timestamps were equal.
Also, just to be clear, to avoid cascading rollbacks, you should only be reading committed values. So, given all that, shouldn't the pseudo-code look more like this:
// read_item(X) by a transaction with timestamp t
if t >= wmax(X)
    rmax(X) <- max(t, rmax(X))
    wait for a committed value of X
    perform read
else
    abort

// write_item(X) by a transaction with timestamp t
if t >= rmax(X) and t >= wmax(X)
    wmax(X) <- t
    wait until X value is a committed value and
        pending reads are done
    perform write
else
    if t < rmax(X)
        abort
    else
        do nothing
I have studied your answer carefully and your remarks are quite
insightful. I will incorporate them while implementing the algorithm.
Your willingness to answer my question is greatly appreciated.
Thanks!
I recently had a phone interview for a C++ development position and was given the following problem: you have an array of 101 elements containing the values 1 through 100, and exactly one value appears twice. Find the value that appears twice. My first solution was to sort the array and then scan for the repeated value. My second solution was to use a boolean array to keep track of the values already seen. He claims there is a solution that requires only a minimal amount of additional memory and is of lower order than sorting the array.
We both agreed that if the numbers went from 1 to 1,000,000,000 and the array had 1,000,000,001 numbers in it, then my first solution would take a long time and my second solution would require a lot of additional space. The interviewer claimed that there is a more efficient solution.
However, I cannot find one. I am hoping that somebody out there could tell me what it is.
Thanks
Bob
You should try to take advantage of all available information...
Create a 100-bit array; for each value, set the bit indexed by that value if it is not already set, otherwise report the duplicate number.
Dave.
Sorry to double post - the last reply seemed to hang, and I also edited the response:
Create a 100-bit array; set the bit indexed by each value if that bit is not already set, otherwise report the duplicate number.
Dave.
Dave,
Thanks for the response. Your solution is basically the same as mine, except that you use an array of bits where I use an array of booleans. For n numbers, we both need an additional array of n elements. The person interviewing me claims that it can be done with O(1) additional space and in time faster than O(n log(n)). I still do not see how.
Bob
I suggest you have a better look at my earlier reply...
Bob,
I dug back 35 years and came up with the answer. Since the numbers are consecutive, add 1 through 100 into a single location, then clear another location and add up all the numbers in the array. The second sum will exceed the first by exactly the value of the duplicate number.
This was my solution to a problem in the '60s, when we booted the mainframe with a deck of binary cards. The decks became huge (more than half a tray of cards), and operators would drop the deck and spend hours trying to ensure it was back in order (smart operations people always kept a spare deck, just in case).
The solution was to initialize a location with 0 and then, for each card read, increment that location and accumulate it into one sum, while another location was summed with the actual card sequence numbers. If the two sums came out the same, then each card had been read once and only once. Appropriate measures had to be taken if a card mis-read caused a backspace and reread (don't consider the card read until you got a good status).
The only downside was that we had to ensure that overlay loading was not attempted (we could not use the ORG directive to back up into a previously defined table and then restore it), since you could never be sure at what point in the loading sequence any particular location was loaded.
Quite soon after that we went to booting from tape. Note, we saved an autoload file that was the startup image, and there was a one-card boot that could read in that image instead of the whole boot deck, but when necessary the full deck had to be used - at least until the tape boot was implemented.
Dave.
Right, summing all the numbers is the cheapest and fastest way to identify the duplicate number.
The sum trick only works because you know the range of valid numbers; if the problem were "we have 100 different numbers and one got duplicated", then you would need a bitmap. Hence: use all available information. BTW, to sum the numbers 1..N you don't even need a loop; the sum equals N*(N+1)/2, which for N = 100 is 5050.
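The whole trick fits in a few lines of C++; a minimal sketch (the function name is mine), using 64-bit arithmetic so it also works for n = 1,000,000,000 without overflow:

```cpp
// Given an array of n+1 values drawn from 1..n, where exactly one
// value appears twice, the duplicate is the array sum minus the
// closed-form sum of 1..n. O(n) time, O(1) extra space.
long long FindDuplicate(const int* a, long long n)
{
    long long expected = n * (n + 1) / 2;   // sum of 1..n
    long long actual = 0;
    for (long long i = 0; i <= n; ++i)      // n+1 elements in the array
        actual += a[i];
    return actual - expected;
}
```

For n = 1,000,000,000 both sums stay near 5e17, comfortably inside a signed 64-bit range.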
As for the cards: the punch cards I used a long time ago had 80 columns; columns 73-80 were not relevant to the program, so they typically got used for numbering the cards. And sorting machines were available, so dropping a tray only cost a few passes through the sorting machine to get everything back in order. IIRC they had around 100 output bins, so for decimal digits they could handle two digits at a time.
Luc,
As "Murphy" predicted, the tray usually got dropped in the middle of the night, when the Tab room (with the sorter) was locked up, so the routine had been to just put the cards back in some order and correct the order when an out-of-order card was read. The summing solution did the trick, at least until the tape boot was perfected.
Dave.
This sounds like a COBOL deck - it wasn't true of other decks.
I once did some AlgolW stuff on an IBM 360 where all 80 columns could contain code.
But I still think your first reply has it spot on - do a sum and use all available information.
Regards
David Rice
So you are the first who really read my replies in this thread...
Actually I used punch cards only for Fortran (Fortran IV, WATFOR, WATFIV) programs; they had an optional C in column 1 for comment cards, an optional statement number in columns 2-5, an optional continuation character in column 6, the actual statement in columns 7-72, and a free comment in 73-80, normally used for numbering the cards sequentially.
I'd forgotten about Fortran - I last used it in 1975, not with punched cards but with a paper tape reader on a teletype terminal.
The only program I ever used paper tape for was a PDP-8 assembly program on a machine without a disk, so the "development cycle" consisted of:
- load editor
- load source
- edit
- save source
- load assembler
- load source
etc etc
Things were simple back then, now they are comfortable but complex.
First of all, I don't claim to have THE solution to your problem.
You've stated there are two possible approaches: 1. sorting the array, 2. keeping a separate bit array. I think the actual solution combines both approaches.
When you're sorting an array, there are two basic operations:
1. comparisons
2. swaps
Let's suppose your algorithm looks like this:
main loop {
    int i = ...;
    int j = ...;
    if (A[i] > A[j])
        swap(A[i], A[j]);
}
However, since you only have to find the repeated element, you might change the algorithm so it looks like this:
main loop {
    int i = ...;
    int j = ...;
    if (A[i] == A[j])
        return A[i];
    else if (A[i] > A[j])
        swap(A[i], A[j]);
}
Provided the sorting algorithm you use is in-place (for example, heapsort), you only need O(1) additional space.
Even though the worst case is still O(n log(n)), the average-case complexity HAS TO (read: might... hehe) be lower, since you stop the algorithm as soon as the repeated element is found.
Now you have to choose the sorting algorithm that gives you the best average case while still being O(n log(n)) in the worst case. And that's way too advanced computer science for me.
Hope that helps,
Eduardo
-----------------------------------------------------------------------
EDIT: Wow. I just read the sum one. Wow. Forget everything I said.
Computing the sum is an easy approach when there is only one duplicate number (though I prefer to formulate the problem as having a missing number rather than a duplicate). Now for the fun part:
Assume there are 65534 numbers in the range 0 to 65535, without duplicates, so two numbers are missing. Can you write a program that identifies both missing numbers in a single pass through the data and could run on a machine with 64K of code storage and 256 bytes of data storage? (The proper solution shouldn't require anywhere near 4K, much less 64K; if your code size isn't ridiculous, assume it will fit.)
What if three numbers are missing? How about four?
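Not claiming this is THE intended answer, but one way to handle the two-missing case in O(1) space is to stream the data once, keeping a running sum and a running sum of squares, then solve the two resulting equations a + b = S1 and a^2 + b^2 = S2 via (a - b)^2 = 2*S2 - S1^2. A sketch in C++ (all names are my own; the caller is assumed to have accumulated the two sums during the single pass):

```cpp
#include <cmath>
#include <cstdint>
#include <utility>

// Two numbers a <= b are missing from 0..n. Given the sum and the sum
// of squares of the values actually seen, recover a and b. Only a
// handful of 64-bit accumulators are needed, regardless of n.
std::pair<int64_t, int64_t> TwoMissing(int64_t n, int64_t seenSum, int64_t seenSqSum)
{
    int64_t s1 = n * (n + 1) / 2 - seenSum;                   // a + b
    int64_t s2 = n * (n + 1) * (2 * n + 1) / 6 - seenSqSum;   // a^2 + b^2
    // (a - b)^2 = 2*s2 - s1^2; the values involved fit exactly in a double
    // for n up to 65535, so the rounded square root is exact.
    int64_t d = (int64_t)std::llround(std::sqrt((double)(2 * s2 - s1 * s1)));
    return { (s1 - d) / 2, (s1 + d) / 2 };
}
```

For three or four missing numbers the same idea extends with cubic and quartic power sums, at the cost of messier algebra when solving for the roots.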
modified on Thursday, December 18, 2008 7:12 PM