Sunday, May 6, 2007

What happens when you generalize Probability Theory to allow Minus Signs ...

Over at Shtetl-Optimized, Scott Aaronson has a nice introduction to the field of quantum computing disguised as a series of answers to questions he was asked on his recent job interview tour.  I particularly like the following characterization of quantum computing as:

what happens if we generalize probability theory to allow minus signs, and base it on the L2 norm instead of the L1 norm

I now feel I know enough to be able to talk authoritatively on the subject over coffee at work.
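To make the quoted characterization a little more concrete, here is a toy sketch of my own (an illustration I'm adding, not something from Scott's post): a quantum state is a vector whose *squared* entries sum to 1 (the L2 norm), and because those entries may be negative, applying the same operation twice can make terms cancel.

```python
# A toy illustration (my own sketch, not from Aaronson's post) of
# "probability theory with minus signs, based on the L2 norm".
import numpy as np

# Hadamard gate: the quantum analogue of a fair coin flip.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = np.array([1.0, 0.0])    # start in |0>; squared entries sum to 1
state = H @ state               # amplitudes (1/sqrt(2), 1/sqrt(2))
print(state, np.sum(state**2))  # the L2 norm is preserved

state = H @ state               # "flip the coin" a second time
print(state)                    # back to (1, 0): the |1> amplitude
                                # cancelled, thanks to the minus sign in H
```

With ordinary nonnegative probabilities under the L1 norm, randomizing twice can never undo the randomness; the second flip reversing the first is exactly the interference that the minus signs buy you.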
