content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Stubborn MuleNatural frequenciesNatural frequencies
Natural frequencies
by Stubborn Mule on 22 October 2010 · 2 comments
In my last post, I made a passing reference to Gerd Gigerenzer’s idea of using “natural frequencies” instead of probabilities to make assessing risks a little easier. My brief description of the idea
did not really do justice to it, so here I will briefly outline an example from Gigerenzer’s book Reckoning With Risk.
The scenario posed is that you are conducting breast cancer screens using mammograms and you are presented with the following information and question about asymptomatic women between 40 and 50 who
participate in the screening:
The probability that one of these women has breast cancer is 0.8%. If a woman has breast cancer, the probability is 90% that she will have a positive mammogram. If a woman does not have breast
cancer, the probability is 7% that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
For those familiar with probability, this is a classic example of a problem that calls for the application of Bayes’ Theorem. However, for many people—not least doctors—it is not an easy question.
Gigerenzer posed exactly this problem to 24 German physicians with an average of 14 years professional experience, including radiologists, gynacologists and dermatologists. By far the most common
answer was that there was a 90% chance she had breast cancer and the majority put the odds at 50% or more.
In fact, the correct answer is only 9% (rounding to the nearest %). Only two of the doctors came up with the correct answer, although two others were very close. Overall, a “success” rate of less
than 20% is quite striking, particularly given that one would expect doctors to be dealing with these sorts of risk assessments on a regular basis.
Gigerenzer’s hypothesis was that an alternative formulation would make the problem more accessible. So, he posed essentially the same question to a different set of 24 physicians (from a similar
range of specialties with similar experience) in the following way:
Eight out of every 1,000 women have breast cancer. Of these 8 women with breast cancer, 7 will have a positive mammogram. Of the remaining 992 women who don’t have breast cancer, some 70 will
still have a positive mammogram. Imagine a sample of women who have positive mammograms in screening. How many of these women actually have breast cancer?
Gigerenzer refers to this type of formulation as using “natural frequencies” rather than probabilities. Astute observers will note that there are some rounding differences between this question and
the original one (e.g. 70 out of 992 false positives is actually a rate of 7.06% not 7%), but the differences are small.
Now a bit of work has already been done here to help you on the way to the right answer. It’s not too hard to see that there will be 77 positive mammograms (7 true positives plus 70 false positives)
and of these only 7 actually have breast cancer. So, the chances of someone in this sample of positive screens actually having cancer is 7/77 = 9% (rounding to the nearest %).
Needless to say, far more of the doctors who were given this formulation got the right answer. There were still some errors, but this time only 5 of the 24 picked a number over 50% (what were they
The lesson is that probability is a powerful but confusing tool and it pays to think carefully about how to frame statements about risk if you want people to draw accurate conclusions.
Possibly Related Posts (automatically generated):
Leave a Comment
{ 2 trackbacks }
Recent Comments
|
{"url":"http://www.stubbornmule.net/2010/10/natural-frequencies/","timestamp":"2014-04-19T22:06:21Z","content_type":null,"content_length":"53073","record_id":"<urn:uuid:50364448-f50b-4976-848c-5929f85806b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: August 2007 [00954]
[Date Index] [Thread Index] [Author Index]
FW: Solving Nonlinear Equations
• To: mathgroup at smc.vnet.net
• Subject: [mg80548] FW: [mg80515] Solving Nonlinear Equations
• From: "Biyana, D. (Dugmore)" <DugmoreB at Nedbank.co.za>
• Date: Sun, 26 Aug 2007 02:56:31 -0400 (EDT)
Let me fill the missing gaps on my question:
m1= Sum[w[[i]]S[[i]]Exp[(r-delta[[i]]),{i,Length[S]}];
> m2==c^2+(c^2/2)(Exp[2/b^2]Cosh[2a/b]-1)-2c*d*Exp[1/(2b^2)]*Sinh[a/b],
> )+(d^3/4)(3*Exp[1/(2b^2)]Sinh[a/b]-Exp[9/(2b^2)]Sinh[3a/b]),
> 8/(b^2)]*Cosh
> [4a/b]-4*Exp[2/(b^2)]Cosh[2a/b]),{{a,-1.},{b,1.},{c,1.},{d,0.05}}] gives
{a->12.1929,b->9.05339,c->0.33727,d->0.393214}. I know that these are
coorrect roots because the results of the underlying problem tally with what
I expect. I must emphasise that the initial values of a, b, c, and d were a
product of trial and error. However, when I change inputs to
,1.0}} I just can't seem be "lucky" to hit the jackpot initial values. My
main point is how do I get input initial values without relying on trial and
error, or what is the alternative approach?
P.S. (I'm aware about the potential confusion that C, and D are likely to
cause on the system as global variables, I apologise for not using the
actual variables used)
MD Biyana
-----Original Message-----
From: DrMajorBob [mailto:drmajorbob at bigfoot.com]
Sent: 24 August 2007 09:46 AM
To: Biyana, D. (Dugmore); mathgroup at smc.vnet.net
Subject: [mg80548] Re: [mg80515] Solving Nonlinear Equations
That's not a legal syntax (mismatched brackets, etc.), and if it were, you
didn't give the initial values or the values of m1, m2, m3, and m4, so...
what can we do?
In addition, C and D are system-defined symbols. FindRoot probably uses D to
take derivatives, so you're just asking for trouble with variable names like
that. I never, never, EVER start one of my own variables with a capital;
that makes it obvious whose symbol it is.
On Fri, 24 Aug 2007 00:56:16 -0500, Biyana, D. (Dugmore)
<DugmoreB at Nedbank.co.za> wrote:
> I'm using Mathematica V6.0.1 and I have a system of 4 nonlinear equations
> which I'm trying to solve using FindRoot:
> FindRoot[{m1==C-D*Exp[1/(2*B^2)]*Sinh[A/B],
> m2==C^2+(D^2/2)(Exp[2/B^2]Cosh[2A/B]-1)-2C*D*Exp[1/(2B^2)]*Sinh[A/B],
> )+(D^3/4)(3*Exp[1/(2B^2)]Sinh[A/B]-Exp[9/(2B^2)]Sinh[3A/B]),
> 8/(B^2)]*Cosh
> I get the message " FindRoot::cvmit : Failed to converge to the requested
> accuracy..." which I suspect is a result of initial values of A, B,C,
> and D.
> What trick can one use to get accepatble initial values?
> MD Biyana
> ********************
> Nedbank Limited Reg No 1951/000009/06. The following link displays the
> names of the Nedbank Board of Directors and Company Secretary.
> [ http://www.nedbank.co.za/terms/DirectorsNedbank.htm ]
> This email is confidential and is intended for the addressee only. The
> following link will take you to Nedbank's legal notice.
> [ http://www.nedbank.co.za/terms/EmailDisclaimer.htm ]
> ********************
DrMajorBob at bigfoot.com
Nedbank Limited Reg No 1951/000009/06. The following link displays the names of the Nedbank Board of Directors and Company Secretary. [ http://www.nedbank.co.za/terms/DirectorsNedbank.htm ]
This email is confidential and is intended for the addressee only. The following link will take you to Nedbank's legal notice. [ http://www.nedbank.co.za/terms/EmailDisclaimer.htm ]
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/Aug/msg00954.html","timestamp":"2014-04-17T04:04:47Z","content_type":null,"content_length":"39063","record_id":"<urn:uuid:d6ef565a-771b-4f9b-abdc-5387675c43b8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Closures and Multiple Return Values
Your use of this web site or any of its content or software indicates your agreement to be bound by these Terms of Participation.
Copyright © 2014, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective
|
{"url":"https://www.java.net/print/239204","timestamp":"2014-04-17T05:04:25Z","content_type":null,"content_length":"11995","record_id":"<urn:uuid:158d8553-ce7c-4be1-9723-d9bc354be354>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Anh văn chuyên ngành Toán - ĐHSP TP HCM
Little mouse: – Mommy! He’s saying something that I don’t understand at all?
Mother mouse: – Silence! It’s our enemy. Don’t go out of the house. That dirty cat
is threatening us.
Little mouse: – How did you understand what he said?
Mother mouse: – Consider it a very good reason to learn a foreign language.
163 trang
Chia sẻ: zimbreakhd07 | Ngày: 05/01/2013 | Lượt xem: 570 | Lượt tải: 2
Tóm tắt tài liệu Anh văn chuyên ngành Toán - ĐHSP TP HCM, để xem tài liệu hoàn chỉnh bạn click vào nút DOWNLOAD ở trên
1 HO CHI MINH CITY UNIVERSITY OF EDUCATION FOREIGN LANGUAGE SECTION Compiled by: HO THI PHUONG LE THI KIEU VAN HO CHI MINH CITY, 2003 ENGLISH For MATHEMATICS 2 ENGLISH For MATHEMATICS Compilers: LE
THI KIEU VAN HO THI PHUONG Consultant: NGUYEN VAN DONG, Ph.D HoChiMinh City, September 2003. 3 CONTENTS UNIT TEXT GRAMMAR Page Preface................................................
.................................... 5 UNIT 1 – The Internet distance education – My future profession – Arithmetic operations – Present Simple and Present Continuous 7 UNIT 2 – The history of
personal computing – What is mathematics? – Fermat’s last theorem – Present Simple – Past Simple 13 UNIT 3 – Fractions – J.E.Freund’s System of Natural Numbers Postulates – The Present Perfect 20
UNIT 4 – Something about mathematical sentences – Inequalities – Mathematical signs and symbols – Degrees of comparison 27 UNIT 5 – Thinking and reasoning in maths – Points and lines – How to find a
website for information you need – ING ending forms 34 UNIT 6 – Some advices to buying a computer – The Pythagorean property – Drawing a circle – Modal verbs 40 UNIT 7 – Mathematical logic – The
coordinate plane – Infinitive after adjectives – Infinitive of purpose 47 UNIT 8 – Ratio and Proportion – History of the terms “ellipse”, “hyperbola” and “parabola” – Algorithms – Past Participle –
The Passive 55 UNIT 9 – What is an electronic computer? – Probability of occurence – Relative clauses 62 UNIT 10 – Sequences obtained by repeated multiplication. – Topology – Conditionals – First and
Zero – Some cases of irre– gular plural nouns. 69 – Unending progressions 4 UNIT 11 – Mappings – Why learn mathematics? – Second conditionals 77 UNIT 12 – Multimedia – Matrices – William Rowan
Hamilton – ing / –ed participle clauses – Some cases of irregular plural nouns (continued) 85 UNIT 13 – Mathematics and modern civilization – The derivative of a function and some applications of the
derivative – Past Perfect Simple and Continuous – Adverbs 93 UNIT 14 – Thinking about the use of virtual reality in computer war games – Zeno’s paradoxes – George Cantor – Reported speech – Some
cases of irregular plural nouns (continued) 102 References.......................................... .................................... 161 5 PREFACE This course is intended for students of
non−English major in the Department of Mathematics, Ho Chi Minh City University of Pedagogy. The course aims at developing students’ language skills in an English context of mathematics with emphasis
on reading, listening, speaking and writing. The language content, mainly focuses on: firstly, key points of grammar and key functions appropriate to this level; secondly, language items important
for decoding texts mathematical; thirdly, language skills developed as outlined below. This textbook contains 14 units with a Glossary of mathematical terms and a Glossary of computing terms and
abbreviations designed to provide a minimum of 150 hours of learning. Course structural organization: Each unit consist of the following components: PRESENTATION: The target language is shown in a
natural context. • Grammar question: Students are guided to an understanding of the target language, and directed to mastering rules for their own benefit. PRACTICE: Speaking, listening, reading and
writing skills as well as grammar exercises are provided to consolidate the target language. SKILLS DEVELOPMENT: Language is used for realistic purposes. The target language of the unit reappears in
a broader context. • Reading and speaking: At least one reading text per unit is intergrated with various free speaking activities. • Listening and speaking: At least one listening activity per unit
is also intergrated with free speaking activities. • Writing: Suggestions are supplied for writing activities per unit. • Vocabulary: At least one vocabulary exercise per unit is available.
TRANSLATION: 6 The translation will encourage students to review their performance and to decide which are the priorities for their own future self-study. Acknowledgements: We would like to express
our gratitude to Nguyen Van Dong, Ph.D., for editing our typescript, for giving us valuable advice and for helping all at stages of the preparation of this course; to TranThi Binh, M.A., who gave the
best help and encouragement for us to complete this textbook. We would also like to thank Le Thuy Hang, M.A., who has kindly and in her spare time contributed comments and suggestions, to Mr. Chris
La Grange, MSc., for his suggestions and helpful comments for the compilation of this text book. Our special thanks are extended to the colleagues, who have done with their critical response and
particular comments. Also, we would like to thank all those student–mathematicians who supplied all the necessary mathematical material to help us write this textbook. Le Thi Kieu Van Ho Thi Phuong 7
UNIT 1 PRESENT SIMPLE & PRESENT CONTINUOUS PRESENTATION 1. Read the passage below. Use a dictionary to check vocabulary where necessary. INTERNET DISTANCE EDUCATION The World Wide Web (www) is
beginning to see and to develop activity in this regard and this activity is increasing dramatically every year. The Internet offers full university level courses to all registered students, complete
with real time seminars and exams and professors’ visiting hours. The Web is extremely flexible and its distance presentations and capabilities are always up to date. The students can get the text,
audio and video of whatever subject they wish to have. The possibilities for education on the Web are amazing. Many college and university classes presently create web pages for semester class
projects. Research papers on many different topics are also available. Even primary school pupils are using the Web to access information and pass along news to others pupils. Exchange students can
communicate with their classmates long before they actually arrive at the new school. There are resources on the Internet designed to help teachers become better teachers – even when they cannot
offer their students the benefits of an on-line community. Teachers can use university or college computer systems or home computers and individual Internet accounts to educate themselves and then
bring the benefits of the Internet to their students by proxy. 2. Compare the sentences below. a. “ This activity increases dramatically every year”. b. “Even primary school pupils are using the Web
to access information”. 3. Grammar questions a. Which sentence expresses a true fact? b. Which sentence expresses an activity happening now or around now? ♦ Note Can is often used to express one’s
ability, possibility and permission. It is followed by an infinitive (without to). Read the passage again and answer the questions. a. What can students get from the Web? b. How can Internet help
teachers become better teachers? PRACTICE 1. Grammar 8 1.1 Put the verb in brackets into the correct verb form (the Present Simple or the Present Continuous) and then solve the problem. Imagine you
…………. (wait) at the bus stop for a friend to get off a bus from the north. Three buses from the north and four buses from the south ……… (arrive) about the same time. What ………. (be) the probability
that your friend will get off the first bus? Will the first bus come / be from the north? 1.2 Complete these sentences by putting the verb in brackets into the Present Simple or the Present
Continuous. a. To solve the problem of gravitation, scientists …………… (consider) time– space geometry in a new way nowadays. b. Quantum rules …………… (obey) in any system. c. We …………… (use) Active
Server for this project because it ……… (be) Web–based. d. Scientists …………… (trace and locate) the subtle penetration of quantum effects into a completely classical domain. e. Commonly we ………… (use) C
+ + and JavaScript. f. At the moment we …………… (develop) a Web–based project. g. Its domain ………… (begin) in the nucleus and ………… (extend) to the solar system. h. Right now I …………. (try) to learn how
to use Active Server properly. 1.3 Put “can”, “can not”, ”could”, ”could not” into the following sentences. a. Parents are finding that they ………….. no longer help their children with their arithmetic
homework. b. The solution for the construction problems …………… be found by pure reason. c. The Greeks …………….. solve the problem not because they were not clever enough, but because the problem is
insoluble under the specified conditions. d. Using only a straight-edge and a compass the Greeks …………. easily divide any line segment into any number of equal parts. e. Web pages…………. offer access to
a world of information about and exchange with other cultures and communities and experts in every field. 9 5 cm 5 cm 2. Speaking and listening 2.1 Work in pairs Describe these angles and figures as
fully as possible. Example: ABC is an isosceles triangle which has one angle of 300 and two angles of 750. (a) (c) (b) d) 2.2 How are these values spoken? a) 2x d) 1nx − g) 3 x b) 3x e) nx− h) n x c)
nx f) x i) 23 ( )x a− SKILLS DEVELOPMENT • Reading 1. Pre – reading task 1.1 Do you know the word “algebra”? Do you know the adjective of the noun “algebra”? Can you name a new division of algebra?
1.2 Answer following questions. a. What is your favourite field in modern maths? b. Why do you like studying maths? 25 cm 10 cm 10 2. Read the text. MY FUTURE PROFESSION When a person leaves high
school, he understands that the time to choose his future profession has come. It is not easy to make the right choice of future profession and job at once. Leaving school is the beginning of
independent life and the start of a more serious examination of one’s abilities and character. As a result, it is difficult for many school leavers to give a definite and right answer straight away.
This year, I have managed to cope with and successfully passed the entrance exam and now I am a “freshman” at Moscow Lomonosov University’s Mathematics and Mechanics Department, world-famous for its
high reputation and image. I have always been interested in maths. In high school my favourite subject was Algebra. I was very fond of solving algebraic equations, but this was elementary school
algebra. This is not the case with university algebra. To begin with, Algebra is a multifield subject. Modern abstract deals not only with equations and simple problems, but with algebraic structures
such as “groups”, “fields”, “rings”, etc; but also comprises new divisions of algebra, e.g., linear algebra, Lie group, Boolean algebra, homological algebra, vector algebra, matrix algebra and many
more. Now I am a first term student and I am studying the fundamentals of calculus. I haven’t made up my mind yet which field of maths to specialize in. I’m going to make my final decision when I am
in my fifth year busy with my research diploma project and after consulting with my scientific supervisor. At present, I would like to be a maths teacher. To my mind, it is a very noble profession.
It is very difficult to become a good maths teacher. Undoubtedly, you should know the subject you teach perfectly, you should be well-educated and broad minded. An ignorant teacher teaches ignorance,
a fearful teacher teaches fear, a bored teacher teaches boredom. But a good teacher develops in his students the burning desire to master all branches of modern maths, its essence, influence,
wide–range and beauty. All our department graduates are sure to get jobs they would like to have. I hope the same will hold true for me. Comprehension check 1. Are these sentences True (T) or False
(F)? Correct the false sentences. a. The author has successfully passed an entrance exam to enter the Mathematics and Mechanics Department of Moscow Lomonosov University. b. He liked all the subjects
of maths when he was at high school. c. Maths studied at university seems new for him. d. This year he’s going to choose a field of maths to specialize in. e. He has a highly valued teaching career.
f. A good teacher of maths will bring to students a strong desire to study maths. 2. Complete the sentences below. a. To enter a college or university and become a student you have to
pass..................... b. Students are going to write their ....................... ...in the final year at university. c. University students show their essays to
their............................ 11 3. Work in groups a. Look at the words and phrases expressing personal qualities. − sense of humour − good knowledge of maths − sense of adventure − children –
loving − patience − intelligence − reliability − good teaching method − kindness − interest in maths b. Discussion What qualities do you need to become a good maths teacher? c. Answer the following
questions. c.1. Why should everyone study maths? What about others people? c.2. University maths departments have been training experts in maths and people take it for granted, don’t they? c.3. When
do freshmen come across some difficulties in their studies? c.4. How do mathematicians assess math studies? • Listening 1. Pre – listening All the words below are used to name parts of computers.
Look at the glossary to check the meaning. mainframe – mouse – icon – operating system – software – hardware – microchip 2. Listen to the tape. Write a word next to each definition. a. The set of
software that controls a computer system………………….. . b. A very small piece of silicon carrying a complex electrical circuit.…….. c. A big computer system used for large - scale operations. …………….. d.
The physical portion of a computer system. …………………..………. . e. A visual symbol used in a menu instead of natural language. ……….... f. A device moved by hand to indicate positions on the screen.……..….
. g. Date, programs, etc., not forming part of a computer, but used when operating it . …………… . TRANSLATION Translate into Vietnamese. Arithmetic operations 12 1. Addition: The concept of adding
stems from such fundamental facts that it does not require a definition and cannot be defined in formal fashion. We can use synonymous expressions, if we so much desire, like saying it is the process
of combining. Notation: 8 + 3 = 11; 8 and 3 are the addends, 11 is the sum. 2. Subtraction: When one number is subtracted from another the result is called the difference or remainder. The number
subtracted is termed the subtrahend, and the number from which the subtrahend is subtracted is called minuend. Notation: 15 – 7 = 8; 15 is the subtrahend, 7 is the minuend and 8 is the remainder.
Subtraction may be checked by addition: 8 + 7 = 15. 3. Multiplication: is the process of taking one number (called the multiplicand) a given number of times (this is the multiplier, which tells us
how many times the multiplicand is to be taken). The result is called the product. The numbers multiplied together are called the factors of the products. Notation: 12 × 5 = 60 or 12.5 = 60; 12 is
the multiplicand, 5 is the multiplier and 60 is the product (here, 12 and 5 are the factors of product). 4. Division: is the process of finding one of two factors from the product and the other
factor. It is the process of determining how many times one number is contained in another. The number divided by another is called the dividend. The number divided into the dividend is called the
divisor, and the answer obtained by division is called the quotient. Notation: 48 : 6 = 8; 48 is the dividend, 6 is the divisor and 8 is the quotient. Division may be checked by multiplication. 13
UNIT 2 PAST SIMPLE PRESENTATION 1. Here are the past tense forms of some verbs. Write them in the base forms. ………………… took ………………… decided ………………… believed ………………… set ………………… was (were) ………………… went
………………… reversed ………………… made Three of them end in –ed. They are the past tense form of regular verbs. The others are irregular. 2. Read the text below. In 1952, a major computing company made a
decision to get out of the business of making mainframe computers. They believed that there was only a market for four mainframes in the whole world. That company was IBM. The following years they
reversed their decision. In 1980, IBM determined that there was a market for 250,000 PCs, so they set up a special team to develop the first IBM PC. It went on sale in 1987 and set a world wide
standard for compatibility i.e. IBM-compatible as opposed the single company Apple computers standard. Since then, over seventy million IBM-compatible PCs, made by IBM and other manufacturers, have
been sold. Work in pairs Ask and answer questions about the text. Example: What did IBM company decide to do in 1952? − They decided to get out of the business of making mainframe computers. •
Grammar questions − Why is the past simple tense used in the text? − How do we form questions? − How do we form negatives? PRACTICE 1. Grammar 14 The present simple or the past simple. Put the verbs
in brackets in the correct forms. a. The problem of constructing a regular polygon of nine sides which …………..(require) the trisection of a 600 angle ……… (be) the second source of the famous problem.
b. The Greeks ……… (add) “the trisection problem” to their three famous unsolved problems. It ……… (be) customary to emphasize the futile search of the Greeks for the solution. c. The widespread
availability of computers …………… (have) in all, probability changed the world for ever. d. The microchip technology which ………… (make) the PC possible has put chips not only into computers, but also
into washing machines and cars. e. Fermat almost certainly ………… (write) the marginal note around 1630, when he first ………… (study) Diophantus’s Arithmetica. f. I ………… (protest) against the use of
infinitive magnitude as something completed, which ……… (be) never permissible in maths, one ………… (have) in mind limits which certain ratio ……….. (approach) as closely as desirable while other ratios
may increase indefinitely (Gauss). g. In 1676 Robert Hooke .……………(announce) his discovery concerning springs. He ……………..(discover) that when a spring is stretched by an increasing force, the stretch
varies directly according to the force. 2. Pronunciation There are three pronunciations of the past tense ending –ed: / t /, / id /, / d /. Put the regular past tense form in exercise 1 into the
correct columns. Give more examples. / t / / id / / d / ……………………... …………………………. ……………………… ……………………... .……………………….... ……………………… ……………………... …………………………. ……………………… ……………………... …………………………. ………………………
……………………... …………………………. ……………………… ……………………... …………………………. ……………………… ……………………... …………………………. ……………………… ……………………... …………………………. ……………………… 15 1 3. Writing Put the sentences into the right order to make
a complete paragraph. WHAT IS MATHEMATICS ? The largest branch is that which builds on ordinary whole numbers, fractions, and irrational numbers, or what is called collectively the real number
system. Hence, from the standpoint of structure, the concepts, axioms and theorems are the essential components of any compartment of maths. Maths, as science, viewed as whole, is a collection of
branches. These concepts must verify explicitly stated axioms. Some of the axioms of the maths of numbers are the associative, commutative, and distributive properties and the axioms about
equalities. Arithmetic, algebra, the study of functions, the calculus differential equations and other various subjects which follow the calculus in logical order are all developments of the real
number system. This part of maths is termed the maths of numbers. Some of the axioms of geometry are that two points determine a line, all right angles are equal, etc. From these concepts and axioms,
theorems are deduced. A second branch is geometry consisting of several geometries. Maths contains many more divisions. Each branch has the same logical structure: it begins with certain concepts,
such as the whole numbers or integers in the maths of numbers or such as points, lines, triangles in geometry. • Speaking and listening Work in pairs to ask and answer the question about the text in
exercise3. For example: How many branches are there in maths? What are they? Speaking a. Learn how to say these following in English. 1) ≡ 4) → 7) 10) ≥ 13) ± 2) ≠ 5) < 8) 11) α 14) / 3) ≈ 6) > 9) ≤
12) ∞ 16 b. Practice saying the Greek alphabet. A H N T B α η ν τ β θ ξ υ γ ι ο ϕ δ κ π ζ ε λ ρ ψ ς µ σ ω Θ Ξ ϒ Γ Ι Ο Φ ∆ Κ Π Χ Ε Λ Ρ Ψ Ζ Μ Σ Ω SKILLS DEVELOPMENT • Reading 1. Pre – reading task 1.1
Use your dictionary to check the meaning of the words below. triple (adj.) utilize (v.) conjecture (v.) bequeath (v.) conjecture (n.) tarnish (v.) subsequent (adj.) repute (v.) [ be reputed ] 1.2
Complete sentences using the words above. a. The bus is traveling at………………………………….... the speed. b. What the real cause was is open to……………………………….. . c. ……………………………………………events proved me wrong. d.
He is…………………………… as / to be the best surgeon in Paris. e. People’ve ……………………… solar power as a source of energy. f. Discoveries………………….. to us by scientists of the last century. g. The firm’s good
name was badly……………………by the scandal. 2. Read the text. FERMAT’S LAST THEOREM Pierre de Fermat was born in Toulouse in 1601 and died in 1665. Today we think of Fermat as a number theorist, in fact as
perhaps the most famous number theorist who ever lived. The history of Pythagorean triples goes back to 1600 B.C, but it was not until the seventeenth century A.D that mathematicians seriously
attacked, in general terms, the problem of finding positive integer solutions to the equation n n nx y z+ = . Many mathematicians conjectured that there are no positive integer solutions to this
equation if n is greater than 2. Fermat’s now famous conjecture was inscribed in the margin of his copy of the Latin translation of 17 Diophantus’s Arithmetica. The note read: “To divide a cube into
two cubes, a fourth power or in general any power whatever into two powers of the same denomination above the second is impossible and I have assuredly found an admirable proof of this, but the
margin is too narrow to contain it”. Despite Fermat’s confident proclamation the conjecture, referred to as “Fermat’s last theorem” remains unproven. Fermat gave elsewhere a proof for the case n = 4.
it was not until the next century that L.Euler supplied a proof for the case n = 3, and still another century passed before A.Legendre and L.Dirichlet arrived at independent proofs of the case n = 5.
Not long after, in 1838, G.Lame established the theorem for n = 7. In 1843, the German mathematician E.Kummer submitted a proof of Fermat’s theorem to Dirichlet. Dirichlet found an error in the
argument and Kummer returned to the problem. After developing the algebraic “theory of ideals”, Kummer produced a proof for “most small n”. Subsequent progress in the problem utilized Kummer’s ideals
and many more special cases were proved. It is now known that Fermat’s conjecture is true for all n < 4.003 and many special values of n, but no general proof has been found. Fermat’s conjecture
generated such interest among mathematicians that in 1908 the German mathematician P.Wolfskehl bequeathed DM 100.000 to the Academy of Science at Gottingen as a prize for the first complete proof of
the theorem. This prize induced thousands of amateurs to prepare solutions, with the result that Fermat’s theorem is reputed to be the maths problem for which the greatest number of incorrect proofs
was published. However, these faulty arguments did not tarnish the reputation of the genius who first proposed the proposition – P.Fermat. Comprehension check 1. Answer the following questions. a.
How old was Pierre Fermat when he died? b. Which problem did mathematicians face in the 17 century A.D? c. What did many mathematicians conjecture at that time? d. Who first gave a proof to Fermat’s
theorem? e. What proof did he give? f. Did any mathematicians prove Fermat’s theorem after him?
Các file đính kèm theo tài liệu này:
|
{"url":"http://www.zun.vn/tai-lieu/anh-van-chuyen-nganh-toan-dhsp-tp-hcm-1758/","timestamp":"2014-04-18T08:02:35Z","content_type":null,"content_length":"37092","record_id":"<urn:uuid:2009b05e-ba67-4058-a8c2-8e24861b6f90>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Circle inscribed in an equlitaeral triangle
May 31st 2012, 01:38 AM
Circle inscribed in an equlitaeral triangle
A circle of radius 4 is inscribed within an equilateral triangle. Find the area of the triangle.
I am unsure on how to find the height of the triangle and the base.
May 31st 2012, 03:08 AM
Re: Circle inscribed in an equlitaeral triangle
in circle radius =4
circumcircle radius = 8
length of altitude =12
You do the rest using the properties of 30-60-90 triangles
May 31st 2012, 04:56 AM
Re: Circle inscribed in an equlitaeral triangle
If an equilateral triangle has side length s, then any altitude divides it into two right triangles with hypotenuse of length s and one leg of length s/2. By the pythagorean theorem, the other
leg, the altitude of the equilateral triangle has length b given by $b^2+ \frac{s^2}{4}= s^2$ so that $b^2= s^2- \frac{s^2}{4}= \frac{3s^2}{4}$ so that $b= \frac{\sqrt{3}}{2}s$.
Of course all three altitudes cross at a single point. Let x be the distance from the foot of one altitude to that point so the distance from that point to a vertex is $\frac{\sqrt{3}}{2}s- x$.
Then we have a right triangle whose vertices are that point of intersection, the foot of an altitude, and one vertex on that base. That right triangle has one leg of length s/2, another leg of
length x, and hypotenuse of length $\frac{\sqrt{3}}{2}s- x$. Put those into the Pythagorean theorem to find x, the radius of the circle, as a function of s.
May 31st 2012, 07:12 AM
Re: Circle inscribed in an equlitaeral triangle
Hello, johnsy123!
Here is another solution, using some clever formulas.
A circle of radius 4 is inscribed within an equilateral triangle.
Find the area of the triangle.
/ \
/ \
/ \
/ \
/ \
/ * * * \
/* *\
* *
* *
/ \
/* *\
/ * * * \
/ * | * \
/ | \
/ * |4 * \
/ * | * \
/ * | * \
B *---------------*-*-*---------------* C
: - - - - - - - - x - - - - - - - - :
The area of an equilateral triangle with side $x$ is: . $A \:=\:\tfrac{\sqrt{3}}{4}x^2$ .[1]
The area of a triangle is given by: . $A \:=\:\tfrac{1}{2}pr$
. . where $p$ is the perimeter and $r$ is the radius of the inscribed circle.
So we have: . $A \:=\:\tfrac{1}{2}(3x)(4) \:=\:6x$ .[2]
Equate [1] and [2]: . $\tfrac{\sqrt{3}}{4}x^2 \:=\:6x \quad\Rightarrow\quad \sqrt{3}x^2 \:=\:24x$
Since $x e 0$, divide by $x\!:\;\sqrt{3}x \:=\:24 \quad\Rightarrow\quad x \:=\:\tfrac{24}{\sqrt{3}} \:=\:8\sqrt{3}$
Substitute into [1]: . $A \;=\;\tfrac{\sqrt{3}}{4}(8\sqrt{3})^2 \;=\;48\sqrt{3}$
June 2nd 2012, 04:37 PM
Re: Circle inscribed in an equlitaeral triangle
I don't understand how you got the area of the equilateral triangle to be A= SQRT{3}X^2/4.......
June 2nd 2012, 05:14 PM
Re: Circle inscribed in an equlitaeral triangle
How do you prove that the altitude is 12? you can't really just say that the distance from the tip of the circle to the sharp point of the triangle is the same as the radius. A proven statement
would better clarify my understanding.
June 3rd 2012, 03:47 AM
Re: Circle inscribed in an equlitaeral triangle
Hi johnsey123,
Draw an equilateral triangle and its three medians.These are perpendicular to each side.Note that you have created 6 triangles which are congruent to each other.Each one is a 30-60- 90 triangle
with one side given (4).(incircle radius)The alttude has another segment which you derive from the property of 30-60-90 triangle. The ratio of its sides is 2-1-rad3 so the hyp is 2 times 4= 8
making the altitude 4+8 =12.The other leg of this triangle is 4rad3= 1/2 of the side of the equil tria so the area of the equilateral tri is 12 *4rad3 = 48rad3. You could also calculate the area
by caculating the area of the small tri and multiplying by 6
June 3rd 2012, 06:41 PM
Re: Circle inscribed in an equlitaeral triangle
Can you please construct a diagram to show this? I think i may have constructed one but i am not so sure if it is correct.
June 3rd 2012, 07:08 PM
Re: Circle inscribed in an equlitaeral triangle
Go to post 4 and finish adding lines to the drawing there as I described
|
{"url":"http://mathhelpforum.com/trigonometry/199494-circle-inscribed-equlitaeral-triangle-print.html","timestamp":"2014-04-18T16:04:56Z","content_type":null,"content_length":"14427","record_id":"<urn:uuid:cb0052b1-7132-46d3-89d0-b29a6d1bd816>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integral ∫x^3*√(x^2-5) dx
You don't need parts. Your first substitution is the one to use, u = x^2-5. Notice that means x^2=5+u.
I did that, u=x^2-5, du= 2xdx, x^2=u+5 and I got :
so I multiplied and I got : 1/2*∫u^(3/2)+5u^(1/2)
1/2*1/5 * ∫u^(3/2)+u^(1/2)
and I got u^(5/2)/25+(2u^(3/2))/3 and It's wrong :/ where did I failed?
|
{"url":"http://www.physicsforums.com/showthread.php?p=4172727","timestamp":"2014-04-17T21:31:49Z","content_type":null,"content_length":"49993","record_id":"<urn:uuid:56d26e82-fc9f-4f05-a09d-4be1441ed816>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to Mathematical Statistics and Its Applications
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/introduction-mathematical-statistics-its/bk/9780321693945","timestamp":"2014-04-19T20:04:13Z","content_type":null,"content_length":"35949","record_id":"<urn:uuid:0b5f20d3-ac1b-46c0-b151-c89619318efd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Naplan Style Maths Tests - Year 9
By Eureka Multimedia Pty Ltd
Open the Mac App Store to buy and download apps.
Based on Australian Schools Naplan Tests.
Eureka's Naplan Style Maths Tests for Year 9 is a revolutionary maths tests creator that help students to reinforce maths lessons taught in class and in particular, the the type of questions asked in
Australia's National Assessment Program - Literacy and Numeracy (NAPLAN) type tests.
Maths skills improve with practice. Eureka's Naplan Style Maths Tests generates 1000's of unique Naplan-style maths questions providing students with a wealth of practice questions.
Step-by-step worked solutions on the answer sheets or on-screen allow students to compare their solutions and identify any mistakes.
A student's progress is tracked which allows areas of weakness to be identified.
Eureka's Naplan Style Maths Tests - Year 9 allows the student to tailor exams to concentrate on particular topics for additional practice and to improve overall test outcomes.
* Generates 1000's of unique NAPLAN-type maths questions.
* Based on past NAPLAN tests.
* Tests can be completed on-screen or in printed form to simulate test conditions.
* Includes Step by step worked solutions.
* Progress is tracked and areas of weakness are identified.
* Combines calculator and non-calculator tests
Some of the topics covered:
• Indices
- Index Notation
- Index Laws
- Negative Indices
- Fractional Indices
- Scientific Notation
- Comparing Numbers Expressed in Scientific Notation
- Calculations in Scientific Notation Using the Index Laws
• Perimeter, Area and Volume
- Perimeter
- Perimeter of a Sector
- Some Special Sectors
- Perimeter of Composite Figures
- Applications of Perimeter
- Area
- Area of a Sector
- Applications of Area
- Surface Area
- Volume of Prisms and Cylinders
- Capacity and Mass
• Algebra
- Number Patterns
- Number Sentences
- Algebraic Terms
- Four Operations
- Four Laws of Algebra
- Substitution in Algebraic Expressions
- Expansion of Algebraic Expressions
- Factorisation of Algebraic Expressions
- Addition and Subtraction of Algebraic Fractions
- Multiplication and Division of Algebraic Fractions
• Equations and Inequalities
- Linear Equations
- Equations with Pronumerals on One Side
- Equations with Pronumerals on Both Sides
- Problem Solving
- Formulae
- Inequalities
- Graphing Inequalities
• Probability
- Terms Used in Probability
- Relative Frequency
- Experimental Probability
- Equally Likely Outcomes
- Theoretical Probability
- Probability Scale
- Complementary Events
- Representing Sample Spaces
• Coordinate Geometry
- Coordinates of a Point
- Length of an Interval
- Midpoint of an Interval
- Graphs of Linear Relations
- Sketching a Linear Graph
- Horizontal and Vertical Lines
- Gradient of a Line
- Finding the Gradient of a Line
- Graphs of Non-Linear Relations
- The Parabola
• Data, Graphs and Statistics
- Data
- Graphs
- Tables
- Range
- Measures of Central Tendency
- Dot Plot
- Stem-and-Leaf Plot
- Back-to-Back Stem-and-Leaf Plot
- Frequency Distribution Table
- Histogram and Frequency Polygon
- Cumulative Frequency
- Finding Mean and Median from a Frequency Table
- Groups of Data
- Histogram and Frequency Polygon using Grouped Data
- Cumulative Frequency Histogram and Polygon
- Finding the Median from the Cumulative Frequency Ploygon
• Trigonometry
- Identifying and Naming the Sides of a Right-Angled Triangle
- Ratios of Sides in Similar Right-Angled Triangles
- Trigonometric Ratios
- The Calculator in Trigonometry
- Finding Angles of a Right-Angled Triangle
- Finding the Length of a Side of a Right-Angled Triangle
- Angles of Elevation and Depression
- Directions and Bearings
View in Mac App Store
• USD 18.99
• Category: Education
• Released: 10 May 2012
• Version: 1.0.0
• Size: 36.8 MB
• Language: English
• Developer: Eureka Multimedia Pty Ltd
Compatibility: OS X 10.6.6 or later
More Apps by Eureka Multimedia Pty Ltd
|
{"url":"https://itunes.apple.com/ug/app/naplan-style-maths-tests-year/id524594519?mt=12","timestamp":"2014-04-20T07:12:15Z","content_type":null,"content_length":"30962","record_id":"<urn:uuid:7bd4dfc2-7236-4133-9458-ead1a1de9b81>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is Lemma A.1.5.7 in Higher Topos Theory correct?
up vote 16 down vote favorite
Hello to everyone,
I am studying the properties of combinatorial model categories, following the exposition given by Jacob Lurie in Higher Topos Theory ([HTT] from now on), in section A.2.6.
At some point, he needs to show that in a presentable category $\mathcal C$ and a large enough set $S$ of morphisms in $\mathcal C$, the class generated by $S$ under transfinite pushouts is the same
as the class generated by $S$ under retracts and transfinite pushouts; that is: we don't need retracts. This is accomplished in Proposition A.1.5.12.
In the proof of Proposition A.1.5.12, he needs to replace a sequence of morphisms with a tree, satisfying some additional condition; the existence of such a replacement would be Lemma A.1.5.7, but I
have problems in understanding why the proof should be correct.
In particular, with the notations used there, I could consider for every $\beta \in A$ the subset $B := \{\alpha \in A \mid \alpha \preceq \beta\}$; this would be $\preceq$-downward closed by
construction; since it has a final object, we obtain $$ Y_B^\prime := \varinjlim_{\alpha \in B} Y_\alpha^\prime \simeq Y_\beta^\prime $$ On the other side, condition (1) implies that $B$ has a final
object also when thought as subset of $(A,\le)$; it follows that $$ Y_B := \varinjlim_{\alpha \in B} Y_\alpha \simeq Y_\beta $$ i.e. $Y_\beta \simeq Y_\beta^\prime$, so that the diagram shouldn't be
changed. But then, I don't see how to prove that $\{Y_\alpha\}_{\alpha \in A'}$ is a $S$-tree (Definition A.1.5.1 in [HTT]).
Therefore, my questions are:
1. do you agree with me that the result is seemingly false or can you explain me how the proof is supposed to work?
2. do you think that the Proposition A.1.5.12 is correct?
3. do you have any other reference for a proposition which is similar to Proposition A.2.6.8 (which is used in the proof of the Smith's characterization of combinatorial model structures)?
Edit. I found a related question here. Even though it doesn't answer my question, it fixes the notations I am using, hence I am signaling it for your convenience.
ct.category-theory at.algebraic-topology topos-theory higher-category-theory
add comment
1 Answer
active oldest votes
Looks like a typo. Condition $(4)$ should say that $B$ is downward closed under $\leq$, not under $\preceq$ (otherwise, $Y_B$ is not defined).
up vote 29 down vote accepted
Now I see. Thank you very much for your answer! – Mauro Porta Jun 14 '13 at 9:45
add comment
Not the answer you're looking for? Browse other questions tagged ct.category-theory at.algebraic-topology topos-theory higher-category-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/133672/is-lemma-a-1-5-7-in-higher-topos-theory-correct/133703","timestamp":"2014-04-16T22:47:48Z","content_type":null,"content_length":"53616","record_id":"<urn:uuid:9c537fdf-31d7-4298-8aae-62578d8c8476>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lab 7 - Simple Harmonic Motion
Have you ever wondered why a grandfather clock keeps accurate time? The motion of the pendulum is a particular kind of repetitive or periodic motion called simple harmonic motion, or SHM. The
position of the oscillating object varies sinusoidally with time. Many objects oscillate back and forth. The motion of a child on a swing can be approximated to be sinusoidal and can therefore be
considered as simple harmonic motion. Some complicated motions like turbulent water waves are not considered simple harmonic motion. When an object is in simple harmonic motion, the rate at which it
oscillates back and forth as well as its position with respect to time can be easily determined. In this lab, you will analyze a simple pendulum and a spring-mass system, both of which exhibit simple
harmonic motion.
Discussion of Principles
A particle that vibrates vertically in simple harmonic motion moves up and down between two extremes y = ±A. The maximum displacement A is called the amplitude. This motion is shown graphically in
the position-versus-time plot in Fig. 1.
Figure 1: Position plot showing sinusoidal motion of an object in SHM
One complete oscillation or cycle or vibration is the motion from, for example,
y = −A
y = +A
and back again to
y = −A.
The time interval T required to complete one oscillation is called the period. A related quantity is the frequency f, which is the number of vibrations the system makes per unit of time. The
frequency is the reciprocal of the period and is measured in units of Hertz, abbreviated Hz;
1 Hz = 1 s^−1.
If a particle is oscillating along the y-axis, its location on the y-axis at any given instant of time t, measured from the start of the oscillation is given by the equation Recall that the velocity
of the object is the first derivative and the acceleration the second derivative of the displacement function with respect to time. The velocity v and the acceleration a of the particle at time t are
given by
( 4 )
a = −(2πf)^2[A sin(2πft)]
Notice that the velocity and acceleration are also sinusoidal. However the velocity function has a 90° or π/2 phase difference while the acceleration function has a 180° or π phase difference
relative to the displacement function. For example, when the displacement is positive maximum, the velocity is zero and the acceleration is negative maximum. Substituting from Eq. (1)
f = 1/T
into Eq. (4)
a = −(2πf)^2[A sin(2πft)]
yields From Eq. (5)
a = −4π^2f^2y
we see that the acceleration of an object in SHM is proportional to the displacement and opposite in sign. This is a basic property of any object undergoing simple harmonic motion. Consider several
critical points in a cycle as in the case of a spring-mass system in oscillation. A spring-mass system consists of a mass attached to the end of a spring that is suspended from a stand. The mass is
pulled down by a small amount and released to make the spring and mass oscillate in the vertical plane. Figure 2 shows five critical points as the mass on a spring goes through a complete cycle. The
equilibrium position for a spring-mass system is the position of the mass when the spring is neither stretched nor compressed.
Figure 2: Five key points of a mass oscillating on a spring.
The mass completes an entire cycle as it goes from position A to position E. A description of each position is as follows:
• Position A: The spring is compressed; the mass is above the equilibrium point at
y = A
and is about to be released.
• Position B: The mass is in downward motion as it passes through the equilibrium point.
• Position C: The mass is momentarily at rest at the lowest point before starting on its upward motion.
• Position D: The mass is in upward motion as it passes through the equilibrium point.
• Position E: The mass is momentarily at rest at the highest point before starting back down again.
By noting the time when the negative maximum, positive maximum, and zero values occur for the oscillating object's position, velocity, and acceleration, you can graph the sine (or cosine) function.
This is done for the case of the oscillating spring-mass system in the table below and the three functions are shown in Fig. 3. Note that the positive direction is typically chosen to be the
direction that the spring is stretched. Therefore, the positive direction in this case is down and the initial position A in Fig. 2 is actually a negative value. The most difficult parameter to
analyze is the acceleration. It helps to use Newton's second law, which tells us that a negative maximum acceleration occurs when the net force is negative maximum, a positive maximum acceleration
occurs when the net force is positive maximum and the acceleration is zero when the net force is zero.
Position Velocity Acceleration
Point A neg max zero pos max
Point B zero pos max zero
Point C pos max zero neg max
Point D zero neg max zero
Point E neg max zero pos max
Figure 3: Position, velocity and acceleration vs. time
For this particular initial condition (starting position at A in Fig. 2), the position curve is a cosine function (actually a negative cosine function), the velocity curve is a sine function, and the
acceleration curve is just the negative of the position curve.
Mass and Spring
A mass suspended at the end of a spring will stretch the spring by some distance y. The force with which the spring pulls upward on the mass is given by Hooke's law where k is the spring constant and
y is the stretch in the spring when a force F is applied to the spring. The spring constant k is a measure of the stiffness of the spring. The spring constant can be determined experimentally by
allowing the mass to hang motionless on the spring and then adding additional mass and recording the additional spring stretch as shown below. In Fig. 4a the weight hanger is suspended from the end
of the spring. In Fig. 4b, an additional mass has been added to the hanger and the spring is now extended by an amount
. This experimental set-up is also shown in the photograph of the apparatus in Fig. 5.
Figure 4: Set up for determining spring constant
Figure 5: Photo of set-up for determining spring constant
When the mass is motionless, its acceleration is zero. According to Newton's second law the net force must therefore be zero. There are two forces acting on the mass; the downward gravitational force
and the upward spring force. See the free-body diagram in Fig. 6 below.
Figure 6: Free-body diagram for the spring-mass system
So Newton's second law gives us where
is the change in mass and
is the change in the stretch of the spring caused by the change in mass, g is the gravitational acceleration, and k is the spring constant. Eq. (7)
Δmg − kΔy = 0
can also be expressed as Newton's second law applied to this system is
ma = F = −ky.
Substitute from Eq. (5)
a = −4π^2f^2y
for the acceleration to get from which we get an expression for the frequency f and the period T. Using Eq. (11) we can predict the period if we know the mass on the spring and the spring constant.
Alternately, knowing the mass on the spring and experimentally measuring the period, we can determine the spring constant of the spring. Notice that in Eq. (11) the relationship between T and m is
not linear. A graph of the period versus the mass will not be a straight line. If we square both sides of Eq. (11), we get Now a graph of
versus m will be a straight line and the spring constant can be determined from the slope.
Simple Pendulum
The other example of simple harmonic motion that you will investigate is the simple pendulum. The simple pendulum consists of a mass m, called the pendulum bob, attached to the end of a string. The
length L of the simple pendulum is measured from the point of suspension of the string to the center of the bob as shown in Fig. 7 below.
Figure 7: Experimental set-up for a simple pendulum
If the bob is moved away from the rest position through some angle of displacement θ as in Fig. 7, the restoring force will return the bob back to the equilibrium position. The forces acting on the
bob are the force of gravity and the tension force of the string. The tension force of the string is balanced by the component of the gravitational force that is in line with the string (i.e.
perpendicular to the motion of the bob). The restoring force here is the tangential component of the gravitational force.
Figure 8: Simple pendulum
When we apply trigonometry to the smaller triangle in Fig. 8, we get the magnitude of the restoring force
|F| = mg sin θ.
This force depends on the mass of the bob, the acceleration due to gravity g and the sine of the angle through which the string has been pulled. Again Newton's second law must apply, so
( 13 )
ma = F = −mg sin θ
where the negative sign implies that the restoring force acts opposite to the direction of motion of the bob. Since the bob is moving along the arc of a circle, the angular acceleration is given by
α = a/L.
From Eq. (13)
ma = F = −mg sin θ
we get In Fig. 9 the blue solid line is a plot of θ versus sin(θ) and the straight line is a plot of θ in degrees versus θ in radians. For small angles these two curves are almost indistinguishable.
Therefore, as long as the displacement θ is small we can use the approximation sin θ ≅ θ.
Figure 9: Graphs of sin θ versus θ
With this approximation Eq. (14) becomes Equation (15) shows the (angular) acceleration to be proportional to the negative of the (angular) displacement and therefore the motion of the bob is simple
harmonic and we can apply Eq. (5)
a = −4π^2f^2y
to get Combining Eq. (15) and Eq. (16)
α = −4π^2f^2 θ
, and simplifying, we get and Note that the frequency and period of the simple pendulum do not depend on the mass.
The objective of this lab is to understand the behavior of objects in simple harmonic motion by determining the spring constant of a spring-mass system and a simple pendulum.
• Assorted masses
• Spring
• Meter stick
• Stand
• Stopwatch
• String
• Pendulum bob
• Protractor
• Balance
Using Hooke's law you will determine the spring constant of the spring by measuring the spring stretch as additional masses are added to the spring. You will determine the period of oscillation of
the spring-mass system for different masses and use this to determine the spring constant. You will then compare the spring constant values obtained by the two methods. In the case of the simple
pendulum, you will measure the period of oscillation for varying lengths of the pendulum string and compare these values to the predicted values of the period.
Procedure A: Determining Spring Constant Using Hooke's Law
• 1
Starting with 50 g, add masses in steps of 50 g to the hanger. As you add each 50 g mass, measure the corresponding elongation y of the spring produced by the weight of these added masses. Enter
these values in Data Table 1.
• 2
Use Excel to plot m versus y. See Appendix G.
• 3
Use the trendline option in Excel to determine the slope of the graph. Record this value on the worksheet. See Appendix H.
• 4
Use the value of the slope to determine the spring constant k of the spring. Record this value on the worksheet.
Checkpoint 1:
Ask your TA to check your table and Excel graph.
Procedure B: Determining Spring Constant from
vs. m Graph
We have assumed the spring to be massless, but it has some mass, which will affect the period of oscillation. Theory predicts and experience verifies that if one-third the mass of the spring were
added to the mass m in Eq. (11), the period will be the same as that of a mass of this total magnitude, oscillating on a massless spring.
• 5
Use the balance to measure the mass of the spring and record this on the worksheet. Add one-third this mass to the oscillating mass before calculating the period of oscillation. If the mass of
the spring is much smaller than the oscillating mass, you do not have to add one-third the mass of the spring.
• 6
Add 200 g to the hanger.
• 7
Pull the mass down a short distance and let go to produce a steady up and down motion without side-sway or twist. As the mass moves downward past the equilibrium point, start the clock and count
"zero." Then count every time the mass moves downward past the equilibrium point, and on the 50th passage stop the clock.
• 8
Repeat step 7 two more times and record the values for the three trials in Data Table 2 and determine an average time for 50 oscillations.
• 9
Determine the period from this average value and record this on the worksheet.
• 10
Repeat steps 7 through 9 for three other significantly different masses.
• 11
Use Excel to plot a graph of
• 12
Use the trendline option in Excel to determine the slope and record this value on the worksheet.
• 13
Determine the spring constant k from the slope and record this value on the worksheet.
• 14
Calculate the percent difference between this value of k and the value obtained in procedure A using Hooke's law. See Appendix B.
Checkpoint 2:
Ask your TA to check your table values and calculations.
Procedure C: Simple Pendulum
• 15
Adjust the pendulum to the greatest length possible and firmly fasten the cord. With a 2-meter stick, carefully measure the length of the string, including the length of the pendulum bob. Use a
vernier caliper to measure the length of the pendulum bob. See Appendix D. Subtract one-half of this value from the length previously measured to get the value of L and record this in Data Table
3 on the worksheet.
• 16
Using the accepted value of 9.81
for g, predict and record the period of the pendulum for this value of L.
• 17
Pull the pendulum bob to one side and release it. Use as small an angle as possible, less than 10°. Make sure the bob swings back and forth instead of moving in a circle. Using the stopwatch
measure the time required for 50 oscillations of the pendulum and record this in Data Table 3.
• 18
Repeat step 17 two more times and record the values for the three trials in Data Table 3 and determine an average time for 50 oscillations.
• 19
Determine the period from this average value and record this on the worksheet.
• 20
Calculate the percent error between this value and the predicted value of the period.
• 21
Repeat steps 16 through 20 for three other significantly different lengths.
Checkpoint 3:
Ask your TA to check your table values and calculations.
|
{"url":"http://www.webassign.net/labsgraceperiod/ncsulcpmech2/lab_7/manual.html","timestamp":"2014-04-18T20:50:02Z","content_type":null,"content_length":"54663","record_id":"<urn:uuid:82857f3c-184a-484f-b848-ef9220d2a5ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Congruence Proofs: Summations
October 28th 2009, 06:52 AM #1
Sep 2009
Congruence Proofs: Summations
Although I can prove these algebraically via induction, how are the following proved using modular arithmetic (where "=" means "congruent to")?
1+2+...+(n-1)=0 mod n iff n is odd
1^2+2^2+...+(n-1)^2=0 mod n iff n=+/-1 mod 6
1^3+2^3+...+(n-1)^3=0 mod n iff n is not congruent to 2 mod 4
$1+2 +...+(n-1)=\frac{(n-1)n}{2}$...can you see how the given condition on n makes this expression an integer multiple of n?
Try to do the next ones in a similar way.
For the sum of the first n cubes, if we let n=4k+i, where i={0, 1, 2, 3} we expect to NOT get an integer multiple of k in the case of 4k+2, which happens to be true when you work out the algebra.
But, we would expect integer multiples of k in the other three cases. However, 4k+3 does not produce an integer multiple of k. What am I doing wrong?
For the sum of the first n cubes, if we let n=4k+i, where i={0, 1, 2, 3} we expect to NOT get an integer multiple of k in the case of 4k+2, which happens to be true when you work out the algebra.
But, we would expect integer multiples of k in the other three cases. However, 4k+3 does not produce an integer multiple of k. What am I doing wrong?
$n=4k+3 \Longrightarrow \left(\frac{n(n-1)}{2}\right)^2=\left(\frac{(4k+3)\cdot 2(2k+1)}{2}\right)^2=$$(2k+1)^2(4k+3)^2=(2k+1)^2\cdot n$ , an integer multiple of n, indeed.
OOPS! I forgot to substitute on that next to last step. I can do the rest of these questions now. Thanks again, brother!
October 28th 2009, 08:24 AM #2
Oct 2009
October 28th 2009, 08:41 AM #3
Sep 2009
October 28th 2009, 09:14 AM #4
Oct 2009
October 28th 2009, 09:41 AM #5
Sep 2009
|
{"url":"http://mathhelpforum.com/number-theory/110991-congruence-proofs-summations.html","timestamp":"2014-04-16T09:12:13Z","content_type":null,"content_length":"46655","record_id":"<urn:uuid:22625fa2-86c0-40b8-a0eb-ad4c9c2de119>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
St Albans, NY Algebra Tutor
Find a St Albans, NY Algebra Tutor
...Sometimes that's because students feel they weren't very good at math in school, or because it has been awhile since they have used formal equations in their everyday life. I help students
prepare for the QR section by reviewing all the concepts tested, spending the most time on the ones likely ...
18 Subjects: including algebra 1, algebra 2, geometry, GRE
...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond.
26 Subjects: including algebra 2, geometry, algebra 1, physics
...I'm certified in general education and special education. I received my M.S.Ed from Hunter College. I work in an integrated classroom which includes 6 students who have special needs.
19 Subjects: including algebra 1, reading, writing, geometry
...I believe education is a gift to be enjoyed, and I make learning fun and rewarding, as it should be. Contact me through WyzAnt today so I can help you or your child do your/his/her best and
achieve your/his/her goals!As an interviewer for applicants to Harvard University and an alumna of the Spe...
31 Subjects: including algebra 2, algebra 1, reading, English
...I am currently applying to medical schools. I have tutored for over two years around NYC and am now available to tutor both in NYC and Long Island. I have tutored students in preparation for
the SAT and the SHSAT, as well as in topics ranging from biology and physics to English (essay compositi...
41 Subjects: including algebra 1, chemistry, algebra 2, reading
Related St Albans, NY Tutors
St Albans, NY Accounting Tutors
St Albans, NY ACT Tutors
St Albans, NY Algebra Tutors
St Albans, NY Algebra 2 Tutors
St Albans, NY Calculus Tutors
St Albans, NY Geometry Tutors
St Albans, NY Math Tutors
St Albans, NY Prealgebra Tutors
St Albans, NY Precalculus Tutors
St Albans, NY SAT Tutors
St Albans, NY SAT Math Tutors
St Albans, NY Science Tutors
St Albans, NY Statistics Tutors
St Albans, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/st_albans_ny_algebra_tutors.php","timestamp":"2014-04-17T15:44:15Z","content_type":null,"content_length":"23891","record_id":"<urn:uuid:d5a28643-ec9c-4702-a8ec-66c84edbfa79>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pascal's Mystic Hexagram
One of the first applications of modern mathematics to emerge during the Renaissance was the study of perspective for the purposes of painting and architecture. This led naturally to a consideration
of projection, i.e., the mapping of images from one plane surface S to another plane surface S' by projection from a point O. The point p on S maps to a point p' on S' such that O, p and p' are
colinear. Obviously the distances between points and the angles between lines are not preserved under a projective mapping, but certain properties of figures are preserved. For example, straight
lines map to straight lines, and conics map to conics. In addition to these qualitative invariants, there are also quantitative invariants. Consider four points A,B,C,D on a line in S, projected
from the point O to the points A', B', C', D' on a line in S' as shown below.
Applying the law of sines to the triangles OBA and OBD, we have
The ratio of these two is
Likewise if we apply the law of sines to the triangles OCD and OCA we get
Thus if we multiply AB/BD by DC/CA the sines of m and n cancel out, and are left with
This quantity is called a cross-ratio, and we see that it depends only on the angles between the projecting rays emanating from the point O, so it has the same value if we project the four points
A,B,C,D onto any other line. In other words, we have
Furthermore, since the distances between the points don't change if we change the position of the projecting point O, it follows that the cross-ratio is the same for any projection point. Also,
given the positions of any three of the co-linear points A,B,C,D and the value of their cross-ratio, the position of the fourth point is fully determined.
It's worth mentioning that although we speak of "the" cross-ratio of four points, the value depends on the order in which we take the points. There are 4! = 24 possible permutations, but it's not
difficult to show that, because of symmetries, there are only six distinct values of the cross-ratio, and these come in reciprocal pairs. Thus there are three real values p,q r such that the six
cross-ratios for four given points are p, 1/p, q, 1/q, r, 1/r. In addition, we have the relations
All 24 permutations of A,B,C,D can be produced by combining the six possible transpositions of the points. These transpositions have the effect of cycling between pairs of values, as summarized
So, to define a single cross-ratio we need to specify four co-linear points (or, equivalently, four lines that intersect at a given point) in a particular order. We will use the notation [ABCD] to
indicate the cross-ratio of the co-linear points A,B,C,D in that order. When referring to four lines emanating from the point O and passing through the points A,B,C,D we will designate the origin
point with a subscript, as in [ABCD][O].
Incidentally, the cross-ratio is also an important quantity in complex analysis. Given any four complex numbers z[1], z[2], z[3], z[4], the cross-ratio defined as
is invariant under arbitrary Mobius (i.e., linear fractional) transformations. This quantity is a purely real number if and only if z[1] through z[4] lie on a straight line or a circle.
Geometrically we can define a cross-ratio either in terms of four points on a line or in terms of an origin and four arbitrary points in a plane with the origin, since these four points specify a set
of four lines through the origin. For example, we could specify a cross-ratio in terms of a given origin point O and four points A,B,C D on the perimeter of a circle as shown below.
Since the points A,B,C,D are not co-linear, the cross-ratio depends on the position of the origin point O. Thus if we take the same four points but consider the lines through them emanating from a
different origin located, say, on the circle itself, the cross-ratio is different, because the angles between the lines are different. However, it's easy to see that the cross-ratio for four given
points on a circle is the same for an origin located anywhere on the same circle. This is illustrated in the figure below with the origin point O located at an arbitrary point on the circle.
We know from elementary geometry that if N is the center of the circle then the angle ANB is twice the angle AOB for any point O on the circle. Therefore, if the points A through D are fixed, the
cross-ratio for any origin point on the circle is invariant. Furthermore, since the cross-ratio is invariant under arbitrary projections, this proposition is valid for arbitrary conics as well. In
other words, given any four points on a conic (such as an ellipse, hyperbola, or parabola), the cross-ratio for the lines through those points is invariant for any origin point located anywhere on
the same conic.
To illustrate the usefulness of these ideas, consider six points A through F placed arbitrarily on a conic, such as on an ellipse as shown below, draw the lines AB, BC, ..., EF, FA, making a hexagon.
Visually it appears that the points G, H, I at the intersections of opposite edges of this hexagon are co-linear, as indicated by the red line, but we would like to prove this. In other words, we
wish to prove that the segments GH and HI really lie along a single line.
A typical proof making use of cross-ratios begins with the fact that [EKIF][H] is equal to [EKIF][C]. Now, the lines emanating from C and passing through K and I also pass through D and B
respectively, so [EKIF][C] is equal to [EDBF][C], and we can switch origin points from C to A, because they are both on the ellipse, so these cross-ratios equal [EDBF][A]. We notice that the lines
emanating from A and passing through the points B and F also pass through G and J respectively, so the cross-ratio equals [EDGJ][A]. Since these points are co-linear, the cross-ratio also applies to
the origin H, so it equals [EDGJ][H]. Thus we've shown that [EKIF][H] equals [EDGJ][H]. The points K and D represent the same line through H, and the points F and J represent the same line through
H, so it follows that the points I and G fall on a single line through H, which was to be shown.
As noted previously, a conic curve is projected to a conic curve, because a projection maps curves of second degree to curves of second degree. Therefore, the above theorem applies to any hexagon
inscribed in any conic. For example, if we place the vertices of a hexagon on a hyperbola we arrive at a figure as shown below.
Notice that we extend the edges of the hexagon as necessary to find the intersection points of opposite edges. We can also arrange the vertices so that the points of intersection lie entirely
outside the figure, as shown for the ellipse below.
This theorem was first stated in 1640 by Blaise Pascal (1623-1662) when he was just 16 years of age. His father, Etienne, had retired from his civil service job in 1631 to devote himself to the
education of Blaise and his two older sisters. At the age of 14 Pascal joined his father as a member of the group of mathematicians and scientists associated with Marin Mersenne. In addition to the
Pascals, this remarkable group included (either in the weekly meetings or by correspondence) Descartes, Fermat, Gassendi, Desargues, Roberval, Beeckman, Peiresc, and Hobbes, along with extensive
communications with Huygens, Torricelli, Galileo, and many others. Girard Desargues (1593-1662) originated the idea of projective geometry, and Pascal wrote that he (Pascal) owed everything he had
found on the subject to the writings of Desargues. Unfortunately Desargues' brilliant innovation occurred simultaneously with the introduction of analytic geometry by Descartes, so the ideas of
projective geometry were over-shadowed for nearly two hundred years. The younger Pascal was one of the few people to appreciate the power and beauty of Desargues' approach to geometry, but Pascal
himself soon gave up mathematics and devoted most of the rest of his short life to theology.
Oddly enough, Pascal didn't actually present "his theorem" as a theorem, nor did he ever publish a proof of it. The only thing he ever published on the subject was a brief summary, announcing
several results without proof. This was to be followed by a comprehensive treatise on conics, which Pascal apparently worked on for many years, but then abandoned. No copies have survived, so we
have only the brief statements from the 1640 "Essay on Conics" (along with notes that Leibniz took when he saw a copy of Pascal's unpublished treatise). In terms of the lettering in the figure
above, Pascal stated that:
If through points K,V any conic section whatever passes cutting the lines MK, MV, SK, SV in points P,O,N,Q, then the lines MS, NO, PQ will be of the same order.
Pascal carried over from Desargues the definition of an ordonnance of lines as a set of lines passing through a common point, so when he says the lines MS, NO, PQ are of the same order, he means they
pass through a single point. This of course is equivalent to saying that the point of intersection of the lines NO and PQ is co-linear with the points M and S, which is the way "Pascal's Theorem" is
usually expressed today.
The above statement follows two lemmas, the first of which is identical to the above statement except that it refers only to a circle rather than a general conic section. The second lemma states
simply that "If through the same line several planes are passed, and are cut by another plane, all lines of intersection of these planes are of the same order as the line through which these planes
pass". In other words, if a plane containing a certain line is cut by another plane, then the line of intersection of these two planes passes through the given line. (Of course, this is understood
to be in the sense of projective geometry with its lines and points at infinity to cover the cases of parallel lines and planes.) Then Pascal says "On the basis of these two lemma's and several easy
deductions from them, we can demonstrate" the general proposition about conics quoted above. After listing several other results, the brief essay concludes
There are many other problems and theorems, and many deductions which can be made from what has been stated above, but the lack of confidence which I have, owing to my little experience and capacity,
does not allow me to go further into the subject until it has passed the examination of able men who may be willing to take this trouble. After that if someone thinks the subject worth continuing,
we shall endeavor to extend it as far as God gives us the strength.
Alas these were the last words Pascal ever published on the subject.
Of course, Pascal's celebrated theorem is a generalization of Pappus' Theorem. Ironically, it can be argued that a true understanding of it comes only when viewed from the algebraic standpoint of
Descartes' analytic geometry, as pointed out by Julius Plücker (1801-1868) using the fact that two algebraic plane curves of degree m and n with no common factor have mn points of intersection
(counting multiplicities and points at infinity). This is called Bezout's Theorem for obscure reasons, since Bezout apparently never proved it, and the proposition had been stated and used by
earlier mathematicians such as Newton. In any case, the equation of a line L(x,y) = 0 has degree 1, and the union of m lines is just the product of the equations of the individual lines, so it has
degree m. Thus for both the theorem of Pappus and the theorem of Pascal the inscribing figure is a locus Q(x,y) = 0 of degree 2, and we can split the six lines into two sets of three to give the two
functions L[AB]L[CD]L[EF] and L[BC]L[DE]L[FA], each of degree 3. Therefore, these loci intersect with each other in nine points, six of which are on the inscribing locus and three of which are
Now, any linear combination of the two cubics also intersects with Q at the six given points, but in addition we can choose the coefficients (of the linear combination) so that the combined function
intersects Q at a seventh point, which implies that Q has a common factor with the combination of cubics. If Q is an irreducible conic, the common factor can only be Q itself. If Q is the product
of two linear factors (as in Pappus' Theorem), then we need simply note that the combination of cubics can be forced to pass through the intersection of the two factors, so the combination of degree
3 intersects with each line in four points, and therefore it must have a common factor with each of them, which again implies that the entire inscribing function Q divides some linear combination of
the two cubics. Thus there must be constants a,b and a linear function L(x,y) such that
Therefore, the three zeros of the two cubics that do not lie on Q(x,y) = 0 must lie on the straight line L(x,y) = 0. This proves Pappus' and Pascal's theorems together, and immediately points to
higher-order generalizations.
For a simple example, consider the instance of Pappus' theorem in the figure below.
Algebraically, the inscribing locus consists of the two lines y = 0 and x - 2y = 0, so it is represented by the quadratic
The edges of the hexagon ABCDEF are the lines
So, by Bezout's Theorem, we know there exist real numbers a,b and a linear function L(x,y) such that equation (1) is satisfied. We find that a = 1 and b = -12 gives
Therefore, the three points G, H, I must lie on the straight line 4x - 11y - 78 = 0.
Return to MathPages Main Menu
|
{"url":"http://www.mathpages.com/home/kmath543/kmath543.htm","timestamp":"2014-04-17T01:49:23Z","content_type":null,"content_length":"37718","record_id":"<urn:uuid:a7ab3da7-329b-4bb8-b276-7abf5f437496>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Digits - Math Genius
It will be seen in the diagram that we have so arranged the nine digits in a square that the number in the second row is twice that in the first row, and the number in the bottom row three times that
in the top row.
There are three other ways of arranging the digits so as to produce the same result.
Can you find them?
See answer
|
{"url":"http://www.pedagonet.com/mathgenius/test77.html","timestamp":"2014-04-17T06:42:51Z","content_type":null,"content_length":"4240","record_id":"<urn:uuid:b04f93d4-5905-4e5d-86d4-51c0133a2dfd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
quadratic and exponential functions in the 3rd year course Advanced ... Remind the students that they learned a method of solving systems of equations using matricies in ...
ADOPT SC PH Math Algebra 1 2009 Elem Alg Final
solving linear equations; graphs and characteristics of linear equations; and quadratic relationships and functions. Teachers, schools, and districts should use the ...
Unit 7: Acute Triangle Trigonometry (5 days + 1 jazz day + 1 ...
Solving Problems Involving Sine and Cosine Law Tie up loose ends from days 2 ... angle for a non-right triangle using the tools they already have (SOH CAH
TI-84 Plus TI-84 Plus Silver Edition Guidebook
You can store equations to any VARS Y-VARS variables, such as Y1 or r6, and then ... Chapter 2: Math, Angle, and Test Operations 64 Solving for a Variable in the ...
The Conjugate Gradient Method
Solve linear system of equations: A~x = ~b Denitions ... other matricies need extensions to the algorithm ... Find the minimum of f(~x) instead of solving the linear ...
10.6 Applications of Quadratic Equations
application problems that use quadratic equations, however, we will concentrate on these types to simplify the matter. We must be very careful when solving these problems ...
TI Graphing Calculator comparison table
power, quadratic polynomial, cubic polynomial, and quartic polynomial regression models ... Full screen interactive editor for solving TVM problems Business functions including ...
Math 12 - for 4th year math students
Students will have already learned about linear, quadratic and ... Remind the students that they learned a method of solving systems of equations using matricies in ...
Solving Systems of Equations Using Mathcad
Solving Systems of Equations Using Mathcad Charles Nippert This set of notes is written to ... processor finds the complex roots of a quadratic Figure 9a Finding Complex ...
Word Problem Practice
... Nancy have in her hallway? 3t t w w 10 Word Problem Practice Solving Equations ... of The McGraw-Hill Companies, Inc. Word Problem Practice Solving Quadratic Equations by ...
Equations (1.6) and (1.7) satisfy the governing partial ... the example under consideration, the element matricies ... standard matrix methods for solving systems of the ...
CRAMERS RULE for solving simultaneous equations Given the equations: 2x ... a quadratic form and may be written in the form q(x) = x Ax. Notice in ...
The China Papers , November 2004
and quadratic forms. All engineering students have a uniform set of ... formula for solving polynomial equations of degree bigger than 3, the only way to do it is to ...
TI-84 Plus TI-84 Plus Silver Edition Guidebook
You can store equations to any VARS Y-VARS variables, such as Y1 or r6, and then ... Chapter 2: Math, Angle, and Test Operations 64 Solving for a Variable in the ...
Solving Systems of Equations using Matrices
Solving Systems of Equations using Matrices A common application of statics is the analysis of structures, which gen-erally involves computing a large number of ...
Matrix Algebra: Determinants, Inverses, Eigenvalues
This method of solving simultaneous equations is known as Cramers rule. Because the explicit computation of determinants is impractical for n u003E 3 as explained in ...
Chapter 5 Least Squares
of solving the equations exactly, we seek only to minimize the sum of the squares ... (a) Determine the coecients in the quadratic form that ts these data in
Chapter 5 Least Squares
of solving the equations exactly, we seek only to minimize the sum of the squares ... (a) Determine the coecients in the quadratic form that ts these data in
... Evens, quiz next day 1,2,6,8,9,10 6.11.03, 6.11.12 36 4-3 Muliplying Matricies P ... 9.C.5A 2,3,6,8,9,10 6.11.08, 8.11.02, 8.11.06, 8.11.11, 8.11.14 46 5-2 Solving Quadratic Equations ...
Solving Systems of Equations Using Mathcad
Solving Systems of Equations Using Mathcad Charles Nippert This set of notes is written to ... processor finds the complex roots of a quadratic Figure 9a Finding Complex ...
Right Triangle Trigonometry
Right Triangle Trigonometry Worksheets Page 1 of 1 WS_Right_Triangle_Trig.doc Adapted from Bob Jensen, ANHS Right Triangle Trigonometry *****In Physics the calculator ...
solving and critical thinking skills, and an ability to ... Solutions of equations and inequalities Graphs of functions ... Quadratic functions Permutations and combinations
UNIT PLAN: SOLVING SYSTEMS OF EQUATIONS Course - Algebra 1 Grade Level - 9 th Time Span - 6 Days, 80 minute periods Tools - Algebra tiles, TI Graphing Calculators ...
Matrix Algebra: Determinants, Inverses, Eigenvalues
This method of solving simultaneous equations is known as Cramers rule. Because the explicit computation of determinants is impractical for n u003E 3 as explained in ...
|
{"url":"http://www.cawnet.org/docid/solving+quadratic+equations+with+matricies/","timestamp":"2014-04-20T08:20:57Z","content_type":null,"content_length":"56261","record_id":"<urn:uuid:22b63cbf-f1cb-41c3-af75-65d79937786c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/511dc82de4b06821731bc577","timestamp":"2014-04-21T08:03:40Z","content_type":null,"content_length":"99075","record_id":"<urn:uuid:f3273330-a946-4c96-9cd8-8b37fdccb6a3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course MAT204
Companion course to MAT203, a more abstract treatment of linear algebra than MAT202, but more concrete than MAT217. Introduces basic algebraic tools such as matrices, vector spaces and linear
transformations, bases and coordinates, eigenvalues and eigenvectors and their applications. Exams test for thorough conceptual understanding as well as computational fluency. In this course we
assume that students will not need much help to master standard, straightforward calculations, allowing the instructor to emphasize more subtle aspects of the definitions and important exceptional
cases that come up in applications. Offered Spring semester only.
|
{"url":"http://www.math.princeton.edu/undergraduate/course/mat204","timestamp":"2014-04-20T11:09:32Z","content_type":null,"content_length":"33103","record_id":"<urn:uuid:8fec322b-70c5-40ee-b00b-67c9fcda2d94>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring Polynomials
Factoring Polynomials
6.1 Greatest Common Factor and Factoring by Grouping. 6.2 Factoring ... Step 3: Check your work by multiplying the binomials using the FOIL method. ... – PowerPoint PPT presentation
Number of Views:1101
Avg rating:3.0/5.0
Slides: 64
Added by: Anonymous
more less
Transcript and Presenter's Notes
|
{"url":"http://www.powershow.com/view/b4b91-MTgxZ/Factoring_Polynomials_powerpoint_ppt_presentation","timestamp":"2014-04-17T06:45:54Z","content_type":null,"content_length":"118801","record_id":"<urn:uuid:7165a5bd-9758-4107-bd34-0ab1e2dd90de>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
June 16th 2008, 11:20 PM #1
Junior Member
Sep 2007
1) Simpifly
sin^2 (sin^-1 (4/x))
2) a,b (0,2pi)and sin (a)=sqrt(3)/2, cos(b)=-sqrt(3)/2, tan(a+b)=-1/sqrt(3), then give the exact value of a and b in terms of pi
a: ____pi b:_____pi
There are two angles such that $\sin(a)=\frac{\sqrt{3}}{2} \\\ a=\frac{\pi}{3}\mbox{ or }\frac{2\pi}{3}$
Also with cosine $\cos(b)=-\frac{3}{2} \\\ b=\frac{5\pi}{6} \mbox{ or } \frac{7\pi}{6}$
and $\tan(a+b)=-\frac{1}{\sqrt{3}} \\\ a+b=\frac{5\pi}{6} \mbox{ or } \frac{11\pi}{6}$
So for this to be true $a=\frac{2\pi}{3} \\\ b=\frac{7\pi}{6} \\\ a+b=\frac{11\pi}{6}$
Hint: for the first one $\sin^2\left( \sin^{-1}\left( \frac{4}{x}\right)\right)= \left[\sin\left(\sin^{-1}\left( \frac{4}{x}\right)\right)\right]^2$
June 16th 2008, 11:46 PM #2
|
{"url":"http://mathhelpforum.com/trigonometry/41761-trignometry.html","timestamp":"2014-04-17T10:23:17Z","content_type":null,"content_length":"34348","record_id":"<urn:uuid:b5dbbbee-5055-4bdf-9242-2fbc782e3553>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Quantum in Chemistry: An Experimentalist's View
ISBN: 978-0-470-01317-5
474 pages
November 2005
Read an Excerpt
This book explores the way in which quantum theory has become central to our understanding of the behaviour of atoms and molecules. It looks at the way in which this underlies so many of the
experimental measurements we make, how we interpret those experiments and the language which we use to describe our results. It attempts to provide an account of the quantum theory and some of its
applications to chemistry.
This book is for researchers working on experimental aspects of chemistry and the allied sciences at all levels, from advanced undergraduates to experienced research project leaders, wishing to
improve, by self-study or in small research-orientated groups, their understanding of the ways in which quantum mechanics can be applied to their problems. The book also aims to provide useful
background material for teachers of quantum mechanics courses and their students.
See More
Chapter 1: The Role of Theory in the Physical Sciences.
1.0 Introduction.
1.1 What is the role of theory in science?
1.2 The gas laws of Boyle and Gay-Lussac.
1.3 An absolute zero of temperature.
1.4 The gas equation of Van der Waals.
1.5 Physical laws.
1.6 Laws, postulates, hypotheses, etc.
1.7 Theory at the end of the 19th century.
1.8 Bibliography and further reading.
Chapter 2: From Classical to Quantum Mechanics.
2.0 Introduction.
2.1 The motion of the planets: Tycho Brahe and Kepler.
2.2 Newton, Lagrange and Hamilton.
2.3 The power of classical mechanics.
2.4 The failure of classical physics.
2.5 The black-body radiator and Planck’s quantum hypothesis.
2.6 The photoelectric effect.
2.7 The emission spectra of atoms.
2.8 de Broglie’s proposal.
2.9 The Schrödinger equation.
2.10 Bibliography and further reading.
Chapter 3: The Application of Quantum Mechanics.
3.0 Introduction.
3.1 Observables, operators, eigenfunctions and eigenvalues.
3.2 The Schrödinger method.
3.3 An electron on a ring.
3.4 Hückel’s (4N + 2) rule: aromaticity.
3.5 Normalisation and orthogonality.
3.6 An electron in a linear box.
3.7 The linear and angular momenta of electrons confined within a one-dimensional box or on a ring.
3.8 The eigenfunctions of different operators.
3.9 Eigenfunctions, eigenvalues and experimental measurements.
3.10 More about measurement: the Heisenberg uncertainty principle.
3.11 The commutation of operators.
3.12 Combinations of eigenfunctions and the superposition of states.
3.13 Operators and their formulation.
3.14 Summary.
3.15 Bibliography and further reading.
Chapter 4: Angular Momentum.
4.0 Introduction.
4.1 Angular momentum in classical mechanics.
4.2 The conservation of angular momentum.
4.3 Angular momentum as a vector quantity.
4.4 Orbital angular momentum in quantum mechanics.
4.5 Spin angular momentum.
4.6 Total angular momentum.
4.7 Angular momentum operators and eigenfunctions.
4.8 Notation.
4.9 Some examples.
4.10 Bibliography and further reading.
Chapter 5: The Structure and Spectroscopy of the Atom.
5.0 Introduction.
5.1 The eigenvalues of the hydrogen atom.
5.2 The wave functions of the hydrogen atom.
5.3 Polar diagrams of the angular functions.
5.4 The complete orbital wave functions.
5.5 Other one-electron atoms.
5.6 Electron spin.
5.7 Atoms and ions with more than one electron.
5.8 The electronic states of the atom.
5.9 Spin-orbit coupling.
5.10 Selection rules in atomic spectroscopy.
5.11 The Zeeman effect.
5.12 Bibliography and further reading.
Chapter 6: The Covalent Chemical Bond.
6.0 Introduction.
6.1 The binding energy of the hydrogen molecule.
6.2 The Hamiltonian operator for the hydrogen molecule.
6.3 The Born–Oppenheimer approximation.
6.4 Heitler and London: The valence bond (VB) model.
6.5 Hund and Mulliken: the molecular orbital (MO) model.
6.6 Improving the wave functions.
6.7 Unification: Ionic structures and configuration interaction.
6.8 Electron correlation.
6.9 Bonding and antibonding Mos.
6.10 Why is there no He–He Bond?
6.11 Atomic orbital overlap.
6.12 The Homonuclear diatomic molecules from lithium to fluorine.
6.13 Heteronuclear diatomic molecules.
6.14 Charge distribution.
6.15 Hybridisation and resonance.
6.16 Resonance and the valence bond theory.
6.17 Molecular geometry.
6.18 Computational developments.
6.19 Bibliography and further reading.
Chapter 7: Bonding, Spectroscopy and Magnetism in Transition-Metal Complexes.
7.0 Introduction.
7.1 Historical development.
7.2 The crystal field theory.
7.3 The electronic energy levels of transition-metal complexes.
7.4 The electronic spectroscopy of transition-metal complexes.
7.5 Pairing energies; low-spin and high-spin complexes.
7.6 The magnetism of transition-metal complexes.
7.7 Covalency and the ligand field theory.
7.8 Bibliography and further reading.
Chapter 8: Spectroscopy.
8.0 The interaction of radiation with matter.
8.1 Electromagnetic radiation.
8.2 Polarised light.
8.3 The electromagnetic spectrum.
8.4 Photons and their properties.
8.5 Selection rules.
8.6 The quantum mechanics of transition probability.
8.7 The nature of the time-independent interaction.
8.8 Spectroscopic time scales.
8.9 Quantum electrodynamics.
8.10 Spectroscopic units and notation.
8.11 The Einstein coefficients.
8.12 Bibliography and further reading.
Chapter 9: Nuclear Magnetic Resonance Spectroscopy.
9.0 Introduction.
9.1 The magnetic properties of atomic nuclei.
9.2 The frequency region of NMR spectroscopy.
9.3 The NMR selection rule.
9.4 The chemical shift.
9.5 Nuclear spin–spin coupling.
9.6 The energy levels of a nuclear spin system.
9.7 The intensities of NMR spectral lines.
9.8 Quantum mechanics and NMR spectroscopy.
9.9 Bibliography and further reading.
Chapter 10: Infrared Spectroscopy.
10.0 Introduction.
10.1 The origin of the infrared spectra of molecules.
10.2 Simple harmonic motion.
10.3 The quantum-mechanical harmonic oscillator.
10.4 Rotation of a diatomic molecule.
10.5 Selection rules for vibrational and rotational transitions.
10.6 Real diatomic molecules.
10.7 Polyatomic molecules.
10.8 Anharmonicity.
10.9 The ab-initio calculation of IR spectra.
10.10 The special case of near infrared spectroscopy.
10.11 Bibliography and further reading.
Chapter 11: Electronic Spectroscopy.
11.0 Introduction.
11.1 Atomic and molecular orbitals.
11.2 The spectra of covalent molecules.
11.3 Charge transfer (CT) spectra.
11.4 Many-electron wave functions.
11.5 The 1s^12s^1 configuration of the helium atom; singlet and triplet states.
11.6 The Π-electron spectrum of benzene.
11.7 Selection rules.
11.8 Slater determinants (Appendix 6).
11.9 Bibliography and further reading.
Chapter 12: Some Special Topics.
12.0 Introduction.
12.1 The Hückel molecular orbital (HMO) theory.
12.2 Magnetism in chemistry.
12.3 The band theory of solids.
12.4 Bibliography and further reading.
1 Fundamental Constants and Atomic Units.
2 The Variation Method and the Secular Equations.
3 Energies and Wave Functions by Matrix Diagonalisation.
4 Perturbation Theory.
5 The Spherical Harmonics and Hydrogen Atom Wave Functions.
6 Slater Determinants.
7 Spherical Polar Co-ordinates.
8 Numbers: Real, Imaginary and Complex.
9 Dipole and Transition Dipole Moments.
10 Wave Functions for the 3F States of d2 using Shift Operators.
See More
• Addressed to experimentalists rather than to theoreticians
• Aims to assist the reader in understanding the application of quantum mechanics to chemical problems rather than simply knowing the mathematical techniques
• Emphasis on the historical development of the subject
• Worked examples and solutions
• Highly respected author with international reputation
• Extensive market for undergraduates, post-graduates and researchers in chemistry, biochemistry, physics, and related topics
See More
"…recommended for upper-division undergraduates and first-year graduates...a pleasure to read…and deserves a place on the shelf of any physical chemist who has an interest in quantum mechanics." (
Journal of Chemical Education, June 2007)
See More
The Quantum in Chemistry
The solutions for Grinter's "The Quantum in Chemistry" are now available on the author's website.
See More
Buy Both and Save 25%!
The Quantum in Chemistry: An Experimentalist's View (US $210.00)
-and- The Physical Chemist's Toolbox (US $170.00)
Total List Price: US $380.00
Discounted Price: US $285.00 (Save: US $95.00)
Cannot be combined with any other offers. Learn more.
|
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470013176.html","timestamp":"2014-04-19T05:31:03Z","content_type":null,"content_length":"60252","record_id":"<urn:uuid:9db97a58-d5db-4346-b680-1a6704e794ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1.4 How Parallel Computing Works
Next: 2 Technical Backdrop Up: 1 Introduction Previous: 1.3 Caltech Concurrent Computation
Parallel Computers work in a large class of scientific and engineering computations.
The book quantifies and exemplifies this assertion.
In Chapter 2, we provide the national overview of parallel computing activities during the last decade. Chapter 3 is somewhat speculative as it attempts to provide a framework to quantify the
previous PCW statement.
We will show that, more precisely, parallel computing only works in a ``scaling'' fashion in a special class of problems which we call synchronous and loosely synchronous.
By scaling, we mean that the parallel implementation will efficiently extend to systems with large numbers of nodes without levelling off of the speedup obtained. These concepts are quantified in
Chapter 3 with a simple performance model described in detail in [Fox:88a].
The book is organized with applications and software issues growing in complexity in later chapters. Chapter 4 describes the cleanest regular synchronous applications which included many of our
initial successes. However, we already see the essential points:
Domain decomposition (or data parallelism) is a universal source of scalable parallelism
CrOS and its follow-on Express, described in Chapter 5, support this software paradigm. Explicit message passing is still an important software model and in many cases, the only viable approach to
high-performance parallel implementations on MIMD machines.
Chapters 6 through 9 confirm these lessons with an extension to more irregular problems. Loosely synchronous problem classes are harder to parallelize, but still use the basic principles DD and MP.
Chapter 7 describes a special class, embarrassingly parallel, of applications where scaling parallelism is guaranteed by the independence of separate components of the problem.
Chapters 10 and 11 describe parallel computing tools developed within C (intractable) optimization problem. However, effective heuristic methods were developed which avoid the exponential time
complexity of NP-complete problems by searching for good but not exact minima.
In Chapter 12, we describe the most complex irregular loosely synchronous problems which include some of the hardest problems tackled in C
As described earlier, we implemented essentially all the applications described in the book using explicit user-generated message passing. In Chapter 13, we describe our initial efforts to produce a
higher level data-parallel Fortran environment, which should be able to provide a more attractive software environment for the user. High Performance Fortran has been adopted as an informal industry
standard for this language.
In Chapter 14, we describe the very difficult asynchronous problem class for which scaling parallel algorithms and the correct software model are less clear. Chapters 15, 16, and 17 describe four
software models, Zipcode, MOOSE, Time Warp, and MOVIE which tackle asynchronous and the mixture of asynchronous and loosely synchronous problems one finds in the complex system simulations and
analysis typical of many real-world problems. Applications of this class are described in Chapter 18, with the application of Section 18.3 being an event-driven simulation-an important class of
difficult-to-parallelize asynchronous applications.
In Chapter 19 we look to the future and describe some possibilities for the use of parallel computers in industry. Here we note that Cnot be the dominant industrial use of parallel computers where
information processing is most important. This will be used for decision support in the military and large corporations, and to supply video, information and simulation ``on demand'' for homes,
schools, and other institutions. Such applications have recently been termed national challenges to distinguish them from the large scale grand challenges, which underpinned the initial HPCC
initiative [FCCSET:94a]. The lessons C
Chapter 20 includes a discussion of education in computational science-an unexpected byproduct of C
Next: 2 Technical Backdrop Up: 1 Introduction Previous: 1.3 Caltech Concurrent Computation
Guy Robinson
Wed Mar 1 10:19:35 EST 1995
|
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node7.html","timestamp":"2014-04-17T04:23:53Z","content_type":null,"content_length":"8834","record_id":"<urn:uuid:f2a8b9bf-3d63-4301-80c4-a4333a09d86f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Way of the Java/Heap
A heap is a special kind of tree that happens to be an efficient implementation of a priority queue. This figure shows the relationships among the data structures in this chapter.
Ordinarily we try to maintain as much distance as possible between an ADT and its implementation, but in the case of the Heap, this barrier breaks down a little. The reason is that we are interested
in the performance of the operations we implement. For each implementation there are some operations that are easy to implement and efficient, and others that are clumsy and slow.
It turns out that the array implementation of a tree works particularly well as an implementation of a Heap. The operations the array performs well are exactly the operations we need to implement a
To understand this relationship, we will proceed in a few steps. First, we need to develop ways of comparing the performance of various implementations. Next, we will look at the operations Heaps
perform. Finally, we will compare the Heap implementation of a Priority Queue to the others (arrays and lists) and see why the Heap is considered particularly efficient.
Performance analysisEdit
performance analysis
run time
algorithm analysis
When we compare algorithms, we would like to have a way to tell when one is faster than another, or takes less space, or uses less of some other resource. It is hard to answer those questions in
detail, because the time and space used by an algorithm depend on the implementation of the algorithm, the particular problem being solved, and the hardware the program runs on.
The objective of this section is to develop a way of talking about performance that is independent of all of those things, and only depends on the algorithm itself. To start, we will focus on run
time; later we will talk about other resources.
Our decisions are guided by a series of constraints:
First, the performance of an algorithm depends on the hardware it runs on, so we usually don't talk about run time in absolute terms like seconds. Instead, we usually count the number of abstract
operations the algorithm performs.
Second, performance often depends on the particular problem we are trying to solve -- some problems are easier than others. To compare algorithms, we usually focus on either the worst-case scenario
or an average (or common) case.
Third, performance depends on the size of the problem (usually, but not always, the number of elements in a collection). We address this dependence explicitly by expressing run time as a function of
problem size.
Finally, performance depends on details of the implementation like object allocation overhead and method invocation overhead. We usually ignore these details because they don't affect the rate at
which the number of abstract operations increases with problem size.
To make this process more concrete, consider two algorithms we have already seen for sorting an array of integers. The first is selection sort, which we saw in Section sorting. Here is the pseudocode
we used there.
selectionsort (array)
for (int i=0; i<array.length; i++)
// find the lowest item at or to the right of i
// swap the ith item and the lowest item
To perform the operations specified in the pseudocode, we wrote helper methods named findLowest and swap. In pseudocode, findLowest looks like this
// find the index of the lowest item between
// i and the end of the array
findLowest (array, i)
// lowest contains the index of the lowest item so far
lowest = i;
for (int j=i+1; j<array.length; j++)
// compare the jth item to the lowest item so far
// if the jth item is lower, replace lowest with j
return lowest;
And swap looks like this:
swap (i, j)
// store a reference to the ith card in temp
// make the ith element of the array refer to the jth card
// make the jth element of the array refer to temp
To analyze the performance of this algorithm, the first step is to decide what operations to count. Obviously, the program does a lot of things: it increments i, compares it to the length of the
deck, it searches for the largest element of the array, etc. It is not obvious what the right thing is to count.
selection sort
It turns out that a good choice is the number of times we compare two items. Many other choices would yield the same result in the end, but this is easy to do and we will find that it allows us to
compare most easily with other sort algorithms.
The next step is to define the ``problem size. In this case it is natural to choose the size of the array, which we'll call .
Finally, we would like to derive an expression that tells us how many abstract operations (specifically, comparisons) we have to do, as a function of .
helper method
We start by analyzing the helper methods. swap copies several references, but it doesn't perform any comparisons, so we ignore the time spent performing swaps. findLowest starts at i and traverses
the array, comparing each item to lowest. The number of items we look at is , so the total number of comparisons is .
Next we consider how many times findLowest gets invoked and what the value of is each time. The last time it is invoked, is so the number of comparisons is 1. The previous iteration performs 2
comparisons, and so on. During the first iteration, is and the number of comparisons is .
So the total number of comparisons is . This sum is equal to . To describe this algorithm, we would typically ignore the lower order term () and say that the total amount of work is proportional to .
Since the leading order term is quadratic, we might also say that this algorithm is quadratic time.
quadratic time
Analysis of mergesortEdit
algorithm analysis
In Section mergesort I claimed that mergesort takes time that is proportional to , but I didn't explain how or why. Now I will.
Again, we start by looking at pseudocode for the algorithm. For mergesort, it's
mergeSort (array)
// find the midpoint of the array
// divide the array into two halves
// sort the halves recursively
// merge the two halves and return the result
At each level of the recursion, we split the array in half, make two recursive calls, and then merge the halves. Graphically, the process looks like this:
Each line in the diagram is a level of the recursion. At the top, a single array divides into two halves. At the bottom, arrays (with one element each) are merged into arrays (with 2 elements each).
The first two columns of the table show the number of arrays at each level and the number of items in each array. The third column shows the number of merges that take place at each level of
recursion. The next column is the one that takes the most thought: it shows the number of comparisons each merge performs.
If you look at the pseudocode (or your implementation) of merge, you should convince yourself that in the worst case it takes comparisons, where is the total number items being merged.
The next step is to multiply the number of merges at each level by the amount of work (comparisons) per merge. The result is the total work at each level. At this point we take advantage of a small
trick. We know that in the end we are only interested in the leading-order term in the result, so we can go ahead and ignore the term in the comparisons per merge. If we do that, then the total work
at each level is simply .
Next we need to know the number of levels as a function of . Well, we start with an array of items and divide it in half until it gets to 1. That's the same as starting at 1 and multiplying by 2
until we get to . In other words, we want to know how many times we have to multiply 2 by itself before we get to . The answer is that the number of levels, , is the logarithm, base 2, of .
Finally, we multiply the amount of work per level, , by the number of levels, to get , as promised. There isn't a good name for this functional form; most of the time people just say, ``en log en.
It might not be obvious at first that is better than , but for large values of , it is. As an exercise, write a program that prints and
for a range of values of .
Performance analysis takes a lot of handwaving. First we ignored most of the operations the program performs and counted only comparisons. Then we decided to consider only worst case performance.
During the analysis we took the liberty of rounding a few things off, and when we finished, we casually discarded the lower-order terms.
When we interpret the results of this analysis, we have to keep all this hand-waving in mind. Because mergesort is , we consider it a better algorithm than selection sort, but that doesn't mean that
mergesort is always faster. It just means that eventually, if we sort bigger and bigger arrays, mergesort will win.
How long that takes depends on the details of the implementation, including the additional work, besides the comparisons we counted, that each algorithm performs. This extra work is sometimes called
overhead. It doesn't affect the performance analysis, but it does affect the run time of the algorithm.
For example, our implementation of mergesort actually allocates subarrays before making the recursive calls and then lets them get garbage collected after they are merged. Looking again at the
diagram of mergesort, we can see that the total amount of space that gets allocated is proportional to , and the total number of objects that get allocated is about . All that allocating takes time.
Even so, it is most often true that a bad implementation of a good algorithm is better than a good implementation of a bad algorithm. The reason is that for large values of the good algorithm is
better and for small values of it doesn't matter because both algorithms are good enough.
As an exercise, write a program that prints values of
for a range of values of . For what value of
are they equal?
Priority Queue implementationsEdit
implementation!Priority Queue
analysis!Priority Queue
priority queue!sorted list implementation
In Chapter queue we looked at an implementation of a Priority Queue based on an array. The items in the array are unsorted, so it is easy to add a new item (at the end), but harder to remove an item,
because we have to search for the item with the highest priority.
An alternative is an implementation based on a sorted list. In this case when we insert a new item we traverse the list and put the new item in the right spot. This implementation takes advantage of
a property of lists, which is that it is easy to insert a new node into the middle. Similarly, removing the item with the highest priority is easy, provided that we keep it at the beginning of the
Performance analysis of these operations is straightforward. Adding an item to the end of an array or removing a node from the beginning of a list takes the same amount of time regardless of the
number of items. So both operations are constant time.
constant time
Any time we traverse an array or list, performing a constant-time operation on each element, the run time is proportional to the number of items. Thus, removing something from the array and adding
something to the list are both linear time.
linear time
So how long does it take to insert and then remove items from a Priority Queue? For the array implementation, insertions takes time proportional to , but the removals take longer. The first removal
has to traverse all items; the second has to traverse , and so on, until the last removal, which only has to look at 1 item. Thus, the total time is , which is (still) . So the total for the
insertions and the removals is the sum of a linear function and a quadratic function. The leading term of the result is quadratic.
The analysis of the list implementation is similar. The first insertion doesn't require any traversal, but after that we have to traverse at least part of the list each time we insert a new item. In
general we don't know how much of the list we will have to traverse, since it depends on the data and what order they are inserted, but we can assume that on average we have to traverse half of the
list. Unfortunately, even traversing half of the list is still a linear operation.
So, once again, to insert and remove items takes time proportional to . Thus, based on this analysis we cannot say which implementation is better; both the array and the list yield quadratic run
If we implement a Priority Queue using a heap, we can perform both insertions and removals in time proportional to . Thus the total time for items is , which is better than . That's why, at the
beginning of the chapter, I said that a heap is a particularly efficient implementation of a Priority Queue.
Definition of a HeapEdit
complete tree
heap property
A heap is a special kind of tree. It has two properties that are not generally true for other trees:
[completeness:] The tree is complete, which means that nodes are added from top to bottom, left to right, without leaving any spaces.
[heapness:] The item in the tree with the highest priority is at the top of the tree, and the same is true for every subtree.
Both of these properties call for a little explaining. This figure shows a number of trees that are considered complete or not complete:
An empty tree is also considered complete. We can define completeness more rigorously by comparing the height of the subtrees. Recall that the height of a tree is the number of levels.
Starting at the root, if the tree is complete, then the height of the left subtree and the height of the right subtree should be equal, or the left subtree may be taller by one. In any other case,
the tree cannot be complete.
Furthermore, if the tree is complete, then the height relationship between the subtrees has to be true for every node in the tree.
It is natural to write these rules as a recursive method:
public static boolean isComplete (Tree tree)
// the null tree is complete
if (tree == null) return true;
int leftHeight = height (tree.left);
int rightHeight = height (tree.right);
int diff = leftHeight - rightHeight;
// check the root node
if (diff < 0 diff > 1) return false;
// check the children
if (!isComplete (tree.left)) return false;
return isComplete (tree.right);
For this example I used the linked implementation of a tree. As an exercise, write the same method for the array implementation. Also as an exercise, write the height method. The height of a null
tree is 0 and the height of a leaf node is 1.
recursive definition
The heap property is similarly recursive. In order for a tree to be a heap, the largest value in the tree has to be at the root, and the same has to be true for each subtree. As another exercise,
write a method that checks whether a tree has the heap property.
Heap removeEdit
It might seem odd that we are going to remove things from the heap before we insert any, but I think removal is easier to explain.
At first glance, we might think that removing an item from the heap is a constant time operation, since the item with the highest priority is always at the root. The problem is that once we remove
the root node, we are left with something that is no longer a heap. Before we can return the result, we have to restore the heap property. We call this operation reheapify.
The situation is shown in the following figure:
The root node has priority r and two subtrees, A and B. The value at the root of Subtree A is a and the value at the root of Subtree B is b.
We assume that before we remove r from the tree, the tree is a heap. That implies that r is the largest value in the heap and that a and b are the largest values in their respective subtrees.
Once we remove r, we have to make the resulting tree a heap again. In other words we need to make sure it has the properties of completeness and heapness.
The best way to ensure completeness is to remove the bottom-most, right-most node, which we'll call c and put its value at the root. In a general tree implementation, we would have to traverse the
tree to find this node, but in the array implementation, we can find it in constant time because it is always the last (non-null) element of the array.
Of course, the chances are that the last value is not the highest, so putting it at the root breaks the heapness property. Fortunately it is easy to restore. We know that the largest value in the
heap is either a or b. Therefore we can select whichever is larger and swap it with the value at the root.
Arbitrarily, let's say that b is larger. Since we know it is the highest value left in the heap, we can put it at the root and put c at the top of Subtree B. Now the situation looks like this:
Again, c is the value we copied from the last entry in the array and b is the highest value left in the heap. Since we haven't changed Subtree A at all, we know that it is still a heap. The only
problem is that we don't know if Subtree B is a heap, since we just stuck a (probably low) value at its root.
Wouldn't it be nice if we had a method that could reheapify Subtree B? Wait... we do!
Heap insertEdit
Inserting a new item in a heap is a similar operation, except that instead of trickling a value down from the top, we trickle it up from the bottom.
Again, to guarantee completeness, we add the new element at the bottom-most, rightmost position in the tree, which is the next available space in the array.
Then to restore the heap property, we compare the new value with its neighbors. The situation looks like this:
The new value is c. We can restore the heap property of this subtree by comparing c to a. If c is smaller, then the heap property is satisfied. If c is larger, then we swap c and a. The swap
satisfies the heap property because we know that c must also be bigger than b, because c > a and a > b.
Now that the subtree is reheapified, we can work our way up the tree until we reach the root.
Performance of heapsEdit
For both insert and remove, we perform a constant time operation to do the actual insertion and removal, but then we have to reheapify the tree. In one case we start at the root and work our way
down, comparing items and then recursively reheapifying one of the subtrees. In the other case we start at a leaf and work our way up, again comparing elements at each level of the tree.
As usual, there are several operations we might want to count, like comparisons and swaps. Either choice would work; the real issue is the number of levels of the tree we examine and how much work we
do at each level. In both cases we keep examining levels of the tree until we restore the heap property, which means we might only visit one, or in the worst case we might have to visit them all.
Let's consider the worst case.
At each level, we perform only constant time operations like comparisons and swaps. So the total amount of work is proportional to the number of levels in the tree, a.k.a. the height.
So we might say that these operations are linear with respect to the height of the tree, but the ``problem size we are interested in is not height, it's the number of items in the heap.
As a function of , the height of the tree is . This is not true for all trees, but it is true for complete trees. To see why, think of the number of nodes on each level of the tree. The first level
contains 1, the second contains 2, the third contains 4, and so on. The th level contains
nodes, and the total number in all levels up to is
. In other words, , which means that .
logarithmic time
Thus, both insertion and removal take logarithmic time. To insert and remove items takes time proportional to .
The result of the previous section suggests yet another algorithm for sorting. Given items, we insert them into a Heap and then remove them. Because of the Heap semantics, they come out in order. We
have already shown that this algorithm, which is called heapsort, takes time proportional to , which is better than selection sort and the same as mergesort.
As the value of gets large, we expect heapsort to be faster than selection sort, but performance analysis gives us no way to know whether it will be faster than mergesort. We would say that the two
algorithms have the same order of growth because they grow with the same functional form. Another way to say the same thing is that they belong to the same complexity class.
big-O notation
complexity class
order of growth
constant time
linear time
quadratic time
logarithmic time
Complexity classes are sometimes written in ``big-O notation. For example, , pronounced ``oh of en squared is the set of all functions that grow no faster than for large values of . To say that an
algorithm is is the same as saying that it is quadratic. The other complexity classes we have seen, in decreasing order of performance, are:
0.2in tabularll
& constant time
& logarithmic
& linear
& ``en log en
& quadratic
& exponential
tabular 0.2in
So far none of the algorithms we have looked at are exponential. For large values of , these algorithms quickly become impractical. Nevertheless, the phrase ``exponential growth appears frequently in
even non-technical language. It is frequently misused so I wanted to include its technical meaning.
People often use ``exponential to describe any curve that is increasing and accelerating (that is, one that has positive slope and curvature). Of course, there are many other curves that fit this
description, including quadratic functions (and higher-order polynomials) and even functions as undramatic as . Most of these curves do not have the (often detrimental) explosive behavior of
As an exercise, compare the behavior of and as the value of increases.
selection sort
complexity class
order of growth
[selection sort:] The simple sorting algorithm in Section sorting.
[mergesort:] A better sorting algorithm from Section mergesort.
[heapsort:] Yet another sorting algorithm.
[complexity class:] A set of algorithms whose performance (usually run time) has the same order of growth.
[order of growth:] A set of functions with the same leading-order term, and therefore the same qualitative behavior for large values of .
[overhead:] Additional time or resources consumed by a programming performing operations other than the abstract operations considered in performance analysis.
Last modified on 30 May 2011, at 09:54
|
{"url":"http://en.m.wikibooks.org/wiki/The_Way_of_the_Java/Heap","timestamp":"2014-04-16T07:26:00Z","content_type":null,"content_length":"39533","record_id":"<urn:uuid:b27cd724-f1d6-4251-8e85-85c36d9a20e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US7451012 - Fault electric arc protection circuits and method for detecting fault electric arc
Publication number US7451012 B2
Publication type Grant
Application number US 11/709,064
Publication date Nov 11, 2008
Filing date Feb 21, 2007
Priority date Feb 21, 2007
Fee status Paid
Also published as US20080201021
Publication number 11709064, 709064, US 7451012 B2, US 7451012B2, US-B2-7451012, US7451012 B2, US7451012B2
Inventors Xin Wang, Qianjun Shen
Original Assignee Gree Electric Applicances Inc. Of Zhuhai
Export Citation BiBTeX, EndNote, RefMan
Patent Citations (5), Referenced by (1), Classifications (4), Legal Events (3)
External Links: USPTO, USPTO Assignment, Espacenet
Fault electric arc protection circuits and method for detecting fault electric arc
US 7451012 B2
The present invention relates to an fault electric arc protection circuits which comprising a power source, a signal sampling device, a signal processing module, an arc detection control device and a
power cut-off module. This invention also provides a method for detecting arc-fault, comprising the steps of S1, sampling the current signal of the circuits to be protected in real time, and
outputting the sampled signal; S2, processing the sampled signal, then outputting the processed result; S3, detecting the processed result, and then determining whether an arc symbol has occurred
based on the detected result. The advantages of the present invention is, when using the electric equipment, once a continuous fault electric arc occurred in the wires, the protection circuits can
detect the fault electric are and cut off the power, thus to prevent the fire caused by the fault electric arc.
1. A fault electric arc protection circuits system comprising a power source, wherein the circuits system comprising:
signal sample device for sampling current signals simultaneously, and output the sampled signals;
signal processing module for processing the sampled signals, and output processed results;
arc detecting control device for receiving the processed results from the signal processing module, and detecting the results and determining whether to output control signals or not; and
power supply cutting off module for cutting off the power supply according to the control signals from the arc detecting control device,
wherein said signal sampling device comprising a matching resistor and a current transformer without iron core: said signal processing module comprising a rectifying tube and a voltage divider; said
arc detecting control device is a single chip micyoco; said power cut-off module comprising a set of audion that orderly connected, and an actuator portion; one end of the current transformer is
grounded and an other end is connected with the rectifying tube thus to connect to the single chip micyoco; the matching resistor is coordinately connected between the rectifying tube and the current
transformer, and the voltage divider is coordinately connected between the rectifying tube and the single chip micyoco.
2. The fault electric are protection circuits system as in claim 1, wherein the single chip micyoco further connects a reposition resistor that comprising a DC power supply and a resistor connecting
between the single chip micyoco and the DC power supply.
3. The fault electric arc protection circuits system as in claim 2, wherein the actuator portion is a relay with closed contact.
4. A method for detecting fault electric arc, wherein comprising the steps of:
S1, sampling a current signal of circuits being protected simultaneously, and outputting the sampled signal;
S2, processing the sampled signal, then outputting the processed result;
S3, detecting the processed result, and then estimating whether an arc symbol has occurred based on the detected result; and
S4, outputting a controlling signal according to the detected arc symbol, and then cutting off the circuits being protected,
in step S1, sampling the current signal via a signal sampling device simultaneously, and the sampled signal is a current waveform:
in step S2, processing the sampled signal by a signal process module, and the output result is a DC signal;
in step S3, the detecting and determining is conducted by an arc detecting control device.
5. The method as in
claim 4
, wherein the arc detecting control device is a single chip micyoco, the detecting and determining processes are:
(S3-1) resetting the single chip micyoco, then proceed to step S3-2;
(S3-2) program initialization in the single chip micyoco, all symbol bits are as “0” at this time; once the signal sampling device and signal processing device converting analog signals into digital
signals, the relevant numerical value is indicated as AD, and input into the single chip micyoco through one pin; three symbol bits are set in the single chip micyoco as AD0, AD1, and AD2
respectively to save the latest three values of AD, in terms of push stack; while initializing, AD0=AD1=AD2=; the control signal output to a power cut-off module from the single chip micyoco via
another pin; once the program of the single chip micyoco is initialized, proceed to step S3-3;
(S3-3) determining whether the AD conversion is finished, this is achieved by setting an AD converted symbol bit to indicate; if no AD value newly occurred, the AD converted symbol bit is set as “0”
to indicate the conversion is not finished, and continually estimating whether a new AD value occurs; if a pew AD value occurred, the AD converted symbol bit will set as “1” to indicate the
conversion is achieved, then proceed to step S3-4;
(S3-4) resetting the AD converted symbol bit, timer A starts timing, when the time is up, proceed to step S3-5, otherwise, proceed to step S3-6;
(S3-5) setting the arc-occurred time as “0” , timer A starts timing again, then proceed to step S3-6;
(S3-6) impulse detection: detecting the numbers of current waveforms that induce by the current transformer 2 in the time period when performing the AD transforming, and setting the impulse number
counting bit, recording the numbers of the impulse, then proceed to step S3-7;
(S3-7) saving the present AD values: saving the present AD conversion value into ADO, then proceed to step S3-8;
(S3-8) saving the present AD values: saving the present AD conversion value into ADO, then proceed to step S3-8;
(S3-9) arc-occurred times increment equals to 1: setting a symbol bit T to indicate the times that arc occurred, increment of T equals to 1, then proceed to step S3-l0;
(S3-10) in order to save the next detected AD value into AD0, update the values of AD1 and AD2, and saving the value of AD0 into AD1, the value of AD1 into AD2, then proceed to step S3-11;
(S3-11) timer B starts timing, when the time is up, proceed to step S3-15, otherwise, proceed to step S3-12;
(S3-12) comparing the numbers of the impulse with the predefined value X, the number of the impulse is accumulating the detected impulse number during the time period set in the timer B, the
detection of the numbers of the impulse is finished in step S3-6, if the detected number is greater than the predefined value X. then proceed to step S3-13, otherwise, proceed to step S3-14;
(S3- 13) arc-occurred time increment equals to 1, that is, adding 1 to the value of symbol bit T that indicating the times that arc occurred, then proceed to step S3-14;
(S3-14) searching for AD_MAX: in the time period set in timer B, searching for the maximum AD conversion value, and save as AD_MAX that is, setting an AD_MAX symbol bit in the single chip micyoco,
and the initial value is set as “0”, then comparing the newest transformed AD0 value with AD_MAX value, if AD0>AD_MAX, then the AD_MAX value will be updated by this AD0 value; each time a new AD
value is transformed, conduct a comparison to get the latest AD_MAX value, then proceed to step S3-19;
(S3-15) timer C starts timing, once the time reached the set time, then proceed to step S3-17, otherwise proceed to step S3-16;
(S3-16) searching for min _MAX: in the time period set in timer C, searching for the minimum AD_MAX conversion value, and save as min _MAX, that is, setting an min _MAX symbol bit in the single chip
micyoco, and the initial value is set as “0”, then comparing the AD_MAX value in step S3-14 with this min _MAX value, if AD_MAX >min_MAX, then the min_MAX value will be updated by this AD_MAX value;
each time a new AD_MAX value is updated, conduct a comparison to get the latest min__MAX value, then proceed to step S3-19;
(S3-17) comparing the min_MAX comparison value with a set value “y”: that is, setting two symbol bits of min_MAX0 and min_MAXI in the single chip micyoco updated in terms of push stack, and saving
the latest min_MAX value in step S3-16 as min_MAX0, and saving the original min_MAX0 value as min_MAX), then comparing the difference value attained by deducting min_MAX1 from min_MAX0 with the set
value “y”, if the difference value greater than set value “y”, then proceed to step S3-18, otherwise, proceed to step S3-19; and then saving this min_MAX0 as min_MAX1, in order to save the next
min_MAX value as a new min_MAX0;
(S3-18) arc-occurred time increment equals to 1, that is, adding 1 to the value of symbol bit T that indicating the times that arc occurred, then proceed to step S3-19;
(S3-19) comparing the times that are occurred with a set value “z”, that is, comparing the value of symbol bit T that indicating the times arc occurred with the set value “z”, if T> z, then proceed
to step S3-20, otherwise, proceed to S3-3;
(S3-20) cutting off the power supply: the single chip micyoco set the level of the control output pin as high level, the audion will conducted and the actuator will cut-off, thus to cut off the power
6. The method as in claim 5, wherein the processes of detecting and determining of the single chip micyoco comprising an interruption step, which is, timely interrupting, setting a tine period S, by
each period of S, the program will timely interrupt, and then re-calculate the time period again.
7. The method as in claim 5, wherein the processes of detecting and determining of the single chip micyoco comprising an interruption step, which is, AD transforming interruption, saving the value of
current, resetting the AD converted symbol bit, which is, setting the symbol bit of AD conversion as “1”.
8. The method as in
claim 4
, wherein comprising the steps of:
(S5-1) resetting the single chip micyoco, then proceed to step S5-2;
(S5-2) initializing program in the single chip micyoco, all the symbol bits are “0” at this time; wherein AD0=AD1=AD2=0, the symbol bits of AD0, AD1 and AD2 are the three AD values that corresponding
to the latest sampled three signals transformed from analog to digital signals, the sampled AD signals are input from one pin, and the AD control signals are output from another pin of the single
chip micyoco, then proceed to step S5-3;
(S5-3) determining whether the AD conversion is finished, this is indicated by setting an AD convened symbol bit; if no new AD value occurred, the AD converted symbol bit is set as “0” to indicate
the conversion is not finished, and continually determining whether a new AD value occurred; if a new AD value occurred, the AD convened symbol bit will set as “1”, then proceed to step S5-4;
(S5-4) zero setting the AD converted symbol bit, timer A starts timing, once the time is up, proceed to step S5-5, otherwise, proceed to step S5-6;
(S5-5) setting the times that arc occurred as ″0″, time setting A and then re-timing, then proceed to step S5-6;
(S5-6) saving the present AD values: saving the present AD conversion value into ADO, then proceed to step S5-7;
(S5-7) determining whether the values on the three symbol bits measure up with AD2>AD0>AD1, if that, proceed to step S5-8, otherwise, proceed to step S5-9;
(S5-8) arc-occurred time increment equals to 1: setting a symbol bit T to indicate the times that arc occurred, adding 1 on T, then proceed to step S5-9;
(S5-9) updating the values of AD1 and AD2, and saving the value of AD0 into AD1, the value of AD1 into AD2, then proceed to step S5-10;
(S5-10) timer B starts timing, once the time is up, proceed to step S5-11, otherwise, proceed to step S5-12;
(S5-11) searching for AD_MAX: in the time period set in timer B, searching for the maximum AD conversion value, and save as AD_MAX, that is, setting an AD_MAX zone bit in the single chip micyoco, and
the initial value is set as “0”, then comparing the lately transformed ADO value with AD_MAX value, if AD0>AD_MAX, then the AD_MAX value will be updated by this AD0 value; each time a new AD value is
transformed, conduct a comparison to get the latest AD_MAX value, ten proceed to step S5-16;
(S5-12) setting time on timer C, once the time reached the set time, then proceed to step S5-14, otherwise proceed to step S5-13;
(S5-13) searching for min_MAX: in the time period set in timer C, searching for the minimum AD_MAX conversion value, and save as min_MAX, that is, setting an min_MAX symbol bit in the single chip
micyoco, and the initial value is set as “0”, then comparing the AD_MAX value achieved in step S5-11 with this min_MAX value, if AD_MAX > min_MAX, then the min_MAX value will be updated by this
AD_MAX value; each time a new AD_MAX value is updated, conduct a comparison to get the latest min_MAX value, then proceed to step S5-16;
(S5-14) comparing the min_MAX comparison value with a set value “y”: that is, setting two updated symbol bits of min_MAX0 and min_MAX1 in the single chip micyoco, and saving the latest min_MAX value
achieved instep S5-13 as min_MAX0, and saving the original min_MAX0 value as min_MAX1, then comparing the difference value attained by deducting min_MAX1 from min_MAX0 with the set value “y”, if the
difference value is greater than set value “y”, then proceed to step S5-15, otherwise, proceed to step S5-16; and then saving this min_MAX0 as min_MAX1;
(S5-15) adding 1 to the times that arc occurred, that is, adding 1 to the value of symbol bit T indicating the times that arc occurred, then proceed to step S5-16;
(S5-16) comparing the times that arc occurred with a set value “z”, that is, comparing the value of zone bit T that indicating the times arc occurred with the set value “z”, if T> z, then proceed to
step S5-17, otherwise, proceed to S5-3;
(S5-17) cutting off the power supply: the single chip micyoco set the level of the control output pin as high level, the audion will conducted and the actuator will shut-off, thus to cut off the
power supply.
1. Technical Field
The present invention relates to electrical technology, more particularly, relates to an Fault electric arc protection circuits and method for detecting fault electric arc.
2. Description of Related Art
Nowadays, with the popularization of home appliances, fire caused by electric equipments are increasing, and the fire caused by fault electric arc (such as arcs and electric sparks) is a substantial
reason. The fault electric arc can be divided as shunt-wound fault electric arc, grounded fault electric arc and continuous fault electric arc.
At present, the over flow, creepage and over voltage protections can only protect the shunt-wound and grounded fault electric arcs, but not the continuous fault electric arc.
The fault electric arc protection devices in the prior art is focused on the fault electric arc in power circuit only, this fault electric arc occurs when a heavy current is constantly discharging;
but there is no protection for the fault electric arc in electric circuits, such as the constant fault electric arc occurs at the connection portion of an electric circuit. Furthermore, most of the
methods for detecting the fault electric arc protection circuits in the prior art is limited to estimate whether an fault electric arc occurred by detecting variation of the current wave, and the
veracity and anti-interference ability of the detection is poor. Therefore, there is still limitation in preventing fire accidents caused by the continuous fault electric arc in the electrical
The present invention provides an fault electric arc protection circuits that can automatically detect the dangerous fault electric arc and cut off the circuits once a continuous arc is occurred in
the lines being protected; this invention also provides a method for detecting fault electric arc, once a continuous fault electric arc occurs in the lines being protected, the method can detect it
in time, and cut off the power by an fault electric arc protection circuits before electrical fire is caused, thus to avoid fire and other fatal accident, and to protect the electric equipment.
The technical solution of the present invention is:
Providing an fault electric arc protection circuits which comprising a power source, a signal collection device, a signal processing module, an arc detecting control device and a power cut-off
module; wherein the signal sampling device comprises a matching resistor R[3 ], and a current transformer without iron core; said signal processing module comprises a rectifying tube D1 and a voltage
divider R[2]; the fault electric arc detection control device is a single chin micyoco (SCM); the power cut-off module comprises a set of audion that is orderly connected, and an actuator portion;
one end of the current transformer is grounded and the other end is connected to the rectifying tube D1 thus to connect to the SCM; the matching resistor R[3 ]is coordinately connected between the
rectifying tube D1 and the current transformer, and the voltage divider R[2 ]is coordinately connected between the rectifying tube D1 and the SCM.
Advantageously, the SCM further connects a resetting resistor that having a DC source and a resistor R[1 ]connecting between the SCM and the DC source.
Advantageously, the actuator portion is a relay with closed contact.
The present invention further provides a method for detecting fault electric arc, wherein comprising the steps of:
□ S1, sampling the current signal of the circuits being protected simultaneously, and outputting the sampled signal;
□ S2, processing the sampled signal, then outputting the processed result;
□ S3, detecting the processed result, and then estimating whether an arc symbol has occurred based on the detected result.
Advantageously, the method further comprises step S4 of outputting a control signal according to the detected arc symbol to cut off the circuits being protected.
Advantageously, in step S1, sampling the current signal via a signal sampling device simultaneously, and the sampled signal is a current wave;
In step S2, processing the sampled signal by a signal process module, and the output result is a DC signal;
In step S3, the detection and estimation is conducted by an arc detection control device.
The advantage of the present invention is, when using the electric equipment, when a continuous fault electric arc occurs in the wires, the protection circuits can detect the arc and cut off the
power, thus to prevent the fire caused by fault electric arc.
FIG. 1 is the schematic circuit diagram of the first embodiment of the present invention.
FIG. 2 is the flow chart of detecting and estimating of the processed current waveform by the SCM.
FIGS. 3 and 4 are the flow charts of the program interruption of the SCM.
FIG. 5 is the flow chart of detecting and estimating of the processed current waveform by the SCM in the second embodiment.
FIG. 1 is the schematic circuit diagram of the first embodiment of the present invention, comprising a power source, a signal sampling device, a signal processing module, an fault electric arc
detection control device and a power cut-off module; wherein said signal sampling device comprising a watching resistor 3 and a current transformer without iron core; the signal processing module
having a rectifying tube 4 and a voltage divider 5; the arc detection control device is SCM 7; the power cut-off module comprising a set of audion 10 that orderly connected, arid an actuator portion.
The actuator portion in the present invention is relay 9 with closed contact.
One end of the current transformer 2 is grounded and the other end is connected with the rectifying tube 4 thus to input the SCM 7. The matching resistor 3 is coordinately connected between the
rectifying tube 4 and the current transformer 2, and the voltage divider 5 is coordinately connected between the rectifying tube 4 and the SCM 7, to act as a voltage divider.
The circuits being protected is input with 220V AC power via an AC220V input end 8, the current pass through the relay 9 in the fault electric arc protection circuits, and outputs via an AC220V
output end 11, thus provides rower for the circuits being protected. The circuits being protected is connected to the fault electric axe protection circuits, a DC power 1 provides working power for
the fault electric arc protection circuits, and the matching resistor 3 is connected to both ends of the current transformer 2. The current transformer 2 is sleeved on the outside of the circuits
being protected, while sampling the AC current signal of such circuits. If the circuits is working well, the waveform sampled by the current transformer 2 is an 50Hz impulse Cyc; if the circuits is
small current inductive load, end an arc occurred, the Cyc of the inductive waveform in the current transformer 2 will change; and if the circuits is a large current inductive load, and an arc is
occurred, the swing of the inductive waveform in the current transformer 2 will change, and the waveform will aberrance. Afterwards, the detected current signal will be commutated by a rectifier
diode and voltage-divided by resistor 5 thus to transit as a DC signal; then the DC signal will be input to SCM 7, and evaluated by SCM 7 to determine whether an fault electric are occurred. If no
fault electric arc is detected, the SCM 7 Will send out a low level signal, the audion 10 will cut-off, the power will pass through the relay 9 with closed contact to provide working voltage for the
electric equipment being protected; if fault electric arc is detected in the circuits being protected, the SCM 7 will send out a high level signal. and the audion 10 Will conduct, the relay 9 with
closed contact will shut down then, thus to cut off the power supply to implement the protection for the circuits, and avoid fire in the electric equipment.
In the present embodiment, the SCM further comprises a resetting circuits which comprises a DC power supply and a resistor connecting between the DC power and the SCM. When the fault electric arc is
eliminated, DC power supply 1 will reset and re-conduct the power to the fault electric arc protection circuits through the resistor 6, and the SCM will work again.
The present invention further provides a method for detecting fault electric arc, firstly, sampling the current signal of the circuits being protected in real time by a signal sampling device, and
outputting the sampled signal; secondly, processing the sampled signal by a signal processing module, then outputting the processed result of DC signal; and then, detecting the processed result by an
arc detection control device, and then estimating whether an arc symbol has occurred based on the detected result; finally, outputting a control signal according to the detected arc symbol, and then
cutting off the circuits being protected.
The signal sampling device, signal processing module, arc detecting control device and the power cut-off module can be the corresponding devices and modules as in the foresaid protection circuits.
As shown in FIGS. 2, 3, and 4, the flow chart of the SCM are provided as follow:
□ (S3-1) resetting the SCM, then proceed to step S3-2;
□ (S3-2) program initialization in the SCM, all the symbol bits are as “0” at this time. Once the signal sampling device and signal processing device converting the analog signals into digital
signals, the relevant numerical value is indicated as AD, and input into the SCM through one pin. Three symbol bits are set in the SCM as AD0, AD1, and AD2 respectively to save the latest
three values of AD, in terms of push stack. While initializing, AD0=AD1=AD2=0. The control signal output to the power cut-off module from the SCM via another pin. Once the program of the SCM
is initialized, proceed to step S3-3;
(S3-3) determining whether the AD conversion is finished, this is achieved by setting an AD converted symbol bit to indicate. If no AD value newly occurred, the AD converted symbol bit is set as “0”
to indicate the conversion is not finished, and continually estimating whether a new AD value occurs; if a new AD value occurred, the AD converted symbol bit will set as “1” to indicate the
conversion is achieved, then proceed to step S3-4;
□ (S3-4) resetting the AD converted symbol bit, timer A starts timing, when the time is up, proceed to step S3-5, otherwise, proceed to step S3-6;
□ (S3-5) setting the arc-occurred time as “0”, timer A starts timing again, then proceed to step S3-6;
□ (S3-6) impulse detection: detecting the numbers of current waveforms that induce by the current transformer 2 in the time period when performing the AD transforming, and setting the impulse
number counting bit, recording the numbers of the impulse, then proceed to step S3-7;
□ (S3-7) saving the present AD values: saving the present AD conversion value into ADO, then proceed to step S3-8;
(S3-8) estimating whether the values on the three zone bits measure up with AD2>AD0>AD1, if so, proceed to step S3-9, otherwise, proceed to step S3-10;
□ (S3-9) arc-occurred times increment equals to 1: setting a symbol bit T to indicate the times that arc occurred, increment of T equals to 1, then proceed to step S3-10;
□ (S3-10) in order to save the next detected AD value into AD0, update the values of AD1 and AD2, and saving the value of AD0 into AD1, the value of AD1 into AD2, then proceed to step S3-11;
□ (S3-11) timer B starts timing, when the time is up, proceed to step S3-15, otherwise, proceed to step S3-12;
□ (S3-12) comparing the numbers of the impulse with the predefined value X, the number of the impulse is accumulating the detected impulse number during the time period set in the timer B, the
detection of the numbers of the impulse is finished in step S3-6, if the detected number is greater than the predefined value X, then proceed to step S3-13, otherwise, proceed to step S3-14;
□ (S3-13) arc-occurred time increment equals to 1, that is, adding 1 to the value of symbol bit T that indicating the times that arc occurred, then proceed to step S3-14;
□ (S3-14) searching for AD_MAX: in the time period set in timer B, searching for the maximum AD conversion value, and save as AD_MAX, that is, setting an AD_MAX symbol bit in the SCM, and the
initial value is set as “0”, then comparing the newest transformed AD0 value with AD_MAX value, if AD0>AD_MAX, then the AD_MAX value will be updated by this AD0 value; each time a new AD
value is transformed, conduct a comparison to get the latest AD_MAX value, then proceed to step S3-19;
□ (S3-15) timer C starts timing, once the time reached the set time, then proceed to step S3-17, otherwise proceed to step S3-16;
□ (S3-16) searching for min_MAX: in the time period set in timer C, searching for the minimum AD_MAX conversion value, and save as min_MAX, that is, setting an min_MAX symbol bit in the SCM,
and the initial value is set as “0”, then comparing the AD_MAX value in step S3-14 with this min_MAX value, if AD_MAX>min_MAX, then the min_MAX value will be updated by this AD_MAX value;
each time a new AD_MAX value is updated, conduct a comparison to get the latest min_MAX value, then proceed to step S3-19;
□ (S3-17) comparing the min_MAX comparison value with a set value “y”: that is, setting two symbol bits of min_MAX0 and min_MAX1 in the SCM updated in terms of push stack, and saving the latest
min_MAX value in step S3-16 as min_MAX0, and saving the original min_MAX0 value as min_MAX1, then comparing the difference value attained by deducting min_MAX1 from min_MAX0 with the set
value “y”, if the difference value greater than set value “y”, then proceed to step S3-18, otherwise, proceed to step S3-19; and then saving this min_MAX0 as min_MAX1, in order to save the
next min_MAX value as a new min_MAX0;
□ (S3-18) arc-occurred time increment equals to 1, that is, adding 1 to the value of symbol bit T that indicating the times that arc occurred, then proceed to step S3-19;
□ (S3-19) comparing the times that arc occurred with a set value “z”, that is, comparing the value of symbol bit T that indicating the times arc occurred with the set value “z”, if T>z, then
proceed to step S3-20, otherwise, proceed to S3-3;
□ (S3-20) cutting off the power supply: the SCM set the level of the control output pin as high level, the audion will conducted and the actuator will cut-off, thus to cut off the power supply.
Wherein steps S3-4 to S3-10 are aiming at the situation that the when the arc occurred, the current in the circuits being protected is quite low (such as on the level of mA); and steps S3-11 to S3-14
are aiming at the situation that the when the arc occurred, the current in the circuits being protected is quite high (such as 15˜20 A); and steps S3-15 to S3-18 are aiming at the situation that the
when the arc occurred, the current in the circuits being protected is in middle level (such as around 10 A).
The SCM samples in a predefined time, if one of the following two situations occurs, SCM will interrupt its process, execute the interruptive procedure, and then re-start the process at the point of
□ (S4-1) interrupt in a predefined time: setting a time period S, by each period of S, the program will interrupt in a predefined time, and then re-calculating the time period again;
□ (S4-2) AD transformed interrupt: saving the value of current, resetting the AD converted symbol bit, that is, the symbol bit of AD conversion is set to “1”.
It is appreciated by one of the skilled in the art, if the current of the circuits being protected is not very high, some of the steps can be skipped to reduce the burthen of the SCM. FIG. 5 shows
the flow chart that the steps S11 to S13 in FIG. 2 have been skipped.
□ (S5-1) resetting the SCM, then proceed to step S5-2;
□ (S5-2) initializing program in the SCM, all the symbol bits are “0” at this time; wherein AD0=AD1=AD2=0, the symbol bits of AD0, AD1 and AD2 are the three AD values that corresponding to the
latest sampled three signals transformed from analog to digital signals, the sampled AD signals are input from one pin, and the AD control signals are output from another pin of the SCM, then
proceed to step S5-3;
□ (S5-3) determining whether the AD conversion is finished, this is indicated by setting an AD converted symbol bit. If no new AD value occurred, the AD converted symbol bit is set as “0” to
indicate the conversion is not finished, and continually determining whether a new AD value occurred; if a new AD value occurred, the AD converted symbol bit will set as “1”, then proceed to
step S5-4;
□ (S5-4) zero setting the AD converted symbol bit, timer A starts timing, once the time is up, proceed to step S5-5, otherwise, proceed to step S5-6;
□ (S5-5) setting the times that arc occurred as “0”, time setting A and then re-timing, then proceed to step S5-6;
□ (S5-6) saving the present AD values: saving the present AD conversion value into AD0, then proceed to step S5-7;
□ (S5-7) determining whether the values on the three symbol bits measure up with AD2>AD0>AD1, if that, proceed to step S5-8, otherwise, proceed to step S5-9;
□ (S5-8) arc-occurred time increment equals to 1: setting a symbol bit T to indicate the times that arc occurred, adding 1 on T, then proceed to step S5-9;
□ (S5-9) updating the values of AD1 and AD2, and saving the value of AD0 into AD1, the value of AD1 into AD2, then proceed to step S5-10;
□ (S5-10) timer B starts timing, once the time is up, proceed to step S5-11, otherwise, proceed to step S5-12;
□ (S5-11) searching for AD_MAX: in the time period set in timer B, searching for the maximum AD conversion value, and save as AD_MAX, that is, setting an AD_MAX zone bit in the SCM, and the
initial value is set as “0”, then comparing the lately transformed AD0 value with AD_MAX value, if AD0>AD_MAX, then the AD_MAX value will be updated by this AD0 value; each time a new AD
value is transformed, conduct a comparison to get the latest AD_MAX value, then proceed to step S5-16;
□ (S5-12) setting time on timer C, once the time reached the set time, then proceed to step S5-14, otherwise proceed to step S5-13;
□ (S5-13) searching for min_MAX: in the time period set in timer C, searching for the minimum AD_MAX conversion value, and save as min_MAX, that is, setting an min_MAX symbol bit in the SCM,
and the initial value is set as “0”, then comparing the AD_MAX value achieved in step S5-11 with this min_MAX value, if AD_MAX>min_MAX, then the min_MAX value will be updated by this AD_MAX
value; each time a new AD_MAX value is updated, conduct a comparison to get the latest min_MAX value, then proceed to step S5-16;
□ (S5-14) comparing the min_MAX comparison value with a set value “y”: that is, setting two updated symbol bits of min_MAX0 and min_MAX1 in the SCM, and saving the latest min_MAX value achieved
instep S5-13 as min_MAX0, and saving the original min_MAX0 value as min_MAX1, then comparing the difference value attained by deducting min_MAX1 from min_MAX0 with the set value “y”, if the
difference value is greater than set value “y”, then proceed to step S5-15, otherwise, proceed to step S5-16; and then saving this min_MAX0 as min_MAX1;
□ (S5-15) adding 1 to the times that arc occurred, that is, adding 1 to the value of symbol bit T indicating the times that arc occurred, then proceed to step S5-16;
□ (S5-16) comparing the times that arc occurred with a set value “z”, that is, comparing the value of zone bit T that indicating the times arc occurred with the set value “z”, if T>z, then
proceed to step S5-17, otherwise, proceed to S5-3;
□ (S5-17) cutting off the power supply: the SCM set the level of the control output pin as high level, the audion will conducted and the actuator will shut-off, thus to cut off the power
Throughout the specification the aim has been to describe the preferred embodiment of the present invention without limiting the invention to any one embodiment or specific collection of features.
Those skilled in the art may implement variations from the specific embodiment that will nonetheless fall within the scope of the invention.
Cited Patent Filing date Publication date Applicant Title
US6621677 * Dec 21, 1998 Sep 16, 2003 Sicom As Method and system for series fault protection
US7021950 * Apr 8, 2004 Apr 4, 2006 Lear Corporation System and method for preventing electric arcs in connectors feeding power loads and connector used
US7236338 * Sep 16, 2003 Jun 26, 2007 The Boeing Company System and method for remotely detecting and locating faults in a power system
US7317598 * Jun 22, 2006 Jan 8, 2008 Philippe Magnier Electric transformer explosion prevention device
US20070133134 * Dec 9, 2005 Jun 14, 2007 Hamilton Sundstrand Corporation AC arc fault detection and protection
Citing Patent Filing date Publication date Applicant Title
US8054628 * Mar 17, 2008 Nov 8, 2011 Abb Technology Ag Method for operating a sealed for life compact secondary substation
U.S. Classification 700/162
International Classification G06F19/00
Cooperative Classification H02H1/0015
European Classification H02H1/00C2
Date Code Event Description
Apr 27, 2012 FPAY Fee payment Year of fee payment: 4
Owner name: GREE ELECTRIC APPLIANCES INC. OF ZHUHAI, CHINA
May 14, 2007 AS Assignment Free format text: CORRECTION TO SPELLING OF ASSIGNEE NAME RECORDED ON REEL 018975, FRAME 0113;ASSIGNORS:WANG, XIN;CHEN, QIANJUN;REEL/FRAME:019294/0427
Effective date: 20070209
Owner name: GREE ELECTRIC APPLICANCES INC. OF ZHUHAI, CHINA
Feb 21, 2007 AS Assignment Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, XIN;SHEN, QIANJUN;REEL/FRAME:018975/0113
Effective date: 20070209
Original Image
|
{"url":"http://www.google.com/patents/US7451012?ie=ISO-8859-1","timestamp":"2014-04-18T06:41:12Z","content_type":null,"content_length":"83797","record_id":"<urn:uuid:e049b487-4ae6-4b6a-bacb-c352b090cd17>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast Solutions of Exact Sparse Linear Equations
Shown are timings for solving systems of linear equations represented by sparse square integer matrices. The experiment was performed on an Intel Xeon 3.07 GHz 64-bit Linux system, with a time limit
of 4200 seconds. The number at the bottom tells how many times faster Mathematica 9 is than Maple 16.
Shown are timings for solving systems of linear equations represented by square integer matrices with a permuted triangular block structure.
|
{"url":"http://wolfram.com/mathematica/new-in-9/enhanced-core-algorithms/fast-solutions-of-exact-sparse-linear-equations.html","timestamp":"2014-04-18T13:07:28Z","content_type":null,"content_length":"6139","record_id":"<urn:uuid:166e86c7-f4bd-46bc-bba3-3ea94bcaae78>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comments on A Neighborhood of Infinity: What does topology have to do with computability? Reloaded.Sorry, Peter, I didn't mean to show you up as the ...Shouldn't "the observable sets are {}, {⊤} a...Shouldn't "the observable sets are {}, {⊤} a...Paul: Sorry, I am not a toplogist of any stripe wh...The analogy between observability and open sets, w...Thanks for an interesting post. To nitpick, you sa...Anonymous,That's kind of the point. There's a long...But not all functions terminate in a finite time, ...Every link's broken? Either I've had a complete me...The links to the paper by Escardo and Paul Taylor ...Hi! The link to your previous post is broken (some...
tag:blogger.com,1999:blog-11295132.post1721695175856952435..comments2014-04-16T10:57:46.206-07:00Dan Piponihttps://plus.google.com/
107913314994758123748noreply@blogger.comBlogger11125tag:blogger.com,1999:blog-11295132.post-31134478616434181142008-03-06T09:19:00.000-08:002008-03-06T09:19:00.000-08:00Sorry, Peter, I didn't mean to
show you up as the class dunce. Dan (sigpfe) and Alex are presenting elementary material in a tutorial fashion that is 20-30 years old. My interjections from the back of the classroom are just meant
to point out how this connects with more contemporary and advanced ideas. The URL above is my home page, for a change.Paul Taylorhttp://
www.paultaylor.eunoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-55711340816165266122008-03-06T04:16:00.000-08:002008-03-06T04:16:00.000-08:00Shouldn't "the observable sets are {}, {&#
8868;} and {{}, ⊤}" be "the observable sets are {}, {⊤} and {⊤,⊥}"?
Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-73456941565215416022008-03-06T04:14:00.000-08:002008-03-06T04:14:00.000-08:00Shouldn't "the observable sets are {}, {⊤} and
{{}, ⊤}" be "the observable sets are {}, {⊤} and {⊤,⊥}
"Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-49108571795680644652008-03-04T20:19:00.000-08:002008-03-04T20:19:00.000-08:00Paul: Sorry, I am not a toplogist of any stripe
whatsoever, so far be it for me to argue with you.<BR/><BR/>To clarify what I meant: in the pointed-CPO semantics that are occupying my brain at the moment, the Hausdorff topology is not interesting
(according to Alex Simpson, lecture notes #3, bottom of p3). I drew the conclusion, without thinking too much harder, that for the traditional notions of computation (e.g. over the natural numbers,
not the exact reals) the Hausdorff spaces were not interesting. By all means correct me if that's wrong.<BR/><BR/>In any case I wanted to draw attention to those lecture notes, which are a nice
(formal) complement to sigfpe's post.Peterhttp://peteg.org/noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-19684266011298207212008-03-04T00:32:00.000-08:002008-03-04T00:32:00.000-08:00The
analogy between observability and open sets, whilst nicely treated in Steven Vickers' book, pre-dates his involvement in the subject. Michael Smyth played a large part in it, though the names
Abramsky, Dijkstra, Floyd, Hoare, Plotkin and Scott also come to mind.<BR/><BR/>Alex Simpson's "lecture 3" covers similar ground to this blog, but I see no mention of Hausdorffness in it. Certainly
cpos (with _|_) are not Hausdoff, except trivially.<BR/><BR/>However, a space X is discrete or Hausdorff iff the diagonal X subset XxX is respectively open or closed. In these cases, there is a
corresponding map to the Sierpinski space. For a discrete space it is called = (equality) and for a Hausdorff space # (inequality or apartness).<BR/><BR/>In particular, the real line is Hausdorff but
not discrete. Does Peter think that this is not an important space? My paper (click on my name above). is about this space.Paul Taylorhttp://www.paultaylor.eu/ASD/
lamcranoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-30915074713620404692008-03-03T21:19:00.000-08:002008-03-03T21:19:00.000-08:00Thanks for an interesting post. To nitpick, you say:<BR/>
<BR/>"At this point you can turn the tools of topology to computer science and you find that terms like discrete, Hausdorff and compact, which are all standard terms from topology defined inn terms
of open sets, all have computational content."<BR/><BR/>Apparently the Hausdorff space is not very interesting; take a look at <A HREF="http://www.dcs.ed.ac.uk/home/als/Teaching/MSfS/" REL=
"nofollow">Alex Simpson's lecture notes</A> (l3.pdf).Peterhttp://peteg.org/
noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-22376416516536202392008-03-02T20:14:00.000-08:002008-03-02T20:14:00.000-08:00Anonymous,<BR/><BR/>That's kind of the point. There's a long
mathematical tradition of reasoning about about functions and it'd be nice to use that in programming. 'Functions' in imperative programming languages aren't functions so it's hard to apply the
method there. In 'functional' programming, the things we call 'functions' still aren't quite functions in the usual sense, for the reasons you point out. But they look like something so similar to
mathematical functions much of the time, so it'd be nice if there were a fix. Well this is one fix: reinterpret types as topological spaces then 'functions' are continuous functions, even when the
underlying algorithm doesn't terminate. So even though we're really talking about algorithms, we can reason about them in a functional way.<BR/><BR/>You could even use a 'total' programming language
where every 'function' is guaranteed to terminate. When that's the case, 'functions' become functions.sigfpehttp://www.blogger.com/profile/
08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-20544028581075724052008-03-02T19:46:00.000-08:002008-03-02T19:46:00.000-08:00<I>But not all functions terminate in a
finite time, so they aren’t quite like ordinary functions after all.</I><BR/><BR/>I hardly know enough to be dangerous, let alone reliable, on this topic, but it seems to me like you are confusung
functions with algorithms. A certain algorithm used to compute a result in a particular way may not terminate, but mathematically speaking, it would be possible, for example, to provide answers by
pulling the values out of a prebuilt map of very large size.<BR/><BR/>It is not about terminating/nonterminating, but rather that the machine must be incomplete and finite so there are certain
functions for which it must answer "I don't know". Non-termination is one way of doing that if it cannot be better programmed to avoid situations beyond the limits of the computer (like even simply
terminating after a maximum time limit and returning the I don't know
answer.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-20850120474126623132008-03-02T19:18:00.000-08:002008-03-02T19:18:00.000-08:00Every link's broken? Either I've had a complete
mental breakdown and forgotten the most basic HTML, or blogger.com is acting up.sigfpehttp://www.blogger.com/profile/
08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-5982047321894373692008-03-02T18:12:00.000-08:002008-03-02T18:12:00.000-08:00The links to the paper by Escardo and Paul
Taylor aren't working but apart from that, thanks for the great post. I'm really enjoying your expositions.Mark Reidhttp://
mark.reid.namenoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-27225149659759615792008-03-02T17:56:00.000-08:002008-03-02T17:56:00.000-08:00Hi! The link to your previous post is broken
(some unwanted characters got into the href).<BR/><BR/>Thanks for the introduction to topology. It was so "elegant", linking automaton theory (the basics of it) with topology. I'm very much
interested in maths, but most of these "modern" theories (like topology, category theory, etc..) don't really have any good introduction. (At least I don't know of any.)PAStheLoDhttp://
|
{"url":"http://blog.sigfpe.com/feeds/1721695175856952435/comments/default","timestamp":"2014-04-18T02:57:34Z","content_type":null,"content_length":"23915","record_id":"<urn:uuid:ff19b31f-1e02-4e5b-8261-8ea6d577ef7f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hamiltonian action
Symplectic geometry
Basic concepts
Classical mechanics and quantization
Given a symplectic manifold $(X,\omega)$, there is the group of Hamiltonian symplectomorphisms $HamSympl(X,\omega)$acting on $X$. If $(X,\omega)$ is prequantizable this lifts to the group of
quantomorphisms, both of them covering the diffeomorphisms of $X$:
quantomorphisms $\to$Hamiltonian symplectomorphisms $\to$diffeomorphisms .
A Hamiltonian action of a Lie group $G$ on $(X,\omega)$ is an action by quantomorphisms, hence a Lie group homomorphism $\hat \phi : G \to Quant(X, \omega)$
$\array{ && Quant(X, \omega) \\ & {}^{\mathllap{\hat \phi}}earrow & \downarrow \\ G &\stackrel{\phi}{\to}& HamSympl(X, \omega) \\ & {}_{\mathllap{}}\searrow & \downarrow \\ && Diff(X) } \,.$
See (Brylinski, prop. 2.4.10).
In the literature this is usually discussed at the infinitesimal level, hence for the corresponding Lie algebras:
smooth functions+Poisson bracket $\to$Hamiltonian vector fields $\to$vector fields
Now an (infinitesimal) Hamiltonian action is a Lie algebra homomorphism $\mu : \mathfrak{g} \to (C^\infty(X), \{-,-\})$
$\array{ && (C^\infty(X),\{-,-\}) \\ & {}^{\mathllap{\mu}}earrow & \downarrow \\ \mathfrak{g} &\stackrel{}{\to}& HamVect(X, \omega) \\ & {}_{\mathllap{}}\searrow & \downarrow \\ && Vect(X) } \,.$
Dualizing, the homomorphism $\mu$ is equivalently a linear map
$\tilde \mu : X \to \mathfrak{g}^*$
which is a homomorphism of Poisson manifolds. This is called the moment map of the (infinitesimal) Hamiltonian $G$-action.
Warning The lift from $\phi$ to $\hat \phi$ above, hence from the existence of Hamiltonians to an actual choice of Hamiltonians is in general indeed a choice. There may be different choices. In the
literature the difference between $\hat \phi$ and $\phi$ (or of their Lie theoretic analogs) is not always clearly made.
By (Atiyah-Bott), the action of a Lie algebra on a symplectic manifold is Hamiltonian if and only if the symplectic form has a (basic, closed) extension to equivariant de Rham cohomology.
A comprehensive account is in (see around section 2.1)
The perspective on Hamiltonian actions in terms of maps to extensions, infinitesimally and integrally, is made explicit in prop. 2.4.10 of
• Jean-Luc Brylinski, Loop spaces, characteristic classes and geometric quantization, Birkhäuser (1993)
The characterization in equivariant cohomology is due to
Generalization to Hamiltonian actions by a Lie algebroid (instead of just a Lie algebra) is discussed in
• Rogier Bos, Geometric quantization of Hamiltonian actions of Lie algebroids and Lie groupoids (arXiv:math.SG/0604027)
|
{"url":"http://ncatlab.org/nlab/show/Hamiltonian+action","timestamp":"2014-04-19T10:05:49Z","content_type":null,"content_length":"40540","record_id":"<urn:uuid:8ecdda22-d68b-4204-8adf-aac1978de4a6>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determining a recurrence relation
up vote 5 down vote favorite
I would like to solve the general problem of determining a linear recurrence relation that fits a given integer sequence of length $n$, or stating that none exists (with fewer than $n/2-k$
coefficients for some reasonable fixed $k$, perhaps 2, to ensure that I'm not overfitting). Actually, I'd like to find the smallest one.
Of course the problem is apparently easy: just search for one of length 1, 2, ..., n/2-k until one is found. At each step all that is needed is to solve the appropriate matrix equation (tried to
write it out, no tabular environment here, but it's obvious) and check the result against the unused members of the sequence. But for reasonably large $n$ this is impractical: the linear algebra is
too difficult.
Unfortunately, it's not rare for me to work with a sequence that is clearly a recurrence relation, but which appears to have a large order, so I can't simply assume that not finding a relation with
order below 100 means that none exists. This raises two questions for me:
1. Is there a fast way to calculate these, compared to the naive approach above?
2. If one recurrence is known, can this be used to speed the search for recurrences of smaller order (or to prove that none exist)?
One unenlightened approach that comes to mind for #1 is to solve the matrix over the real numbers (using floating-point approximations) rather than solving it exactly over $\mathbb{Q}$. This seems
reasonable, but it's not obvious how much precision is needed nor how far the numbers could be if a solution was actually found (in which case, presumably, the system should be re-solved exactly).
Although solving systems this way results in serious speedup even using quadruple precision, without appropriate numerical analysis I don't think it's usable. Hopefully there is a better way.
On #2, consider a (periodic) sequence with recurrence relation $a_n=a_{n-6}.$ Basic algebra suffices to show that any recurrence of the form $a_n=2a_{n-1}-2a_{n-2}+a_{n-3}$ is also of that form, so
some sequences of order 6 can be simplified (nontrivially, that is not just removing trailing zeros) to order 3. Does it suffice, for example, to check only orders dividing 6?
linear-algebra na.numerical-analysis algorithms co.combinatorics
Only a comment, since I'm not sure whether it answers your question 1): is the approach taken in arxiv.org/abs/math/0702086 useful to you (under the name guessPade)? Essentially it's Hermite-Pade
using modular arithmetic. – Martin Rubey Jun 2 '11 at 15:59
@Martin Rubey: Yes, that's precisely what I need (assuming it can give negative answers as well, i.e. "no such rational gf exists with sum of degrees < N" rather than "no rational gf found"). Is
the source code available? – Charles Jun 2 '11 at 17:30
Yes, if guessPade does not return a function this means that there is no such function with the given sum of degrees. The package is part of FriCAS, fricas.sourceforge.net. The source code of the
solver itself is in the file modhpsol.spad.pamphlet, the user interface in mantepse.spad.pamphlet. The algorithm (but without the modular arithmetic) is due to Beckermann and Labahn, see the
reference in the paper. – Martin Rubey Jun 3 '11 at 11:58
@Martin Rubey: Ah, I had seen the pamphlet files (grepping the FriCAS source) but assumed they were documentation because of the TeX header. – Charles Jun 3 '11 at 14:10
add comment
4 Answers
active oldest votes
This problem comes up often in coding theory, and it can be solved efficiently by the Berlekamp-Massey algorithm (Wikipedia has pseudo-code). This is more or less equivalent to using
continued fractions, although many expositions don't present it that way: given a sequence $a_0,a_1,\dots,a_N$, look at the rational function $\sum_{i=0}^N a_i x^{-i}$ and compute its
continued fraction expansion, with coefficients that are polynomials in $x$. That just amounts to applying Euclid's algorithm to the polynomials $\sum_{i=0}^{N} a_i x^{N-i}$ and $x^N$.
A degree $d$ linear recurrence must be satisfied by more than $2d$ terms of the sequence to mean anything, and any such recurrence will be detected by this method. Specifically, that
up vote 11 means the corresponding rational function with degree $d$ denominator must be a convergent to the continued fraction.
down vote
accepted In practice, as Charles Matthews suggested, you can generally speed up the arithmetic substantially by working modulo a prime (say, a fairly large random prime). This is particularly an
issue when there isn't a low-degree recurrence, since in that case generically you'll get a lot of partial quotients of degree $1$ with rapidly growing denominators. Checking that
there's no low-degree recurrence modulo a prime will be much faster, since it will avoid the huge denominators.
I like the answer but wonder abut the comment A degree $d$ linear recurrence must be satisfied by more than $2d$ terms of the sequence to mean anything, and any such recurrence will
be detected by this method. I take it that that is a rule of thumb? One never knows for sure if there is an irregularity further ahead. And ( for $d$ not too small) would observing a
possible order $d$ recurrence after $2d-2$ steps mean so much less than after $2d$ steps? – Aaron Meyerowitz Jun 2 '11 at 20:57
When I say matching $2d$ or fewer terms doesn't mean anything, I mean there is always a rational function with numerator and denominator of degree at most $d$ that matches $2d$ terms.
The denominator might vanish at $0$ (which would mess up the recurrence interpretation), but another way to look at it is that one can generically choose $d$ coefficients for the
recurrence so that running it starting with the first $d$ terms matches the next $d$. So seeing this happen shouldn't carry much weight. – Henry Cohn Jun 2 '11 at 22:15
On the other hand, as soon as you match $2d+1$ terms something nontrivial is happening, although the pattern might not continue. Berlekamp-Massey finds all these cases, so in
particular if there is an actual recurrence of degree $d$ and the algorithm is given more than $2d$ terms, then it is guaranteed to find the true recurrence. – Henry Cohn Jun 2 '11 at
add comment
For the second question: attached to each linear recurrence relation $$ a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_m a_{n-m} $$ is its characteristic polynomial $$ x^m - c_1 x^{m-1} - c_2
x^{m-2} - \cdots - c_m. $$ For example, the polynomial attached to the recurrence relation $a_n = 2a_{n-1} - 2a_{n-2} + a_{n-3}$ is $x^3-2x^2+2x-1$; the polynomial attached to the
recurrence relation $a_n = a_{n-6}$ is $x^6-1$. Of course one can go from characteristic polynomial to recurrence relation as well.
The relevant fact about linear recurrences is this: for every recurrence sequence $S$, there exists a (unique) monic polynomial $p(x)$ of minimal degree with the property that the
up vote 3 recurrence relations satisfied by $S$ are exactly the ones whose characteristic polynomial is a multiple of $p(x)$. Note that $x^3-2x^2+2x-1$ does indeed divide $x^6-1$ in the above
down vote example.
So if you have a known recurrence relation for a given sequence and you want to find the shortest one, the only recurrence relations you have to consider are the ones whose characteristic
polynomials are the factors of the known recurrence relation's characteristic polynomial (and all such recurrence relations could possibly be the answer).
Excellent! I was calculating the characteristic polynomials (and even factoring them, for other purposes) but I didn't see the connection. It's all clear now. – Charles Jun 2 '11 at 14:47
add comment
I'd certainly expect to acquire some useful information by modular arithmetic, in a given situation. For example modulo a prime p, if there is a linear recurrence, then the sequence of
residues must actually be periodic. Further, the period will be a factor of the order mod p of the matrix you are seeking, in the group of invertible matrices, unless you have hit a prime
up vote dividing the determinant. Looking modulo a number of smallish primes may yield some insight. From another perspective, the characteristic polynomial of the matrix factorises over some number
3 down field, and you may gain clues as to what it is. (There is probably some theory here.)
Well, this is only obviously true if the recurrence has integer coefficients. (I think this is known to be true, and there was an MO question about it, but it's not trivial.) – Qiaochu
Yuan Jun 2 '11 at 11:38
To my doppelgänger: That idea is so obvious and useful I can't believe I missed it. Thank you. – Charles Jun 2 '11 at 14:42
add comment
There's a theorem in Raphael Salem's book, Algebraic Numbers and Fourier Analysis, which goes something like this (I don't have the reference handy, so I'm not going to get it exactly
Let $$A_{n.k}=\det\pmatrix{a_n&a_{n+1}&\dots&a_{n+k}\cr a_{n+1}&a_{n+2}&\dots&a_{n+k+1}\cr\dots&\dots&\dots&\dots\cr a_{n+k}&a_{n+k+1}&\dots&a_{n+2k}\cr}$$ Then the $a_r$ satisfy a linear
constant coefficient recurrence of order $m$ if and only if $A_{n,m}=0$ for all $n$.
up vote 2
down vote EDIT: I looked it up. It's Lemma III on page 5, due to Kronecker. It says that $\sum_0^{\infty}c_nz^n$ is a rational function if and only if the determinants $$\Delta_m=\det\pmatrix{c_0&c_1
&\dots&c_m\cr c_1&c_2&\dots&c_{m+1}\cr\dots&\dots&\dots&\dots\cr c_m&c_{m+1}&\dots&c_{2m}\cr}$$ are zero for all $m\ge m_1$. Lemma I says that the power series represents a rational
function if and only if its coefficients satisfy a linear homogeneous constant-coefficient recurrence relation from some point on.
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra na.numerical-analysis algorithms co.combinatorics or ask your own question.
|
{"url":"http://mathoverflow.net/questions/66708/determining-a-recurrence-relation","timestamp":"2014-04-19T12:25:12Z","content_type":null,"content_length":"78994","record_id":"<urn:uuid:574c09fc-6a34-46aa-9603-fd6d854879f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newton's Method
December 11th 2007, 04:54 PM #1
Newton's Method
Use Newton's Method to approximate the given number correct to eight decimal places.
$f(x) = x^{100} - 100$
$f'(x) = 100x^{99}$
I know how to use Newton's Method but is there a faster way, a shortcut, to find it or am I going to have to start from $x1 = 1$?
am I going to have to plug in numbers until I get two to match eight decimals?
Second Problem:
Use Newton's Method to approximate the indicated root of the equation correct to six decimal places.
The positive root of $2cosx = x^4$
I know how to do Newton's Method but these two were giving me problems.
Last edited by FalconPUNCH!; December 11th 2007 at 06:22 PM.
For the second one I get
$f(x) = 2cosx - x^4$
$f'(x) = -2sinx - 4x^3$
I get $X1 = 2$ as my initial approximation but when I plug what I get into Newton's Formula I get some weird numbers and none of them are close to each other.
For my second approximation I get 1.5022769 and I don't know but I feel that it's wrong and I don't think I should continue with the wrong answer.
I just did the first one using Newton's Method and got what I was looking for. The second problem I posted seems to be a little tougher and I need help with that one.
Edit: My only problem with the first one is I don't know what to use an initial approximation. It took me almost two pages to get the answer. Can anyone tell me a faster way to get an approximate
number eight decimal places?
Last edited by FalconPUNCH!; December 11th 2007 at 08:16 PM.
Edit: My only problem with the first one is I don't know what to use an initial approximation. It took me almost two pages to get the answer. Can anyone tell me a faster way to get an approximate
number eight decimal places?
get a computer to do the calculating for you. I can't imagine why anyone would be learning Newton's method without some mathematical programming environment, eg. Haskell interpreter, R gui.
Yeah I can do that but I want to know how to find an initial approximation. Whatever I use is always far away from the actual root.
I just did the first one using Newton's Method and got what I was looking for. The second problem I posted seems to be a little tougher and I need help with that one.
Edit: My only problem with the first one is I don't know what to use an initial approximation. It took me almost two pages to get the answer. Can anyone tell me a faster way to get an approximate
number eight decimal places?
I would draw a rough sketch of the 2 graphs. For instance with #1:
Draw the graph $f(x)=2\cos(x)\ ,~-\pi < x < \pi$ anf $p(x)=x^4$. Use the estimated x-value of the intersection as an initial value.
When I used $x_0 = 1$ I got the result with 8 decimals exact after 3 steps.
to #2. If n is a large number then $\sqrt[n]{n} \approx 1$. If you use a value a little bit larger than 1 as an initial value it will be sufficient. But with your example the sequence of x-value
converges very slowly. So when I used $x_0 = 1.5$ it took me more than 20 cycles to get a nearly exact value.
I would draw a rough sketch of the 2 graphs. For instance with #1:
Draw the graph $f(x)=2\cos(x)\ ,~-\pi < x < \pi$ anf $p(x)=x^4$. Use the estimated x-value of the intersection as an initial value.
When I used $x_0 = 1$ I got the result with 8 decimals exact after 3 steps.
to #2. If n is a large number then $\sqrt[n]{n} \approx 1$. If you use a value a little bit larger than 1 as an initial value it will be sufficient. But with your example the sequence of x-value
converges very slowly. So when I used $x_0 = 1.5$ it took me more than 20 cycles to get a nearly exact value.
Thanks for helping me. Yeah for $\sqrt[n]{n}$ it took me around 20 cycles. I'm going to probably get a number closer to one so I can fit it on half a page. Thanks for your assistance.
December 11th 2007, 07:19 PM #2
December 11th 2007, 08:04 PM #3
December 11th 2007, 08:21 PM #4
Senior Member
Dec 2007
December 11th 2007, 08:28 PM #5
December 11th 2007, 09:02 PM #6
December 11th 2007, 09:08 PM #7
December 12th 2007, 08:53 AM #8
|
{"url":"http://mathhelpforum.com/calculus/24733-newton-s-method.html","timestamp":"2014-04-17T13:04:48Z","content_type":null,"content_length":"57868","record_id":"<urn:uuid:058931df-5d02-4771-ab84-ba4e787d90b7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unlocking the Mystery of the Adjoint Solver
The biggest challenge of gradient-based shape optimization using high-fidelity CFD has always been the formidable computational expense tied to constructing the sensitivity of the objective (or cost)
functions with respect to the design variables.
Take for example the case of a typical external aerodynamics application. An aircraft manufacturer wants to improve on the operational performance of the existing fleet by adding winglets to their
aging aircraft. This modernization will only pay off if the installation of the winglet results in significant fuel savings. The company aerodynamics expert is convinced that a good business case can
be made and she takes the initiative by designing an initial winglet that can be retrofitted on the aging aircraft. The design is parameterized by a significant number of design variables such as
span-wise airfoil sections with varying thickness and camber, winglet sweep, taper ratio, toe angle, cant angle and height. An initial CFD solution at cruise CL confirms that the aerodynamicist knows
her craft because the winglet already shows potential on the first try! Results indicate a slight reduction in overall drag of the system, but unfortunately the wing root bending moment has increased
which will require re-enforcing of the structure, negating the drag reduction in the process. The question for our expert now becomes: how does altering the shape of the winglet affect span-wise lift
distribution (or induced drag) and how can I design an efficient winglet requiring minimal structural modifications and still meet my tight project deadline?
The standard approach to solving this problem has been to use finite-differences to estimate the gradient of the cost function with respect to the design variables. This is a costly procedure that
involves first computing a baseline solution, then perturbing each design parameter one at a time and performing an additional solution following each perturbation; a process that amounts to one
additional flow solution for every design variable. A gradient–based optimizer can then use this information, together with repeated evaluations of the cost, to alter the geometry in such a way that
the cost function is driven to a minimum. You can imagine that in the case of the winglet design, with such a significant number of design variables defining the shape, the high computational cost
associated with this approach makes it almost intractable. In addition, it’s also an inefficient way of doing things because the influence of the parameter changes are only understood after multiple
iterations of the optimization cycle. The only option for the designer is to significantly decrease the CPU requirements by reducing the number of design variables to just a few, greatly limiting the
scope and success of the project.
What if there was a powerful tool available to compute the sensitivity of the cost function with respect to many design variables at the CPU cost equivalent to just one flow solution? And not only
that… What if this method also provided guidance on how to best optimize the design from the start? Integration of an adjoint solver as part of a CFD suite enables this economical sensitivity
analysis and is a promising strategy for performing shape optimization involving few cost functions and many design variables, as is the case for the aircraft manufacturer’s winglet design for
derivative aircraft.
For most of us, adjoint solvers remain a bit of a mystery and the mathematical underpinnings of the method can be quite intimidating. To explain it in very general terms, the approach is based on
optimal control theory where the cost function is defined using Lagrange multipliers to include both the flow solution and the mesh movement solution as constraints on the optimization. By taking the
derivative of this cost function with respect to the design variables first, and by solving the adjoint equations for the Lagrange multipliers next, the sensitivity of cost function with respect to
the flow residual (mass, momentum and energy) and the mesh coordinates is obtained. It is then straightforward to compute the sensitivity of the cost function with respect to all of the design
variables simultaneously through a simple matrix multiplication.
The adjoint solver is a very attractive approach because not only does it provide great insight into the system performance early on, it also offers a faster route to an improved design. Let’s go
back to the winglet design problem. It does not matter how many design variables were required to parameterize the geometry, by performing one solution of the flowfield on the baseline winglet and
one solution of the adjoint equations (at a CPU cost equivalent to one flow solution), all the important information is now available for the optimizer to make an informed decision towards
improvement. Imagine the cost and time savings!
And it doesn’t end here. With the adjoint solver, the sensitivity of the cost function with respect to the flow residual is computed. This is effectively a measure of error in the solution and opens
the door for uncertainty quantification. In addition, it provides a mathematical formulation for identifying areas requiring further mesh resolution to better capture the cost function. Instead of
depending on solution gradients to resolve the mesh, the aerodynamicist can let the code do the work and she is likely to be impressed by the non-intuitive ways the mathematics refines the winglet
mesh. This has the potential of significantly reducing mesh sizes while increasing accuracy resulting in a much more practical optimization process.
Optimal control theory has been around for many years, so why is an adjoint approach to design optimization using CFD not yet main-stream? The bottom line is that implementation with CFD can be
complex (there are several approaches e.g. continuous vs. discrete) and especially the transformation of surface sensitivities into a smooth shape through mesh morphing is challenging. The demand
from our user community has been high and as a result, STAR-CCM+ will include an integrated discrete adjoint solver in v8.04. The solver provides both 1st and 2nd order adjoints for a wide range of
cost functions and is broadly compatible with the existing STAR-CCM+ physics models. CD-adapco has a dedicated team working on delivering additional features for the adjoint solver in the near future
including adding the mesh morpher and integration with optimization tools. Look out for it in future releases!
|
{"url":"http://www.cd-adapco.com/print/6012","timestamp":"2014-04-18T19:41:50Z","content_type":null,"content_length":"19498","record_id":"<urn:uuid:5db350cb-c147-4cdd-bb52-7b678b31a8d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Where Can I Go To Take My Ged In Detroit Mi Where It Is Free?
At 1300 rosa parks blvd detroit mi 4216
testing is from monday-friday (a) 8:15am
on monday and wednesday arrive no later than 8:30am
first time testers= no appointment necessary
number: 313-596-7615
Resolved Questions:
What Is 100 Times 1000000?
The beauty of multiplying digits that start with 1 and only contain zero is that you do not need a calculator or even a good mathematical brain. All you hold to do is write the two digits together:
100 x 1, 000,000 Now simply remove the 'x 1'...
How Can A Writer Create A Persuasive Message Without The Use Of Body...
If the writer use good examples and the right tone you can be persuasive
How Can I Find The Unknown Base Of A Triangle?
Use Pythagoras theorem (need the know 2 side values) where a^2+b^2=c^2. A and b being the shorter lengths and c individual the hypotenuse. Or use trigonometry (need to know an angle and a side -
works on right-angled triangles only) rule is SOH CAH...
|
{"url":"http://www.isfaq.com/education/52842.html","timestamp":"2014-04-19T12:02:51Z","content_type":null,"content_length":"4478","record_id":"<urn:uuid:10c8bd2d-f28b-4235-ac1c-8f01c2482b66>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sampling is the experimental method by which information can be obtained about the parameters of an unknown distribution. As is well known from the debate over opinion polls, it is important to have
a representative and unbiased sample. For the experimentalist, this means not rejecting data because they do not ``look right''. The rejection of data, in fact, is something to be avoided unless
there are overpowering reasons for doing so.
Given a data sample, one would then like to have a method for determining the best value of the true parameters from the data. The best value here is that which minimizes the variance between the
estimate and the true value. In statistics, this is known as estimation. The estimation problem consists of two parts: (1) determining the best estimate and (2) determining the uncertainty on the
estimate. There are a number of different principles which yield formulae for combining data to obtain a best estimate. However, the most widely accepted method and the one most applicable to our
purposes is the principle of maximum likelihood. We shall very briefly demonstrate this principle in the following sections in order to give a feeling for how the results are derived. The reader
interested in more detail or in some of the other methods should consult some of the standard texts given in the bibliography. Before treating this topic, however, we will first define a few terms.
|
{"url":"http://ned.ipac.caltech.edu/level5/Leo/Stats4.html","timestamp":"2014-04-17T06:45:07Z","content_type":null,"content_length":"2472","record_id":"<urn:uuid:6a601156-9257-44ad-ab60-fa30523bdf04>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
The Facilities Location Problem
The facilities location problem (also known as the -median problem), in its continuous version, starts with a shape in the plane and seeks to locate hosts so as to minimize the average travel
distance if each point goes to the nearest host. This is more familiar in the discrete version, where one has points in the plane and wants to select of them as hosts so as to minimize the total
travel distance. When , this is known as the Fermat–Weber problem. In this Demonstration, a discrete version based on a number of points inside the polygon is solved by integer-linear programming
(ILP) and the Voronoi diagram of the computed hosts is shown. Such a discrete problem is an approximation to the continuous problem.
The distance function used here is planar distance, and motion from a point to the host is straight-line motion, not the shortest path inside the polygon. This distinction is relevant only for
nonconvex polygons, and can lead to disconnected Voronoi regions, as in the second snapshot. For the continuous version of the problem for the square, it is not known what the optimal solution is,
even when is, say, 7. Using ILP, as done here, and perhaps following up the computation with some local optimization, allows one to investigate possible solutions. But the big picture is known in
that, under certain conditions, the optimal solution consists almost entirely of regular hexagons, as discussed in [2]. That paper also shows that the general problem is NP-complete. For more
information on the continuous version, see [1].
To set up the problem as an ILP, variables and are introduced, where all subscripts run from 1 to and are the given points. The idea is that a
value of 1 indicates that is a host, and an value of 1 indicates that is the nearest host to . The objective function is , where gives the distance from to . The constraints are:
• all variables are integers;
• , ;
• ;
• for each pair , ;
• , for each .
The last constraint ensures that each point goes to one host; the next-to-last constraint ensures that a point goes to a host iff the
value for that host-point gets a value of 1. Thus the use of ≤ in the
sum constraint is a convenience; the travel distance always decreases, so the sum will in fact be exactly .
[1] S. P. Fekete, J. S. B. Mitchell, and K. Beurer, "On the Continuous Fermat–Weber Problem,"
Operations Research, 53
(1), 2005 pp. 61–76. doi:
[2] C. H. Papadimitriou, "Worst-Case and Probabilistic Analysis of a Geometric Location Problem,"
SIAM Journal of Computing, 10
(3), 1981 pp. 542–557. doi:
|
{"url":"http://www.demonstrations.wolfram.com/TheFacilitiesLocationProblem/","timestamp":"2014-04-20T11:46:18Z","content_type":null,"content_length":"48099","record_id":"<urn:uuid:57f4894b-85cc-471a-955d-22fce4becf01>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2-d object avoidance. Help please! (basic stuff, I think)
06-14-2004 #1
Registered User
Join Date
Jun 2004
Hello. Is there any way to do simple collision avoidance for a system where one object is being chased by two others? The "prey" should be avoiding the "two predators". Two dimensional would be
preferred. I can't seem to get anything to work correctly . Thank you! Here's the list of variables I'm using...(for reference)
Predator 1's x-coord: alpha
Predator 1's y-coord: beta
Predator 1's angle (from x-axis, angle>=0): theta
Predator 2's x-coord: iota
Predator 2's y-coord: kappa
Predator 2's angle (from x-axis, angle>=0): delta
Prey's x-coord: zeta
Prey's y-coord: epsilon
Prey's angle (from x-axis, angle>=0): eta
Here's my previous attempt that met very limited success.
a = alpha+x*cos(theta*(3.141596/180))*(3.141596/180);
b = beta+x*sin(theta*(3.141596/180))*(3.141596/180);
c = iota+y*cos(delta*(3.141596/180))*(3.141596/180);
d = kappa+y*sin(delta*(3.141596/180))*(3.141596/180);
f = zeta;
g = epsilon;
RAX = (f-a);
RAY = (g-b);
RAV = sqrt((RAX)*(RAX)+(RAY)*(RAY));
RAXB = (f-c);
RAYB = (g-d);
RAVB = sqrt((RAXB)*(RAXB)+(RAYB)*(RAYB));
if((((WT*(-delta))+(-theta))/2)>0 and (((WT*(-delta))+(-theta))/2)<180) eta+=1.0f;
if((((WT*(-delta))+(-theta))/2)>180 and (((WT*(-delta))+(-theta))/2)<360) eta-=1.0f;
Any suggestions? Thanks! I was referred here from the C++ board by elad.
Best way to handle this would be via scripting and script commands. However, if you know the angles of the two pursuers then you can average them and use that as the angle for your unit. Or
component wise you can average the xvelocity and yvelocity of the pursuers and use the result of that for the new xvelocity and yvelocity of the pursuee.
Of course, xvelocity and yvelocity should be unit vectors in all case.
Thanks Bubba. I'll try averaging the angles. Yep x and y-coords are unit vectors. Thanks.
Eventually you will have a situation where there exists no angle to use to flee from the pursuers. My idea is to deduce which unit poses the most threat and then flee from that one.
For instance if you have a heavy tank....you probably wouldnt flee from a soldier into another tank. You would flee from the tank and not worry about the soldier whom you could crush at will.
If all units are of equal strength then you could simpy eliminate units based on distance from the unit being chased.
Once you narrow down which unit to run from you could simply get the velocity vector of the pursuer. Then set the velocity vector of the fleeing unit to that vector. Of course you would want your
fleeing unit to turn gradually to the vector of the pursuing unit. This would cause the fleeing unit to run away from the pursuing unit. You could then alter this somewhat so that the fleeing
unit does not look like it is simply copying the pursuer.
Another idea is to start at the pursuing unit. Cast a ray from the pursuing unit to a set distance in back of or on the opposite side of the pursuing unit. (ie: you will need to find the correct
vector that will cause this to happen). Then use this point as a point to flee to for the fleeing unit. Continual calculation of this point should cause the fleeing unit to look more natural when
it flees from the pursuing unit.
Example: The pursuer is south of or directly below the fleeing unit.
□ 1. Calculate the angle between the units. In this case it will be 0 (if 0 degrees is up on your screen in your angle orientation).
□ 2. Use a set distance from the fleeing unit as the end point of your ray cast. - probably will be an average of or a factor of the max pursuing unit speed averaged with the max speed of the
fleeing unit. It would do no good to flee in the correct direction at a speed slower than the pursuing unit. If the max speed is too high for the fleeing unit....then you should probably turn
and fight it out.
□ 3. In this case we know the 'fleeing' vector will be 0 degrees (cos(0),sin(0)). So let's say we want to set a point 20 world units from the fleeing unit as a destination. That is the point we
need to move to.
□ 4. If our fleeing unit or the chicken is not facing that direction then it would probably be a good idea to turn him so that he does.
□ 5. You could gradually increase speed as the chicken turns towards the new point - this would look more natural than just wasting time sitting and turning and then moving. So you turn and
accelerate at the same time. Of course you will have to ensure that the chicken doesn't bump into anything else like other tanks, perhaps his own soldiers who wouldn't appreciate being
squashed by him, and perhaps some trees that wouldn't take to kindly to being moved....and would probably resist to being moved at all.
Hope this helps.
Last edited by VirtualAce; 06-15-2004 at 09:50 AM.
Sweet! Thanks again Bubba. I was figuring on having to do some sort of a weighting system judging on distances from the 'chicken' to the 'chaser' but eliminating a chaser's effect after a certain
distance is a good concept, thanks! Also your idea of a point to flee from is great, a whole new way to look at the problem. You're great!
06-14-2004 #2
06-15-2004 #3
Registered User
Join Date
Jun 2004
06-15-2004 #4
06-15-2004 #5
Registered User
Join Date
Jun 2004
|
{"url":"http://cboard.cprogramming.com/game-programming/53954-2-d-object-avoidance-help-please-basic-stuff-i-think.html","timestamp":"2014-04-16T14:19:46Z","content_type":null,"content_length":"60343","record_id":"<urn:uuid:1f6c48e2-aa72-4d31-add6-155a47306cc0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Posts by ann
Total # Posts: 1,163
How is -16/(4x+6)^5 and -1/(2(2x+3)^5) equivalent? I think -1/2(2x+3)^5 is the more simplified version of the two, but I just can't figure out how -16/(4x+6)^5 simplifies to -1/(2(2x+3)^5). please
thank you
Let f(x) = [(√x)-7]/[(√x)+7]. What is f'(x)? What is the easiest way to find the derivative of this? Should I remove all the radicals and use quotient rule, like f'(x)= ((x^0.5) + 7)(0.5x^-0.5) - ((x
^0.5)-7)(0.5x^-0.5) / ((x^0.5) + 7)^2 Is this right? How d...
find an equation of the line passing through the pair of points. write the equation in Ax+By=C (-9,7), (-10,-4)
find an equation of the line with the given slope that passes through the given point. write the equation in the form Ax+By=C M=3/2, (7,-3)
Ancient History Art
Can the extravagance and costs, both of money and lives, be justified in the opulent architecture commissioned by "divinely empowered" individuals of ancient cultures? Name a comparable example from
today's society.
ok 35020/1.03=34000 ....but where did the 1.03 come from?
Find last years salary if after a 3% pay raise this years salary is $35,020
this is weird...but thanks!
Determine the derivative at the point (2,−43) on the curve given by f(x)=7−7x−9x^2. I know that the answer is -43, but I was wondering if it was just a coincidence that the derivative at the point
(2,−43) is -43, or is there a reason why -43 is the same...
Thank you very much
Yeah, that was a stupid mistake... But now I have another question. Isn't the final answer supposed to be in −1/(ah+b) form? Why is my answer 1/(196+14h) still positive? Did I do something else
Let f(x) be the function 1/(x+9). Then the quotient [f(5+h)−f(5)]/h can be simplified to −1/(ah+b). What does a and b equal to? So I understand the overall concept, and did f(5+h) = 1/(14+h) and f(5)
= 1/14. Then to subtract the two, you get the common denominator ...
The Hotel Ventor has 400 rooms. Currently the hotel is filled. The daily rental is $ 600 per room. For every $6 increase in rent the demand for rooms decreases by 7 rooms. Let x = the number of $ 6
increases that can be made. What should x be so as to maximize the revenue of t...
Nevermind, now I see that all the question asks for is the y-intercept. Why didn't I see that before...
It has been found that the supply of golf clubs varies linearly with its price. When the price per item was $ 76.00 ,32 items are supplied; When the price was $ 90.25 , 70 items are supplied. What is
the lowest price above which golf clubs will be supplied ? So I know the slop...
find measurement angle bac db and dc are paralell, da intersects bc, angle a is 65 degrees so i multipied by two and came up with the answer 130 degrees, is this correct? thanks
I tried and got 3.371880575, but I think I'm still wrong...
You are given a pair of equations, one representing a supply curve and the other representing a demand curve, where p is the unit price for x items. −p+0.0208333333333333x+2=0 and p = √(57-x) What is
the market equilibrium for x? I know I'm supposed to set both...
Actually I think you used the wrong function. The 1st one is the demand, not the second.
You are given a pair of equations, one representing a supply curve and the other representing a demand curve, where p is the unit price for x items. 466p+90x−2390=0 and 484p−22x−978=0 Determine the
revenue function. Revenue function R(x)=? I got (-90x/466) + ...
For the function f(x)=3x^3−42x Compute the difference quotient [f(x+h)−f(x)]/h,h≠0 I checked my work like 3 times and still got [(-2x^3)+(3hx^2)+(3xh^2)+(h^3)-(42h)]/h Is it wrong?
Oh yeah I forgot it meant that P = 0 thanks
The function is P=−11x^2+346.5x−2612.5
The profit in dollars in producing x items of some commodity is given by the equation P=−11x2+346.5x−2612.5. How many items should be produced to break even?(If there are two break-even points, then
enter the smaller value of x. Your solution may not be an integer....
thanks I figured out what I did wrong
Find the unknown in the following equation. If there are more than one solution, then separate the solutions with a comma. (−8y−4)^2+10=14 y=? I keep on getting y=-0.5, is that correct?
The Hotel Bellville has 400 rooms. Currently the hotel is filled . The daily rental is $ 250 per room. For every $ 14 increase in rent the demand for rooms decreases by 5 rooms. Let x = the number of
$ 14 increases that can be made. What should x be so as to maximize the reven...
oh ok. But if I wanted to find how long it takes the ball to come back to the ground then I would have to factor it right?
Sorry, the equation is −16t^2+72t+364
A ball is thrown up at the edge of a 364 foot cliff. The ball is thrown up with an initial velocity of 72 feet per second. Its height measured in feet is given in terms of time t, measured in seconds
by the equation h=−16t2+72t+364. How high will the ball go? To get how ...
So the answer is C(x)= 2574 + 3x?
I solved that and got C(x) = 2580 + 3x, but apparently it's still not the right answer...
A local photocopying store advertises as follows. " We charge 14 cents per copy for 150 copies or less, 6 cents per copy for each copy over 150 but less than 230, and 3 cents per copy for each copy
230 and above. " Let x be the number of copies ordered and C(x) be th...
Then what would the cost be for 150<x<230? I don't think there's a fixed cost for this either, but I don't think it'd be correct to write C(x) = 6x.
A local photocopying store advertises as follows. " We charge 14 cents per copy for 150 copies or less,6 cents per copy for each copy over 150 but less than 230, and 3 cents per copy for each copy
230 and above. " Let x be the number of copies ordered and C(x) be the...
Oh, ok, so it's not equivalent. I get it now. It is 2.2. Thanks!
Wait, Steve, I think that what you wrote: A(t) = 11000 - (11000-10995.60)*t/4.4 is equivalent to what I wrote: A(t) = -4.4t + 11,000 but when I solve mine, why do I keep on getting 0.5 instead of
A machine worth $ 11000 new and having a scrap value of $ 10995.6 is to be depreciated over a 4.4 -year life. Find the function that describes straight line depreciation for this situation. At what
time will the machine be worth $ 10997.8 according to this model? So I know tha...
Nevermind, I figured it out, it's $63.
If a manufacturer has fixed costs of $ 300 , a cost per item for production of $ 60 , and expects to sell at least 100 items, how should he set the selling price to guarantee breaking even? I set up
the cost function by writing 300 + 60x, but I don't know what to do next...
Describe the steps for simplifying the expression 3a^3 + 4 - 4a - 2a^3 + 6 + 12a - b^3. Be specific and be sure to include the meaning of like terms in your explanation.
Financial management
how do I solve for x in the following equation: 1.2 ($1,364,994,000 + x) = $1, 807,626,000 + x
Two identical capacitors store different amounts of energy: capacitor A stores 2.5 x 10-3 J, and capacitor B stores 2.8 x 10-4 J. The voltage across the plates of capacitor B is 11 V. Find the
voltage across the plates of capacitor A.
A catapult launches a test rocket vertically upward from a well, giving the rocket an initial speed of 79.0 m/s at ground level. The engines then fire, and the rocket accelerates upward at 3.90 m/s2
until it reaches an altitude of 980 m. At that point its engines fail, and the...
Ned s Sheds purchases building materials from Timbertown Lumber for $3,700 with terms of 4/15, n/30. The invoice is dated October 17. Ned s decides to send in a $2,000 partial payment. By what date
must the partial payment be sent to take advantage of the cash disco...
An object with an initial velocity of 5 m/s has a constant acceleration of 2 m/s^2. When its speed is 15 m/s how far has it traveled?
Identify a current event or contemporary social issue that involves ethical values.
Net ionic equations for: Br- + AgNO3 CO3 + AgNO3 Cl-+ AgNO3 I + AgNO3 PO4^-3 + AgNO3 SO4^-2 + AgNO3 S^-2 + AgNO3 They all formed ppt,but I don't know where to go from here. Even a couple to get me
started would be greatly appreciated!
Are these reactions Exothermic or Endothermic? How do I know? 1.) Mg(OH)2(s)<==>Mg+2(aq) + 2OH-(aq) Saw solution lighten from medium pink to very light when heated in hot water bath. (It had
phenolphthalein drop for indicator.) 2.) HSO4-(aq)+ H2O(l)<==>H3O+(aq)+SO4...
Aftercare school curriculum
Most common early childhood program in the united states
Do I solve this by multiplying 1.8645 by 1.14??The equation of your calibration curve from a spectrophotometry experiment was y = 1.8645x. Assuming your calibration curve is set up exactly like it
was in lab (concentration in mM on the x axis and absorbance on the y axis), wha...
A seesaw is 4 meters long and is pivoted in the middle. There is a 350 N child on the left end. Where will a 540 N person have to sit to balance the seesaw?
La palabra escondida es unscramble these letters sneauodsyma
7/4 as f/15 therefore 4 times 3.75 = 15 then 7 times 3.75 = 26.25 Answer: 26.25 is the force needed to stretch 15 inches which is m.
My mean is 175.5 My standard deviation is=90.57 Sample=25 Formula to be used: P(X>190)=P((X-mean)/s sqrt of my sample is = 5 (Which is the square root of 25.)(190-175.5) / 90.57/5) 14.5/18.114
calculator says 0.80048581207905487468256...
Facebook reports that the average number of Facebook friends worldwide is 175.5 with a standard deviation of 90.57. If you were to take a sample of 25 students, what is the probability that the mean
number Facebook friends in the sample will be 190 friends or more?
health care
How does the system of health care provision in the U.S. support goals of public health? Provide an example.
MATH - PLEASE HELP
A biased coin is tossed 5 times where p(t) = .6. determine the probability that if you have 2 tails, you have 3 tails.
thanks so much!
A biased coin is tossed 5 times where p(t) = .6. determine the probability that if you have 2 tails, you have 3 tails.
Edna has scored in 80% of the games she's played.What is the probability she woulds score a goal in at least 7 out of her 8 games. P (if she scored in 3 or 4 games, what is the probability that she
scores in exactly 4).?
uppose the demand for these jets is given by the equation: P = 3000 - Q, where Q denotes the quantity of jets, and P denotes its price. So that the marginal revenue facing the firm is: MR = 3000 -
2Q. The marginal cost of Lockheed Martin is given by the equation: MC(Q) = 2Q wh...
Fantasia Florist Shop purchases an order of imported roses with a list price of $2,375 less trade discounts of 15/20/20. What is the dollar amount of the trade discount? Answer: 1306.25 4. Using your
answer to the question above, what is the net dollar amount of that rose orde...
What is the net price factor for trade discounts of 25/15/10? Answer: .57375 Just checking my homework answers
Natureland Garden Center buys lawn mowers that list for $679.95 less a 30% trade discount. What is the dollar amount of the trade discount? Answer: 203.99 What is the net price of each Natureland
Garden Center lawn mower? Answer: 475.96 Just want to make sure my answer is corr...
By what percent is a 100-watt light bulb brighter than a 60-watt light bulb? Round to the nearest tenth. 40/60 = 2/3 = 2 ÷ 3 = 0.666667 0.666667 x 100 = 66.6667% Answer: 66.7% Is this correct?
What is the portion if the base is 900 and the rate is 12¾ %? P=RxB P=12.75x900 P=11,475 Is this correct?
Sunshine Honda sold 112 cars this month. If that is 40% greater than last month, how many cars were sold last month? x=112*100/140 x= 80 Answer: 80 Is this correct?
Michael Reeves, an ice cream vendor, pays $17.50 for a five-gallon container of premium ice cream. From this quantity he sells 80 scoops at $0.90 per scoop. If he sold smaller scoops, he could sell
98 scoops from the same container; however, he could charge only $0.80 per scoo...
By what percent is a 100-watt light bulb brighter than a 60-watt light bulb? I'm not sure the equation to use to solve this one.
Sorry Ms. Sue. I put your name in by mistake.
After a 15% pay raise, Scott Walker now earns $27,600. What was his salary before the raise? Answer: 27600=.15x*27600 x=4140 27600-4140=23,460 Is my answer correct?
A letter carrier can deliver mail to 112 homes per hour by walking and 168 homes per hour by driving. By what percent is productivity increased by driving? Answer: 56% Is my answer correct?
If 453 runners out of 620 completed a marathon, what percent of the runners finished the race? (Round percent to the nearest tenth.) Answer: 453/620= .731 Changing to percent 73.1% Is this correct?
What is the exact answer to cos^2x - sin^2x = 0 for between 0 and 2pi? I got x = pi/4 and x = 5pi/4. But the answer key says there are two additional answers which are 3pi/4 and 7pi/4. I don't know
how to get these. Any help is much appreciated!
Problem Solving Work Backward The test that Keyshawn's class took finished at 10:30 a.m. The first part of the test took 30 minutes. There was a 15 minute break. The second part of the test also took
30 minutes. At what time did the test start?
Problem Solving: Work Backward At 12 noon, Leslie recorded the temperature as 56 degree Fahrenheit. The temperature had increased by 8 degree Fahrenheit from 10 a.m. The temperature at 8 a.m. was 2
degree Fahrenheit warmer than it was at 10 a.m. What was the temperature at 8 a.m.
I'm having a hard time coming up with the right equation to solve this one. E-Z Stop Fast Gas sold $10,957 worth of gasoline yesterday. Regular grade sold for $2.30 a gallon and premium grade sold
for $2.55 a gallon. If the station sold 420 more gallons of regular than of ...
Is this the correct numerical expression for 15 less than one-ninth of P? 15-1/9 P
A machine produces a voltage in the following formula V(t) = 300cos(10(pi)x). X is the time in seconds. Graph the function with the restriction 0 < X < 1
Phil Phoenix is paid monthly. For the month of January of the current year, he earned a total of $8,288. The FICA tax rate for social security is 6.2% and the FICA tax rate for Medicare is 1.45%. The
FUTA tax rate is 0.8%, and the SUTA tax rate is 5.4%. Both unemployment taxes...
AP tatistics
A horticulturist wishes to estimate the mean growth of seedlings in a large timber plot last year. A random sample of n = 100 seedlings is selected and the one-year growth for each is measured. The
sample yields = 5.62 cm. The known standard deviation for this popualtiopn is ...
Make a solutions by dissolving 4 lbs of sulfur dioxide in 100 lbs of water then heat to 30 degress C. what is the partial pressure of the sulfur dioxide over the solution specified?
A scientist is mixing a chemical solution for an experiment. The solution contains 3/8 ounce of a chemical and 1/6 ounce saline solution. What is the unit rate of chemical to saline solution?
Math Statistics
The annual per capita consumption of fresh apples (in pounds) in a nearby state can be approximated by a normal distribution, with a mean of 15.9 pounds and a standard deviation of 4.2 pounds. (a)
What is the smallest annual per capita consumption of apples that can be in the ...
I need a estimate within $25, the true average of amount of postage a company spends each year. If I want to be 98% confident, how large a sample is necessary? The standard deviation is known to be
The standard deviation of the diameter of 18 baseballs was 0.29cm. Find the 95% confidence interval of the true standard deviation of the diameters of the baseballs. Do you think the manufacturing
process should be checked for inconsistency?
confidence intervals: A researcher is interested in estimating the average salary of teachers in a large school district. She wants to be 95% confident that her estimate is correct. If the standard
deviation is $1050, how large a sample is needed to be accurate within $200.
Enough of a monoprotic acid is dissolved in water to produce a 0.0190 M solution. The pH of the resulting solution is 2.33. Calculate the Ka for the acid.
Us travel data center survey of 1500 adults found that 42% of respondents stated that they favor historical sites. Find the 95% confidence interval of the true proportion of all adults who favor
visiting historical sites.
Confidence Intervals; In a survey of 1004 individuals, 442 felt that Randolph spent too much time away. Find a 95% confidence interval for the true population proportion
Suppose the minimum wage is above the equilibrium wage in the market for unskilled labour. Using a demand and supply diagram of the market for unskilled labour, show the market wage, the number of
workers who are employed, and the number of workers who are unemployed. Also sho...
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=ann&page=2","timestamp":"2014-04-20T04:09:08Z","content_type":null,"content_length":"30446","record_id":"<urn:uuid:fd5a91df-2ef7-48d7-ab1c-3a83a7848afd>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
physically meaningful interpolation on diffeomorphism group , or statistics on manifolds
up vote 3 down vote favorite
Suppose we work on the diffeomorphism group on a shape space (for example the shape of human organs). We can regard all different shapes are achieved by applying diffeomorphic deformations on a
template shape. The 'distances' between two shapes is then the length of the geodesic connecting these shapes. With the help of the distance, we can do statistics/classification on shapes. The
question is:
Given N+1 different shapes S_1(0),.....,S_N(0), S_N+1(0) from N+1 individuals at time 0, the shapes will change to S_i(t) at time t. If we already have S_1(t),.....,S_N(t), we can build the
'growth_track' of each shape by the geodesic between S_i(0) and S_i(t). Then how can we predict/interpolate the S_N+1(t) from these information (if we assume the shapes may follow similar tracks
which is quite reasonable).
Two possible ways: (1) Compute the geodesics from S_N+1(0) to S_1(0),....,S_N(0) and then 'parallel translate ' the initial tangent vector of the 'growth_track'(geodesic) of each shape S_1,...S_N
back to the shape S_N+1, we can then work on the tangent space of S_N+1 to interpolate the initial tangent vector of the growth_track of S_N+1. This seems implementable but lack of physical meaning .
The advantage of this solution is that we can achieve statistics on a linear space, which may be preferable if N is big.
(2) Compute the geodesics from S_1(0),....,S_N(0) to S_N+1(0) and 'parallel translate' the initial tangent vector to S_1(t),.....,S_N(t) along the 'growth_track's of S_1,...,S_N. We can then follow
Riemannian exponential map from S_i(t) with the correspondent parallel translated vectors to get S_1_N+1(t),.....,S_N_N+1(t) as estimates of S_N+1(t) from each shape's point of view, then a final
'averaged' interpolation can be achieved by a Karcher mean of these estimates. (computational costly if N is big and I have no idea how to simplify it)
Can anybody provide other solutions? If anyway we need to do statistics on the shapes, is the tangent space based solution the only realistic solution? But since it use linear space to describe the
nonlinear deformation, maybe it's only an approximation when the deformation is minor.
Another question is that how can we compute a 'mean' growth_track of the shapes? How can we classify the growth_tracks? In the case if the growth tracks are geodesics, maybe we can still use tangent
space to describe it. what if the tracks are piece-wise geodesics?
Or maybe there is already a solution somewhere on carrying out statistics on manifolds? Thx.
1 Are you familiar with Hausdorff-Gromov distance? It has been used to capture shapes in a sense not dissimilar from yours. en.wikipedia.org/wiki/… – Joseph O'Rourke Feb 9 '13 at 17:06
Thanks a lot for the information. But I think that method is only a shape comparison/registration strategy. The physical meaning of the distance defined there is a little bit weak (some kind of
elastic energy of a spring model). – user31017 Feb 10 '13 at 16:04
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged dg.differential-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/121308/physically-meaningful-interpolation-on-diffeomorphism-group-or-statistics-on","timestamp":"2014-04-20T03:33:06Z","content_type":null,"content_length":"50600","record_id":"<urn:uuid:f24298ec-81ba-45d6-b9e8-374ab0ed4927>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How many commuting nilpotent matrices are there?
up vote 4 down vote favorite
To be precise, fix $n$, fix a field $k$.
What is the maximal dimension of a subspace of the vector space of all $n\times n$ matrices formed by commutative nilpotent matrices? By commutative I mean all the products of matrices in this
subspace are commutative.
(I feel like this can be formulated in terms of Lie algebras, but I don't find a good one. And I think the down-to-earth formulation might make it more accessible.)
matrices linear-algebra lie-algebras
Commuting matrices can be simultaneously trigonalized over the algebraic closure. If the matrices are nilpotent, their trigonalizations will be strictly upper triangular (because any nonzero
entries on the diagonal would survive taking powers, contradicting the nilpotency). So the maximal dimension is $\frac{n\left(n-1\right)}{2}$. – darij grinberg Sep 18 '10 at 22:12
I get this point, but upper triangular matrices don't necessarily commute with each other, right? – Yuhao Huang Sep 18 '10 at 22:25
"Commutative" is not a property of a matrix, but of a set of matrices, so I would rather you state the property you want more precisely. – Qiaochu Yuan Sep 18 '10 at 22:28
2 I think the substitution of "commuting" for "commutative" would address Qiaochu's concern. – Charles Staats Sep 18 '10 at 22:35
Related questions: mathoverflow.net/questions/19591/…, mathoverflow.net/questions/19755/…. It came up there that you can achieve the maximal dimension for a commutative subalgebra, 1 plus the
1 floor of n^2/4, by taking scalars plus 2-by-2 block strictly upper triangular matrices. (Apparently the proof of maximality is due to Schur.) Now you can throw out the multiples of the identity to
obtain an example of commuting nilpotents having dimension the floor of n^2/4. – Jonas Meyer Sep 19 '10 at 0:28
show 5 more comments
2 Answers
active oldest votes
Try the matrices of the form ${0A\choose 00}$ with A an m by n block (with |m-n|≤1 for maximal dimension).
up vote 2 down vote Spaces of commuting nilpotent matrices have nothing to do with maximal tori or Cartan subalgebras, which consist of semisimple matrices.
add comment
A book "Commutative Matrices", Acad. Press, 1968 by Suprunenko and Tyshkevich (translation from Russian) is devoted, largely, to this question. My cursory look at it tells that the
up vote 2 problem seems to be complicated - there are only partial results for small $n$ and particular classes of algebras, expressed in terms of some cumbersome-looking invariants.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged matrices linear-algebra lie-algebras or ask your own question.
|
{"url":"http://mathoverflow.net/questions/39252/how-many-commuting-nilpotent-matrices-are-there","timestamp":"2014-04-19T00:04:38Z","content_type":null,"content_length":"61070","record_id":"<urn:uuid:0a6bb299-306c-4e54-b7d8-bbb75cd9b500>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite field extensions.
November 5th 2009, 11:28 AM #1
Feb 2009
Finite field extensions.
If K/F is a finite field extension of degree n, so is K(x)/F(x), where K(x) is the field of rational functions in one variable over K, idem F(x).
I could prove K(x)/F(x) is a finite extension, but I cannot prove the degree is n. It helped me to have found that K(x) = K(p), where p is the polynomial p(x) = x. Analogously, F(x) = F(p). Any
hint will be welcome. Thanks for reading.
$K/F$ is finite iff it's algebraic and finitely generated. Let $a_1,...,a_n$ be generators of the extension try showing these also generate $K(x)$ over $F(x)$ (I'm not really sure it works but
seems a good place to start).
It's what I did. I proceeded like this: let K:F=n. Let a_1,...,a_n be a basis of K over F. Then K= F(a_1,...,a_n). I know that
K(x)= K(p) and F(x)= F(p), (1)
where p belonging to F[x] is given by p(x)= x. Substituting,
K(p)= F(a_1,...,a_n)(p)= F(p)(a_1,...a_n). (2)
Because K/F finite, K/F algebraic and because a_i belongs to K, a_i algebraic over F and, all the more so, algebraic over F(p). Hence, F(p)(a_1,...,a_n)/F(p) is a finite extension. Keeping in
mind (1) and (2), K(x)/F(x) is finite.
This done, I tried to prove a_1,...,a_n generate K(x) over F(x). Or that they are linearly independent over F(x). But in vain. Anyways, thank you for your post.
November 5th 2009, 04:43 PM #2
Super Member
Apr 2009
November 6th 2009, 02:17 AM #3
Feb 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/112627-finite-field-extensions.html","timestamp":"2014-04-16T16:07:21Z","content_type":null,"content_length":"35099","record_id":"<urn:uuid:7cfb89b0-08c1-468f-b725-22a9b0c96f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there an "arithmetic cobordism category"?
up vote 11 down vote favorite
This question is a clumsy attempt to apply a certain analogy. I hope that if the answer is negative it comes with a clarification of the scope and limitations of the analogy.
Arithmetic topology is based on an analogy between number fields and 3-manifolds where primes are something like knots, the Legendre symbol is something like a linking number, etc. In quantum
topology, on the other hand, one way to study 3-manifolds is to study 3d TQFTs, e.g. functors $Z : 3\text{Cob} \to \text{Vect}$. These functors assign to every 3-manifold, interpreted as a cobordism
from the empty 2-manifold to itself, a morphism $k \to k$ where $k$ is the base field, and therefore give $k$-valued invariants of 3-manifolds.
If the analogy between number fields and 3-manifolds is strong enough, there might conceivably exist an "arithmetic cobordism category" whose morphisms are number fields and whose objects are...
whatever boundaries of number fields are in arithmetic topology. (One might need to adapt this construction depending on whether number fields are considered to have "boundaries" at all.) It might
conceivably be possible to adapt constructions of 3d TQFTs to the arithmetic case and therefore to find "quantum invariants" of number fields.
So is any construction like this possible, or am I just talking nonsense?
arithmetic-topology qa.quantum-algebra quantum-topology
Something TQFT-like in number theory cropped up here - londonnumbertheory.wordpress.com/2010/05/10/… – David Corfield May 26 '10 at 9:03
This goes into that direction: math.uiuc.edu/K-theory/0547 – Thomas Riepe May 26 '10 at 9:28
In the analogy 3manifolds = numberfields, 1-manifolds = finitefields, you're asking 2manifolds = ?. You could also ask 0manifolds = ?. – André Henriques May 26 '10 at 12:45
Number fields are not (compact) 3-manifolds. Rings of integers and S-integers are. Local fields are 2-manifolds, the boundary around a knot. Rings of integers in them are the tubular neighborhood
3 of the knot. That doesn't give a lot of 2-manifolds to work with. Not enough for Heegaard splittings. But you could try to approach the Casson invariant some other way, without mentioning Heegaard
splittings or the TQFT more generally. – Ben Wieland Jul 10 '10 at 3:31
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged arithmetic-topology qa.quantum-algebra quantum-topology or ask your own question.
|
{"url":"https://mathoverflow.net/questions/25975/is-there-an-arithmetic-cobordism-category","timestamp":"2014-04-17T21:58:47Z","content_type":null,"content_length":"52181","record_id":"<urn:uuid:0f41f727-0ead-4b12-bd6c-9793a0e53df1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
V=HOD & The Height of the Large Cardinal Tree
up vote 10 down vote favorite
As we know the assumption $V=L$ adds a restriction on the height of the large cardinal tree. Also there is a strict border like $0^{\sharp}$ exists such that all large cardinal axioms which are
equivalent or stronger than this axiom are contradictory with $V=L$ and all large cardinal assumptions below it are consistent with $V=L$.
Question: What is the situation for the weaker assumption $V=HOD$ and large cardinal tree? Precisely:
(a) Which one of the known large cardinal axioms (together with $ZFC$) does imply $V\neq HOD$?
(b) Is there any strict border in this case as same as "$0^\sharp$ exists" and "$V=L$"?
Is the statement "all large cardinal assumptions below [$0^\sharp$] are consistent with $V=L$" intended just as a statement about the widely studied large cardinal axioms, or is it intended as a
general principle (presumably based on some definition of what a "large cardinal axiom" is)? – Andreas Blass Dec 13 '13 at 21:54
See mathoverflow.net/questions/95406 . – Andreas Blass Dec 13 '13 at 21:56
@AndreasBlass: Your question produced a question for me! Is it unknown that each (discovered and undiscovered) large cardinal axiom with strictly weaker consistency strength than $0^{\sharp}$
exists is consistent with $V=L$? – user43940 Dec 13 '13 at 22:37
1 (A boring tree, being linear...) – Andres Caicedo Dec 14 '13 at 2:39
add comment
1 Answer
active oldest votes
There is no such border, because almost all the large cardinal properties, including the very strongest large cardinal axioms, are relatively consistent with $V=\HOD$. For the larger large
cardinals, this is generally proved by forcing, and there are several natural ways to force $V=\HOD$.
One such way to force $V=\HOD$ is to force so as to code every set in the pattern of GCH on the cardinals. Basically, one undertakes a forcing iteration that at each cardinal stage decides
generically whether to force the GCH at that cardinal or not; in other words, one uses the lottery sum of two posets, one of which forces the GCH there and the other of which forces a
failure of GCH there. In the extension, a simple density argument shows that every set is coded into the GCH pattern, and so we get $V=\HOD$. (The general idea of this kind of coding is
due originally to McAloon, although he used a different method. Many authors describe the coding with bookkeeping functions, but this is not actually needed, since the generic coding means
up vote 13 that it is dense that any new set is coded.) The forcing is somewhat easier when the $\text{GCH}$ holds in the ground model, and in the general case one wants to space out the cardinals at
down vote which coding occurs. The assertion that every set is coded into the GCH pattern is named as the continuum coding axiom (CCA) in the dissertation of Jonas Reitz, who proved that this
accepted implies the ground axiom (GA). You can find further uses of the CCA in Set-theoretic geology, which also contains full details of this kind of forcing and variations.
Now, the point is that most of the usual large cardinal axioms are preserved by the $V=\HOD$ forcing, by the usual lifting arguments as for GCH. Thus, all the main large cardinals are
relatively consistent with $V=\HOD$.
There are many other coding methods besides coding into the GCH pattern. Andrew Brooke-Taylor, for example, undertook coding in the $\Diamond^*_\kappa$ pattern, which allows one to force
$V=\HOD+\text{GCH}$ while preserving all the usual large cardinal notions. His paper Large Cardinals and Definable Well-Orderings of the Universe exactly fits the theme of this question.
See also my paper The wholeness axiom and $V=\HOD$, which is about precisely this problem in the case of the Wholeness axiom and related large cardinals.
Let me further add that we expect the pattern to persist, to the point that the canonical inner models for large cardinals are explicitly built as subclasses of $\mathsf{HOD}$. There is
also another comment that may be useful: There are "global" large cardinals (such as supercompactness) but we have local versions for all (such as: There is an inaccessible $\kappa$ such
that in $V_\kappa$ there is a proper class of supercompact cardinals). For any such (local) cardinal, showing its consistency with $V=\mathsf{HOD}$ is straightforward via the GCH coding
starting at a cardinal sufficiently high. – Andres Caicedo Dec 14 '13 at 2:47
(I really like the lottery, by the way, it really simplifies many arguments.) – Andres Caicedo Dec 14 '13 at 2:52
add comment
|
{"url":"http://mathoverflow.net/questions/151784/v-hod-the-height-of-the-large-cardinal-tree","timestamp":"2014-04-19T07:31:11Z","content_type":null,"content_length":"61017","record_id":"<urn:uuid:47e1c61e-e847-4ff2-8bec-c1a4ae32446e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Oaklyn Algebra Tutor
Find an Oaklyn Algebra Tutor
...As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring precalculus is one of my main focuses. With a physics and engineering
background, I encounter math at and above this level every day. With my experience, I walk the student through ...
9 Subjects: including algebra 1, algebra 2, physics, geometry
...In terms of my own content knowledge, I took seven years of Latin while in school, as well as five years of French. I was a National Merit Commended Scholar in 2010 and scored in the top
percentile on the SAT. I also received scores of 5 on the AP world history, United States history, biology, ...
32 Subjects: including algebra 1, reading, English, grammar
...It's an ever-changing world for teachers, and I'm ready to rise to your challenge. My pace is your pace. I move according to the progress you make after each lesson.
10 Subjects: including algebra 1, geometry, elementary (k-6th), prealgebra
...Each student is an individual and I adapt everything I do to help them learn in the way that is best suited to their needs. I am an SAT prep expert. I figured out the skills and tricks needed
to ace the test and used them to achieve a perfect score on the math and reading sections.
37 Subjects: including algebra 1, algebra 2, reading, geometry
...I have completed my Bachelors of Arts; I majored in Spanish. I have many references: from a Nobel Peace Prize nominee, with whom I worked in Guatemala, to my Professors at Temple; I assure you
that I am an excellent tutor. In my free time, I play the piano and read voraciously.
26 Subjects: including algebra 1, algebra 2, English, Spanish
|
{"url":"http://www.purplemath.com/Oaklyn_Algebra_tutors.php","timestamp":"2014-04-20T01:46:29Z","content_type":null,"content_length":"23702","record_id":"<urn:uuid:02167a71-4698-408a-9558-e907c27f200d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Asymptotic 2p-Moment Stability of Stochastic Linear Systems
Papadimitriou, Costas and Katafygiotis, Lambros S. and Beck, James L. (1999) Asymptotic 2p-Moment Stability of Stochastic Linear Systems. Mechanics Research Communications, 26 (1). pp. 21-29. ISSN
0093-6413. http://resolver.caltech.edu/CaltechAUTHORS:20120829-142838353
Full text is not posted in this repository. Consult Related URLs below.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20120829-142838353
The exponential p-moment stability of dynamical systems governed by a system of linear Itô stochastic differential equations is revisited. It is well-known that the system of equations governing the
evolution of these p-moments is linear and, therefore, available results for asymptotic stability of the linear systems of deterministic first-order homogeneous differential equations are applicable
[2,4,7,10,11]. Specifically, the necessary and sufficient conditions for asymptotic stability of a system of deterministic linear equations is that the real parts of all the eigenvalues of the system
matrix are negative. The search for stability boundaries involves repeated solutions of eigenvalue problems of dimension equal to the dimension of the system of moments. Alternatively, the well-known
Routh-Hurwitz procedure provides conditions for stability in the form of inequalities which involve the system and parametric excitation characteristics. However, these conditions are quite
cumbersome since they involve computation of a large number of determinants of orders up to the order of the system of moments. Therefore, Routh-Hurwitz conditions are practical for obtaining
stability boundaries only for low order systems. For the special case of an n-th order linear Ito stochastic differential equation, Khasminskii [9] has derived simplified conditions for exponential
mean-square (p = 2) stability which mainly depend on the conditions of stability of the first moments, supplemented by only an extra condition involving the evaluation of an n-th order determinant.
In this study, new simplified 2p-moment stability conditions are developed which provide significant advantages for the analytical and numerical estimation of the 2p-moment stability border and
stability regions. Specifically, it is shown that there exists a real eigenvalue of the system of 2p-moments which is an upper bound of the real parts of all other eigenvalues. Thus, the stability of
the system of moments can be examined by computing only the maximum real eigenvalue of the state matrix describing the evolution of the 2p moments. In particular, at the border of stability, one
eigenvalue of the system of 2p-moments is equal to zero. Thus, a necessary condition for the system configuration to correspond to a point on the 2p-moment stability boundary is that the determinant
of the matrix describing the system of 2p-moments be zero. This condition is a generalization of the mean-square stability boundary condition obtained in [8]. It provides a single algebraic
expression for computing all candidate stability boundaries. It is also shown in this study that all candidate 2p-moment stability boundaries can be obtained by computing the real eigenvalues of a
matrix of dimension equal to the dimension describing the system of 2p-moments.
Item Type: Article
Related URLs:
Additional Information: Copyright © 1999 Published by Elsevier Ltd. Received 10 June 1998; accepted for print 16 October 1998.
Record Number: CaltechAUTHORS:20120829-142838353
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20120829-142838353
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 33679
Collection: CaltechAUTHORS
Deposited By: Carmen Nemer-Sirois
Deposited On: 30 Aug 2012 21:11
Last Modified: 30 Aug 2012 21:11
Repository Staff Only: item control page
|
{"url":"http://authors.library.caltech.edu/33679/","timestamp":"2014-04-16T21:53:06Z","content_type":null,"content_length":"26900","record_id":"<urn:uuid:82dc3396-1f56-4875-b6c0-04796bc037b7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geomathematically Oriented Potential Theory
• Presents parallel discussions of three-dimensional Euclidean space and spherical potential theory
• Describes extensive applications to geoscientific problems, including modeling from satellite data
• Provides a balanced combination of rigorous mathematics with the geosciences
• Includes new space-localizing methods for the multiscale analysis of the gravitational and geomagnetic field
As the Earth`s surface deviates from its spherical shape by less than 0.4 percent of its radius and today’s satellite missions collect their gravitational and magnetic data on nearly spherical
orbits, sphere-oriented mathematical methods and tools play important roles in studying the Earth’s gravitational and magnetic field.
Geomathematically Oriented Potential Theory presents the principles of space and surface potential theory involving Euclidean and spherical concepts. The authors offer new insight on how to
mathematically handle gravitation and geomagnetism for the relevant observables and how to solve the resulting potential problems in a systematic, mathematically rigorous framework.
The book begins with notational material and the necessary mathematical background. The authors then build the foundation of potential theory in three-dimensional Euclidean space and its application
to gravitation and geomagnetism. They also discuss surface potential theory on the unit sphere along with corresponding applications.
Focusing on the state of the art, this book breaks new geomathematical grounds in gravitation and geomagnetism. It explores modern sphere-oriented potential theoretic methods as well as classical
space potential theory.
Table of Contents
Three-Dimensional Euclidean Space R^3
Basic Notation
Integral Theorems
Two-Dimensional Sphere Ω
Basic Notation
Integral Theorems
(Scalar) Spherical Harmonics
(Scalar) Circular Harmonics
Vector Spherical Harmonics
Tensor Spherical Harmonics
Basic Concepts
Background Material
Volume Potentials
Surface Potentials
Boundary-Value Problems
Locally and Globally Uniform Approximation
Oblique Derivative Problem
Satellite Problems
Gravimetry Problem
Geomagnetic Background
Mie and Helmholtz Decomposition
Gauss Representation and Uniqueness
Separation of Sources
Ionospheric Current Systems
Basic Concepts
Background Material
Surface Potentials
Curve Potentials
Boundary-Value Problems
Differential Equations for Surface Gradient and Surface Curl Gradient
Locally and Globally Uniform Approximation
Disturbing Potential
Linear Regularization Method
Multiscale Solution
Mie and Helmholtz Decomposition
Higher-Order Regularization Methods
Separation of Sources
Ionospheric Current Systems
Exercises appear at the end of each chapter.
Author Bio(s)
Willi Freeden is a professor in the Geomathematics Group at the University of Kaiserslautern. Dr. Freeden is an editorial board member of seven international journals and was previously the
editor-in-chief of the International Journal on Geomathematics. His research interests include special functions of mathematical (geo)physics, PDEs, constructive approximation, integral transforms,
numerical methods, the use of mathematics in industry, and inverse problems in geophysics, geodesy, and satellite technology.
Christian Gerhards is a visiting postdoc researcher in the Department of Mathematics and Statistics at the University of New South Wales.
|
{"url":"http://www.crcpress.com/product/isbn/9781439895429","timestamp":"2014-04-18T03:06:32Z","content_type":null,"content_length":"94862","record_id":"<urn:uuid:e4e2789d-ecd3-4a26-8a1f-a01f0da4b502>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Concepts Review
Gareth Kafka's students in Physics 15b work together to solve problems in class in order to emphasize process and collaboration.
To participate, students need to have attended lectures. At the beginning of section, the class reviews basic concepts from lecture, then students break themselves into groups of 4 or 5. During the
activity, students use only the handouts given to them at the beginning of section. They are actively told not to use pens/pencils! The handouts include short blurbs of basic lecture concepts,
important formulae from lecture, and practice problems. Each group discusses the problem, answering questions such as "what do we already know?" and "what do we need to show?" They may discuss what
formulae would be useful, but are told not to actually solve the problem. After about 5 minutes, the instructor works the problem on the board, using almost only student input. He only gives tips and
tricks as necessary, providing explanations as they are useful. The class works as many problems as possible in this way, and at the end of section the instructor summarizes the important points.
The activity has a few goals: 1) Give students a good method to solve problems. 2) Help them slow down; make them think before they write! 3) Help students feel comfortable working with each other
and explaining topics to each other.
Gareth recommends starting this method early. Changing the effort that students must put forth themselves in the middle of the semester is notoriously difficult!
See also:
Single Class
Kafka, Gareth
Introductory Electromagnetism
Whole class
Problem Set
|
{"url":"http://ablconnect.harvard.edu/book/physics-concepts-review","timestamp":"2014-04-16T20:21:17Z","content_type":null,"content_length":"56757","record_id":"<urn:uuid:55c0d0e9-21ae-43cd-be65-973daaa7fad3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
what is the equation switched capacitors
what is the equation that describes the relation between the charging current i(vdd) and that switched capacitors ( c1,c2,c3,c4,c5) if we put into consideration that switches are closing in sequence
switch 1,switch 2, . . .switch 5 with a delay time of td
Carl Pugh
There is a problem with your circuit.
When the second capacitor is switched, there is an infinite current through the switch.
That's only for an instant.
When the switch for C
is closed, the voltages across C
& C
become equal to each other virtually instantaneously. Then the pair continue to be charged for that point.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3849968","timestamp":"2014-04-21T07:20:03Z","content_type":null,"content_length":"30269","record_id":"<urn:uuid:925a53f5-9565-4e48-97bb-b25cbd07961d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DIVISION LESSON CONTINUED WITH REMAINDERS
Sometimes, division problems don’t have exact whole numbers as quotients, or answers. For example, 44÷ 9 = ? Inverting this problem to a multiplication problem, we ask “what times 9 = 44? There is
not an exact, even number. We know that 5 x 9 = 45, which is close, but not exactly 44. That is when we have to write the quotient with a remainder. A remainder is what is leftover after we’ve solved
the problem with the closest quotient, and then subtract the product of the quotient and divisor from the dividend. To solve, write the problem this way:
9 ) 44
We know that 5 x 9 = 45, but that is higher than the dividend 44. So we take the next lower number, 4 x 9 = 36. Insert 4 as the quotient, multiply the divisor (9) by this quotient (4), and then
subtract this product from the dividend (44) to find the remainder.
4 R8 (quotient with remainder)
(divisor) 9) 44 (dividend)
- 36 (product of 9 x 4)
8 (remainder)
When you do a division problem like this, make the quotient as large as you can. To check your work, make sure the remainder is less than the divisor. If the remainder is more, you need to try again
with a larger quotient. For example, if the problem is 26 divided by 4, would your quotient be 5 or 6? If you use 5 as the quotient, we know that
5 x 4 = 20, and 26 – 20 = 6 left over. Since the remainder (6) is more than the divisor (4), we know that the quotient can be 1 greater.
So we try again, with a 6 as the quotient: 6 x 4 = 24, leaving 2 leftover. Because the remainder (2) is less than the divisor (4), we know we’ve found the largest possible quotient. (See below)
Wrong: Correct: Try again with higher quotient:
5 R6 6 R2
4 ) 26 4 ) 26
-20 -24
6 2 (less than divisor)
(remainder is higher than the divisor)
Division Activity Worksheet
Printable worksheet - Division Activity. Worksheet opens up in a new window for printing.
Do you have an idea or recommendation for this division and remainders lesson, or a new lesson? Then please leave us a suggestion.
|
{"url":"http://www.moneyinstructor.com/lesson/divisionmoneyremainders.asp","timestamp":"2014-04-20T15:50:41Z","content_type":null,"content_length":"30394","record_id":"<urn:uuid:45e6b04a-104b-482a-8ce8-fb35c03b03fc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Points on a Graph
After reviewing this unit you will be able to:
• Identify the x and y axes.
• Identify the origin. on a graph.
• Identify x and y coordinates of a point.
• Plot points on a graph.
Elements of a Graph
We often use graphs to give us a picture of the relationships between variables. Let's first look at the basic construction of graphs.
• A graph is a visual representation of a relationship between two variables, x and y.
• A graph consists of two axes called the x (horizontal) and y (vertical) axes. These axes correspond to the variables we are relating. In economics we will usually give the axes different names,
such as Price and Quantity.
• The point where the two axes intersect is called the origin. The origin is also identified as the point (0, 0).
Coordinates of Points
A point is the basic relationship displayed on a graph. Each point is defined by a pair of numbers containing two coordinates. A coordinate is one of a set of numbers used to identify the location of
a point on a graph. Each point is identified by both an x and a y coordinate. In this unit you will learn how to find both coordinates for any point. You will also learn the correct notation for
labeling the coordinates of a point. You will first begin by identifying the x-coordinateof a point.
Identifying the x-coordinate
The x-coordinate of a point is the value that tells you how far from the origin the point is on the horizontal, or x-axis. To find the x-coordinate of a point on a graph:
• Draw a straight line from the point directly to the x-axis.
• The number where the line hits the x-axis is the value of the x-coordinate.
At the right is a graph with two points, B and D. In this figure:
• The x-coordinate of point B is 100.
• The x-coordinate of point D is 400.
Identifying the y-coordinate
As we already mentioned, each point is defined by two coordinates, the x and the y coordinate. Now that you know how to find the x-coordinate of a point, you have to be able to find the y-coordinate.
The y-coordinate of a point is the value that tells you how far from the origin the point is on the vertical, or y-axis. To find the y-coordinate of a point on a graph:
• Draw a straight line from the point directly to the y-axis.
• The number where the line hits the axis is the value of the y-coordinate.
Looking back at the graph with our points B and D, we now identify the y-coordinate for each.
• The y-coordinate of point B is 400.
• The y-coordinate of point D is 100.
Notation for Identifying Points
Once you have the coordinates of a point you can use the ordered pair notation for labeling points. The notation is simple. Points are identified by stating their coordinates in the form of (x, y).
Note that the x-coordinate always comes first. For example, in the figure we've been using, we have identified both the x and y coordinate for each of the points B and D.
• The x-coordinate of point B is 100.
• The y-coordinate of point B is 400.
• Coordinates of point B are (100, 400)
• The x-coordinate of point D is 400.
• The y-coordinate of point D is 100.
• Coordinates of point D are (400, 100)
Points On The Axes
If a point is lying on an axis, you do not need to draw lines to determine the coordinates of the point. In the figure below, point A lies on the y-axis and point C lies on the x-axis. When a point
lies on an axis, one of its coordinates must be zero.
• Point A--If you look at how far the point is from the origin along the x-axis, the answer is zero. Therefore, the x-coordinate is zero. Any point that lies on the y-axis has an x-coordinate of
If you move along the y-axis to find the y-coordinate, the point is 400 from the origin. The coordinates of point A are (0, 400)
• Point C--If you look at how far the point is from the origin along the y-axis, the answer is zero. Therefore, the y-coordinate is zero. Any point that lies on the x-axis has a y-coordinate of
If you move along the x-axis to find the x-coordinate, the point is 200 from the origin. The coordinates of point C are (200, 0)
1. Which point is on the y-axis?
2. Which point is labeled (20, 60)?
3. Which point(s) have a y-coordinate of 30?
Answers to Example
1. Which point is on the y-axis? Point A
2. Which point is labeled (20, 60)? Point B
3. Which point(s) have a y-coordinate of 30? Points A & C
[solution to example]
Plotting Points on a Graph
There are times when you are given a point and will need to find its location on a graph. This process is often referred to as plotting a point and uses the same skills as identifying the coordinates
of a point on a graph. The process for plotting a point is shown using an example.
Plot the point (200, 300).
Step One Step Two Step Three
First, draw a line extending out from the x-axis at the x-coordinate of Then, draw a line extending out from the y-axis at the y-coordinate of The point where these two lines intersect is at the
the point. In our example, this is at 200. the point. In our example, this is at 300. point we are plotting, (200, 300).
You are now ready to try a practice problem. If you have already completed the first practice problem for this unit you may wish to try the additional practice.
[practice] [additional practice] [table of contents] [section one summary]
|
{"url":"http://cstl.syr.edu/fipse/GraphA/Unit2/Unit2.html","timestamp":"2014-04-17T13:21:01Z","content_type":null,"content_length":"12562","record_id":"<urn:uuid:2982b49d-3a94-4d23-8ce6-3d4d5bcfcd2e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
et al., “A proposed radix and word-length independent standard for floating point arithmetic
- In SIGPLAN Conference on Programming Language Design and Implementation , 1999
"... Some modern superscalar microprocessors provide only imprecise exceptions. That is, they do not guarantee to report the same exception that would be encountered by a straightforward sequential
execution of the program. In exchange, they offer increased performance or decreased chip area (which amoun ..."
Cited by 52 (6 self)
Add to MetaCart
Some modern superscalar microprocessors provide only imprecise exceptions. That is, they do not guarantee to report the same exception that would be encountered by a straightforward sequential
execution of the program. In exchange, they offer increased performance or decreased chip area (which amount to much the same thing). This performance/precision tradeoff has not so far been much
explored at the programming language level. In this paper we propose a design for imprecise exceptions in the lazy functional programming language Haskell. We discuss several designs, and conclude
that imprecision is essential if the language is still to enjoy its current rich algebra of transformations. We sketch a precise semantics for the language extended with exceptions. The paper shows
how to extend Haskell with exceptions without crippling the language or its compilers. We do not yet have enough experience of using the new mechanism to know whether it strikes an appropriate
balance between expressiveness and pwrformance.
- IN PROCEEDINGS OF THE 14TH SYMPOSIUM ON COMPUTER ARITHMETIC, I. KOREN AND P. KORNERUP (EDS , 1999
"... In modern computers, the floating point unit is the part of the processor delivering the highest computing power and getting most attention from the design team. Performance of any multiple
precision application will be dramatically enhanced by adequate use of floating point expansions. We present i ..."
Cited by 5 (1 self)
Add to MetaCart
In modern computers, the floating point unit is the part of the processor delivering the highest computing power and getting most attention from the design team. Performance of any multiple precision
application will be dramatically enhanced by adequate use of floating point expansions. We present in this work three multiplication algorithms faster and more integrated than the stepwise algorithm
proposed earlier. We have tested these new algorithms on an application that computes the determinant of a matrix. In the absence of overflow or underflow, the process is error free and possibly more
efficient than its integer based counterpart.
- In Proceedings of the 21st Annual ACM Symposium on Applied Computing , 2006
"... We provide sufficient conditions that formally guarantee that the floating-point computation of a polynomial evaluation is faithful. To this end, we develop a formalization of floatingpoint
numbers and rounding modes in the Program Verification System (PVS). Our work is based on a well-known formali ..."
Cited by 3 (1 self)
Add to MetaCart
We provide sufficient conditions that formally guarantee that the floating-point computation of a polynomial evaluation is faithful. To this end, we develop a formalization of floatingpoint numbers
and rounding modes in the Program Verification System (PVS). Our work is based on a well-known formalization of floating-point arithmetic in the proof assistant Coq, where polynomial evaluation has
been already studied. However, thanks to the powerful proof automation provided by PVS, the sufficient conditions proposed in our work are more general than the original ones.
, 1991
"... Regardless of how accurately a computer performs floating-point operations, if the data to operate on must be initially converted from the decimal-based representation used by humans into the
internal representation used by the machine, then errors in that conversion will irrevocably pollute the res ..."
Cited by 2 (0 self)
Add to MetaCart
Regardless of how accurately a computer performs floating-point operations, if the data to operate on must be initially converted from the decimal-based representation used by humans into the
internal representation used by the machine, then errors in that conversion will irrevocably pollute the results of subsequent
"... Introduction There exist many denitions of the div and mod functions in computer science literature and programming languages. Boute (Boute, 1992) describes most of these and discusses their
mathematical properties in depth. We shall therefore only briey review the most common denitions and the rare ..."
Add to MetaCart
Introduction There exist many denitions of the div and mod functions in computer science literature and programming languages. Boute (Boute, 1992) describes most of these and discusses their
mathematical properties in depth. We shall therefore only briey review the most common denitions and the rare, but mathematically elegant, Euclidean division. We also give an algorithm for the
Euclidean div and mod functions and prove it correct with respect to Euclid's theorem. 1.1 Common denitions Most common denitions are based on the following mathematical denition. For any two real
numbers D (dividend) and d (divisor) with d 6= 0, there exists a pair of numbers q (quotient) and r (remainder) that satisfy the following basic conditions of division: (1) q 2 Z (the quot
- Proceedings of the 16th IEEE Symposium on Computer Arithmetic , 2003
"... Decimal arithmetic is the norm in human calculations, and human-centric applications must use a decimal floating-point arithmetic to achieve the same results. ..."
Add to MetaCart
Decimal arithmetic is the norm in human calculations, and human-centric applications must use a decimal floating-point arithmetic to achieve the same results.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1000573","timestamp":"2014-04-20T01:55:26Z","content_type":null,"content_length":"25353","record_id":"<urn:uuid:bf2a2414-daa8-408d-b135-7c6debe5c3f9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: large cardinals and P vs NP (reply to Cook)
JoeShipman@aol.com JoeShipman at aol.com
Thu Aug 2 14:14:21 EDT 2001
If C is a consistent large cardinal axiom and C+ZFC implies arithmetical statement A, and A is pi^0_1, then A is true. But I think C could be consistent and not 1-consistent, and have false pi^0_2 consequences. I can't answer this more precisely without a formal definition of "large cardinal axiom", though Harvey could probably give you an example.
-- Joe Shipman
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2001-August/004997.html","timestamp":"2014-04-18T10:36:17Z","content_type":null,"content_length":"2781","record_id":"<urn:uuid:8ba8a08b-56ec-4a49-b8b3-2f878d3ed5e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The average case analysis of algorithms: multivariate asymptotics and limit distributions, Rapport de recherche no
Results 1 - 10 of 12
, 1999
"... We present a complete analysis of the statistics of number of occurrences of a regular expression pattern in a random text. This covers "motifs" widely used in computational biology. Our
approach is based on: (i) a constructive approach to classical results in theoretical computer science (automata ..."
Cited by 48 (4 self)
Add to MetaCart
We present a complete analysis of the statistics of number of occurrences of a regular expression pattern in a random text. This covers "motifs" widely used in computational biology. Our approach is
based on: (i) a constructive approach to classical results in theoretical computer science (automata and formal language theory), in particular, the rationality of generating functions of regular
languages; (ii) analytic combinatorics that is used for deriving asymptotic properties from generating functions; (iii) computer algebra for determining generating functions explicitly, analysing
generating functions and extracting coefficients efficiently. We provide constructions for overlapping or non-overlapping matches of a regular expression. A companion implementation produces
multivariate generating functions for the statistics under study. A fast computation of Taylor coefficients of the generating functions then yields exact values of the moments with typical
application to random t...
- IEEE Trans. Inform. Theory , 2004
"... We show how asymptotic estimates of powers of polynomials with non-negative coefficients can be used in the analysis of low-density parity-check (LDPC) codes. In particular we show how these
estimates can be used to derive the asymptotic distance spectrum of both regular and irregular LDPC code ense ..."
Cited by 41 (2 self)
Add to MetaCart
We show how asymptotic estimates of powers of polynomials with non-negative coefficients can be used in the analysis of low-density parity-check (LDPC) codes. In particular we show how these
estimates can be used to derive the asymptotic distance spectrum of both regular and irregular LDPC code ensembles. We then consider the binary erasure channel (BEC). Using these estimates we derive
lower bounds on the error exponent, under iterative decoding, of LDPC codes used over the BEC. Both regular and irregular code structures are considered. These bounds are compared to the
corresponding bounds when optimal (maximum likelihood) decoding is applied.
- J. COMB. THEORY, SERIES A , 1999
"... Given a multivariate generating function F (z1 ; : : : ; zd ) = P ar 1 ;:::;r d z r 1 1 z r d d , we determine asymptotics for the coecients. Our approach is to use Cauchy's integral formula
near singular points of F , resulting in a tractable oscillating integral. This paper treats the c ..."
Cited by 10 (3 self)
Add to MetaCart
Given a multivariate generating function F (z1 ; : : : ; zd ) = P ar 1 ;:::;r d z r 1 1 z r d d , we determine asymptotics for the coecients. Our approach is to use Cauchy's integral formula near
singular points of F , resulting in a tractable oscillating integral. This paper treats the case where the singular point of F is a smooth point of a surface of poles. Companion papers G treat
singular points of F where the local geometry is more complicated, and for which other methods of analysis are not known.
- JOURNAL OF COMPUTATIONAL BIOLOGY , 2001
"... The secondary structure of a RNA molecule is of great importance and possesses inuence, e.g. on the interaction of tRNA molecules with proteins or on the stabilization of mRNA molecules. The
classication of secondary structures by means of their order proved useful with respect to numerous applicati ..."
Cited by 10 (3 self)
Add to MetaCart
The secondary structure of a RNA molecule is of great importance and possesses inuence, e.g. on the interaction of tRNA molecules with proteins or on the stabilization of mRNA molecules. The
classication of secondary structures by means of their order proved useful with respect to numerous applications. In 1978 Waterman, who gave the rst precise formal framework for the topic, suggested
to determine the number a n;p of secondary structures of size n and given order p. Since then, no satisfactory result has been found. Based on an observation due to Viennot et al. we will derive
generating functions for the secondary structures of order p from generating functions for binary tree structures with Horton-Strahler number p. These generating functions enable us to compute a
precise asymptotic equivalent for a n;p . Furthermore, we will determine the related number of structures when the number of unpaired bases shows up as an additional parameter. Our approach proves to
be general enough to compute the average order of a secondary structure together with all the r-th moments and to enumerate substructures such as hairpins or bulges in dependence on the order of the
secondary structures considered.
, 1997
"... We consider here the probabilistic analysis of the number of descendants and the number of ascendants of a given internal node in a random search tree. The performance of several important
algorithms on search trees is closely related to these quantities. For instance, the cost of a successful searc ..."
Cited by 7 (2 self)
Add to MetaCart
We consider here the probabilistic analysis of the number of descendants and the number of ascendants of a given internal node in a random search tree. The performance of several important algorithms
on search trees is closely related to these quantities. For instance, the cost of a successful search is proportional to the number of ascendants of the sought element. On the other hand, the
probabilistic behavior of the number of descendants is relevant for the analysis of paged data structures and for the analysis of the performance of quicksort, when recursive calls are not made on
small subfiles. We also consider the number of ascendants and descendants of a random node in a random search tree, i.e., the grand averages of the quantities mentioned above. We address these
questions for standard binary search trees and for locally balanced search trees. These search trees were introduced by Poblete and Munro and are binary search trees such that each subtree of size 3
is balanced; in oth...
- Proceedings 21st S.T.A.C.S., V. Diekert and M. Habib editors, Lecture Notes in Computer Science , 2004
"... Abstract Motivated by problems of pattern statistics, we study the limit distribution of the random variable counting the number of occurrences of the symbol ¥ in a word of length ¦ chosen at
random in § ¥©¨����� � , according to a probability distribution defined via a finite automaton equipped wit ..."
Cited by 3 (2 self)
Add to MetaCart
Abstract Motivated by problems of pattern statistics, we study the limit distribution of the random variable counting the number of occurrences of the symbol ¥ in a word of length ¦ chosen at random
in § ¥©¨����� � , according to a probability distribution defined via a finite automaton equipped with positive real weights. We determine the local limit distribution of such a quantity under the
hypothesis that the transition matrix naturally associated with the finite automaton is primitive. Our probabilistic model extends the Markovian models traditionally used in the literature on pattern
statistics. This result is obtained by introducing a notion of symbol-periodicity for irreducible matrices whose entries are polynomials in one variable over an arbitrary positive semiring. This
notion and the related results we prove are of interest in their own right, since they extend classical properties of the Perron–Frobenius Theory for non-negative real matrices.
- J. DAIRY SCI , 2001
"... The Horton-Strahler number naturally arose from problems in various fields, e.g. geology, molecular biology and computer science. Consequently, detailed investigations of related parameters for
different classes of binary tree structures are of interest. This paper shows one possibility of how to pe ..."
Cited by 1 (0 self)
Add to MetaCart
The Horton-Strahler number naturally arose from problems in various fields, e.g. geology, molecular biology and computer science. Consequently, detailed investigations of related parameters for
different classes of binary tree structures are of interest. This paper shows one possibility of how to perform a mathematical analysis for parameters related to the Horton-Strahler number in a
unified way such that only a single analysis is needed to obtain results for many different classes of trees. The method is explained by the examples of the expected Horton-Strahler number and the
related r-th moments, the average number of critical nodes and the expected distance between critical nodes.
, 1999
"... We survey recent work on the enumeration of non-crossing configurations on the set of vertices of a convex polygon, such as triangulations, trees, and forests. Exact formulae and limit laws are
determined for several parameters of interest. In the second part of the talk we present results on the en ..."
Add to MetaCart
We survey recent work on the enumeration of non-crossing configurations on the set of vertices of a convex polygon, such as triangulations, trees, and forests. Exact formulae and limit laws are
determined for several parameters of interest. In the second part of the talk we present results on the enumeration of chord diagrams (pairings of 2n vertices of a convex polygon by means of n
disjoint pairs). We present limit laws for the number of components, the size of the largest component and the number of crossings. The use of generating functions and of a variation of Levy's
continuity theorem for characteristic functions enable us to establish that most of the limit laws presented here are Gaussian. (Joint work by Marc Noy with Philippe Flajolet and others.) 1. Analytic
Combinatorics of Non-crossing Configurations [3] 1.1. Connected Graphs and General Graphs. Let \Pi n = fv 1 ; : : : ; v n g be a fixed set of points in the plane, conventionally ordered
counter-clockwise, that are verti...
"... Let L be an algebraic language on an alphabet X = fx 1 , x 2 , ..., x k g, and n a positive integer. We consider the problem of generating at random words of L with respect to a given
distribution of the number of occurrences of the letters. We consider two alternatives of the problem. In the first ..."
Add to MetaCart
Let L be an algebraic language on an alphabet X = fx 1 , x 2 , ..., x k g, and n a positive integer. We consider the problem of generating at random words of L with respect to a given distribution of
the number of occurrences of the letters. We consider two alternatives of the problem. In the first one, a vector of natural numbers (n 1 , n 2 , ..., n k ) such that n 1 +n 2 + +n k = n is given,
and the words must be generated uniformly among the set of words of L which contain exactly n i letters x i (1 i k). The second alternative consists, given v = (v 1 , ..., v k ) a vector of positive
real numbers such that v 1 + + v k = 1, to generate at random words among the whole set of words of L of length n, in such a way that the expected number of occurrences of any letter x i equals nv i
(1 i k), and two words having the same distribution of letters have the same probability to be generated. For this purpose, we design and study two alternatives of the recursive method which is
classically employed for the uniform generation of combinatorial structures. This type of "controlled" non-uniform generation is of great interest in the statistical study of genomic sequences.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1067509","timestamp":"2014-04-24T20:31:34Z","content_type":null,"content_length":"38375","record_id":"<urn:uuid:f4156b60-de7c-41c9-8756-2165cfe9848a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nicolas Gisin
John Stewart Bell Prize for Research on Fundamental Issues in Quantum Mechanics and Their Applications
In 2009, the first biennial John Stewart Bell Prize for Research on Fundamental Issues in Quantum Mechanics and their Applications was awarded to Prof. Nicolas Gisin for his theoretical and
experimental work on foundations and applications of quantum physics, in particular: quantum non-locality, quantum cryptography and quantum teleportation. With sources of single and entangled
photons at telecommunications wavelength, he has implemented these quantum effects on a commercial optical fiber network in the 10-100 km range.
Nicolas Gisin, Professor of Physics at the Université de Genève, is a true visionary and a leader among his peers. He was among the first to recognize the importance of Bell’s pioneering work,
and has throughout his career made a series of remarkable contributions, both theoretical and experimental, to the foundations of quantum mechanics and to their application to practical quantum
cryptography systems. His work on the latter, for instance, was highlighted in the February 2003 issue of MIT’s Technology Review as one of the “10 Emerging Technologies that will Change the
We award the inaugural John Stewart Bell Prize for Research on Fundamental Issues in Quantum Mechanics and their Applications to Prof. Gisin in recognition of two of his recent contributions –
it should come as no surprise that they span theory as well as experiment. One of the remarkable features of Bell’s Inequalities is that they do not rely on the assumption of any particular
physical theory: they allow one to test local realism experimentally, without assuming quantum mechanics to be correct. On the other hand, the security of quantum cryptography relies on our
knowledge of quantum mechanics – and usually, on our confidence that the system we are using has in fact been constructed the way we believe it has. In “From Bell’s theorem to secure quantum key
distribution”, Physical Review Letters, 97, 120405 (2006), Gisin (with co-authors Acin and Masanes) follows in Bell’s footsteps and frees us from these assumptions, proving that a quantum
cryptographic protocol can be shown to be information- theoretically secure based purely on observations, without any prior guarantees about the system or even the correctness of quantum
At the same time, building on his longstanding efforts to bring quantum mechanics into the practical realm by developing sources of single and entangled photons in the telecommunications band
and implementing quantum communications protocols in the 10-100 km range, Gisin has been able to extend fundamental tests of nonlocality well into the regime of spacelike separation. In “Testing
the speed of ‘spooky action at a distance’“, Nature 454, 861 (2008), and “Spacelike separation in a Bell test assuming gravitationally induced collapses”, Physical Review Letters 100, 220404
(2008), Gisin (with co-authors Baas, Branciard, van Houwelingen, Salart, and Zbinden) has carried out experiments over a distance of 18 kilometers which further underscore the strength of
quantum correlations. In the first, they prove any hypothetical Einsteinian “spukhafte Fernwirkungen” (spooky actions at a distance) would need to travel at least 10,000 times greater than the
speed of light in any reference frame satisfying certain reasonable requirements in order to explain their observations. In the second, they have detection events culminate in the displacement
of macroscopic masses, sufficiently well-separated in space that gravitationally-induced collapse theories would suggest that collapse should occur before the two detectors could communicate at
the speed of light; if one believes that no measurement is complete until such a macroscopic change has taken place, then this is the first violation of Bell inequalities observed in the context
of true spacelike separation between such “complete measurements”.
These latest results are major steps forward in the effort to emphasize non locality in Bell-inequality violation tests and in the goal of making truly secure long-distance quantum cryptography
a reality. It is particularly fitting that, in honoring their author, we are also able to honor a researcher whose career has been so strongly associated with both fundamental and practical
implications of Bell’s work.
Prize acceptance video
|
{"url":"http://cqiqc.physics.utoronto.ca/bell_prize/Gisin.html","timestamp":"2014-04-18T23:15:54Z","content_type":null,"content_length":"20337","record_id":"<urn:uuid:3f803f58-22e5-4993-9359-806d607f82e4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphing binomials from trinomials.
1. 49138
Graphing binomials from trinomials.
Using graphing to check your answers is helpful. When you factor a trinomial into two binomials, each binomial represents a linear relationship. If you plot the two binomials (which are just lines)
on a graph, what do they have in common with a plot of the trinomial itself? More important than that, how can this information be used to check your answer, when factoring a trinomial?
This explains the relationship between teh graphs of binomials and trinomials.
|
{"url":"https://brainmass.com/math/graphs-and-functions/49138","timestamp":"2014-04-16T13:09:58Z","content_type":null,"content_length":"27611","record_id":"<urn:uuid:b4b35357-c71b-4220-9625-6953f73496c8>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3rd Grade Math: Distributive Property Help
Many 3rd grade math students find distributive property difficult. They feel overwhelmed with distributive property homework, tests and projects. And it is not always easy to find distributive
property tutor who is both good and affordable. Now finding
distributive property help
is easy. For your distributive property homework, distributive property tests, distributive property projects, and distributive property tutoring needs, TuLyn is a one-stop solution. You can master
hundreds of math topics by using TuLyn.
At TuLyn, we have over 2000 math video tutorial clips including
distributive property videos
distributive property practice word problems
distributive property questions and answers
, and
distributive property worksheets
distributive property videos
replace text-based tutorials in 3rd grade math books and give you better, step-by-step explanations of distributive property. Watch each video repeatedly until you understand how to approach
distributive property problems and how to solve them.
• Tons of video tutorials on distributive property make it easy for you to better understand the concept.
• Tons of word problems on distributive property give you all the practice you need.
• Tons of printable worksheets on distributive property let you practice what you have learned in your 3rd grade math class by watching the video tutorials.
How to do better on distributive property: TuLyn makes distributive property easy for 3rd grade math students.
Do you need help with Distributive Property of Multiplication over Addition in your 3rd Grade Math class?
3rd Grade: Distributive Property Videos
Simplifying Using Distributive Property With Negative Coefficients Video Clip Length:
1 minute 59 seconds
Video Clip Views:
This math video tutorial clip shows how to simplify algebraic expressions by following the rules of order of operations, distributive property and combining like terms.
Distributive property helps us get rid of the paranthesis first and then we combine the like terms to reach the solution.
distributive property video clips for 3rd grade math students.
3rd Grade: Distributive Property Word Problems
The attendance at a ball game was 400 people
The attendance at a ball game was 400 people. Student tickets cost $2 and adults tickets cost $3. If $1050 was collected in ticket ...
distributive property
You have six gummy worms and eleven sticks of gum. Your friend is going to double the amount of the candies because you helped her out last Friday. You want to add all of the candies to find out how
many you ...
distributive property homework help word problems for 3rd grade math students.
Third Grade: Distributive Property Practice Questions
distributive property homework help questions for 3rd grade math students.
Can I have some examples of distributive property, please. Thank you.
December 14, 2008, 7:11 pm
I could use a video clip on the most basic steps of the distributive property.
September 22, 2009, 3:42 pm
How Others Use Our Site
To receive a good review of 2nd grade math before trying 3rd grade math.
I am entering 3rd grade , like to be prepared. trying this site for 1 st time.
I teache 3rd grade math and also tutor Algebra 1.
I need some help explaining decimals to our 3rd grade daughter.
|
{"url":"http://www.tulyn.com/3rd-grade-math/distributive-property","timestamp":"2014-04-21T12:43:39Z","content_type":null,"content_length":"15623","record_id":"<urn:uuid:be43b37b-514d-4b9e-b188-216062e457b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
please help me fast!which lecture series by prof.Walter Lewin should I watch?I mean there is fall 1999 and fall 2010...which series includes every resources(video lecture,problems,notes)?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50d307c8e4b052fefd1db8dd","timestamp":"2014-04-19T19:42:35Z","content_type":null,"content_length":"74433","record_id":"<urn:uuid:8e8b072f-5146-4d45-9ad4-456448e5d4fb>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beware, everybody: things are about to get super nerdy here.
I thought I’d stash this in the Off-Topic area, so it doesn’t get swallowed by a flood of angry threads about the team. I figure this is a topic that only a few of us will be interested in.
Some of you may remember that in the offseason I did a little bit of spreadsheet work on xBABIP (expected batting average on balls in play) for hitters. By the way, I have a new controversy-free
formula there, but that’s another story. Just recently, I was inspired to look at pitchers, regarding whether they have any influence on their BABIP, contrary to what Voros McCracken might say. I’m
not fully apprised of the current state of research in this area, so I would appreciate any input, though.
So in a nutshell, what I’ve found is that, yes, there are some factors that pitchers influence that significantly affect their BABIP. Much like with batters, the two main factors are line drive
percentage and the frequency of infield popups. This makes perfect sense to me – an infield popup is an automatic out, basically, and a line drive is going to have much greater odds of not being
caught. Pitchers seem to have quite a bit more control over how many popups they get than how many line drives they allow, much like batters, but they definitely influence both significantly.
A very simple formula that does a pretty good job of estimating a pitcher’s BABIP against is:
0.4*LD% - 0.6*FB%*IFFB% + 0.237 = xBABIP
You can find those stats at Fangraphs, by the way.
LD% = Line Drive percentage
FB% = Fly Ball percentage
IFFB% = Infield Fly Ball percentage (which is the percentage of fly balls that are hit to the infield; when multiplied by FB%, it gives the total percentage of balls hit that are infield popups)
Not surprisingly, this formula works better for a pitcher’s career BABIP, based on career numbers: 0.637 correlation, with an RMSE of 0.00968 (the average difference between the estimated and actual
BABIP). That’s for qualified pitchers from 2002-2011, and it’s pretty dang good, if you ask me, considering there are basically only 2 factors, and this was supposed to be something pretty random.
For single seasons of qualified pitchers over the same span, it’s a correlation of 0.441 and an RMSE of 0.01581.
Of course, these are just two of the most important defense-independent factors. The impact of defense is undoubtedly pretty important, and the park has some influence as well.
Refresher on correlations: 1 means a perfect correlation (when one factor goes up, the other follows in a linear fashion), negative one means a perfect negative correlation (when one goes up, the
other goes down), and 0 means no correlation (no apparent connection between the factors). And just because two factors are correlated doesn’t mean one causes the other. I do sort of imply some
causal relationships below, when they make sense, though.
You may be wondering what is correlated with a pitcher’s LD% and with how many popups they induce... here you go:
Strongest Correlations to LD%:
KN%: -0.658 (percentage of knuckleballs thrown; more knucklers equals fewer line drives allowed)
O-Swing%: -0.483 (getting batters to chase pitches outside the zone leads to fewer liners hit)
XX%: 0.3465 (percentage of mystery pitches thrown... these probably are typically sliders that don’t slide much, curves that don’t curve, etc.; throwing more iffy pitches leads to more liners
Zone%: 0.3455 (pitchers who throw in the zone more get hit harder... the price they pay for allowing fewer walks)
GB%: -0.273 (groundball pitchers allow fewer liners; the same can’t be said about fly ball pitchers)
Now, some of the strongest correlations to infield popups:
GB%: -0.871 (percentage of grounders... not a shocker that ground ball pitchers get a lot fewer popups...)
Z-Contact%: -0.529 (how often hitters make contact with pitches thrown in the zone... less contact equals more popups)
SwStr%: 0.356 (percent of pitches swung on and missed; more misses equals more popups)
HR/FB: -.302 (home runs per fly ball hit; fewer homers are tied to more popups... hitters aren’t squaring the ball as well, or swinging with as much authority)
Swing%: 0.293 (when hitters are swinging more frequently, they’re popping it up more... overconfidence, swinging defensively, or what?)
Z-Swing%: 0.277 (specifically, swinging at pitches in the zone is what’s most connected; unlike with liners, O-Swing% has pretty much no correlation)
KN%: 0.208 (knuckleballs are good)
Zone%: 0.192 (pitching in the zone more leads to more popups)
There’s very little connection between LD% and popups, but somewhat surprisingly, pitchers who give up more liners also get more popups (only a 0.080 correlation, though). That probably has something
to do with the popup pitchers being the more aggressive type, while the line drive preventers are more the nibbling type.
When I came up with the formula, by the way, it was right before Weaver had his awful game. His BABIP was standing at 0.225, whereas his xBABIP per my formula was 0.294. This year, he had been giving
up more liners and getting fewer popups than usual, so that 0.225 looked very fluky (though having Trout and/or Bourjos in the OF helps him a lot, for sure). It’s now up to .233, so more correction
could be on the way. Weaver, however, in his career has a 0.276 BABIP (one of the best), and it’s no fluke, because his xBABIP according to the formula is 0.271. The average for qualified pitchers
was 0.292, by the way.
So, any questions, comments or criticisms?
|
{"url":"http://www.forums.mlb.com/n/pfx/forum.aspx?nav=messages&webtag=ml-angels&tid=125729","timestamp":"2014-04-18T00:14:26Z","content_type":null,"content_length":"134547","record_id":"<urn:uuid:80c98cda-3aa5-4169-95ab-615f43b1820e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
For each of the following, describe the sample space, S. a) A mother has a child; it is either a boy or a girl. b) A cancer patient is treated with a new type of treatment; the response variable is
the length of time the patient lives after the treatment. c) An AP Statistics student receives a grade at the end of the semester. d) A basketball player shoots four free throws. The number of makes
is recorded. e) A basketball player shoots four free throws. The sequence of makes and misses is recorded. f) Toss a coin four times and record the results.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/510c22c7e4b0d9aa3c472890","timestamp":"2014-04-20T16:28:41Z","content_type":null,"content_length":"44491","record_id":"<urn:uuid:323ad44e-6a18-4f9e-9362-0f2c03fa9467>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the general term!!!
Re: Find the general term!!!
Still getting ahead of yourself but that is correct. The recurrence is the meat of this problem and the
thing you go to first. This will all hopefully be clearer when the bafflers thread is not so baffling.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
Why am I getting ahead of myself?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
A better question is what is over there at the bafflers thread.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
Let me see.
Do you mean the discussion about your programming and Base 3 or about the problem you posed?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
The discussion we are just having there.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
What does your programming have to do with this problem?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Yoda wrote:
He must learn control. Will he finish what he starts?
We are only 1 / 3 of the way throught the point of the whole question over there.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
Are you trying to lure me into solving the problem experimentally?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Is there any other way to do any type of mathematics other than cardboard problems that lace textbooks?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
Of course there is. But you cannot make me do it experimenrally. I would have to agree with that.
But what do you have in mind? How is the discussion in the other thread related to this?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
It is not done over there.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
What do you expect to happen when it gets done?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
You will have the answer to a question you asked gAr and I.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
How in the world do you plan on doing that if I just keep on asking random questions?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Because we are getting closer to the point of my story.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
So,you will keep talking about your story even if I ask random questions?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Yes, just because your patience is wavering that does not mean that mine is. I do remember saying that you did not like stories because you are on the impatient side.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
I ask random questions because they just pop into my mind. If I do not ask them right away,I will forget them.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
So, minutiae fills your mind?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
Yes,but not completely.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Then move that to the side and post the other stuff. The stuff that is not random minutiae.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
I post both. Mostly the non-minutiae stuff.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Very good! Then do more of that.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the general term!!!
I do.
How is a^2-b^2<>(a+b)(a-b) ?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Find the general term!!!
Those are not the same, true.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=213679","timestamp":"2014-04-18T00:40:15Z","content_type":null,"content_length":"38052","record_id":"<urn:uuid:9e64257f-b95f-48b1-bc68-bca1e5871344>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tutorial 21
Tutorial 21 - Spot Light
The spot light is the third and final light type that we will review (at least for a little while...). It is more complex than directional light and point light and essentially borrows stuff from
both. The spot light has an origin position and is under the effect of attenuation as distance from target grows (as point light) and its light is pointed at a specific direction (as directional
light). The spot light adds the unique attribute of shedding light only within a limited cone that grows wider as light moves further away from its origin. A good example for a spot light is the
flashlight. Spot lights are very useful when the character in the game you are developing is exploring an underground dungeon or escaping from prison.
We already know all the tools to develop the spot light. The missing piece is the cone effect of this light type. Take a look at the following picture:
The spot light direction is defined as the black arrow that points straight down. We want our light to have an effect only on the area limited within the two red lines. The dot product operation
again comes to the rescue. We can define the cone of light as the angle between each of the red lines and the light direction (i.e. half the angle between the red lines). We can take the cosine 'C'
of that angle and perform a dot product between the light direction 'L' and the vector 'V' from the light origin to the pixel. If the result of the dot product is larger than 'C' (remember that a
cosine result grows larger as the angle grows smaller), then the angle between 'L' and 'V' is smaller than the angle between 'L' and the two red lines that define the spot light cone. In that case we
want the pixel to receive light. If the angle is larger the pixel does not receive any light from the spot light. In the example above a dot product between 'L' and 'V' will yield a result which is
smaller than the dot product between 'L' and either one of the red lines (it is quite obvious that the angle between 'L' and 'V' is larger than the angle between 'L' and the red lines). Therefore,
the pixel is outside the cone of light and is not illuminated by the spot light.
If we go with this "receive/doesn't receive light" approach we will end up with a highly artificial spot light that has a very noticeable edge between its lit and dark areas. It will look like a
perfect circle within total darkness (assuming no other light sources). A more realistic looking spot light is one whose light gradually decreases towards the edges of the circle. We can use the dot
product that we calculated (in order to determine whether a pixel is lit or not) as a factor. We already know that the dot product will be 1 (i.e. maximum light) when the vectors 'L' and 'V' are
equal. But now we run into some nasty behavior of the cosine function. The spot light angle should not be too large or else the light will be too widespread and we will loose the appearance of a spot
light. For example, let's set the angle at 20 degrees. The cosine of 20 degrees is 0.939, but the range [0.939, 1.0] is too small to serve as a factor. There is not enough room there to interpolate
values that the eye will be able to notice. The range [0, 1] will provide much better results.
The approach that we will use is to map the smaller range defined by the spot light angle into the larger range of [0, 1]. Here's how we do it:
The principle is very simple - calculate the ratio between the smaller range and the larger range and scale the specific range you want to map by that ratio.
Code Walkthru
struct SpotLight : public PointLight
  Vector3f Direction;
  float Cutoff;
  SpotLight()
  {
    Direction = Vector3f(0.0f, 0.0f, 0.0f);
    Cutoff = 0.0f;
  }
The structure that defines the spot light is derived from PointLight and adds the two attributes that differentiate it from the point light: a direction vector and cutoff value. The cutoff value
represents the maximum angle between the light direction and the light to pixel vector for pixels that are under the influence of the spot light. The spot light has no effect beyond the cutoff value.
We've also added to the LightingTechnique class an array of locations for the shader (not quoted here). This array allows us to access the spot light array in the shader.
struct SpotLight
  struct PointLight Base;
  vec3 Direction;
  float Cutoff;
uniform int gNumSpotLights;
uniform SpotLight gSpotLights[MAX_SPOT_LIGHTS];
There is a similar structure for the spot light type in GLSL. Since we cannot use inheritance here as in the C++ code we use the PointLight structure as a member and add the new attributes next to
it. The important difference here is that in the C++ code the cutoff value is the angle itself while in the shader it is the cosine of that angle. The shader only cares about the cosine so it is more
efficient to calculate it once and not for every pixel. We also define an array of spot lights and use a counter called 'gNumSpotLights' to allow the application to define the number of spot lights
that are actually used.
vec4 CalcPointLight(struct PointLight l, vec3 Normal)
  vec3 LightDirection = WorldPos0 - l.Position;
  float Distance = length(LightDirection);
  LightDirection = normalize(LightDirection);
  vec4 Color = CalcLightInternal(l.Base, LightDirection, Normal);
  float Attenuation = l.Atten.Constant +
    l.Atten.Linear * Distance +
    l.Atten.Exp * Distance * Distance;
  return Color / Attenuation;
The point light function has gone through a minor modification - it now takes a PointLight structure as a parameter, rather than access the global array directly. This makes it simpler to share it
with spot lights. Other than that, there is no change here.
vec4 CalcSpotLight(struct SpotLight l, vec3 Normal)
  vec3 LightToPixel = normalize(WorldPos0 - l.Base.Position);
  float SpotFactor = dot(LightToPixel, l.Direction);
  if (SpotFactor > l.Cutoff) {
    vec4 Color = CalcPointLight(l.Base, Normal);
    return Color * (1.0 - (1.0 - SpotFactor) * 1.0/(1.0 - l.Cutoff));
  }
  else {
    return vec4(0,0,0,0);
  }
This is where we calculate the spot light effect. We start by taking the vector from the light origin to the pixel. As is often the case, we normalize it to get it ready for the dot product ahead. We
do a dot product between this vector and the light direction (which has already been normalized by the application) and get the cosine of the angle between them. We then compare it to the light's
cutoff value. This is the cosine of the angle between the light direction and the vector that defines its circle of influence. If the cosine is smaller it means the angle between the light direction
and the light to pixel vector places the pixel outside the circle of influence. In this case the contribution of this spot light is zero. This will limit the spot light to a small or large circle,
depending on the cutoff value. If it is the other way around we calculate the base color as if the light is a point light. Then we take the dot product result that we've just calculated
('SpotFactor') and plug it into the forumla described above. This provides the factor that will linearly interpolate 'SpotFactor' between 0 and 1. We multiply it by the point light color and receive
the final spot light color.
for (int i = 0 ; i < gNumSpotLights ; i++) {
  TotalLight += CalcSpotLight(gSpotLights[i], Normal);
In a similar fashion to point lights we have a loop in the main function that accumulates the contribution of all spot lights into the final pixel color.
void LightingTechnique::SetSpotLights(unsigned int NumLights, const SpotLight* pLights)
  glUniform1i(m_numSpotLightsLocation, NumLights);
  for (unsigned int i = 0 ; i < NumLights ; i++) {
    glUniform3f(m_spotLightsLocation[i].Color, pLights[i].Color.x, pLights[i].Color.y, pLights[i].Color.z);
    glUniform1f(m_spotLightsLocation[i].AmbientIntensity, pLights[i].AmbientIntensity);
    glUniform1f(m_spotLightsLocation[i].DiffuseIntensity, pLights[i].DiffuseIntensity);
    glUniform3f(m_spotLightsLocation[i].Position, pLights[i].Position.x, pLights[i].Position.y, pLights[i].Position.z);
    Vector3f Direction = pLights[i].Direction;
    Direction.Normalize();
    glUniform3f(m_spotLightsLocation[i].Direction, Direction.x, Direction.y, Direction.z);
    glUniform1f(m_spotLightsLocation[i].Cutoff, cosf(ToRadian(pLights[i].Cutoff)));
    glUniform1f(m_spotLightsLocation[i].Atten.Constant, pLights[i].Attenuation.Constant);
    glUniform1f(m_spotLightsLocation[i].Atten.Linear, pLights[i].Attenuation.Linear);
    glUniform1f(m_spotLightsLocation[i].Atten.Exp, pLights[i].Attenuation.Exp);
  }
This function updates the shader program with an array of SpotLight structures. This is the same as the correspoding function for point lights, with two additions. The light direction vector is also
applied to the shader, after it has been normalized. Also, the cutoff value is supplied as an angle by the caller but is passed to the shader as the cosine of that angle (allowing the shader to
compare a dot product result directly to that value). Note that the library function cosf() takes the angle in radians so we use the handy macro ToRadian in order to translate it.
|
{"url":"http://ogldev.atspace.co.uk/www/tutorial21/tutorial21.html","timestamp":"2014-04-19T22:52:22Z","content_type":null,"content_length":"11991","record_id":"<urn:uuid:8968db7e-4df8-4bc6-84ac-68d1a7b8a420>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two paths to algebra in California
California will not block eighth-graders from taking algebra if the governor signs SB 1200, writes Deputy Superintendent Lupita Cortez Alcalá in a letter to Bill Lucia of EdVoice. School districts
will not be forced “into a misguided one-size-fits-all approach to math education,” she writes.
What it does do is provide for clear and viable pathways: one for students who are ready for higher mathematics (algebra 1 in a traditional sequence and course 1 in an integrated sequence) and
another for students who would progress through the grade level standards as called for in the Common Core standards.
Placement of students in mathematics courses, based on their readiness, remains a local decision – as it should be.
. . . adoption of the Common Core State Standards with California’s additions presented some unique challenges. California adopted two sets of eighth grade mathematics standards: the Common Core
set and a set that combined elements of the Common Core eighth-grade and high school mathematics standards with California’s own algebra standards. Unfortunately, the “Algebra 1 at Grade 8”
standards have created confusion in our school districts as it is a unique amalgamation, different from Algebra I, and not supported by instructional materials or curricula.
In focus groups, teachers and curriculum said “they want high expectations and high standards for their students – but also flexibility to decide when a student is ready for higher mathematics, based
upon each student’s classroom performance – not impersonal directives from the Capitol,” concludes Alcalá.
I think this means algebra-ready students will take Algebra I without any Common Core additions. I think . . . (A reform of years gone by, integrated math teaches bits of algebra, geometry, trig and
stats each year till students have mastered the concepts. It’s lost popularity.)
Last reply was September 23, 2012
1. Cal
View September 21, 2012
Told ya.
And this isn’t just California. Every state has 8th graders taking Algebra.
I feel like I’m shouting into the wind. Yes, all students CAN learn. NOT all students HAVE learned. Those that haven’t, aren’t ready for the higher level math and science classes.
This is the problem. You can put students in many classes, even though their skills are weak, because “getting them up to speed” is possible. You “incidentally” provide missing background
information in the context of teaching the higher level concepts. (I was told to do this by a consultant who wanted me to get more students passing grades in science).
Trouble is, I don’t believe that anyone has yet found a way to do this in math or science, particularly those branches of science that are highly math-dependent (physics, chemistry, geology). As
a result, we who teach those classes look incompetent, when we can’t manage to teach students who truly don’t understand basic arithmetic – like fractions and decimals, not to mention order of
operations. For those students, we are asking the equivalent of having 4th grade students win the Olympics in track. Without preparation, training, or time.
I don’t believe that the most gifted athlete could win his sport’s competition, if he hadn’t even learned the basic rules of his sport, spent his time when forced to be training playing with his
cell phone and trying to sleep, and, sometimes, sneaked a “little herb” in between runs.
Yet, I’m expected to bring that student along the road to proficiency.
□ Roger Sweenyreplied:
What? You can’t “differentiate instruction” and bring all your students up to speed?
Of course, you can’t–and expecting teachers to do so leads to stress for teachers and failure for students.
I wish this is what the Chicago teachers had struck about.
□ Florida residentreplied:
Article ” No, we can’t “:
The article is quite short, so I do advise you to read it, dear LindaF !
Your F.r.
3. Educationally Incorrect
>> Yes, all students CAN learn
This is like saying that “all children can run”. Human abilities form normal distributions and what percentage of people can do something depends on what that something is.
A high percentage of children can run
A lower percentage of children run fast enough, or will run fast enough, to make the track team.
A still-lower percentage are capable of winning.
A still-lower percentage will ever be capable of running fast enough to become professional athletes in sports were running is important.
If you take a statement like “every child can run” at face-value and as gospel, there’s no reason to think that you can’t train every kid to run 50 yds in 4.5 seconds, a mile in 4 minutes, or a
marathon in two hours. Good luck with that.
Similar for something like math ability.
A very high percentage of people can learn to count.
A lower percentage can learn to add and subtract.
A still-lower percentage can learn to manipulate fractions.
……………………………………………master HS algebra.
……………………………………………master abstract algebra
……………………………………………understand the proof of Fermat’s last theorem
…………………………………………….construct such a proof independently.
“Every child can learn” may make educators feel like moral giants but it doesn’t help anyone understand actual problems or to develop real solutions.
Making sta
□ Florida residentreplied:
Dear Educationally Incorrect !
You have omitted “Linear Algebra”.
It is reasonably simple and logical, extremely beautiful,
and it changes one’s view about the world around us.
Your most friendly,
□ Jerry Heverlyreplied:
I wish I’d said that.
4. Ze'ev Wurman
View September 23, 2012
Unfortunately, Ms. Alcala doesn’t know what she is talking about.
SB1200 plainly says: “(3) One set of standards is adopted at each grade level.” No ifs or buts. No options or course choices. For K-12, not only for grade 8.
Once this becomes a law, the plain reading is that the state can test students with ONLY ONE TEST at each K-12 grade level.
Schools can teach and offer anything they want — they always had this *legal* right. What they will not be able to offer is an algebra STATE test to those who take algebra, a pre-algebra test to
those who don’t, or even a choice of end-of-course tests to a high-school junior.
Ms. Alcala may even try to pass regulation that will allow her (i.e., CDE) to offer a choice of such tests. Yet it will be sufficient that one fuzzie who thinks that “Algebra is stressing
students too much in grade 8″ sues, and this choice will be stopped in its tracks. And we have plenty of people in this state who think exactly that.
What will happen in public schools at that point is easy to imagine — Algebra enrollment in grade 8 will drop like a rock. How many schools will be willing to have their kids to study one content
and be tested — and the results published in the local newspapers — on another, even if formally “easier”, content?
This is not about what Ms. Alcala wants, or teachers and schools want, or the public wants, or how they will like to interpret the bill. It is about what the law will clearly say.
5. Cal
View September 23, 2012
How many schools will be willing to have their kids to study one content and be tested — and the results published in the local newspapers — on another, even if formally “easier”, content?
The ones that know their students actually know the easier material.
Which means we won’t, thank god, be shoving students who don’t know the easier material into Algebra. Rather than shovelling all minority students into 8th grade Algebra on the off-chance that
some of them may learn, schools will reserve algebra to the kids who can actually do it.
But go ahead and pretend that this means, in a world where thousands of California kids are taking algebra in seventh grade, that no one will be taking algebra in 8th.
You just mean that black and Hispanic kids won’t be taking it unless they are actually ready for it. And thats something for which all math teachers should be profoundly grateful.
Incidentally, the Common Core clearly lays out an accelerated path, so I can’t figure out what you are hyperventilating about anyway.
Recent Comments
• Roger Sweeny on Math isn’t just for ‘math people’
• allen on Change without reform
• Anonymous Math Teacher on Change without reform
|
{"url":"http://www.joannejacobs.com/2012/09/two-paths-to-algebra-in-california/","timestamp":"2014-04-16T07:28:00Z","content_type":null,"content_length":"43319","record_id":"<urn:uuid:982f3c19-578e-4cc3-bf89-a990f0c0581d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Since I have resumed my R class, I will restart my resolution of Le Monde mathematical puzzles…as they make good exercises for the class. The puzzle this week is not that exciting: Find the four
non-zero different digits a,b,c,d such that abcd is equal to the sum of all two digit numbers made by picking
Cooling stations. A UHI Hint
Update: google earth files in the box: Personally I like to look at things backwards. Why are cool sites cool? So download the kml or kmz file and you can tour 62 sites: All with 90 years of data or
more. All with a cooling trend. And all “supposedly” urban. what do you see at
Some Oddities with cooling stations
Now, that the whole analysis has been moved to raster, I took some time to play around with a question that has interested a couple of people. Cool stations. A while back when I was looking at ways
of bounding uncertainties in the record I went on a hunt for the station that cooled the
Galton & simulation
Stephen Stigler has written a paper in the Journal of the Royal Statistical Society Series A on Francis Galton’s analysis of (his cousin) Charles Darwin’ Origin of Species, leading to nothing less
than Bayesian analysis and accept-reject algorithms! “On September 10th, 1885, Francis Galton ushered in a new era of Statistical Enlightenment with an address
Riemann, Langevin & Hamilton [reply]
Here is a (prompt!) reply from Mark Girolami corresponding to the earlier post: In preparation for the Read Paper session next month at the RSS, our research group at CREST has collectively read the
Girolami and Calderhead paper on Riemann manifold Langevin and Hamiltonian Monte Carlo methods and I hope we will again produce a
A couple hours work and we now have animations of the global anomalies: Created with the animation package in R. The code examples were a bit terse about some of the details but after fiddling about
I was able to get the program to output an Html animation complete with java based playback controls. Write
Visualising questionnaires
Last week I was shown the results of a workplace happiness questionnaire. The plots were ripe for a makeover. Most obviously, the pointless 3D effect needs removing, and the colour scheme is badly
Damn Close 5.0
Code will be in the drop box in a bit, once I shower: This is a wholesale replacement of previous versions, completely rewritten in raster. It will be the base going forward. All of the analysis
routines will be rewritten using raster. For time series functionality I will continue to use zoo as that
Connecting to a MongoDB database from R using Java
It would be nice if there were an R package, along the lines of RMySQL, for MongoDB. For now there is not – so, how best to get data from a MongoDB database into R? One option is to retrieve JSON via
the MongoDB REST interface and parse it using the rjson package. Assuming, for
Effective sample size
In the previous days I have received several emails asking for clarification of the effective sample size derivation in “Introducing Monte Carlo Methods with R” (Section 4.4, pp. 98-100). Formula
(4.3) gives the Monte Carlo estimate of the variance of a self-normalised importance sampling estimator (note the change from the original version in Introducing Monte
|
{"url":"http://www.r-bloggers.com/search/Twitter/page/169/","timestamp":"2014-04-16T04:22:37Z","content_type":null,"content_length":"39717","record_id":"<urn:uuid:e2006406-3ed6-4243-a03c-bfc6a35581f9>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Online math practice programs
You are here: Home → Online resources → math programs & curricula
Online math practice programs
This is an annotated list of online math programs and curricula. Typically, the programs offer interactive math practice and/or animated/video lessons, track student progress, and include parent
controls. Most of these are commercial programs, and the price varies according to the number of students and the subscription period. However, there are also several free websites included in the
All grades Elementary / Middle Middle / High School UK Australian
All grade levels
Another child-friendly math practice environment with unlimited questions on over hundred math topics per grade. IXL tracks student progress and gives children virtual medals and awards. Covers
grades PreK-Geometry. $9.95/month for one student; each additional child $2. $79 a year. See my review.
Online adaptive math practice environment with measurable student progress. From grade 1 to algebra. $14.95 per month for one student, 2nd student $5.
CK-12 is a large, free site for learning and practicing math and science. It contains text-based lessons, FlexBooks® (digital textbooks in PDF, ePub and mobi formats), videos, and quizzes for
multitudes of math and science topics from K to grade 12.
CTC Math
A comprehensive online tutorial program for grades K-12. Includes video tutorials, interactive questions, diagnostic tests, reporting, and more. Price: $29.97/month, $127/6 months, $197/12 months.
Family plans available.
Future School
An online learning system (curriculum) for K-12, for both math and English. Includes skill tests, video lessons, reporting, and tutoring services. $214/year for a single student or $35-$40/month for
a family.
Elementary / Middle School
Math ABC
Free interactive online math practice for all kinds of topics, grades K-6.
DigitWhiz is an online, games-based program aligned to the Common Core that guides kids ages 8+ to master key foundational skills in five areas: multiplication · division · integer operations · like
terms · solving equations
Adapted Mind
An online math practice system that adapts to student's needs. Covers most basic topics per grade but not all. Includes a pretest, virtual prizes, additional printable workhseets, some video lessons,
and progress reports. Some features are free; full membership $9.95 a month.
Another child-friendly math practice environment with unlimited questions on over hundred math topics per grade. IXL tracks student progress and gives children virtual medals and awards. Covers
grades PreK-Geometry. $9.95/month for one student; each additional child $2. $79 a year. See my review.
K5 Learning
An online reading and math enrichment program for grades K-5. It includes four separate programs: K5 Reading, K5 Math, K5 Spelling, and K5 Math Facts. Also included an initial assessment test and
detailed progress reports. The curricula for English and Math consist of animated, interactive lessons and practice activities. A free 14-day trial. Price $25/month for one student, $199 a year;
discounts for additional students. See also my review.
Splash Math
An online math practice system for grades 1-5. Includes all basic topics for those grades, except I did not see any word problems. Accessible on tablet devices. You can practice 20 questions per day
for free. One grade level: $9.99/lifetime, all grade levels $29.99/lifetime.
Smart Tutor
A fun, online elementary curriculum for K-5th grade reading and math. Includes an automated initial assessment, and an individualized curriculum consisting of animated interactive lessons and
practice activities. A free 14-day trial. Price $17.99/month for one student, $189.99 a year; discounts for additional students.
DreamBox Learning is an online math program for K-5 with an adaptive curriculum. It focuses on conceptual understanding and is higly adaptive. Students can personalize their login icon, wallpaper,
and music. Kids earn coins and rewards, some of which can be used to play games. Home pricing: $12.95 per month for 1 child, or $19.95/month for up to 4 children. See also my review.
For grades 2-8. Includes online math practice with scaffolded help and hints, reward badges, origami-based avatars, and math games. A 14-day free trial. Aligned to the Common Core standards (but only
offers simple practice and not on all possible topics in the standards). Price: apparently $20 per student.
A free, interactive learning program (game) for reading and mathematics for 4-10 year olds. Skoolbo claims to be the world's larget educational game. Includes 3D worlds and multiplayer races.
A+ Interactive Math
A full homeschool math curriculum available either on CDs or as an online version. Delivers multimedia Lessons, interactive quizzes after each lesson, worksheets and exams, eBooks, printable
worksheets and exams. Online version $19.95 a month. CD $99.99 or $124.99 (premium version) per grade.
Explorelearning Math Gizmos
Math and science interactive "gizmos" (online simulations) for grades 3-12 that are accompanied with an exploration guide and assessment question. Subscriptions for schools. Homeschoolers can
purchase this via Homeschool Buyers Co-op. A free 30-day trial.
ALEKS is a web-based math practice program—like a virtual math tutor. It uses artificial intelligence to assess student knowledge and then decides what topics he should practice. $19.95 a month,
$179.95 a year. Family discounts available.
An online practice program for English language arts and math.
Noetic Learning: StayAhead!
An individualized online self-paced program with an assessment and daily interactive worksheets and progress reports. For grades 2-5. The company also offers LeapAhead! summer math program and
Challenge Math program for gifted students. $19.95 a month.
HeyMath! E-Lessons Program
A program based on the Singapore Curriculum standards for grades 3-12, consisting of online video lessons. A free trial. Subscription $99.99 a year.
Middle & High School
Over 6,000 free, online video lessons for basic math, algebra, trigonometry, and calculus. Videos also available in Spanish. Also includes online textbooks. I've written a review of MathTV lessons
when they used to be offered on CDs.
BrightStorm Math
Over 2,000 free videos covering all high school math topics from algebra to calculus. Registration required (free).
Khan Academy
Possibly the web's biggest and free site for math videos. What started out as Sal making a few algebra videos for his cousins has grown to over 2,100 videos and 100 self-paced exercises and
assessments covering everything from arithmetic to physics, finance, and history.
Math Ops
Free and subscription online pre-algebra and algebra course that includes step-by-step narrated tutorials, videos, and online quizzes.
Free animated and narrated math tutorials - pre-algebra, algebra 1, geometry.
Free online interactive lessons for high school algebra, calculus, and AP calculus. Also for other subjects.
Another child-friendly math practice environment with unlimited questions on over hundred math topics per grade. IXL tracks student progress and gives children virtual medals and awards. Covers
grades PreK-Geometry. $9.95/month for one student; each additional child $2. $79 a year. See my review.
Shmoop.com Prealgebra
Free learning guides (tutorials) for all prealgebra topics with interactive practice problems, step-by-step examples, graphs, and real-world applications. This can be used for an online pre-algebra
Virtual Nerd
Video tutorials for prealgebra, algebra 1, algebra 2, and intro physics. This will also include practice problems and quizzes sometime during 2010-2011 school year. Includes both a free and paid
(premium) versions.
Explorelearning Math Gizmos
Math and science interactive "gizmos" (online simulations) for grades 3-12 that are accompanied with an exploration guide and assessment question. Subscriptions for schools. Homeschoolers can
purchase this via Homeschool Buyers Co-op. A free 30-day trial.
Art of Problem Solving's online learning system for gifted students. Offers a customized learning experience, adjusting to student performance. It is specifically designed to provide high-performing
students with a challenging curriculum. To sign up you need to join a class; there's an online class set up at Let's Play Math! open to all interested homeschoolers and self-directed learners.
Currently Alcumus is free.
Math Foundation
Animated lessons, quizzes, and printable worksheets covering middle and high school math concepts. $39.99 a year.
Tablet Class
An online curriculum and math learning system. Includes videos lessons, course materials, review notes, practice worksheets, tests and answer keys. Courses offered are prealgebra, algebra 1 & 2,
intermediate and college algebra, and GED math. $20/month or $90 a year. Free trial available.
HeyMath! E-Lessons Program
A program based on the Singapore Curriculum standards for grades 3-12, consisting of online video lessons. A free trial. Subscription $99.99 a year.
ThinkWell—the next-generation textbook
Multimedia video lectures that take the place of a traditional textbook, plus automatically graded exercises & homework. Titles offered are from grade 6 through calculus. The teacher on the videos is
Edward Burger, who has a unique and intuitive approach to learning math. Online access to any one course $125 a year.
VideoText Online Courses
VideoText offers two products: a complete course for algebra that covers pre-algebra, algebra 1, and algebra 2, and a complete course for geometry that covers geometry, trigonometry, and precalculus.
Includes video lessons, course notes, worktexts, solutions manuals, tests, and instructor's guides. Price: $299 per course.
Online courses for algebra 1, algebra 2, geometry, and trigonometry. Includes lessons, practice questions, sefl-tests, a question bank, and a forum. Prices about $10/month per course.
I Can Learn Online
Interactive, animated courses for fundamentals of math, prealgebra, and algebra. Subscription fee of $30/month gives you access to all three courses. A free trial available.
A self-teaching online math system with over 1,000 comprehensive lessons consisting of audio/video clips, problems to solve, full solutions, worksheet, quiz, and more. Includes pre-algebra, algebra 1
& 2, geometry, and college algebra. Lessons can be easily matched to common textbooks. See also my review. $49.50 a month, $99.50 six months, $149.50 a year, with access to all lessons.
A collection of lectures by college professors, including algebra, trigonometry, calculus, and statistics courses (for high school/college). Subscriptions $35 a month, $240 a year; they give you
access to all courses.
Math videos for all levels, from kindergarten to calculus, sold on DVDs.
Absorb Mathematics
An interactive course written by Kadie Armstrong, a mathematician. The lessons contain interactivity, explanations, and quiz questions. The course concentrates on geometry and trigonometry but
includes a few other topics as well. Most of the content is accessible by a fee only, but 469 lessons are available as free samples.
British (UK) online curricula
While these companies provide math instruction based on the British syllabus, math concepts are the same everywhere, and so the programs are definitely useful worldwide.
Whizz Education
Animated math lessons online for grades K-6 or KS1 and KS2; UK-based. Based on an initial assessment, the child will get a personalised programme of lessons, which is further adjusted depending on
the child's progress. $19.99 per month.
GetMathsFit is an online maths teaching program with more than 3,500 maths lessons for 11-19 year olds. Lessons are animated with an accompanying teacher audio and cover the basics of arithmetic
right through to A-Level calculus. Included are thousands of questions and their fully worked out answers.
An online maths tutoring system with 480 full audio/visual lessons presented by a real teacher, synchronised with animated graphics and backed up by tests and progress reports. For UK key stages 3
and 4 (11-16 year olds). 16 free trial lessons. Subscriptions £21.95 a month, or £157 a year.
Australian online curricula
While these companies provide math instruction based on the Australian syllabus, math concepts are the same everywhere, and so the programs are definitely useful worldwide.
An Australian online program for K-6 developed by teachers. Includes video lessons, interactive maths activities, number challenges, worksheets, interactive assessments and exams. $88 AUD a year;
free trial available. Schools can use this product for free.
Maths Power
Online math learning system with animated lessons, worksheets, and access to teacher support for K-12. Created from the Australian syllabus. $185 USD/year per course.
Animated maths lessons, worksheets, topic tests and worked solutions for years 7-12 in the Australian curriculum. 32 free lessons available to trial online. Online membership $39.95 AUD month (about
$33 USD) or $297 AUD year; family plans available. Also available on CDs.
Maths Online
A full year K-12 Australian math curriculum with animated and narrated lessons, interactive questions, self-testing, and reporting. Homeschoolers get a 60% discount.
|
{"url":"http://www.homeschoolmath.net/online/curricula.php","timestamp":"2014-04-19T04:19:02Z","content_type":null,"content_length":"43741","record_id":"<urn:uuid:550ed400-3c2b-405c-a717-fbbe0b64e0ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nice conditional distribution / Closure under noisy observation
up vote 0 down vote favorite
Let $X, Y, Z$ be Polish spaces; $M$ a collection of full-support Borel measures on $X$; $\nu$ a Borel measure on $Y$; $f:X\times Y \to Z$ continuous with the property that $f(\cdot,y)$ is injective
for every $y\in Y$.
The question:
Suppose $(x,y)\in X\times Y$ is drawn according to $\mu\otimes\nu$ for some $\mu \in M,$ and $z=f(x,y).$ Under what conditions on $(M,\nu,f)$ will the distribution of $x$ conditional on $z$ be
(a.s.-$z$) some $\mu' \in M$?
• A concrete example: take $X=Y=Z=\mathbb R,$ $M$ the class of Gaussian measures, $\nu$ Gaussian, and $f$ linear.
• A trivial case: $M$ is the set of all full-support Borel measures, and $f(X,y)$ is the same for every $y\in Y$.
• Another trivial case: $f(x,\cdot) = g \enspace\forall x\in X,$ for some $g:Y\to Z$.
I'd be very interested in a general answer to this, but I'm ultimately interested in some nice examples. I'd love non-Gaussian examples for which each of $X,Y,Z$ is either $\mathbb R$ or $[0,\
infty),$ $M$ is a collection nicely expressed by 1 or 2 parameters, and all measures in sight admit continuous densities.
Any thoughts are very appreciated!
pr.probability probability-distributions sampling bayesian
This is my first post here. Please let me know if I should change anything to make the question clearer or anything. – user36888 Jul 10 '13 at 19:13
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged pr.probability probability-distributions sampling bayesian or ask your own question.
|
{"url":"http://mathoverflow.net/questions/136304/nice-conditional-distribution-closure-under-noisy-observation","timestamp":"2014-04-18T21:30:32Z","content_type":null,"content_length":"49206","record_id":"<urn:uuid:c70a9cec-ca90-4eee-a459-a01d5d513589>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can math and science help solve crimes?
Andrea Bertozzi, Martin Short and Jeffrey Brantingham
(PhysOrg.com) -- UCLA scientists working with Los Angeles police are using sophisticated mathematics to identify and analyze urban crime patterns.
UCLA's Jeffrey Brantingham works with the Los Angeles Police Department to analyze crime patterns. He also studies hunter-gatherers in Northern Tibet. If you tell him his research interests sound
completely unrelated, he will quickly correct you.
"Criminal offenders are essentially hunter-gatherers; they forage for opportunities to commit crimes," said Brantingham, a UCLA associate professor of anthropology. "The behaviors that a
hunter-gatherer uses to choose a wildebeest versus a gazelle are the same calculations a criminal uses to choose a Honda versus a Lexus."
Brantingham has been working for years with Andrea Bertozzi, a professor of mathematics and director of applied mathematics at UCLA, to apply sophisticated math to urban crime patterns. With their
colleagues, they have built a mathematical model that allows them to analyze different types of criminal "hotspots" — areas where many crimes occur, at least for a time.
They believe their findings apply not only to Los Angeles but to cities worldwide. Their latest research will appear as the cover feature in the March 2 issue of Proceedings of the National Academy
of Sciences (PNAS). Bertozzi will speak about the mathematics of crime at the annual meeting of the American Association for the Advancement of Science in San Diego.
The PNAS paper offers an explanation for when law enforcement officials can expect crime to be suppressed by intensified police actions and when crime might merely be displaced to other
Crime hotspots come in at least two different types, Brantingham and Bertozzi report in PNAS, along with lead author Martin Short, a UCLA assistant adjunct professor of mathematics, and George Tita,
an associate professor of criminology, law and society at UC Irvine. There are hotspots generated by small spikes in crime that grow ("super-critical hotspots") and hotspots where a large spike in
crime pulls offenders into a central location ("subcritical hotspots"). The two types look the same from the surface, but they are not.
Policing actions directed at one type of hotspot will have a very different effect from actions directed at the other type.
"This finding is important because if you want the police to suppress the hotspot, you want to be able to later take them out and have the suppression remain," Bertozzi said. "And you can do that
with only one of the two, in the subcritical case."
"Unless you are really looking for them, and our model says you should, you would not suspect these two types of hotspots," Brantingham said. "Just by mapping crime and looking at hotspots, you will
not be able to know whether that is generated by a small variation in crime or by a big spike in crime.
"If you were to send police into a hotspot without knowing which kind it is, you would not be able to predict whether you will just cause displacement of crime — moving it somewhere else, which is
what our model predicts if it's a hotspot generated by small fluctuations in crime — or whether you will actually reduce crime," he said. "Many people have argued that adding police to hotspots will
just push crime somewhere else, but that seems not to be true, at least in certain cases. You get displacement in some cases, but not nearly as much as many people thought."
Drug hotspots and violent crime hotspots have been suppressed, and analysts up until now have not been able to explain why.
In their mathematical model, the scientists are able to predict how each type of hotspot will respond to increased policing, as well as when each type might occur, by a careful mathematical analysis
involving what is known as bifurcation theory.
"Although this is an idealized model for which all parameters must be known precisely in advance in order to make predictions, we believe this is an important step in understanding why some crime
hotspots are merely displaced while others are actually removed by hotspot policing," Bertozzi said.
Predicting crime and devising better crime-prevention strategies requires "a mechanistic explanation for how and why crime occurs where it does and when it does," Brantingham said. "We think we have
made a big step in the direction of providing at least one core aspect of that explanation. We will refine it over time. You need to take these initial steps before you can develop new crime-fighting
Their model, Bertozzi said, "is nonlinear and develops complex patterns in space and time." These features, she noted, are well known in related models in other areas of science.
Bertozzi, Brantingham, Short and Tita have been studying crime patterns in Los Angeles using the last 10 years of data from the LAPD and have been able to identify violent crime hotspots, burglary
hotspots and auto-theft hotspots, among others. They believe their analysis likely applies to a wide variety of crimes.
The research is federally funded by the National Science Foundation (www.nsf.gov) and the U.S. Department of Defense.
"We have a key to understanding real-world phenomena," Bertozzi said. "The key is the mathematics. With powerful mathematical tools, we can borrow methods that have been studied in great detail for
other areas of science and engineering and figure out how to apply them to very different problems, such as crime patterns."
Will their research actually help police departments reduce crime?
"We're cautiously optimistic," Brantingham said. "Good science is done in small, incremental steps that can lead to big benefits in the long term. We are trying to understand the dynamics of crime
and to make small but significant steps in helping our police partners come up with policing strategies that will help to reduce crime.
"We have to do what biologists and engineers have been doing for years, which is to try to understand the fundamental mechanics and dynamics of how a system works," he said. "Before you can make
predictions about how the system will behave, you have to understand the fundamental dynamics. That's true with weather forecasting, where you run a climate simulation, and true with crime patterns."
The LAPD is at the world's forefront of knowing where crime is occurring and responding very quickly, Brantingham said.
"Can we actually push policing to look into the future and make a reasonable prediction about the near term when deciding how to allocate resources?" Brantingham asked. "This is the type of research
that is necessary to make that a reality."
Why do criminals return to the scene of a crime, or at least the same general area?
"If my house is burglarized today, then it is more likely to be burglarized tomorrow as well," said Short, who has studied problems involving mathematical modeling and pattern formation. "There are
good reasons for repeat victimization, from a criminal's point of view. They have already broken into your house once, so they know how to get in, and they already know what you have in your house.
The data back this up.
"The 'near repeat effect' says not only is my house more likely to be burglarized again, but so are my neighbors' homes," Short added. "The burglar may be comfortable with that area. It may be near
where he lives."
The scientists are also studying crime patterns with the mathematics used to forecast earthquakes and their aftershocks. "They are actually very similar," Bertozzi said.
In addition, they have started studying whether patterns of gang violence in Los Angeles are similar to insurgent killings in Iraq. Bertozzi will report preliminary data on this question at the AAAS
meeting on Feb. 20.
"An insurgent who wants to place an improvised explosive device in a particular location will make the same kind of calculations that a car thief will use in choosing which car to steal," Brantingham
said. "They want to go into areas where they feel comfortable, where they know the nooks and crannies. They want to be in an area where their activities will not appear suspicious. They also want to
have a large impact.
"The same thing goes for a burglar trying to break into a house or a car thief or a guy looking for a bar fight," he said. "They want to go where they know they can go in and out without seeming too
suspicious and where they can get the biggest bang for their buck. The mathematics underlying the insurgent activity and the criminal activity is very much the same. We're studying that now."
The researchers have funding from the U.S. Army Research Office's mathematics division to compare Iraq data and gang data.
They have also started a research project with the U.S. Office of Naval Research to provide mathematical algorithms that can help them extract information from diverse data sets.
Why is an anthropologist collaborating on a mathematical model to analyze human behavior?
"Many social scientists say human behavior and criminal behavior are too complex to be explained with a mathematical model," said Brantingham, who was trained as an archaeologist. "But it's not too
complex. We're not trying to explain everything, but there are many aspects of human behavior that are easily understood in a formal mathematical structure. There are regularities to human behavior
that we can understand mathematically."
"We're not asking whether a particular individual is going to commit a crime," Bertozzi said. "We ask whether a particular neighborhood will see an increase in crime."
It's a matter of group behavior, like studying traffic flow patterns, she said.
"Mathematical models and differential equations have been used in that field for decades," said Bertozzi, who had not worked with social scientists before working with Brantingham. She is interested
in applying mathematics to address practical problems that affect peoples' lives.
"This is an exciting area of research," she said. "UCLA has one of the top applied mathematics programs in the country, and we are able to attract stellar graduate students, postdoctoral researchers
and young faculty, such as Martin Short, who have made a huge impact in this research."
Bertozzi and Brantingham began working together after meeting through UCLA's Institute for Pure and Applied Mathematics.
"I knew if we were going to study crime problems, we needed excellent sources of data," Bertozzi said. "The fact that Jeff had the connection with LAPD and many interesting classes of problems to
study intrigued me."
Bertozzi and Brantingham, along with George Tita and Lincoln Chayes, a UCLA professor of mathematics, wrote a proposal to the National Science Foundation to support the research, which was funded.
"A lot of what motivated me to look at crime initially was trying to take the approaches to understanding the physical world I learned in archaeology and applying it to contemporary problems such as
crime," Brantingham said. "With George Tita and others, we reached out to the LAPD, and they have been very supportive of our work."
not rated yet Feb 22, 2010
"Can math and science help solve crimes?"
Was that supposed to be a thought-provoking question?
|
{"url":"http://phys.org/news186070236.html","timestamp":"2014-04-17T19:11:17Z","content_type":null,"content_length":"76514","record_id":"<urn:uuid:c2cff084-59a5-41b7-a2e0-6bbd1f3d8410>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GenOpt^® is an optimization program for the minimization of a cost function that is evaluated by an external simulation program, such as EnergyPlus, TRNSYS, Dymola, IDA-ICE or DOE-2. It has been
developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program
that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both.
Constraints on dependent variables can be implemented using penalty or barrier functions.
GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, as well as algorithms for doing parametric runs. If your computer has multiple CPUs, GenOpt
will run multiple simulations in parallel to reduce computation time. This parallel computation is done automatically by GenOpt without requiring a special setup by the user.
By using GenOpt's algorithm interface, new optimization algorithms can be added to GenOpt's algorithm library without knowing the details of the program structure.
GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems.
GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for
other problems, special tailored software exists that is more efficient.
|
{"url":"http://gundog.lbl.gov/GO/index.html","timestamp":"2014-04-19T09:24:40Z","content_type":null,"content_length":"4712","record_id":"<urn:uuid:b4724ec8-7b14-483d-b805-215dfff97046>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Is infinity equal to infinity?
Ray Dillinger <bear@sonic.net>
11 Jul 1998 23:42:02 -0400
From comp.compilers
| List of all articles for this month |
From: Ray Dillinger <bear@sonic.net>
Newsgroups: sci.math.num-analysis,comp.lang.c,sci.math,comp.compilers
Date: 11 Jul 1998 23:42:02 -0400
Organization: Cognitive Dissidents
Distribution: inet
References: 98-07-058
Keywords: arithmetic, design, comment
Erik Runeson wrote:
> Inf == Inf ?
> Inf - Inf = NaN
> Any comparison with a NaN (Not a number) shall, according to the IEEE
> 754 be concidered unordered and return false.
The need to have return values for comparisons like inf=inf is perhaps
the best argument I've ever seen for NaB (Not a Boolean) as a
necessary part of a programming language.
[The IEEE floating point standard does say that some comparisons return
"unordered" as well as true or false. -John]
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again.
|
{"url":"http://compilers.iecc.com/comparch/article/98-07-101","timestamp":"2014-04-20T11:34:25Z","content_type":null,"content_length":"6240","record_id":"<urn:uuid:98d5fce1-dc86-4dfd-99f9-071571cc47bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Raymond Laflamme
Raymond obtained his PhD at the University of Cambridge, in 1988. He began his career working with Stephen Hawking on questions in quantum gravity and cosmology, but his interests have moved to
quantum information science and technology. His research encompass both theory and experiments.
Information processing is pervasive, it has changed the way we do science, the way we entertain ourselves, the structure of the economy, and ultimately who we are. This information revolution has
happened because of the incredible progress made to manipulate larger and larger amounts of information by shrinking the size of transistors. As we move toward that scale, we need a better
approximation to the laws of physics than the one provided by classical mechanics, we need quantum mechanics. Quantum mechanics is presently a hindrance to push the limits of the size of transistors
as we try to keep computing classically. In the last 20 years, physicists and computer scientists have discovered that we could instead take advantage of quantum mechanics. The goal of my work is to
harness quantum mechanical effects and use them for information processing. It turns out that this leads to new mind-boggling devices that seem much more powerful that their classical counterpart. A
large fraction of my energy is focused on finding and developing methods to control, manipulate and make quantum information robust against noise and imperfection that are present in realistic
devices. I also attempt to understand why these devices are so powerful and try to find new applications for them.
|
{"url":"http://www.perimeterinstitute.ca/people/Raymond-Laflamme","timestamp":"2014-04-17T05:53:13Z","content_type":null,"content_length":"56334","record_id":"<urn:uuid:adc5aec0-d74e-494c-a683-ba2603cef579>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
some hints for using the ACL2 prover
Major Section: ACL2-TUTORIAL
We present here some tips for using ACL2 effectively. Though this collection is somewhat ad hoc, we try to provide some organization, albeit somewhat artificial: for example, the sections overlap,
and no particular order is intended. This material has been adapted by Bill Young from a very similar list for Nqthm that appeared in the conclusion of: ``Interaction with the Boyer-Moore Theorem
Prover: A Tutorial Study Using the Arithmetic-Geometric Mean Theorem,'' by Matt Kaufmann and Paolo Pecchiari, CLI Technical Report 100, June, 1995. We also draw from a similar list in Chapter 13 of
``A Computational Logic Handbook'' by R.S. Boyer and J S. Moore (Academic Press, 1988). We'll refer to this as ``ACLH'' below.
These tips are organized roughly as follows.
A. ACL2 Basics
B. Strategies for creating events
C. Dealing with failed proofs
D. Performance tips
E. Miscellaneous tips and knowledge
F. Some things you DON'T need to know
A1. The ACL2 logic.
This is a logic of total functions. For example, if A and B are less than or equal to each other, then we need to know something more in order to conclude that they are equal (e.g., that they are
numbers). This kind of twist is important in writing definitions; for example, if you expect a function to return a number, you may want to apply the function fix or some variant (e.g., nfix or ifix)
in case one of the formals is to be returned as the value.
ACL2's notion of ordinals is important on occasion in supplying ``measure hints'' for the acceptance of recursive definitions. Be sure that your measure is really an ordinal. Consider the following
example, which ACL2 fails to admit (as explained below).
(defun cnt (name a i x)
(declare (xargs :measure (+ 1 i)))
(cond ((zp (+ 1 i))
((equal x (aref1 name a i))
(1+ (cnt name a (1- i) x)))
(t (cnt name a (1- i) x))))
One might think that (+ 1 i) is a reasonable measure, since we know that (+ 1 i) is a positive integer in any recursive call of cnt, and positive integers are ACL2 ordinals (see o-p). However, the
ACL2 logic requires that the measure be an ordinal unconditionally, not just under the governing assumptions that lead to recursive calls. An appropriate fix is to apply nfix to (+ 1 i), i.e., to use
(declare (xargs :measure (nfix (+ 1 i))))
in order to guarantee that the measure will always be an ordinal (in fact, a positive integer).
A2. Simplification.
The ACL2 simplifier is basically a rewriter, with some ``linear arithmetic'' thrown in. One needs to understand the notion of conditional rewriting. See rewrite.
A3. Parsing of rewrite rules.
ACL2 parses rewrite rules roughly as explained in ACLH, except that it never creates ``unusual'' rule classes. In ACL2, if you want a :linear rule, for example, you must specify :linear in the :
rule-classes. See rule-classes, and also see rewrite and see linear.
A4. Linear arithmetic.
On this subject, it should suffice to know that the prover can handle truths about + and -, and that linear rules (see above) are somehow ``thrown in the pot'' when the prover is doing such
reasoning. Perhaps it's also useful to know that linear rules can have hypotheses, and that conditional rewriting is used to relieve those hypotheses.
A5. Events.
Over time, the expert ACL2 user will know some subtleties of its events. For example, in-theory events and hints are important, and they distinguish between a function and its executable counterpart.
In this section, we concentrate on the use of definitions and rewrite rules. There are quite a few kinds of rules allowed in ACL2 besides rewrite rules, though most beginning users probably won't
usually need to be aware of them. See rule-classes for details. In particular, there is support for congruence rewriting. Also see rune (``RUle NamE'') for a description of the various kinds of rules
in the system.
B1. Use high-level strategy.
Decompose theorems into ``manageable'' lemmas (admittedly, experience helps here) that yield the main result ``easily.'' It's important to be able to outline non-trivial proofs by hand (or in your
head). In particular, avoid submitting goals to the prover when there's no reason to believe that the goal will be proved and there's no ``sense'' of how an induction argument would apply. It is
often a good idea to avoid induction in complicated theorems unless you have a reason to believe that it is appropriate.
B2. Write elegant definitions.
Try to write definitions in a reasonably modular style, especially recursive ones. Think of ACL2 as a programming language whose procedures are definitions and lemmas, hence we are really suggesting
that one follow good programming style (in order to avoid duplication of ``code,'' for example).
When possible, complex functions are best written as compositions of simpler functions. The theorem prover generally performs better on primitive recursive functions than on more complicated
recursions (such as those using accumulating parameters).
Avoid large non-recursive definitions which tend to lead to large case explosions. If such definitions are necessary, try to prove all relevant facts about the definitions and then disable them.
Whenever possible, avoid mutual recursion if you care to prove anything about your functions. The induction heuristics provide essentially no help with reasoning about mutually defined functions.
Mutually recursive functions can usually be combined into a single function with a ``flag'' argument. (However, see mutual-recursion-proof-example for a small example of proof involving mutually
recursive functions.)
B3. Look for analogies.
Sometimes you can easily edit sequences of lemmas into sequences of lemmas about analogous functions.
B4. Write useful rewrite rules.
As explained in A3 above, every rewrite rule is a directive to the theorem prover, usually to replace one term by another. The directive generated is determined by the syntax of the defthm submitted.
Never submit a rewrite rule unless you have considered its interpretation as a proof directive.
B4a. Rewrite rules should simplify.
Try to write rewrite rules whose right-hand sides are in some sense ``simpler than'' (or at worst, are variants of) the left-hand sides. This will help to avoid infinite loops in the rewriter.
B4b. Avoid needlessly expensive rules.
Consider a rule whose conclusion's left-hand side (or, the entire conclusion) is a term such as (consp x) that matches many terms encountered by the prover. If in addition the rule has complicated
hypotheses, this rule could slow down the prover greatly. Consider switching the conclusion and a complicated hypothesis (negating each) in that case.
B4c. The ``Knuth-Bendix problem''.
Be aware that left sides of rewrite rules should match the ``normalized forms'', where ``normalization'' (rewriting) is inside out. Be sure to avoid the use of nonrecursive function symbols on left
sides of rewrite rules, except when those function symbols are disabled, because they tend to be expanded away before the rewriter would encounter an instance of the left side of the rule. Also
assure that subexpressions on the left hand side of a rule are in simplified form.
B4d. Avoid proving useless rules.
Sometimes it's tempting to prove a rewrite rule even before you see how it might find application. If the rule seems clean and important, and not unduly expensive, that's probably fine, especially if
it's not too hard to prove. But unless it's either part of the high-level strategy or, on the other hand, intended to get the prover past a particular unproved goal, it may simply waste your time to
prove the rule, and then clutter the database of rules if you are successful.
B4e. State rules as strongly as possible, usually.
It's usually a good idea to state a rule in the strongest way possible, both by eliminating unnecessary hypotheses and by generalizing subexpressions to variables.
Advanced users may choose to violate this policy on occasion, for example in order to avoid slowing down the prover by excessive attempted application of the rule. However, it's a good rule of thumb
to make the strongest rule possible, not only because it will then apply more often, but also because the rule will often be easier to prove (see also B6 below). New users are sometimes tempted to
put in extra hypotheses that have a ``type restriction'' appearance, without realizing that the way ACL2 handles (total) functions generally lets it handle trivial cases easily.
B4f. Avoid circularity.
A stack overflow in a proof attempt almost always results from circular rewriting. Use brr to investigate the stack; see break-lemma. Because of the complex heuristics, it is not always easy to
define just when a rewrite will cause circularity. See the very good discussion of this topic in ACLH.
See break-lemma for a trick involving use of the forms brr t and (cw-gstack) for inspecting loops in the rewriter.
B4g. Remember restrictions on permutative rules.
Any rule that permutes the variables in its left hand side could cause circularity. For example, the following axiom is automatically supplied by the system:
(defaxiom commutativity-of-+
(equal (+ x y) (+ y x))).
This would obviously lead to dangerous circular rewriting if such ``permutative'' rules were not governed by a further restriction. The restriction is that such rules will not produce a term that is
``lexicographically larger than'' the original term (see loop-stopper). However, this sometimes prevents intended rewrites. See Chapter 13 of ACLH for a discussion of this problem.
B5. Conditional vs. unconditional rewrite rules.
It's generally preferable to form unconditional rewrite rules unless there is a danger of case explosion. That is, rather than pairs of rules such as
(implies p
(equal term1 term2))
(implies (not p)
(equal term1 term3))
(equal term1
(if p term2 term3))
However, sometimes this strategy can lead to case explosions: IF terms introduce cases in ACL2. Use your judgment. (On the subject of IF: COND, CASE, AND, and OR are macros that abbreviate IF forms,
and propositional functions such as IMPLIES quickly expand into IF terms.)
B6. Create elegant theorems.
Try to formulate lemmas that are as simple and general as possible. For example, sometimes properties about several functions can be ``factored'' into lemmas about one function at a time. Sometimes
the elimination of unnecessary hypotheses makes the theorem easier to prove, as does generalizing first by hand.
B7. Use defaxioms temporarily to explore possibilities.
When there is a difficult goal that seems to follow immediately (by a :use hint or by rewriting) from some other lemmas, you can create those lemmas as defaxiom events (or, the application of
skip-proofs to defthm events) and then double-check that the difficult goal really does follow from them. Then you can go back and try to turn each defaxiom into a defthm. When you do that, it's
often useful to disable any additional rewrite rules that you prove in the process, so that the ``difficult goal'' will still be proved from its lemmas when the process is complete.
Better yet, rather than disabling rewrite rules, use the local mechanism offered by encapsulate to make temporary rules completely local to the problem at hand. See encapsulate and see local.
B9. Use books.
Consider using previously certified books, especially for arithmetic reasoning. This cuts down the duplication of effort and starts your specification and proof effort from a richer foundation. See
the file "doc/README" in the ACL2 distribution for information on books that come with the system.
C1. Look in proof output for goals that can't be further simplified.
Use the ``proof-tree'' utility to explore the proof space. However, you don't need to use that tool to use the ``checkpoint'' strategy. The idea is to think of ACL2 as a ``simplifier'' that either
proves the theorem or generates some goal to consider. That goal is the first ``checkpoint,'' i.e., the first goal that does not further simplify. Exception: it's also important to look at the
induction scheme in a proof by induction, and if induction seems appropriate, then look at the first checkpoint after the induction has begun.
Consider whether the goal on which you focus is even a theorem. Sometimes you can execute it for particular values to find a counterexample.
When looking at checkpoints, remember that you are looking for any reason at all to believe the goal is a theorem. So for example, sometimes there may be a contradiction in the hypotheses.
Don't be afraid to skip the first checkpoint if it doesn't seem very helpful. Also, be willing to look a few lines up or down from the checkpoint if you are stuck, bearing in mind however that this
practice can be more distracting than helpful.
C2. Use the ``break rewrite'' facility.
Brr and related utilities let you inspect the ``rewrite stack.'' These can be valuable tools in large proof efforts. See break-lemma for an introduction to these tools, and see break-rewrite for more
complete information.
The break facility is especially helpful in showing you why a particular rewrite rule is not being applied.
C3. Use induction hints when necessary. Of course, if you can define your functions so that they suggest the correct inductions to ACL2, so much the better! But for complicated inductions, induction
hints are crucial. See hints for a description of :induct hints.
C4. Use the ``Proof Checker'' to explore.
The verify command supplied by ACL2 allows one to explore problem areas ``by hand.'' However, even if you succeed in proving a conjecture with verify, it is useful to prove it without using it, an
activity that will often require the discovery of rewrite rules that will be useful in later proofs as well.
C5. Don't have too much patience.
Interrupt the prover fairly quickly when simplification isn't succeeding.
C6. Simplify rewrite rules.
When it looks difficult to relieve the hypotheses of an existing rewrite rule that ``should'' apply in a given setting, ask yourself if you can eliminate a hypothesis from the existing rewrite rule.
If so, it may be easier to prove the new version from the old version (and some additional lemmas), rather than to start from scratch.
C7. Deal with base cases first.
Try getting past the base case(s) first in a difficult proof by induction. Usually they're easier than the inductive step(s), and rules developed in proving them can be useful in the inductive step
(s) too. Moreover, it's pretty common that mistakes in the statement of a theorem show up in the base case(s) of its proof by induction.
C8. Use :expand hints. Consider giving :expand hints. These are especially useful when a proof by induction is failing. It's almost always helpful to open up a recursively defined function that is
supplying the induction scheme, but sometimes ACL2 is too timid to do so; or perhaps the function in question is disabled.
D1. Disable rules.
There are a number of instances when it is crucial to disable rules, including (often) those named explicitly in :use hints. Also, disable recursively defined functions for which you can prove what
seem to be all the relevant properties. The prover can spend significant time ``behind the scenes'' trying to open up recursively defined functions, where the only visible effect is slowness.
D2. Turn off the ``break rewrite'' facility. Remember to execute :brr nil after you've finished with the ``break rewrite'' utility (see break-rewrite), in order to bring the prover back up to full
E1. Order of application of rewrite rules.
Keep in mind that the most recent rewrite rules in the history are tried first.
E2. Relieving hypotheses is not full-blown theorem proving.
Relieving hypotheses on rewrite rules is done by rewriting and linear arithmetic alone, not by case splitting or by other prover processes ``below'' simplification.
E3. ``Free variables'' in rewrite rules.
The set of ``free variables'' of a rewrite rule is defined to contain those variables occurring in the rule that do not occur in the left-hand side of the rule. It's often a good idea to avoid rules
containing free variables because they are ``weak,'' in the sense that hypotheses containing such variables can generally only be proved when they are ``obviously'' present in the current context.
This weakness suggests that it's important to put the most ``interesting'' (specific) hypotheses about free variables first, so that the right instances are considered. For example, suppose you put a
very general hypothesis such as (consp x) first. If the context has several terms around that are known to be consps, then x may be bound to the wrong one of them. For much more information on free
variables, see free-variables.
E4. Obtaining information Use :pl foo to inspect rewrite rules whose left hand sides are applications of the function foo. Another approach to seeing which rewrite rules apply is to enter the
proof-checker with verify, and use the show-rewrites or sr command.
E5. Consider esoteric rules with care.
If you care to see rule-classes and peruse the list of subtopics (which will be listed right there in most versions of this documentation), you'll see that ACL2 supports a wide variety of rules in
addition to :rewrite rules. Should you use them? This is a complex question that we are not ready to answer with any generality. Our general advice is to avoid relying on such rules as long as you
doubt their utility. More specifically: be careful not to use conditional type prescription rules, as these have been known to bring ACL2 to its knees, unless you are conscious that you are doing so
and have reason to believe that they are working well.
F. SOME THINGS YOU DON'T NEED TO KNOW
Most generally: you shouldn't usually need to be able to predict too much about ACL2's behavior. You should mainly just need to be able to react to it.
F1. Induction heuristics.
Although it is often important to read the part of the prover's output that gives the induction scheme chosen by the prover, it is not necessary to understand how the prover made that choice.
(Granted, advanced users may occasionally gain minor insight from such knowledge. But it's truly minor in many cases.) What is important is to be able to tell it an appropriate induction when it
doesn't pick the right one (after noticing that it doesn't). See C3 above.
F2. Heuristics for expanding calls of recursively defined functions.
As with the previous topic, the important thing isn't to understand these heuristics but, rather, to deal with cases where they don't seem to be working. That amounts to supplying :expand hints for
those calls that you want opened up, which aren't. See also C8 above.
F3. The ``waterfall''.
As discussed many times already, a good strategy for using ACL2 is to look for checkpoints (goals stable under simplification) when a proof fails, perhaps using the proof-tree facility. Thus, it is
reasonable to ignore almost all the prover output, and to avoid pondering the meaning of the other ``processes'' that ACL2 uses besides simplification (such as elimination, cross-fertilization,
generalization, and elimination of irrelevance). For example, you don't need to worry about prover output that mentions ``type reasoning'' or ``abbreviations,'' for example.
|
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/2/5/language/acl2-html-docs/TIPS.html","timestamp":"2014-04-16T16:17:33Z","content_type":null,"content_length":"25634","record_id":"<urn:uuid:c4426890-943e-4f4e-8d28-622fd7024fe2>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
kurtosis {PerformanceAnalytics}
compute kurtosis of a univariate distribution
kurtosis(x, na.rm = FALSE,
method = c("excess", "moment", "fisher", "sample", "sample_excess"),
a logical. Should missing values be removed?
a character string which specifies the method of computation. These are either "moment", "fisher", or "excess". If "excess" is selected, then the value of the kurtosis is computed by the "moment"
method and a value of 3 will be subtracted. The "moment" method is based on the definitions of kurtosis for distributions; these forms should be used when resampling (bootstrap or jackknife). The
"fisher" method correspond to the usual "unbiased" definition of sample variance, although in the case of kurtosis exact unbiasedness is not possible. The "sample" method gives the sample
kurtosis of the distribution.
a numeric vector or object.
arguments to be passed.
This function was ported from the RMetrics package fUtilities to eliminate a dependency on fUtilties being loaded every time. This function is identical except for the addition of checkData and
additional labeling.
where n is the number of return, \overline{r} is the mean of the return distribution, σ_P is its standard deviation and σ_{S_P} is its sample standard deviation
Carl Bacon, Practical portfolio performance measurement and attribution, second edition 2008 p.84-85
## mean -
## var -
# Mean, Variance:
r = rnorm(100)
## kurtosis -
print(kurtosis(portfolio_bacon[,1], method="sample")) #expected 3.03
print(kurtosis(portfolio_bacon[,1], method="sample_excess")) #expected -0.41
print(kurtosis(managers['1996'], method="sample"))
print(kurtosis(managers['1996',1], method="sample"))
Documentation reproduced from package PerformanceAnalytics, version 1.1.0. License: GPL
|
{"url":"http://www.inside-r.org/packages/cran/PerformanceAnalytics/docs/kurtosis","timestamp":"2014-04-19T12:45:08Z","content_type":null,"content_length":"19178","record_id":"<urn:uuid:ffcaf23f-b9e1-4708-9ac6-6bef6caae770>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Investigating Speed and Constant Acceleration
Investigating Speed and Constant Acceleration
Students roll miniature cars down a ramp and measure their speed and acceleration every 10 cm. Students can vary the type of car, height of ramp or the ramp material. Student change one of these
variables to investigate how the car's speed and acceleration change. Speed and acceleration can be graphed. The speed of each car during any one trail should increase linearly but it's acceleration
should remain constant. The graphs can be compared and the slope differences discussed.
Learning Goals
Students learn to vary only one variable. Students investigate the relationship between charts and graphs. Students model constant acceleration. The key vocabulary words are speed, constant
acceleration, variable, and slope.
Context for Use
This is a lab that should take two 45 minute periods. Groups of 2-4 should be used. Students should be familiar with plotting points on a graph.
Subject: Physics:Classical Mechanics
Resource Type: Activities:Lab Activity
Grade Level: Middle (6-8)
Description and Teaching Materials
Students are given a ramp (board, rain gutter, car track, etc.) about 1 meter long, a way to elevate one end, (books, ring stand, etc.) and a small car. Students experiment rolling a small car down
the track. Several variables can be investigated: height of ramp, type of car, and ramp material. This is an informal investigation at this point. Most students will be trying to find the fastest
car. Allow 10 - 15 minutes for the students to "play" with the equipment. Have students write down their findings and present them to the class. Example: Our group found that the rain gutter seems to
be the fastest ramp. Groups may have several findings and some of them may not be true. These will serve as possible investigations in part 2 of the lab.
Have each group of students choose a variable they would like to investigate. Mark the ramp in 10 cm intervals start at the raised end. Students should find the the speed of the car at each of these
marks, do just one mark at time. After completing 5 - 6 points they should change the variable they would like to investigate. Measure as before and change the ramp for a third time. (Students might
be able to produce a procedure to measure speed.) Acceleration can be found for each of the distances. Students should then graph the speed of each trial on the same graph and the accelerations on a
different graph. Have students compare and contrast their graphs with other groups and present their findings to the class.
Teaching Notes and Tips
If the ramp is too vertical the car will crash when it reaches the bottom. Have students limit the angle of the ramp to about 30 degrees. Times can be very short for some of the distances. Students
may want to make several measurements at each interval and choose the best. An average could be used but often one of the times is way off and should be thrown out.
Students will give two classroom presentation. The students will also produce two graphs
Grade 6 - Physical Science III.F.4 - the student will use a frame of reference to describe the position, speed and acceleration of an object and the student will measure the speed of an object.
References and Resources
|
{"url":"http://serc.carleton.edu/sp/mnstep/activities/27821.html","timestamp":"2014-04-17T16:23:09Z","content_type":null,"content_length":"22150","record_id":"<urn:uuid:14093a03-8a03-4be6-acee-cc47c86d2e35>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chula Vista ACT Tutor
...I graduated #5 in my senior class with a GPA of a 4.25 on the 5 point scale. After graduating, I went on to attend Francis Marion University where I began working in the Mathematics Lab as a
true first semester freshman. The Math Lab consisted of 4 classrooms that were sectioned off in one huge classroom.
15 Subjects: including ACT Math, calculus, ASVAB, algebra 1
...I will use my skills to teach any student. With my one-on-one tutoring I will answer any questions students ask. I show step by step solutions on how to solve Algebra problems that will be
learned by the student.
39 Subjects: including ACT Math, reading, chemistry, physics
...Additionally, I lived in France for five years, which helped develop very strong and natural French language skills. I also spent one year tutoring in Seattle, WA, working with special needs
students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners.
14 Subjects: including ACT Math, French, geometry, ASVAB
...This test is so important! It doesn't just show colleges how much information you have retained from high school, it also shows how much you prepared for the test. It is important to do the
very best you can and show colleges your true potential.
9 Subjects: including ACT Math, algebra 1, algebra 2, economics
...Some had ADHD, dyslexia, substance abuse problems. I also taught General Education Development (GED) at the U.S. Court.
28 Subjects: including ACT Math, reading, ESL/ESOL, Chinese
|
{"url":"http://www.purplemath.com/Chula_Vista_ACT_tutors.php","timestamp":"2014-04-21T07:34:03Z","content_type":null,"content_length":"23512","record_id":"<urn:uuid:d2ba7d9a-1e44-4969-9695-0fcbf6d1092e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
bilinear transform
The most common method of designing an
IIR digital filter
(that is, a
digital filter
with both
) is to first design an
analog filter
that meets the design specifications, then transform it to a digital filter using a transformation
to map from the
domain to the
The bilinear transform is the most commonly used function of this type. (Other transforms are sometimes used in other applications; for instance, the step-invariant transformation for feedback
control designs.) It is also known as the trapezoidal approximation.
The bilinear transform converts a
transfer function
in the
Laplace transform
domain to one in the
domain, by substituting:
s = c*(z-1)/(z+1)
where c is a constant that can be set arbitrarily to match one particular analog frequency is mapped exactly to a particular digital frequency.
Important Details
The mapping thus generated has several nice properties: it is
, and it maps the left-hand
in the Laplace domain to the interior of the
unit circle
in the z-domain (i.e.,
region to stable region, for
The mapping is not linear. By setting z = exp(jΩ) and s = jω in the bilinear transform equation, the relation between analog and digital frequencies can be extracted:
ω[analog] = c tan(Ω[digital]/2)
Thus, the frequency response of an analog filter will be "warped" in the resulting digital filter.
There are two methods of compensating for this frequency distortion. First, if an existing analog filter is being converted, the constant c can be selected to make one the filters match at one
particular frequency (for instance, the filter's cutoff frequency). More commonly, the filter's frequency specifications can be made in the digital domain and converted back to the analog domain
(with an arbitary choice of c). An analog filter is then designed to meet these specifications; when converted back to a digital filter, the critical frequencies will be in the right places.
The bilinear transform is related to the
trapezoidal approximation
numerical integration
. Consider a
difference equation
that approximates integration using this method:
y(n) = y(n-1) + T*(x(n) - x(n-1))/2
where T is the sampling period. Taking the z-transform and simplifying yields:
Y(z)/X(z) = (T/2)*(z+1)/(z-1)
Compare this to the Laplace transform of an integrator:
Y(s)/X(s) = 1/s
Clearly, applying the bilinear transform with c=2/T to one will yield the other.
Digital Signal Processing, Third Edition
, by
John Proakis
Dimitris Manolakis
. (
Digital Filtering: An Introduction
, by
Edward Cunningham
. (
my own head
|
{"url":"http://www.everything2.com/title/bilinear+transform","timestamp":"2014-04-17T09:56:36Z","content_type":null,"content_length":"23747","record_id":"<urn:uuid:9ae24600-9696-4ab8-816b-f87004804531>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourier Transforms
William Tivol tivol at news.wadsworth.org
Fri Jun 30 16:07:21 EST 1995
Dear Toby,
The other responses pretty well describe what the Fourier transform is
(or how to look it up). Just in case that is not your question, but rather
you want to know why the Fourier transform of the electron density is simply
related to the diffraction amplitudes, I think I can help.
The x-ray scattering amplitude can be thought of as having no struc-
ture. That is, any point charge will scatter the x-ray in the same manner (of
course proportionally with the magnitude of the charge). In diffraction, all
the x-rays scattered in one direction end up in a particular spot, and each
direction is an eigenfunction of momentum. Thus, the scattering can be viewed
as going from one momentum eigenfunction to the sum of other momentum eigen-
functions. The momentum eigenfunction in the co-ordinate representation is
exp(ik.r), i.e. a plane wave. In the momentum representation, the eigenfunc-
tion is a delta-function. In the momentum representation, the incident delta
function is scattered by the Fourier transform of the electron density to a sum
of other delta functions, and, since the scattering still has no structure, the
scattering must be proportional to the structure factors, which are the Fourier
components of the electron density. If this was not your question, delete this
post before reading it.
Bill Tivol
More information about the Xtal-log mailing list
|
{"url":"http://www.bio.net/bionet/mm/xtal-log/1995-June/001753.html","timestamp":"2014-04-16T09:40:16Z","content_type":null,"content_length":"3446","record_id":"<urn:uuid:66ddfc06-9131-4d4e-934b-47de787adc31>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
bilinear transform
The most common method of designing an
IIR digital filter
(that is, a
digital filter
with both
) is to first design an
analog filter
that meets the design specifications, then transform it to a digital filter using a transformation
to map from the
domain to the
The bilinear transform is the most commonly used function of this type. (Other transforms are sometimes used in other applications; for instance, the step-invariant transformation for feedback
control designs.) It is also known as the trapezoidal approximation.
The bilinear transform converts a
transfer function
in the
Laplace transform
domain to one in the
domain, by substituting:
s = c*(z-1)/(z+1)
where c is a constant that can be set arbitrarily to match one particular analog frequency is mapped exactly to a particular digital frequency.
Important Details
The mapping thus generated has several nice properties: it is
, and it maps the left-hand
in the Laplace domain to the interior of the
unit circle
in the z-domain (i.e.,
region to stable region, for
The mapping is not linear. By setting z = exp(jΩ) and s = jω in the bilinear transform equation, the relation between analog and digital frequencies can be extracted:
ω[analog] = c tan(Ω[digital]/2)
Thus, the frequency response of an analog filter will be "warped" in the resulting digital filter.
There are two methods of compensating for this frequency distortion. First, if an existing analog filter is being converted, the constant c can be selected to make one the filters match at one
particular frequency (for instance, the filter's cutoff frequency). More commonly, the filter's frequency specifications can be made in the digital domain and converted back to the analog domain
(with an arbitary choice of c). An analog filter is then designed to meet these specifications; when converted back to a digital filter, the critical frequencies will be in the right places.
The bilinear transform is related to the
trapezoidal approximation
numerical integration
. Consider a
difference equation
that approximates integration using this method:
y(n) = y(n-1) + T*(x(n) - x(n-1))/2
where T is the sampling period. Taking the z-transform and simplifying yields:
Y(z)/X(z) = (T/2)*(z+1)/(z-1)
Compare this to the Laplace transform of an integrator:
Y(s)/X(s) = 1/s
Clearly, applying the bilinear transform with c=2/T to one will yield the other.
Digital Signal Processing, Third Edition
, by
John Proakis
Dimitris Manolakis
. (
Digital Filtering: An Introduction
, by
Edward Cunningham
. (
my own head
|
{"url":"http://www.everything2.com/title/bilinear+transform","timestamp":"2014-04-17T09:56:36Z","content_type":null,"content_length":"23747","record_id":"<urn:uuid:9ae24600-9696-4ab8-816b-f87004804531>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Infinity and the "Noble Lie"
joeshipman@aol.com joeshipman at aol.com
Sat Dec 10 15:59:08 EST 2005
> For those following this discussion who have no problem with the Axiom
> of Infinity in ZFC, and who are comfortable saying that it is a true
> statement and that statements derived from it are therefore also true,
> I'd like to hear what your attitude is to the axiom that an
> Inaccessible Cardinal exists.
It's also true, as are various other small large cardinal axioms up to
something like a weakly compact cardinal at least. What's the point of
this question?
My reply to Koskensilta:
The point of the question is that I don't expect everyone to have the
same answer as you. What is it about the large cardinals up to a weakly
compact cardinal that makes you believe they are true?
One property of lies, noble or otherwise, is that they are
*falsehoods*; and this presupposes an antecedent notion of truth. One
can question whether there is an antecedent notion of truth in
mathematics, which serves as a common ground upon which to resolve
the debate about whether there are infinite sets. It seems to me to
be very analogous to the debate between constructivists and non-
constructivists, where, also, one has to ask: On what none question-
begging grounds could one possibly resolve the issue.
My reply to Tait:
My point is that there does appear to be an antecedent notion of truth
in mathematics as far as "ordinary mathematics" is concerned -- for
example, there is a fair consensus that the Riemann Hypothesis is
either true or false but we don't know which. Those who deny that an
antecedent notion of truth exists which settles the Axiom of Infinity
had better explain what to make of statements which do not mention
infinite sets but all known proofs of which require the axiom of
infinity. (For example, Kruskal's theorem in Friedman's finite form, or
various other specializations of the Graph Minor theorem.)
My point was that axioms may or may not be called 'admissible'. I did
not say
anything about the truth. Mathematically, I think the axiom of infinity
perfectly true. The concept of truth of axioms must be seen in a
defined by the abstraction levels associated. Interpreting mathematics
in a
particular 'semantic domain' involves many assumptions on the mechanism
Actually mathematics starts from reality and from the truth
therein we eventually abstract these axioms. So axioms must be true by
definition. Particular abstractions may be favoured over others. It is
not necessary to find a fault with a version to investigate another.
'Absolute truth of ZFC axioms' is all right for mathematicians not
working on
the foundations.
My reply to Mani:
Your last sentence is exactly how a "noble lie" works -- the statements
are true as far as the masses are concerned, but we enlightened ones
know the situation is more complicated....
You're still not addressing my main point. I am not insisting you
declare that the ZFC axioms are "true" in a context-independent sense;
I am asking whether ANY theorem whose known ZFC-proofs require the
Axiom of Infinity can be "true" in a context-independent sense.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-December/009444.html","timestamp":"2014-04-20T08:16:34Z","content_type":null,"content_length":"5690","record_id":"<urn:uuid:da78a06e-6c90-4b32-8991-95d3ed631b76>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Printing in increasing order
I having trouble completing my output function. The directions I was given were to:
create copy of the original array:
in a for-loop copy the values from the original
// note that the copy is going to be destroyed in the process of the printing
while array is not empty
assume that the first element of the array is the smallest found number
for each subsequent element in the array - compare with the smallest
if the element is smaller update the smallest found
print the smallest found number
call removeNumber() with the smallest found to be removed
My code so far:
void output(int *arrayPtr, int size){
int smallest = INT_MAX;
for (int i = 0; i<size; i++) {
if (arrayPtr[i]<smallest) {
smallest = arrayPtr[i];
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/beginner/97848/","timestamp":"2014-04-19T12:01:31Z","content_type":null,"content_length":"6393","record_id":"<urn:uuid:a496e363-fa77-4db2-af1d-fd706f23261b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Books on vector calculus
September 29th 2011, 02:43 PM #1
Sep 2011
Books on vector calculus
I am a first year engineering student. Please tell me any books that will make me understand and grasp some of the things we do with vectors. I am looking for books that talk about vectors, such
as matrices, proofs, etc. Any books that u thought were good, please tell me too, i appreciate it. Thank you.
Re: Books on vector calculus
I am a first year engineering student. Please tell me any books that will make me understand and grasp some of the things we do with vectors. I am looking for books that talk about vectors, such
as matrices, proofs, etc. Any books that u thought were good, please tell me too, i appreciate it.
Here is the book that I would suggest to anyone: Vector Calculus by Thomas H. Barr.
Re: Books on vector calculus
Thank you, just wondering if you can name some more as well?
Re: Books on vector calculus
Re: Books on vector calculus
Re: Books on vector calculus
Amazon.com: Calculus of Vector Functions: R.H.; Williamson, R.E.; Mirkil, H. Crowell: Books <-- this book covers the linear algebra with an aim to how it is used in multivariate calculus. it's
not too abstract, but it is sufficently mathematically sophisticated. it has some nice pictures, too. it's calc 2-ish...
September 29th 2011, 02:46 PM #2
September 29th 2011, 02:58 PM #3
Sep 2011
September 29th 2011, 04:33 PM #4
September 29th 2011, 04:41 PM #5
Sep 2011
September 29th 2011, 10:39 PM #6
MHF Contributor
Mar 2011
|
{"url":"http://mathhelpforum.com/algebra/189149-books-vector-calculus.html","timestamp":"2014-04-18T04:44:17Z","content_type":null,"content_length":"46485","record_id":"<urn:uuid:60f9842a-1598-44d1-94aa-84ec570be22e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Large-scale motif discovery using DNA Gray code and equiprobable oligomers
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Bioinformatics. Jan 1, 2012; 28(1): 25–31.
Large-scale motif discovery using DNA Gray code and equiprobable oligomers
Motivation: How to find motifs from genome-scale functional sequences, such as all the promoters in a genome, is a challenging problem. Word-based methods count the occurrences of oligomers to detect
excessively represented ones. This approach is known to be fast and accurate compared with other methods. However, two problems have hampered the application of such methods to large-scale data. One
is the computational cost necessary for clustering similar oligomers, and the other is the bias in the frequency of fixed-length oligomers, which complicates the detection of significant words.
Results: We introduce a method that uses a DNA Gray code and equiprobable oligomers, which solve the clustering problem and the oligomer bias, respectively. Our method can analyze 18 000 sequences of
~1 kbp long in 30 s. We also show that the accuracy of our method is superior to that of a leading method, especially for large-scale data and small fractions of motif-containing sequences.
Availability: The online and stand-alone versions of the application, named Hegma, are available at our website: http://www.genome.ist.i.kyoto-u.ac.jp/~ichinose/hegma/
Contact: ichinose/at/i.kyoto-u.ac.jp; o.gotoh/at/i.kyoto-u.ac.jp
The technological development of next-generation sequencing has enabled us to obtain genome-scale promoter sequences (Wakaguri et al., 2008). The first step toward unraveling the regulatory
mechanisms from such large-scale data is to identify cis-regulatory motifs. Existing computational algorithms used for motif finding may be categorized into three classes: (1) motif discovery from
promoter sequences in a single genome (Sandve and Drabløs, 2006); (2) phylogenetic footprinting that uses promoter sequences from multiple species (Das and Dai, 2007); and (3) motif search relying on
known motif models, such as JASPAR (Sandelin et al., 2004) and TRANSFAC (Wingender, 2004). To predict the locations of motifs, each class adopts a distinct strategy: Class (1) tries to find
particular words or sets of similar words significantly enriched in promoters; Class (2) aligns orthologous genomic sequences and extracts the sites that are well-conserved among species; and Class
(3) finds the sites that match a list of known motifs cataloged in a library. Although the latter two classes are applicable to genome-scale promoter sequences in principle, the high computational
cost prohibits application of the first class to large-scale data, despite the fact that motif discovery is the only way if we have no prior knowledge of other species or known motifs.
Of the several different approaches adopted in motif discovery, word-based methods are much more scalable than other approaches (Das and Dai, 2007), such as expectation maximization (Bailey and
Elkan, 1994) or Gibbs sampling (Lawrence et al., 1993). In principle, a word-based method exhaustively counts all the oligomers in a given set of sequences and detects the ones that are represented
more abundantly than the background frequencies. However, there are two problems hindering the application of this method to large-scale data. First, it is not trivial to cluster similar oligomers
into fewer groups. Fundamentally, a word-based method initially detects interesting oligomers without allowing any substitutions, whereas a motif is typically a set of similar oligomers that contain
some variations among them. Hence, we need to apply a clustering method to gather similar oligomers. However, the computational cost rapidly increases with the number of initial oligomers or the
degree of allowed variations. Second, the detection of significantly abundant oligomers is complicated by the variable background frequencies of different oligomers with a fixed length. For example,
the background frequencies of AT-rich and GC-rich oligomers can differ extensively in human promoter sequences. Moreover, the difference becomes more remarkable for longer oligomers. Thus, we have to
carefully evaluate the statistical significance of over-representation of particular oligomers in large-scale data.
Here, we report a new motif discovery method that can analyze tens of thousands of DNA sequences each ~1 kbp long. We solve the first problem by using a DNA Gray code [originally proposed by Gray
(1947), see also Er (1984)]. The DNA Gray code is an ordering of oligomers in which adjacent oligomers differ from each other by only one nucleotide. Since neighboring oligomers in the DNA Gray code
are similar to one another, we can solve the first problem by searching only neighborhoods within the DNA Gray code. To solve the second problem, we use ‘equiprobable’ oligomers, the lengths of which
are variably adjusted so that every oligomer should have an approximately equal background probability. It is easily shown that the equiprobable oligomers can be naturally combined with the DNA Gray
We implement our motif discovery method in C to produce the computer program named ‘Hegma’ and evaluate the performance of Hegma by using a known database, cisRED (Robertson et al., 2006). The
benchmark test indicates that in most situations Hegma outperforms Weeder (Pavesi et al., 2004), the best existing word-based motif discovery tool (Tompa et al., 2005). As Hegma is three to four
orders of magnitude faster than Weeder, Hegma may be applicable to unprecedented scales of data analyses.
2 METHODS
2.1 DNA Gray code
A Gray code is a coding system of binary numbers in which adjacent numbers differ by only one bit. Although Gray has initially proposed this code as such binary numbers (Gray, 1947), we can easily
extend it to quaternary numbers (Er, 1984) to be applied to a DNA sequence.
The DNA Gray code can be constructed iteratively from monomers to arbitrary length oligomers. Consider a monomer code (A,G,C,T). This code is obviously a Gray code because adjacent monomers differ by
one nucleotide. Note that we regard the last monomer to be adjacent to the first monomer, and this circularity holds for longer oligomers. We prepare four copies of the monomer Gray code and
concatenate them with each nucleotide, but in the cases of G and T, the copies are arranged in the reverse order. This procedure yields the dimer Gray code as illustrated in Figure 1. In the same
manner as the dimers, we can construct the DNA Gray code of k-mers (k>1) by preparing four copies of the (k−1)-mer Gray code, two of which are reversed and concatenating them to each nucleotide.
Construction process of the DNA Gray code of dimers. The ordinary and reverse copies in the second row are copied from the monomer Gray code in the first row. The concatenation of the first and
second rows yields the dimer Gray code shown in the third ...
In general, if the (k−1)-mer code is a Gray code, the k-mer code constructed by the above procedure is also a Gray code. This fact can be understood from the following observations. We can partition
the k-mer Gray code into four regions in which the first nucleotides in each region are identical. Inside each region, the oligomers are arranged in Gray code order because the first nucleotides are
identical and the others are the (k−1)-mer Gray code. On the other hand, two oligomers at both sides of a boundary between neighboring regions are identical except for the first nucleotides because
of the reverse copy. Consequently, the k-mer code is inductively a Gray code as the monomer code is a Gray code.
The DNA Gray code has an ordered tree structure as a consequence of the construction process mentioned above (Er, 1984). This implies that we can apply the depth-first search algorithm to the tree to
naturally order oligomers of variable lengths. This feature is important in combining the DNA Gray code with the equiprobable oligomers, as discussed later in Section 2.3.
The Hamming distance between oligomers located at a distance d in the DNA Gray code is smaller than or equal to d. In this regard, when we extract some consecutive oligomers from the DNA Gray code,
those oligomers are similar to one another. However, all similar oligomers are not necessarily in a neighborhood in the DNA Gray code, i.e. two oligomers having a small Hamming distance can be
located at distant positions. Nevertheless, we can show that the property of the neighboring similarity is beneficial for efficient data processing compared with conventional methods (Section 3).
2.2 Shift detection
Two oligomers with a shift relation, for example ACGGT and CGGTC, are similar to each other in the sense of edit distance, although the Hamming distance between them is large. Because of the large
Hamming distance, we cannot immediately detect the similarity between such oligomers in the DNA Gray code. Fortunately, however, we can detect the shift relations of the oligomers at a low cost by
taking advantage of the feature that the DNA Gray code is left shift continuous.
Let S be a semi-infinite sequence, S=s[0]s[1]···s[i]···, s[i]A,G,C,T}. The left shift σ of the sequence is defined by:
Note that the left shift is the inverse of the construction of the DNA Gray code; in the construction process, we concatenate oligomers with each nucleotide, whereas we remove the first nucleotides
from the oligomers in the left shift.
To explain the left shift continuity, we introduce a real-valued representation of the sequence in the DNA Gray code. Let G[k]={g[0], g[1],…, g[i],…, g[N−1]} be a DNA Gray code with N=4^k oligomers,
where g[i] is an oligomer of length k, g[i]=s[i0]s[i1]···s[ik−1]. The real-valued representation ϕ[k] of g[i] is defined by:
In general, there is also a real-valued representation ϕ of a semi-infinite sequence S, x=ϕ(S) as k→∞. Our aim here is to show the function f that corresponds to the left shift σ in the real-valued
domain x.
In order to understand the left shift function f, we consider the construction process of the DNA Gray code in the real-valued domain, as shown in Figure 2. The copies and reverse copies in the
construction process correspond to the linear maps that have positive and negative slopes in the real-valued domain, respectively. Therefore, the process is expressed as shown in Figure 2a. Since the
left shift is the inverse process of the construction, we can obtain the left shift function as the inverse map, as shown in Figure 2b. This function is equivalent to the composition map of the tent
map well known in chaos theory (Alligood et al., 1997).
Construction process and left shift of the DNA Gray code in the real-valued domain. (a) The construction process can be expressed as linear maps that have positive (A and C) and negative (G and T)
slopes. (b) The left shift function f can be understood ...
It should be noted that the function f is continuous. The left shift continuity implies that the image mapped from a contiguous region in the DNA Gray code, which corresponds to a set of similar
oligomers, is also contiguous. If the functions were discontinuous, a contiguous region would be mapped to scattered regions. The left shift continuity ensures that we can obtain a single region
whenever a contiguous region is mapped.
Figure 3 illustrates two examples of contiguous regions r[1] and r[2], and their images r^′[1] and r^′[2]. The region r[2] overlaps with the image r^′[1] (Fig. 3b), which corresponds to the left
shifts of oligomers in r[1] (Fig. 3a). Since this implies that r[2] is included in the left shifts of r[1], we can judge that those regions have a shift relation. Thanks to the left shift continuity,
a shift relation can be detected by mapping only two oligomers at the beginning and end of the region even though the contiguous region is composed of many oligomers.
Mappings from two contiguous regions (r[1] and r[2]) to their images (r^′[1] and r^′[2]). (a) The relations between the contiguous regions and their images are indicated on the left shift function f.
(b) All contiguous regions are illustrated on ...
To detect overlapped pairs in a set of contiguous regions, we compare the regions with a sorted list of their images. We can compare those lists in a linear order of the number of regions.
Consequently, we can detect shift relations of oligomers quite efficiently.
2.3 Equiprobable oligomers
The background probability is a model that represents an intrinsic property of DNA sequences regardless of the presence of motifs. We can statistically detect an oligomer as a motif when the
frequency of its occurrence is significantly higher than the background probability. In this work, we use the m-th order Markov model of the given sequences as the model of the background
As we mentioned in Section 1, a variation among the background probabilities causes statistical bias in the significance detection. To overcome this problem, we propose equiprobable oligomers whose
lengths are variable, but whose background probabilities are adjusted to be nearly identical to one another.
Let I(S) be the background information content of an oligomer S, where I(S)=−log[2] P(S) and P(S) is the background probability. Let S^′ be the oligomer in which the right-most nucleotide is removed
from S. We define the equiprobable oligomer S such that it has the following property,
where θ is a threshold parameter.
As an example, we consider equiprobable oligomers that consist of only A and C with the 0-th order Markov model as the background probability. In the 0-th order Markov model, the background
information content I(S) of an oligomer S is expressed as the sum of the background information contents of individual nucleotides, i.e. I(S)=I(s[0]s[1]···s[k−1])=∑[i=0]^k−1I(s[i]). Figure 4
illustrates such equiprobable oligomers. Each box corresponds to a nucleotide and its height is drawn to be proportional to the information content of that nucleotide. Therefore, when the
(downwardly) heaped boxes exceed the threshold θ, the column of those nucleotides becomes an equiprobable oligomer. All the equiprobable oligomers do not have exactly the same probability; for
example, I(AAAA)=8 and I(CCC)=9. However, the equiprobability is considerably improved compared with fixed-length oligomers, especially in the cases of longer oligomers and a higher order Markov
model. The validity of the digitizing approximation is discussed in Section S.1 in Supplementary Material.
An example of equiprobable oligomers arranged in the order of Gray code. We use the 0-th order Markov model with I(A)=2 and I(C)=3. We fix the threshold parameter θ=8. The height of a box corresponds
to its information content.
Consider two oligomers, S[1] and S[2], such that S[1] is shorter than S[2]. If S[2] is an equiprobable oligomer and S[1] matches a prefix of S[2], S[1] cannot be an equiprobable oligomer because I(S
[1]) should be smaller than θ under the property of Equation (3). This observation implies that the set of equiprobable oligomers is a prefix code in which no oligomer matches a prefix of any other
oligomer. Recall the feature that the DNA Gray code has the ordered tree structure. In the prefix code, a code word is always located at a leaf of the tree. Therefore, the equiprobable oligomers can
be ordered on the tree and hence we can naturally combine the equiprobable oligomers with the DNA Gray code so that adjacent oligomers differ from each other by just one nucleotide up to the length
of the shorter oligomer.
Algorithm 1 shows the recursive procedure that performs the depth-first search on the tree of the DNA Gray code. By calling equigraycode(″″,true), one can display all of the equiprobable oligomers
with the DNA Gray code. If we use the i.i.d. uniform distribution as the background model, we can obtain the DNA Gray code with a fixed length of θ/2, because I(S)=2|S| in this case. Therefore,
Algorithm 1 can generate the DNA Gray code as a special case.
2.4 Significance detection
We have now obtained the DNA Gray code of equiprobable oligomers. To detect significant motifs from a given set of sequences, we count the occurrences of equiprobable oligomers. Let C be a set of
occurrence counts of equiprobable oligomers:
where M is the number of equiprobable oligomers and c[i] is the count of the i-th oligomer in the DNA Gray code. We define a contiguous region [i, j] as a cluster if it satisfies the following
The cluster is a set of similar oligomers that appear in the given sequences.
We detect the significance of the cluster by using its width w=j−i+1 and the total count o=∑[k=i]^j c[k]. The null hypothesis is that the cluster is obtained from random sequences generated by the
background model. In the background model, the occurrence probability p of each oligomer can be approximated by p=1/M because oligomers are equiprobable. Let q be the probability of an oligomer that
occurs at least once. Thus, q is expressed as q=1−(1−p)^T, where T is the total number of oligomers in the given sequences. The random width W against w can be understood as Bernoulli trials where
there are W-successes with the probability q between two failures. Therefore, the probability distribution of W is a geometric distribution represented by:
Since O≥w, the random total count O against o is conditioned by the width w. If there is no constraint, the probability distribution of O is a binomial distribution with the success probability wp
and the number of observations T. The conditional probability distribution is represented by:
where Bin is the binomial distribution:
Using these distributions, we define the p-value pv of a cluster by:
Since there are many clusters in the set of occurrence counts C, a large number of significance tests must be involved. To reduce the false discovery rate, we use the e-value ev instead of the p
-value, which is adjusted by the number of equiprobable oligomers M as follows,
If ev is smaller than a significance level α, the null hypothesis is rejected and hence the corresponding cluster is judged to be significantly enriched.
2.5 Summary of methods
The flowchart shown in Figure 5 summarizes our motif discovery procedure. The parameter that characterizes each process is presented beneath the description of the process.
1. Threshold parameter θ: the threshold parameter θ is critical in our method because it regulates the probability of equiprobable oligomers
. Empirically, we can obtain good results when we set
, where
is the total sum of the lengths of the input sequences. Therefore, in the application, θ is automatically adjusted in accordance with the input sequences, such that θ=log
)−ϵ (empirically, ϵ=1). The rationale behind this estimation is discussed in Section S.2 in
Supplementary Material
2. Order of Markov model m: the background Markov model is constructed from the input sequences that include the motifs themselves. Since the regions occupied by the motifs are much smaller than the
rest of the sequences, the background model can be properly estimated if m is small. The default value of m is fixed at 3.
3. Significance level α: the significance level α is not crucially influential in our method. We set the default value at 0.01 as a typical value.
4. Number of shifts: after finding significant clusters, we sort them in the ascending order of their e-values. We pick up each cluster in this order and look for other clusters that have a shift
relation with it. The clusters thus found are merged into a single motif. This process is recursively performed. The depth of this recursion defines the number of shifts allowed. We set the
default value for the depth at 3.
Flowchart of the motif discovery. Each box that corresponds to a process presents the description (upper) and the parameter (lower) within it.
2.6 Data and statistics
As the benchmark data, we use the set of human promoter sequences in the cisRED database (Human v9.0, Robertson et al., 2006). The cisRED database consists of a set of promoter sequences and a set of
motifs defined in those sequences, where each motif is conserved among several species and annotated according to the known motif database TRANSFAC (Wingender, 2004). The number of promoter sequences
is 18 779. The total number of nucleotides is ~47 Mbp, of which valid (unmasked) nucleotides amount to ~31 Mbp. After removal of redundancy, the number of conserved motifs is 236 208 and the number
of nucleotides occupied by the motifs is ~2.3 Mbp.
By comparing the sites predicted by our method with those listed in the cisRED database, we assess the performance of our method at two distinct levels, the nucleotide level and the site level. The
statistics we use are essentially the same as those adopted by Tompa et al. in their assessment strategy (Tompa et al., 2005). At the nucleotide level, each dataset consists of pairs (i,p), where i
is the sequence ID and p is the nucleotide position within the site. We denote the sets of known sites and predicted sites by nK and nP, respectively. At the site level, each set consists of triples
(i,s,e), where i is the sequence ID, and s and e are the start and end positions of the site, respectively. We denote the sets of known and predicted sites by sK and sP, respectively.
At the nucleotide level, the true positive nTP is simply defined by:
where |·| implies the size of the set. At the site level, the true positive sTP is expressed as:
where ov(u, v)=min(u.e,v.e)−max(u.s,v.s)+1 (overlap) and len(u)=u.e−u.s+1 (length). This expression implies that sTP is the number of known sites that overlap with the predicted sites by at least
one-quarter of the length of the known site.
The false positive and the false negative are defined as follows,
where x=n (nucleotide level) or x=s (site level). The true negative is defined only at the nucleotide level:
where L is the number of valid nucleotides in the promoter sequences.
Of the above definitions, only the false positive at the site level sFP is different from that of Tompa et al. (2005). Tompa et al. allowed overlaps between the predicted sites and removed such sites
from sFP if each site overlapped with a known site. In contrast, we use a slightly more stringent criterion to check whether the clustering of motifs is appropriately performed, i.e. we include the
overlaps of the predicted sites in sFP even if the sites overlap with a known site.
Either at the nucleotide (x=n) or at the site (x=s) level, the sensitivity xSn and the positive predictive value xPPV are defined as usual:
To average these quantities to give a single statistic, we adopt the correlation coefficient nCC at the nucleotide level, which is defined by:
In a similar way, we adopt the average site performance sASP at the site level, which is defined by:
3.1 Performance evaluation with all motifs in cisRED
To examine the performance of our method, Hegma, we adopt essentially the same evaluation scheme as that used by Tompa et al. (2005). To evaluate the effects of data size on the performance, we
prepare sets of sequences that are randomly selected from the human promoter sequences of the cisRED database. In the following results shown in Figure 6, we prepare 10 sets for each number of
Prediction statistics at the nucleotide level (a) and the site level (b), as a function of the number of sequences. The default parameter set described in Section 2.5 is used for calculation. Each
symbol indicates the average of 10 tests with the sequences ...
Figure 6a indicates that nPPV at the nucleotide level is insensitive to the variation in the number of sequences. In the default setting, our method adjusts the threshold parameter such that the
equiprobable oligomers should have the probability p=1/L under the background model, as discussed in Section 2.5. This adjustment maintains the null distribution at a constant precision, which
accounts for the constant rate of false positive (or type I error) and hence nearly constant nPPV. In contrast, nSn is improved as the number of sequences is increased. This improvement can be
explained by the general characteristics of statistical analysis, where a larger data size leads to more precise results.
The results at the site level are similar to those at the nucleotide level except that sPPV decreases for larger numbers of sequences (Fig. 6b). This decrease in sPPV originates from overlaps between
predicted sites, which augment sFP under our definition. Our method can detect a shift relation between overlapped sites and merge them. If this process were perfectly performed, the overlaps of the
predicted sites would be repressed. However, we fail to eliminate all the overlaps partly because we restrict the size of shifts to 3 in the default setting. We impose this restriction to avoid the
risk of merging unrelated motifs. Improved discrimination between related and unrelated motifs is one task to be explored in the future.
Figure 7 shows the memory usage and the calculation time. Calculations are made on a computer with 3 GHz Intel Xeon^® with 16 GB memory running under Linux^® 2.6. Both time and memory linearly
increase with the number of sequences. It is noteworthy that we need only 30 s for calculation of the full data (18 779 sequences, 31 Mb). The memory usage of 1.1 GB is also sufficiently feasible for
current conventional computers.
Dependence of memory usage and calculation time on the number of sequences. Each value is the average of 10 trials. For the full data, the memory usage is 1.1 GB and the calculation time is 30 s.
3.2 Performance evaluation with specific motifs
We compare the performance of our method to that of Weeder (version 1.4.2, Pavesi et al., 2004), a representative word-based method based on exhaustive enumeration with a limited number of mutations.
We choose Weeder because it performed best in the assessment of Tompa et al. (2005).
Almost all the conventional tools, including Weeder, assume that given promoter sequences are derived from coregulated genes. This assumption implies that most of the given sequences have at least
one specific motif that contributes to the specific regulation. Therefore, we prepare a set of sequences in which the fraction of sequences holding the motif is variably specified. We adopt the motif
AhR as the specific motif, because it is the most frequent motif in the TRANSFAC annotations. Let R and U be the sets of sequences with and without the motif AhR, respectively. We select sequences
from R and U according to a predefined percentage that we control. For example, when the total number of sequences is 1000 and the percentage of motif-containing sequences is 80%, we select 800
sequences from R and 200 sequences from U. In the following results, we fix the number of sequences at 1000. In order to evaluate the performance of single-motif detection, we regard only the known
sites as the right sites of the motif AhR, even though the motifs may be present at other sites in the sequence.
We run Weeder under the following settings: the species code is HS; the minimal sequence percentage on which the motif has to appear is 5 (to increase sensitivity); and the top 20 000 (sufficiently
large) motifs are reported. We try the following pairs of motif length and maximal number of mutations: (6,1), (8,2) and (10,3). Although motif length 12 is also allowed, we do not try it because of
the prohibitively long calculation time. We determine the positions of the predicted sites with the tool locator.out included in the Weeder tools.
Figure 8a shows the results at the nucleotide level. When the percentage of motif-containing sequences is 100%, i.e. all the sequences have the specific motif AhR, nCC of Weeder (0.093) is superior
to that of Hegma (0.087). However, Hegma outperforms Weeder under all other situations. The performance of Weeder becomes worse as the percentage of motif-containing sequences decreases, whereas
Hegma is little affected by this variation. Since the average length of equiprobable oligomers in this evaluation is 10.7, our setting of the motif length of Weeder should be impartial. Furthermore,
Weeder also adopts statistical measures based on Z-score, in a similar way to our method. Therefore, it is most likely that the equiprobable oligomers adopted in Hegma contribute to improving
performance compared with the fixed-length oligomers used in Weeder.
Performance comparison between Hegma and Weeder at the nucleotide level (a) and the site level (b). The number of sequences is fixed at 1000. Boxes show the average values of statistics of 10 sets of
sequences. Error bars show the maximum and minimum ...
The results at the site level (Fig. 8b) are more remarkable than those at the nucleotide level. At this level, Hegma outperforms Weeder under all situations, including the case that 100% of the
sequences contain the motif, where sASP of Weeder is 0.23 and that of Hegma is 0.25. We consider that the merge of shift-related motifs introduced in Hegma has effectively reduced sFP and hence
improved sPPV, as mentioned in the previous subsection.
We repeat the same analysis as mentioned above for the 10 most frequent motifs in cisRED (AhR, aMEF-2, POU2F1, Pax-5, DEAF-1, CREB, HNF-1α, DP-1, RSRFC4 and POU3F2). Figure 9 summarizes the results
for these 10 motifs at the nucleotide level by averaging their statistics. The detailed results for individual motifs together with the results of non-parametric statistical tests are presented in
Section S.4 in Supplementary Material. Clearly, Hegma outperforms Weeder under all the situations tested. The results at the site level are also similar to those at the nucleotide level (data not
shown). These observations imply that the performance of Hegma is more stable than that of Weeder regardless of the type of motif as well as the fraction of sequences that contain the motif. An
additional examination on a smaller ChIP-seq peak dataset also supports this conclusion as shown in Section S.3 in Supplementary Material.
Average statistics for the 10 most frequent motifs at the nucleotide level. Boxes show the average values for the statistics of motifs. Error bars show the maximum and minimum values of the
statistics. The setting is the same as that in Figure 8.
The average calculation time per dataset (1000 sequences) for Weeder is 10 h, whereas that for our method is only 1.4 s when tested under the same condition mentioned in Section 3.1 and averaged over
40 trials. Therefore, our method shows considerable advantage in calculation time as well.
3.3 Analysis of unannotated motifs
In Section 3.1, we regard the predicted sites that do not match any cisRED annotation as ‘false positives’. However, it is probable that some of them actually represent true motifs absent from the
cisRED annotation. We then extract such unannotated motifs from all significant motifs predicted by Hegma in the full data of the cisRED promoters such that >95% of the sites comprising each motif do
not overlap with any annotated sites. The number of all the predicted motifs is 7528 (composed of a total of 620 153 sites), of which the number of unannotated motifs is 1161 (36 443 sites). Figure
10 illustrates four examples of the unannotated motifs with the smallest e-values in sequence logos (Schneider and Stephens, 1990).
Four examples of unannotated motifs absent from the cisRED annotation. Each motif is labeled according to the name of the most similar motif in the JASPAR database (Sandelin et al., 2004). We
selected these motifs as the ones with the smallest e-values: ...
The unannotated sites tend to be located in distal regions compared with all the predicted sites; the average position (±SD) of the unannotated sites is −1140±894 bp relative to the transcription
start sites, whereas that of all the predicted sites is −737±837 bp (p-value of t-test: ≈0). The unannotated sites are a subset of the predicted sites and its complementary set is associated with the
cisRED annotation. Therefore, this disparity suggests that the positions of the annotated sites in cisRED may have significant bias toward proximal regions. These observations may be interpreted as
follows; it may be difficult for a phylogenetic footprinting approach, including cisRED, to detect conserved motifs in the distal regions, where the marked sequence divergence or the existence of
repetitive elements hinders reliable sequence alignment compared with more conserved proximal regions (Suzuki et al., 2004). Therefore, our method can complement the phylogenetic footprinting
approach to improve the overall sensitivity of motif discovery.
We have developed a large-scale motif discovery tool, Hegma, and shown that Hegma is not only applicable to large-scale data, but also can stably detect motifs even if only a small fraction of the
examined sequences contain the motifs. Thus, Hegma is applicable to situations where the fraction of motif-containing sequences is uncontrollable, such as the detection of splicing enhancers or
silencers in exon and intron sequences, or the detection of microRNA binding sites in UTR sequences. A huge number of such sequences have already been collected in databases. However, as our
knowledge of those motifs is yet far from complete, it is difficult to know in advance the percentage of sequences holding the motifs. We consider that the speed and precision of Hegma would
facilitate discovery of novel motifs from a heap of sequence data.
Funding: Aihara Innovative Mathematical Modelling Project, Japan Society for the Promotion of Science (JSPS) through the ‘Funding Program for World-Leading Innovative R&D on Science and Technology
(FIRST Program)’, initiated by the Council for Science and Technology Policy (CSTP); Grants-in-Aid (No. 20651053, No. 221S0002 and No. 22310124) from the Ministry of Education, Culture, Sports,
Science and Technology of Japan, in part.
Conflict of Interest: none declared.
Supplementary Material
Supplementary Data:
• Alligood K.T., et al. Chaos. an Introduction to Dynamical Systems. New York, LLC: Verlag; 1997.
• Bailey T.L., Elkan C. Proceedings of the 2nd International Conference on Intelligent Systems for Molecular Biology. Menlo Park, California: AAAI Press; 1994. Fitting a mixture model by
expectation maximization to discover motifs in biopolymers; pp. 28–36.
• Das M., Dai H.-K. A survey of DNA motif finding algorithms. BMC Bioinformatics. 2007;8(Suppl. 7):S21. [PMC free article] [PubMed]
• Er M.C. On generating the N-ary reflected gray codes. IEEE Trans. Comp. 1984;C-33:739–741.
• Gray F. Pulse code communication. U.S. Patent 2632058. 1947
• Lawrence C.E., et al. Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science. 1993;262:208–214. [PubMed]
• Pavesi G., et al. Weeder Web: discovery of transcription factor binding sites in a set of sequences from co-regulated genes. Nucleic Acids Res. 2004;32:W199–W203. [PMC free article] [PubMed]
• Robertson A.G., et al. cisRED: a database system for genome scale computational discovery of regulatory elements. Nucleic Acids Res. 2006;34:D68–D73. [PMC free article] [PubMed]
• Sandelin A., et al. JASPAR: an open-access database for eukaryotic transcription factor binding profiles. Nucleic Acids Res. 2004;32(Suppl. 1):D91–D94. [PMC free article] [PubMed]
• Sandve G.K., Drabløs F. A survey of motif discovery methods in an integrated framework. Biol. Direct. 2006;1:11. [PMC free article] [PubMed]
• Schneider T.D., Stephens R.M. Sequence logos: a new way to display consensus sequences. Nucleic Acids Res. 1990;18:6097–6110. [PMC free article] [PubMed]
• Suzuki Y., et al. Sequence comparison of human and mouse genes reveals a homologous block structure in the promoter regions. Genome Res. 2004;14:1711–1718. [PMC free article] [PubMed]
• Tompa M., et al. Assessing computational tools for the discovery of transcription factor binding sites. Nat. Biotechnol. 2005;23:137–144. [PubMed]
• Wakaguri H., et al. DBTSS: database of transcription start sites, progress report 2008. Nucleic Acids Res. 2008;36(Suppl. 1):D97–D101. [PMC free article] [PubMed]
• Wingender E. TRANSFAC, TRANSPATH and CYTOMER as starting points for an ontology of regulatory networks. In Silico Biol. 2004;4:55–61. [PubMed]
Articles from Bioinformatics are provided here courtesy of Oxford University Press
• MedGen
Related information in MedGen
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3244767/?tool=pubmed","timestamp":"2014-04-18T16:50:36Z","content_type":null,"content_length":"100496","record_id":"<urn:uuid:1fb4b3e6-cfa2-492c-bbe3-6ed5c99ea546>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CCSS.Math.Content.HSS-ID.A.4 - Wolfram Demonstrations Project
US Common Core State Standard Math HSS-ID.A.4
Demonstrations 1 - 12 of 12
Description of Standard: Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such
a procedure is not appropriate. Use calculators, spreadsheets, and tables to estimate areas under the normal curve.
|
{"url":"http://www.demonstrations.wolfram.com/education.html?edutag=CCSS.Math.Content.HSS-ID.A.4","timestamp":"2014-04-21T09:52:55Z","content_type":null,"content_length":"28887","record_id":"<urn:uuid:6d025d01-e93d-4835-ae55-5898c287a859>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Numerical Analysis: Math 128a
Fall 2001, MWF 1011 in 3 Evans
Professor: Michael Anshelevich, 1063 Evans, manshel@math.berkeley.edu.
Office hours: M 1112, W 23. The office hours for my other class are Tu 1112, F 23.
You are welcome to come by then, but the students from the other class have priority.
Class homepage: http://www.math.berkeley.edu/~manshel/m128/m128.html. I will
use it to post homework assignments and last-minute announcements. Also, if you want
to be on the mailing list for the class, send me email in the first week of classes.
GSI: Robert Cheng, rhcheng@math.berkeley.edu. Section Tu 23, 3111 Etcheverry.
Text: Stoer and Bulirsch, Introduction to Numerical Analysis (2nd ed.). Recommended
reading: Press et al., Numerical Recipes: The Art of Scientific Computing for the appropriate
programming language. A Matlab primer is available on the class homepage.
Prerequisites: Math 53, Math 54. Some topics you should be familiar with: Taylor's
theorem, differential equations, and linear algebra, in particular solution of systems of linear
equations. Programming experience is definitely a plus, otherwise you will have to learn very
quickly. The primary programming language in the course will be Matlab.
· General error analysis.
· Interpolation by polynomials, trigonometric functions, and splines.
· Numerical integration.
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/497/1656647.html","timestamp":"2014-04-18T18:27:51Z","content_type":null,"content_length":"8434","record_id":"<urn:uuid:752eccac-60c2-491e-9e33-b733a78cf046>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
8.2. math — Mathematical functions
This module is always available. It provides access to the mathematical functions defined by the C standard.
These functions cannot be used with complex numbers; use the functions of the same name from the cmath module if you require support for complex numbers. The distinction between functions which
support complex numbers and those which don’t is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a
complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place.
The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats.
8.2.1. Number-theoretic and representation functions
Note that frexp() and modf() have a different call/return pattern than their C equivalents: they take a single argument and return a pair of values, rather than returning their second return value
through an ‘output parameter’ (there is no such thing in Python).
For the ceil(), floor(), and modf() functions, note that all floating-point numbers of sufficiently large magnitude are exact integers. Python floats typically carry no more than 53 bits of precision
(the same as the platform C double type), in which case any float x with abs(x) >= 2**52 necessarily has no fractional bits.
8.2.2. Power and logarithmic functions
8.2.3. Trigonometric functions
8.2.4. Angular conversion
Converts angle x from radians to degrees.
Converts angle x from degrees to radians.
8.2.5. Hyperbolic functions
Hyperbolic functions are analogs of trigonometric functions that are based on hyperbolas instead of circles.
8.2.6. Special functions
8.2.7. Constants
The mathematical constant π = 3.141592..., to available precision.
The mathematical constant e = 2.718281..., to available precision.
CPython implementation detail: The math module consists mostly of thin wrappers around the platform C math library functions. Behavior in exceptional cases follows Annex F of the C99 standard where
appropriate. The current implementation will raise ValueError for invalid operations like sqrt(-1.0) or log(0.0) (where C99 Annex F recommends signaling invalid operation or divide-by-zero), and
OverflowError for results that overflow (for example, exp(1000.0)). A NaN will not be returned from any of the functions above unless one or more of the input arguments was a NaN; in that case, most
functions will return a NaN, but (again following C99 Annex F) there are some exceptions to this rule, for example pow(float('nan'), 0.0) or hypot(float('nan'), float('inf')).
Note that Python makes no effort to distinguish signaling NaNs from quiet NaNs, and behavior for signaling NaNs remains unspecified. Typical behavior is to treat all NaNs as though they were quiet.
See also
Module cmath
Complex number versions of many of these functions.
|
{"url":"https://wingware.com/psupport/python-manual/3.2/library/math.html","timestamp":"2014-04-19T07:35:33Z","content_type":null,"content_length":"42385","record_id":"<urn:uuid:84b49980-23fb-46d5-b50c-de925808fb7c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Experimental Cosmology and
The Ideal Microcalorimeter
In its most simple incarnation, a microcalorimeter has three parts: an absorber with heat capacity C, a thermometer, and a weak thermal link (with conductance G) to a cold bath at temperature Tb. The
cold bath is maintained by a refrigerator at cryogenic temperatures (typically from 50 to 300 milliKelvin depending on the application). When a photon hits the absorber, its energy is converted into
heat, which raises the temperature of the absorber. A very sensitive thermometer registers this increase in temperature. The device then cools through the weak thermal link and returns to its
quiescent state, ready to detect another photon. The height of the thermal signal is proportional to the energy of the photon. The time constant of this process is determined by the heat capacity C
of the absorber and the thermal conductance G of the weak link. The time constant is given by tau=C/G. With these devices we can very accurately determine the energy of the photon, its time of
arrival, and by making an array of such devices and placing it at the focal plane of an imaging optic, we can also determine the direction of the photon. Thus we have a single-photon-counting imaging
To derive the response of a microcalorimeter to an incident photon, we start by looking at energy flow. The temperature of the absorber will depend on how energy goes in and how much goes out through
the weak thermal link:To derive the response of a microcalorimeter to an incident photon, we start by looking at energy flow. The temperature is proportional to the power flowing through the
Where C is the heat capacity of the absorber, T is the temperature of the absorber, P is some power dissipated in the absorber, G is the thermal conductance, Tb is the bath temperature and E is the
energy of the photon incident at time t=0.
This equation can be readily solved for the temperature T assuming the Power P is a constant:
So the response to a photon is a single exponential decay from with a time constant τ = C/G and an amplitude proportional to the energy of the photon:
All we need is a good way to accurately measure the temperature of the absorber. In our lab we are developing two different technologies: transition-edge sensors and magnetic calorimeters. Both hold
great promise to achieve very high resolutions (E/ΔE > 1000), and we are excited about the possibilities these technologies hold for future large-format detectors for both Earth and Space-borne
|
{"url":"http://web.mit.edu/figueroagroup/ucal/ucal_basics/index.html","timestamp":"2014-04-19T22:18:51Z","content_type":null,"content_length":"10014","record_id":"<urn:uuid:80222ce4-3fe2-49e2-ac6d-d84d336cd1c5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
math - word problem. help asap?
Posted by Agnes on Sunday, December 3, 2006 at 1:15am.
in a certain math class, each student has a text book, every 2 students share a book of tables, every 3 students share a problem book, and every 4 students share a mathematics dictionary. If the
total number of books is 75..
how many students are in a class?
how many students share tables?
how many students share a problem book?
how many students share a math dictionary?
write the equattion
solve the equation
find the solution
answer each question
You posted this question twice. It has already been answered
i think u answered it wrong? i do not even understand what u did..
Customers of a phone company can choose between two service plans for long distance calls. The first plan has a $17 one-time activation fee and charges 8 cents a minute. The second plan has no
activation fee and charges 12 cents a minute. After how many minutes of long distance calls will the costs of the two plans be equal?
• math - word problem. help asap? - Anonymous, Monday, June 16, 2008 at 9:11pm
Customers of a phone company can choose between two service plans for long distance calls. The first plan has a one-time activation fee and charges cents a minute. The second plan has no
activation fee and charges cents a minute. After how many minutes of long distance calls will the costs of the two plans be equal?
□ math - word problem. help asap? - kayla help asap, Tuesday, November 18, 2008 at 2:34pm
The favorite colors of Patty,Jenna,Matt and Tom are brown,blue,red,and purple.No person's name has the same number of letters as his or her favorite color.Jenna and the girl who likes blue
are in different grades.Red is the favorite color of one of the boys.Find each person's favorite color and show your work.
• math - word problem. help asap? - Anonymous, Tuesday, March 15, 2011 at 10:07am
at a recycling plant, crusher 1 can crush three tons of glass in 5 hours. If crusher 2 is used in conjunction with crusher 1, it would take 2 hours to crush three tons of glass. How long would it
take crusher 2 alone to crush three tons of glass? Round your answer to the nearest hour and minute.
• math - word problem. help asap? - Anonymous, Thursday, October 24, 2013 at 10:26pm
Peter's age is less than 4 times Carol's age. The difference of their ages is 33. Find Peter's age and Carol's age.
HINT: Two equations, two variables
Related Questions
Math - Alegbra 2 - In a certain math class, each student has a text book, every ...
Math - Alegbra 2 some1 plz help? - in a certain math class, each student has a ...
Algebra - In a certain math class, each studen has a textbook, every two ...
math - Three groups of students are sharing leftover pizza (all the same size ...
algebra... - In a school, there are 4 more students in Class A than in Class B, ...
math - Three groups of students are sharing leftover pizza (all the same size ...
math - every math class at super fun time high school has 31 students and every ...
maths - Formalize the following english statements as quantified WFF. Let S(x) ...
Math - Mary is passing out school supplies to students in the class. She gives ...
Math - Word Problem - At a certain dance class, each student must pick exactly ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1165126547","timestamp":"2014-04-16T05:07:42Z","content_type":null,"content_length":"10875","record_id":"<urn:uuid:3bd5e72f-fa3f-40c0-a78f-6c38480306db>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euless Statistics Tutor
Find an Euless Statistics Tutor
...Prealgebra suggests that students are laying down the basics of numbers, areas, etc., before tackling how to deal with "unknowns" and "equations". I can guide the students in mastering the
multiplication tables and how to proceed up and down the number line and getting comfortable in dealing wit...
7 Subjects: including statistics, algebra 1, prealgebra, algebra 2
...Recognizing that every student is different, I study my students to see how they best learn and understand information. I then use the strategies that are most in harmony with their natural
learning style to facilitate deep and meaningful learning and retention of that learning.-Bachelor's degre...
53 Subjects: including statistics, chemistry, English, reading
I have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy and physics. This also involved
teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring.
15 Subjects: including statistics, chemistry, physics, calculus
...Sincerely, Dr. Bob My background in Physics dates back to 1976, when I began looking at how fireworks could be designed with mathematical equations and complex physical parameters. In high
school I built rockets and wrote computer programs to calculate trajectories.
93 Subjects: including statistics, chemistry, physics, English
I am a professional tutor of college level and high school level courses. I have tutored privately for over ten years and have been employed by a college to deliver tutorials and laboratory
demonstrations in civil, mechanical and electrical engineering and computer science courses. I have also wor...
56 Subjects: including statistics, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/euless_tx_statistics_tutors.php","timestamp":"2014-04-21T02:15:21Z","content_type":null,"content_length":"23994","record_id":"<urn:uuid:901e7617-e023-4fff-b05e-7c304b35e582>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
|
box building [Archive] - Car Audio Forum - CarAudio.com
01-06-2004, 05:25 PM
Ok I was thinking about maybe building a box for my 2 subs but the biggest question for me is this
"Im bad with math but how do you find the volume of a box so that I know the box I build has the right amount. Is there a simple formula? Also do angles matter? Say i wanted a box like this shape /_|
and not just |_| shaped would the formula still work. I ask the shape part becuase I have 2 12's and a ext cab ranger so seating room and spare room for box is a tuffy."
Also thanks in advanced!!!
|
{"url":"http://www.caraudio.com/forums/archive/index.php/t-47301.html","timestamp":"2014-04-18T04:28:29Z","content_type":null,"content_length":"9537","record_id":"<urn:uuid:ba3cb926-115a-4c06-962a-4272a9e02bd4>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Ysidro, CA Algebra Tutor
Find a San Ysidro, CA Algebra Tutor
I currently teach special education, K-6, in the San Diego School District. I am available after 3PM, most weekdays. I live in Northern Baja and have some Spanish skills.
17 Subjects: including algebra 1, reading, writing, English
...I absolutely love words and work with them whenever I can. I have extensive work with the International Center of UCSD, helping those here on exchange programs become more accustomed to English
and the American system. I had a 2170 on the SAT, and a 35 on the ACT.
42 Subjects: including algebra 2, English, algebra 1, calculus
...My tutoring experience includes study sessions with fellow classmates who were struggling with the subject. So although it is not certified tutoring experience, my fellow classmates' consistent
appreciation and gratitude for my help makes me confident in my ability to help those who are struggling. My tutoring style is committed, patient, and thorough.
10 Subjects: including algebra 2, algebra 1, calculus, chemistry
...I enjoy the many elements in the field of education. I appreciate the exchanges and learning opportunities I get when working with different students, their perspectives, personalities and
backgrounds. I enjoy the challenge of discovering how to best assist a student through his/her learning process by acknowledging the diversity of people's learning styles.
4 Subjects: including algebra 1, Spanish, geometry, ESL/ESOL
...I had those same questions!, and now that I can be the person with answers, the satisfaction I get is inexplicable. People prefer different learning styles such as visual, verbal, sequential,
kinesthetic, global, etc. I will explore and experiment with presenting material as it is preferred by the student.
5 Subjects: including algebra 1, algebra 2, geometry, prealgebra
|
{"url":"http://www.purplemath.com/san_ysidro_ca_algebra_tutors.php","timestamp":"2014-04-17T15:50:39Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:95a9fbe7-1f5f-4394-81ff-27ff10ee5951>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2nd order PDE charcteristics... help please! :S
September 30th 2006, 09:33 PM
2nd order PDE charcteristics... help please! :S
hi all, sorry again for the thousands of stupid questions! :P but heres another one...
ok i said for part (a) that the PDE is a hyperbolic type and so has two sets of real characteristics. and that the characteristics are http://img100.imageshack.us/img100/7...cture14dj6.png
then for part (b) i used the coordinates:
which leads to the equation:
but then i am stuck... how can i get the 'general solution' from this, i think i am heading in the right direction, but i am not sure.
any help would be greatly appreciated!
Sarah :)
October 1st 2006, 07:24 PM
The next step is to express y in terms of \epsilon and \tau.
Covering all the bases, are we ;)
October 2nd 2006, 06:44 PM
lol, sure am ;)
hmm i think i did my orginal conversion a little bit wrong, i now get that:
then putting in y gives:
is that just then solved as:
but this doesnt seem to work when i put it back into the orginal PDE.... :S
|
{"url":"http://mathhelpforum.com/advanced-applied-math/6000-2nd-order-pde-charcteristics-help-please-s-print.html","timestamp":"2014-04-20T16:57:24Z","content_type":null,"content_length":"6951","record_id":"<urn:uuid:9416b820-52d5-4908-91d2-017ed6a3ce0d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Best Fit Normal Map Generator + Source Code
The BFN generator is finally here! I have found time for this tonight.
So, this small application allows you to generate each face of a BFN cube-map. If you want to encode the BFN only as the length of the best fit normal or transform it to a 2D texture (as suggested in
Crytek's presentation), you will have a little bit of work to do but it should not be such a big deal. You can select the resolution of the cube-map as well as the bias vector at compilation time.
Some screen-shots for 256x256x6 cube-maps:
At the origin: the BFN cube-map with a bias vector=vec3(0.5,0.5,0.5).
in the +X direction: the regular normalization cube-map.
In the +X+Y direction: a BFN cubemap with a bias vector=vec3(0.5,0.5,8.0/255.0). It results in less precision for normals having negative Z value and more precision for normals having positive Z
The reconstruction error of the regular normalization cube-map as compared to the ground-truth normal map. (scale factor of 50)
The reconstruction error of the BFN cube-map with bias a vector=vec3(0.5,0.5,0.5).
Some reconstruction errors are still visible. (scale factor of 50)
The reconstruction error of the BFN cube-map with bias a vector=vec3(0.5,0.5,8.0/255.0).
No more error patterns are visible. (scale factor of 50)
Reconstruction error on the back faces (scale factor of 50). There is a large error when using a bias vector of (0.5,0.5,8.0/255.0). However, it is not important since only the positive Z values are
important when storing normals for a light pre-pass or deferred renderer.
Error when computing a specular lob with the regular normalization cube-map. The specular exponent is 64.
No noticeable difference when changing the bias vector of the BFN.
You can download the source here
And finally, the important references :
• Amantide ray marching paper.
• Crytek's presentation from SIGGRAPH 2010 as well as their 2D BFN texture (which I recommend) can be downloaded here.
10 comments:
1. Thank you, a great contribution to the community :)
2. Sorry for the late answer! :)
Thank you very much!
3. Thanks! Great job!
It seems like the (at least for me) bottleneck in the program is the printing though :)
I moved it outside the inner-most loop and got a 10x speed improvement!
4. Ahah! Nice! :)
And you are welcome! I am glad to help any developers who reach this blog. :)
5. Oh, thats nice, correct bias value is important for view space normals, where negative-z values are rare and incorrect... i want to play with this parameter, but source code is not available
now.. Please, can you reupload it or send via email?
6. Thank you. I will reupload the source code soon and keep you informed.
7. Closed, I have reuploaded the best fit normal demo with bias here: http://sebastien.hillaire.free.fr/demos/BFN.zip
8. Thank you! Your source code was very important for me, and I want to share results of my little researh:
1. It is very good, that it is possible to change BFN cubemap size and bias vector, first of all I play with this parameters and I can approve that cubemap 512x512 with bias float3(127.5, 127.5,
8.0/255.0) is really good, BUT:
2. Potentially BFN cubemap with best fit scale mixed with normalization factor can be more precise than original Crytek's idea, but in this case cubemap mipmapping is real problem, so I deside to
use hybrid method...
3. Instead of using cubemap with premultiplied best fit length and normalization factor, I prefer to use best fit only stored in one-channel (R8) cubemap. The difference from Crytek's method is
that not-possible to convert cubemap to 2D-texture when bias vector not equal to float3(0.5, 0.5, 0.5). So I preffer to use 1-channel cubemap with best fit only.
4. Best fit length better to store with multiplied max(abs(N.x), abs(N.y), abs(N.z)) for better length packing and divide with this value on shader
5. This texture can be easily mipmapped using nearest-filter, this helps to prevent texture cache polution
-many code improvements
-nice print in console (:
-command line support (cube size, mipmaps, bias, format)
-direct output to DDS file format
-R8 and RGB8 cubemap support (R8 - best fit length only, RGB8 - best fit length and normalization factor)
-mipmaping (up to desired level) or base level only
SHADER code for packing/unpacking with R8-mipmapped BFN-cubemap (posted in source code of modified files):
// NOTE: usage in shader:
// normalized bias values, used on BFN cubemap generation pass
#define BIAS float3(127.5 / 255.0, 127.5 / 255.0, 8.0 / 255.0)
float3 UnpackNormal(float3 n)
const float3 c1 = 1.0 / (1.0 - BIAS);
const float3 c2 = 1.0 - c1;
return normalize(n * c1 + c2);
float3 PackNormal(float3 n)
float3 an = abs(n);
float bestfit = texture(us_Bfn, n).x;
float maxabs = max(max(an.x, an.y), an.z);
const float3 c3 = 1.0 - BIAS;
const float3 c4 = BIAS;
return n * (bestfit / maxabs) * c3 + c4;
P.S. If you are interested, I can send modified source code to you, just post your email here or write me to steelratsoftware at gmail.com
P.P.S. Thank you very much! My view-space normals are more precise now!
P.P.P.S. Sorry for bad english (:
9. Hi!
1. Nice!
2. Yes, mipmap should not be used when using the bfn cubemap.
3. Unfortunatly, you are ,right, you cannot use the tricky transformation to the 2D texture. And you are very correct to only use the best fit length! :)
4. Ah! Very good! I will have a look a this by myself.
5. Yes, nearest filter will work. But you have to generate yourself each mimap level right?
You are welcome for the source code; and your english is good! :)
Anyway, I am going to send you an email right now.
10. My first fail:
It is incorrect to use nearest mipmap filter, we should compute best-fit for each mipmap individually ):
Hmmmm... it will work well with best-fit-normalization cubemap, we can compute not all levels (up to 32x32) and use mipmaped cube with anisotropic filter... testing required (:
|
{"url":"http://sebh-blog.blogspot.de/2010/10/best-fit-normal-map-generator.html","timestamp":"2014-04-19T04:18:56Z","content_type":null,"content_length":"96854","record_id":"<urn:uuid:14211810-f400-4914-a6d5-3f3e15274820>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics in Western Europe - Intrigue and Integration
'); } var S; S=topJS(); SLoad(S); //-->
edHelper.com Mathematics in Western Europe - Intrigue and Integration
History of Mathematics
Mathematics in Western Europe - Intrigue and Integration
Reading Level
edHelper's suggested reading level: grades 9 to 12
Flesch-Kincaid grade level: 10.26
challenging words: differentiation, gottfried, mid-15th, price-lists, tax-collector, equation, brilliance, multiplication, rift, geometry, discredit, following, probability, mathematical, calculus,
content words: Sometimes Arabic, Italian University, North Africa, Johannes Widmann, John Napier, William Oughtred, Blaise Pascal, Isaac Newton, Principia Mathematica, Perhaps Newton
Print Mathematics in Western Europe - Intrigue and Integration
Print Mathematics in Western Europe - Intrigue and Integration (font options, pick words for additional puzzles, and more)
Quickly Print - PDF format
Quickly Print: PDF (2 columns per page)
Quickly Print: PDF (full page)
Quickly Print - HTML format
Proofreading Activity
Print a proofreading activity
Feedback on Mathematics in Western Europe - Intrigue and Integration
Leave your feedback on Mathematics in Western Europe - Intrigue and Integration (use this link if you found an error in the story)
Mathematics in Western Europe - Intrigue and Integration
By Colleen Messina
^1 The newly invented Arabic numbers arrived in Europe around 1200 AD. However, they were not popular right away. Intrigue and opposition accompanied the change from the old Roman numbers. Sometimes
Arabic numbers had to sneak into a country via a mathematician. One case of this was when a Christian monk named Adelard of Bath disguised himself as a Muslim and studied in the University of Cordova
in the 12th century. He secretly translated the works of Euclid and smuggled his translations back to Britain. The difficulties continued into the 14th century as some insisted on keeping the old
system. An Italian University said that price-lists for books must still be in Roman numerals!
^2 One mathematician who promoted the use of the new numbers was an Italian named Leonardo de Pisa. He became most commonly known by his nickname, Fibonacci. Fibonacci was the son of an Italian
diplomat and grew up in North Africa in the late 12th century. He learned about Arabic numbers as a young boy and later wrote an influential book about practical geometry. In it, he encouraged the
use of the new Arabic numbers. He also estimated the value of pi as 3.1418. Our value today is 3.14159265.
^3 With the encouragement of mathematicians like Fibonacci, Europeans finally adopted the new numbers. By 1400, grateful merchants in Italy, France, Germany, and Britain used them for accounting.
European mathematicians made amazing progress in many areas of mathematics and science between 1200 and 1700 AD because of the new number system. Schools taught the new arithmetic throughout Europe.
Most textbooks used the new numbers by the mid-15th century.
^4 The new textbooks also adopted convenient shortcuts for writing equations for addition, subtraction, multiplication, and division. These symbols were invented for practical reasons, and the + and
- signs were first used in warehouses. Workers painted the plus sign on a barrel, for example, to show that it was full. The + and - signs first appeared in print in 1526 by Johannes Widmann in a
German math book. The signs for multiplication and division came later, and the equal sign was first used in England in 1557. These symbols also led to the algebra we recognize today. By 1600,
letters were used to represent unknown amounts in equations.
^5 Logarithms were also invented at this time. Logarithms are intriguing numbers because if you add two of them together, you can solve complicated multiplication and division problems! A Scottish
mathematician named John Napier first published a table of these numbers in 1614, and soon books of logarithms became available. Electronic calculators replaced logarithms by the 1970s, but for
centuries, "logs" simplified complex calculations.
Paragraphs 6 to 12:
For the complete story with questions: click here for printable
Copyright © 2009 edHelper
|
{"url":"http://www.edhelper.com/ReadingComprehension_35_198.html","timestamp":"2014-04-16T04:22:54Z","content_type":null,"content_length":"9716","record_id":"<urn:uuid:f175618b-e86e-404c-a246-a6d5345009d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Strong solutions of the stochastic Navier-Stokes equations in $R^3$
Seminar Room 1, Newton Institute
We establish the existence of local strong solutions to the stochastic Navier-Stokes equations in $R^3$. When the noise is multiplicative and non-degenerate, we show the existence of global solutions
in probability if the initial data are sufficiently small. Our results are extention of the well-known results for the deterministic Navier-Stokes equations in $R^3$.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
{"url":"http://www.newton.ac.uk/programmes/SPD/seminars/2010010416301.html","timestamp":"2014-04-17T15:32:44Z","content_type":null,"content_length":"6014","record_id":"<urn:uuid:e03ddf19-4e06-40f6-b9e1-2642846f6042>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
what is an equation of the line that passes through the point (-4,-1) and is parallel to and perpendicular to the line 2x+7y=14 ?
• 5 months ago
• 5 months ago
Best Response
You've already chosen the best response.
I NEED HELP ASAP PLEASEE
Best Response
You've already chosen the best response.
\[y=\frac{ 7x }{ 2 }+15\]
Best Response
You've already chosen the best response.
you want two different equations? one for parallell and one for perpendicular? Note: parallel has same slope, and perpendicular has negative reciprocal of slope. The primary equation in slope
intercept form should be y = -2/7x + 2
Best Response
You've already chosen the best response.
So using the equation formula. (y-y1)=m(x-x1). You plug in your points from (-4,-1) and the slope you need (see above for parallel or perpendicular). Parallel. (y+1)=-2/7(x+4) : use algebra and
get x and y on left of equation. Note, the +4 and +1 are due to -(-1) and -(-4). Perpendicular. Same thing in a sense, but the slope must be the negative reciprocal of the primary equation
(line). (y+1)=7/2(x+4) again, use algebra and get the x and y on left side of equation.
Best Response
You've already chosen the best response.
Therefore: Parallel: 7y+2x=-15 Perpendicular: 2y-7x=26
Best Response
You've already chosen the best response.
need to know how to work this equation for the parallel i am not understanding fully...y=-5/3x+2...please help
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/528566d8e4b06e215641f0f3","timestamp":"2014-04-21T02:28:48Z","content_type":null,"content_length":"40243","record_id":"<urn:uuid:661f99e9-f472-44b6-8a1c-33bc4940f5b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The men who stare at formulas
The n-category Café has a post about “Dangerous Knowledge”, the BBC documentary I reviewed here some time ago; there’s also a discussion in the comments on whether mathematicians (or academics, or
creative types) are really different from “normal” people. If you came here from the link over there, welcome, and here’s hoping that you’ll enjoy this recent interview with John Nash. (Hat tip to
Around the 6-minute mark in the second video, Nash is asked explicitly whether his mental illness might have in some way contributed to his creativity and enabled his mathematical work. He points out
in response that his work in game theory was all done before the onset of his mental problems and that he “did not develop any ideas, particularly on game theory, while being mentally irrational”. He
also recalls a mistake in a published paper that he completed shortly before the breakdown and suggests that it may have been due to a malfunction of his mind.
A few minutes earlier in the interview, Nash talks about cognitive therapy: it “simply stimulates you to think. The more the mind really works, sort of like a computer, the more it tends towards
rationality and maybe recovery”.
It would not require a lot of imagination to pursue this further. Could it be possible that mathematics, as an activity based on logical thinking and sustained intellectual effort, exercises the
brain and keeps it healthier than it would be otherwise? Should we elaborate on how mathematics, far from being the ungrateful mistress of Dangerous Knowledge, might actually be the salvation of
those of us who need its mental discipline?
Nash makes no such implications. There’s not a single sentence in the interview where he suggests that this or that might be true of mathematicians and mental disorders in general. He only speaks of
his own experience and to some extent that of his family, all in a very matter-of-fact way. I couldn’t help thinking that facts are his friends – the good, immutable, trusted facts, preferably drawn
from his own experience – whereas speculations and generalizations such as those above are not. That, and he also must have spent unimaginable time and effort training himself to sort through his
thoughts in this way and discard anything with insufficient factual basis.
I wonder if the general public might imagine mathematicians as men (yes, men) who stare at formulas and apply some mysterious psychic powers until the formulas simplify themselves. Or we might be
like Luke Skywalker or some other magical hero, giving ourselves over to the Force and hoping that it will guide us straight to [S:the heart of the Death Star:S] the solution to the Riemann
hypothesis. In that narrative, mental illness might be associated with a quickening of said psychic powers and, therefore, heightened mathematical ability.
I’m finding the real John Nash story to be far more inspiring.
3 responses to “The men who stare at formulas”
1. General Public has so much misunderstanding toward math world. My cousin once asked me why I should continue studying Mathematics after graduation. She believes all the students majoring in math
will transfer to other business such as financial or IT related areas. She told me affirmatively:”Jingrun Chen spent all his life focusing on the proof of 1+1=2”. Then what? “Of course he ended
up mad.” She said.
I couldn’t find anything to say.
2. You can explain to your cousin that it was Russell and Whitehead who spent some of the best years of their lives proving 1+1=2. Jingrun Chen spent a decade proving that every (large enough) even
number is the sum of a prime and either another prime or the product of two primes. I do recall that that had an effect on his health – he became famous, got proper medical attention for his
tuberculosis, and lived to a reasonable age.
Now, what moral your cousin will extract from that story, I don’t know. I mean, presumably one shouldn’t have to do anything in particular to get proper medical attention, but surely an
understanding of prime numbers neither helps nor hurts to get that point, and that’s not a moral of the above story, anyhow.
3. Not that I am old and wise. I think a certain answer to your cousin can be http://online.wsj.com/article/SB123119236117055127.html
I do not think there is any job in the world which does not have its own problems. As stated in the interview, mathematical thought process is healthy in my belief. You probably want to go for
mathematics because you like it. You do not want to go for a Financial/IT job because you do not like it. Why do people start playing sport? For the money? I doubt Ronaldo or Maradona cared about
the money when they began playing. Yes you do need to eat, if that is what your cousin is worried about. The days when mathematicians die due to poverty have far been left behind( though I still
believe they are underpaid.)
There are a lot of people who take math major and go into other things. Well they like it that way, let them. Why should this be generalised to everyone.
The misconceptions regarding mathematics cannot be gotten rid of. They are here to stay. All we can do, is sit in our rooms, sip on warm hot chocolate and laugh till our tears come out.Someone
recently commented ‘Its not their fault’
For even further reference read this lovely piece : http://gowers.wordpress.com/2009/12/20/wiles-meets-his-match/
Filed under mathematics: people, movies
|
{"url":"http://ilaba.wordpress.com/2009/12/08/the-men-who-stare-at-formulas/","timestamp":"2014-04-17T12:51:59Z","content_type":null,"content_length":"52419","record_id":"<urn:uuid:d6eac29a-d633-43f6-8142-1453ef99efa2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Chicago Prealgebra Tutor
...I hold a bachelor of science in electrical engineering with emphasis in mathematics. I teach with solid math acumen, coupled with encouragement and positive reinforcement. This has proved
positive for my own kids and many students I have tutored.
18 Subjects: including prealgebra, geometry, algebra 2, elementary (k-6th)
I love teaching math and science, and I know how to make it fun and interesting. As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all
levels (university, high school, middle and elementary school). My son (a 5th grader) scores above 99 p...
23 Subjects: including prealgebra, physics, calculus, statistics
...I am also helping students who is planning to take the AP Calculus, ACT and SAT exams. Many students who hated math started liking it after my tutoring. That is my specialty.
12 Subjects: including prealgebra, calculus, geometry, statistics
...Besides stats, I have a lot of knowledge about subjects from algebra through calculus. No matter what subject I'm tutoring, I share my love of learning with the student. I am willing to travel
to meet wherever the student feels comfortable.
5 Subjects: including prealgebra, statistics, algebra 1, probability
...During my masters degree I was a TA for the intro to computer science course. For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. I have used
Matlab extensively in all of my undergrad and master's coursework.
17 Subjects: including prealgebra, physics, calculus, algebra 1
|
{"url":"http://www.purplemath.com/West_Chicago_Prealgebra_tutors.php","timestamp":"2014-04-18T23:25:09Z","content_type":null,"content_length":"23983","record_id":"<urn:uuid:a076cc84-33cc-4ae1-8561-8ec5e1300194>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sugar Hill, GA Calculus Tutor
Find a Sugar Hill, GA Calculus Tutor
...In graduate school, I was a TA for 2 classes each semester for 3 years. I have BA's in Architectural/Art History and Psychology from UCSB and a Master's Degree in Architecture from U. of
Miami. I'm able to tutor a variety of subjects from Math and Science to History and Health.
48 Subjects: including calculus, English, reading, writing
...During that time I received an award for being an "Outstanding Teaching Assistant" from my supervisor and a student nominated "Thank a Teacher" certificate. I am currently searching for full
time employment in research in the Atlanta area, but truly enjoy working with students which is my motiva...
9 Subjects: including calculus, chemistry, geometry, algebra 1
...As a Special Education teacher, I have 7 years of experience helping students study, take notes, complete assignments, and get the most out of their school work. I taught Study Skills for 7
years in the classroom teaching students how to organize their notebooks, study for tests, and use graphic organizers. I am certified Special Needs Educator K-12.
21 Subjects: including calculus, geometry, algebra 1, algebra 2
...That's where I can help you. I studied and spent a many hours digging around to become fully familiar with the system. That's time I can save you!
32 Subjects: including calculus, reading, physics, geometry
...I truly fell in love with Biology in the 7th grade. My teacher, Ms. B. at Duluth Middle School, made science fun and exciting.
15 Subjects: including calculus, reading, chemistry, biology
Related Sugar Hill, GA Tutors
Sugar Hill, GA Accounting Tutors
Sugar Hill, GA ACT Tutors
Sugar Hill, GA Algebra Tutors
Sugar Hill, GA Algebra 2 Tutors
Sugar Hill, GA Calculus Tutors
Sugar Hill, GA Geometry Tutors
Sugar Hill, GA Math Tutors
Sugar Hill, GA Prealgebra Tutors
Sugar Hill, GA Precalculus Tutors
Sugar Hill, GA SAT Tutors
Sugar Hill, GA SAT Math Tutors
Sugar Hill, GA Science Tutors
Sugar Hill, GA Statistics Tutors
Sugar Hill, GA Trigonometry Tutors
Nearby Cities With calculus Tutor
Berkeley Lake, GA calculus Tutors
Buford, GA calculus Tutors
Chamblee, GA calculus Tutors
Covington, GA calculus Tutors
Cumming, GA calculus Tutors
Doraville, GA calculus Tutors
Duluth, GA calculus Tutors
Flowery Branch calculus Tutors
Johns Creek, GA calculus Tutors
Lilburn calculus Tutors
Norcross, GA calculus Tutors
Oakwood, GA calculus Tutors
Rest Haven, GA calculus Tutors
Stone Mountain calculus Tutors
Suwanee calculus Tutors
|
{"url":"http://www.purplemath.com/Sugar_Hill_GA_Calculus_tutors.php","timestamp":"2014-04-21T02:03:24Z","content_type":null,"content_length":"23891","record_id":"<urn:uuid:dd48e780-d7f3-4312-97de-310b130146c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interest Word Problems (with worked solutions & videos)
Algebra: Interest Word Problems
Interest Problems are word problems that use the formula for Simple Interest. There is also another type of interest word problems called Compound Interest.
In this lesson, we will how to solve
• word problems that involve a single Simple Interest
• word problems that involve more that one Simple Interest.
Related Topics:
Compound Interest Word Problems
More Algebra Word Problems
Simple Interest Word Problems
The formula for Simple Interest is:
i = prt
i is the interest generated.
p is the principal amount that is either invested or owed
r is the rate at which the interest is paid
t is the time that the principal amount is either invested or owed
This type of word problem is not difficult. Just remember the formula and make sure you plug in the right values. The rate is usually given in percent, which you will need to change to a decimal
Word Problems with one Simple Interest
Example 1:
John wants to have an interest income of $3,000 a year. How much must he invest for one year at 8%?
Step 1: Write down the formula
i = prt
Step 2: Plug in the values
3000 = p × 0.08 × 1
3000 = 0.08p
p = 37,500
Answer: He must invest $37,500
Example 2:
Jane owes the bank some money at 4% per year. After half a year, she paid $45 as interest. How much money does she owe the bank?
Step 1: Write down the formula
i = prt
Step 2: Plug in the values
45 = 0.02p
p = 2250
Answer: She owes $2,250
Simple Interest Word Problems (Investment Problems)
1. Find the amount of interest earned by $8000 invested at 5% annual simple interest rate for 1 year.
2. To start a mobile dog-grooming service, a woman borrowed $2,500. If the loan was for two years and the amount of interest was $175, what simple interest rate was she charged?
3. A student borrowed some money from his father at 2% simple interest to buy a car. He paid his father $360 in interest after 3 years, how much did he borrow?
4. A couple invested $6,000 of his $20,000 lotterey earning in bonds. How much do they have left to invest in stocks?
5. A college student wants to invest the $12,000 inheritance he received and use the annual interest erand to pay for his tuition cost of $945. The highest interest offered by a bank is 6% annual
simple interest. At this rate, he cannot earn the needed $945, so he decides to invest some of the money in a riskier, but more profitable, investment offering a 9% return. How much should he invest
in each rate?
6. A credit union loaned out $50,000, part at an annual rate of 6% and the rest at an annual rate of 12% . The collected combined interest was $3,600 that year. How much did the credit union loan out
at each rate?
Using the Simple Interest Formula (Word Problems)
Jenna invests $13,000 into separate bank accounts, one earning 6% simple interest and the other earning 3% simple interest. If at the ned of one year she earns $682.50 in interest , how much did she
invest in each account?
Word Problems with more than one Simple Interest
The following videos give more examples of interest word problems.
Pam invested $5000. She earned 14% on part of her investment and 6% on the rest. If she earned a total of $396 in interest for the year, how much did she invest at each rate? Note that this problem
requires a chart to organize the information. The chart is based on the interest formula, which states that the amount invested times the rate of interest = interest earned. The chart is then used to
set up the equation.
Johnny is a shrewd eight-year-old. For Christmas, his grandparents gave him ten thousand dollars. Johnny decides to invest some of the money in a savings account that pays two percent per annum and
the rest in a stock fund that pays ten percent per annum. Johnny wants his investments to yield seven percent per annum. How much should he put in each account?
Suppose $7,000 is divided into two bank accounts. One account pays 10% simple interest per year and the other pays 5%. After three years there is a total of $1451.25 in interest between the two
accounts. How much was invested into each account (rounded to the nearest cent)?
Word Problem: Simple Interest
You invest $20, 000 in two accounts paying 7% and 9% annual interest, respectively. If the total interest earned for the year is $1, 550 how much was invested at 7%?
(Part 1)
(Part 2)
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
|
{"url":"http://www.onlinemathlearning.com/interest-problems.html","timestamp":"2014-04-16T13:45:03Z","content_type":null,"content_length":"45389","record_id":"<urn:uuid:58d8bc84-f06b-4452-9b81-9edb95ab21fc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Riegelsville Math Tutor
Find a Riegelsville Math Tutor
...I have received extensive computer networking training from New Horizons Computer Learning Center, and I received a master of applied science in information technology. I have several years of
experience troubleshooting and fixing networks and computer issues. I have worked with Microsoft's Windows operating system for over 15 years.
17 Subjects: including prealgebra, algebra 1, algebra 2, geometry
...I worked in the pharmaceutical industry as a chemist for 26 years and in my retirement I decided to help students succeed in chemistry. I have successfully tutored several students in honors
and AP chemistry in the past few years. I have a broad knowledge of chemistry and passed the Praxis test 4 years ago.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I can teach you how to use this software effectively, but more important I can teach you how to find your way “around” any question you need answered AFTER our sessions. Once you have mastered
the fundamentals you will be able and confident to proceed on your own. I have 23 years’ experience as...
62 Subjects: including algebra 1, algebra 2, organic chemistry, biology
...I recently graduated from Eastern University with a degree in middle level education, concentrating in math and science. I currently teach after-school science programs and tutor a few times a
week. I am always looking for more ways to inspire students to love science and math.
28 Subjects: including linear algebra, prealgebra, geometry, algebra 1
...My GPA is a 3.81 so all the math classes I've taken have went very well and I'd love to offer help to anyone who needs it! I'm experienced in all middle school and high school math classes, as
well as offering prep help for the SAT, ACT, or Praxis exams. Upon request I can forward you any references, professional evaluations, and background checks that you would like to see.
14 Subjects: including geometry, trigonometry, statistics, discrete math
Nearby Cities With Math Tutor
Baptistown Math Tutors
Broadway, NJ Math Tutors
Coopersburg Math Tutors
Durham, PA Math Tutors
Freemansburg, PA Math Tutors
Hellertown Math Tutors
Kintnersville Math Tutors
Little York, NJ Math Tutors
Nazareth, PA Math Tutors
Phillipsburg, NJ Math Tutors
Revere, PA Math Tutors
Richlandtown Math Tutors
Springtown, PA Math Tutors
Stewartsville, NJ Math Tutors
West Easton, PA Math Tutors
|
{"url":"http://www.purplemath.com/riegelsville_math_tutors.php","timestamp":"2014-04-21T14:50:00Z","content_type":null,"content_length":"24093","record_id":"<urn:uuid:b51a11dd-5574-451d-9ed3-d40b3a83bef3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
ASCII Text x
Jeeho Sohn, Thomas G. Robertazzi, Serge Luryi, "Optimizing Computing Costs Using Divisible Load Analysis," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 3, pp. 225-234, March,
BibTex x
@article{ 10.1109/71.674315,
author = {Jeeho Sohn and Thomas G. Robertazzi and Serge Luryi},
title = {Optimizing Computing Costs Using Divisible Load Analysis},
journal ={IEEE Transactions on Parallel and Distributed Systems},
volume = {9},
number = {3},
issn = {1045-9219},
year = {1998},
pages = {225-234},
doi = {http://doi.ieeecomputersociety.org/10.1109/71.674315},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Parallel and Distributed Systems
TI - Optimizing Computing Costs Using Divisible Load Analysis
IS - 3
SN - 1045-9219
EPD - 225-234
A1 - Jeeho Sohn,
A1 - Thomas G. Robertazzi,
A1 - Serge Luryi,
PY - 1998
KW - Bus network
KW - computer utility
KW - cost
KW - divisible load
KW - load sharing.
VL - 9
JA - IEEE Transactions on Parallel and Distributed Systems
ER -
Abstract—A bus oriented network where there is a charge for the amount of divisible load processed on each processor is investigated. A cost optimal processor sequencing result is found which
involves assigning load to processors in nondecreasing order of the cost per load characteristic of each processor. More generally, one can trade cost against solution time. Algorithms are presented
to minimize computing cost with an upper bound on solution time and to minimize solution time with an upper bound on cost. As an example of the use of this type of analysis, the effect of replacing
one fast but expensive processor with a number of cheap but slow processors is also discussed. The type of questions investigated here are important for future computer utilities that perform
distributed computation for some charge.
[1] I. Ahmad, A. Ghafoor, and G.C. Fox, "Hierarchical Scheduling of Dynamic Parallel Computations on Hypercube Multicomputers," J. Parallel and Distributed Computing, vol. 20, pp. 317-329, 1994.
[2] S.H. Bokhari, "A Network Flow Model for Load Balancing in Circuit-Switched Multicomputers," IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 6, pp. 649-657, June 1993.
[3] C.-H. Lee, D. Lee, and M. Kim, "Optimal Task Assignment in Linear Array Networks," IEEE Trans. Computers, vol. 41, no. 7, pp. 877-880, July 1992.
[4] K.K. Goswami, M. Devarakonda, and R.K. Iyer, "Prediction-Based Dynamic Load-Sharing Heuristics," IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 6, pp. 638-648, June 1993.
[5] G. Huang and W. Ongsakul, "An Efficient Load-Balancing Processor Scheduling Algorithm for Parallelization of Gauss-Seidel Type Algorithms," J. Parallel and Distributed Computing, vol. 22, pp.
350-358, 1994.
[6] V.M. Lo, S. Rajopadhye, S. Gupta, D. Keldsen, M.A. Mohamed, and J. Telle, "Mapping Divide and Conquer Algorithms to Parallel Computers," Proc. 1990 Int'l Conf. Parallel Architectures, pp.
128-135, 1990.
[7] K. Ramamritham, J. Stankovic, and P. Shiah, “Efficient Scheduling Algorithms for Real-Time Multiprocessor Systems,” IEEE Trans. Parallel and Distributed Systems, vol. 1, no. 2, Apr. 1990.
[8] X. Qian and Q. Yang, "An Analytical Model for Load Balancing on Symmetric Multiprocessor Systems," J. Parallel and Distributed Computing, vol. 20, pp. 198-211, 1994.
[9] Y.-C. Chang and K.G. Shin, "Optimal Load Sharing in Distributed Real-Time Systems," J. Parallel and Distributed Computing, vol. 19, no. 1, pp. 38-50, Sept. 1993.
[10] K.G. Shin and M.-S. Chen, "On the Number of Acceptable Task Assignments in Distributed Computing Systems," IEEE Trans. Computers, vol. 39, no. 1, pp. 99-110, Jan. 1990.
[11] D.-T. Peng and K.G. Shin, "A New Performance Measure for Scheduling Independent Real-Time Tasks," J. Parallel and Distributed Computing, vol. 19, no. 1, pp. 11-26, Sept. 1993.
[12] G.C. Sih and E.A. Lee, “Declustering: A New Multiprocessor Scheduling Technique,” IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 6, pp. 625-637, June 1993.
[13] J. Xu and K. Hwang, "Heuristic Methods for Dynamic Load Balancing in a Message-Passing Multicomputer," J. Parallel and Distributed Computing, vol. 18, no. 1, pp. 1-13, May 1993.
[14] J. Blazerwicz,M. Drabowski,, and j. Weglarz,“Scheduling multiprocessor tasks to minimize schedule length,” IEEE Trans. on Computers, vol. 35, no. 5, May 1986.
[15] J. Du and J. Leung,“Complexity of scheduling parallel task systems,”SIAM J. Discrete Math., vol. 2 no. 4, pp. 473–487, Nov. 1989.
[16] W. Zhao, K. Ramamritham, and J.A. Stankovic, "Preemptive Scheduling Under Time and Resource Constraints," IEEE Trans. Computers, Vol. 36, No. 8, Aug. 1987, pp. 949-960.
[17] Y.C. Cheng and T.G. Robertazzi, Distributed Computation with Communication Delays IEEE Trans. Aerospace and Electronic Systems, vol. 24, no. 6, pp. 700-712, Nov. 1988.
[18] Y.C. Cheng and T.G. Robertazzi, "Distributed Computation for a Tree Network with Communication Delays," IEEE Trans. Aerospace and Electronic Systems, vol. 26, no. 3, pp. 511-516, May 1990.
[19] S. Bataineh and T.G. Robertazzi, "Distributed Computation for a Bus Network with Communication Delays," Proc. 1991 Conf. Information Sciences and Systems, pp. 709-714, Johns Hopkins Univ.,
Baltimore, Md., Mar. 1991.
[20] S. Bataineh and T.G. Robertazzi, Bus Oriented Load Sharing for a Network of Sensor Driven Processors IEEE Trans. Systems, Man and Cybernetics, special issue on distributed sensor networks, vol.
21, no. 5, pp. 1202-1205, Sept. 1991.
[21] S. Bataineh and T.G. Robertazzi, "Performance Limits for Processor Networks with Divisible Jobs," IEEE Trans. Aerospace and Electronic Systems, vol. 33, no. 4, pp. 1,189-1,198, Oct. 1997.
[22] S. Bataineh, T. Hsiung, and T.G. Robertazzi, Closed Form Solutions for Bus and Tree Networks of Processors Load Sharing a Divisible Job IEEE Trans. Computers, vol. 43, no. 10, pp. 1184-1196,
Oct. 1994.
[23] T.G. Robertazzi, "Processor Equivalence for a Linear Daisy Chain of Load Sharing Processors," IEEE Trans. Aerospace and Electronic Systems, vol. 29, no. 4, pp. 1,216-1,221, Oct. 1993.
[24] J. Sohn and T.G. Robertazzi, Optimal Load Sharing for a Divisible Job on a Bus Network IEEE Trans. Aerospace and Electronic Systems, vol. 32, no. 1, pp. 34-40, Jan. 1996.
[25] J. Sohn and T.G. Robertazzi, "A Multi-Job Load Sharing Strategy for Divisible Jobs on Bus Networks," Technical Report 697, State Univ. of New York at Stony Brook, College of Eng. and Applied
Science, Aug. 1994. Also appears in chapter 12 of [45].
[26] J. Sohn and T.G. Robertazzi, "An Optimal Load Sharing Strategy for Divisible Jobs with Time-Varying Processor Speed and Channel Speed," Conf. version: Proc. ISCA Int'l Conf. Parallel and
Distributed Computing Systems, pp. 27-32,Orlando Fla., Sept. 1995. Journal version: Accepted for IEEE Trans. Aerospace and Electronic Systems, July 1998.
[27] D. Ghose and V. Mani, "Distributed Computation in a Linear Network: Closed-form Solutions and Computational Techniques," IEEE Trans. Aerospace and Electronic Systems, vol. 30, no. 2, pp.
471-483, Apr. 1994.
[28] D. Ghose and V. Mani, "Distributed Computation with Communication Delays: Asymptotic Performance Analysis," J. Parallel and Distributed Computing, vol. 23, pp. 293-305, Nov. 1994.
[29] V. Bharadwaj, D. Ghose, and V. Mani, "Optimal Sequencing and Arrangement in Distributed Single-Level Tree Networks with Communication Delays," IEEE Trans. Parallel and Distributed Systems, vol.
5, no. 9, pp. 968-976, Sept. 1994.
[30] V. Bharadwaj, D. Ghose, and V. Mani, Multiinstallment Load Distribution in Tree Networks With Delays IEEE Trans. Aerospace and Electronic Systems, vol. 31, no. 2, pp. 555-567, 1995.
[31] V. Bharadwaj, D. Ghose, and V. Mani, "An Efficient Load Distribution Strategy for a Distributed Linear Network of Processors with Communication Delays," Computer and Mathematics with
Applications, vol. 29, no. 9, pp. 95-112, May 1995.
[32] V. Bharadwaj, D. Ghose, and V. Mani, "A Study of Optimality Conditions for Load Distribution in Tree Networks with Communication Delays," Technical Report 423/GI/02-92, Dept. of Aerospace Eng.,
Indian Inst. of Science, Bangalore, India, Dec. 1992.
[33] H.J. Kim, G.I. Jee, and J.G. Lee, "Optimal Load Distribution for Tree Network Processors," IEEE Trans. Aerospace and Electronic Systems, vol. 32, no. 2, pp. 607-612, Apr. 1996.
[34] J. Blazewicz and M. Drozdowski, "Scheduling Divisible Jobs on Hypercubes," Parallel Computing, vol. 21, pp. 1,945-1,956, 1995.
[35] J. Blazewicz and M. Drozdowski, "The Performance Limits of a Two-Dimensional Network of Load-Sharing Processors," Foundations of Computing and Decision Sciences, vol. 21, no. 1, pp. 3-15, 1996.
[36] J. Blazewicz and M. Drozdowski, "Distributed Processing of Divisible Jobs with Communication Startup Costs," Discrete Applied Mathematics, vol. 76, issues 1-3, pp. 21-41, June 1997.
[37] E. Haddad, "Communication Protocol for Optimal Redistribution of Divisible Load in Distributed Real-Time Systems," Proc. ISMM Int'l Conf. Intelligent Information Management Systems, pp.
39-42,Washington, D.C., June 1994.
[38] J. Sohn, T.G. Robertazzi, and S. Luryi, "Optimizing Computing Costs Using Divisible Load Analysis," State Univ. of New York at Stony Brook, College of Eng. and Applied Science, Technical Report
719, Oct.30, 1995. Also related: US patent application, Load Sharing Controller for Optimizing Monetary Cost, 1996.
[39] R. Cocchi, D. Estrin, S. Shenker, and L. Zhang, “Pricing in Computer Networks: Motivation, Formulation and Examples,” IEEE/ACM Trans. Networking, vol. 1, pp. 614-627, 1993.
[40] A. Faragó, S. Blaabjerg, L. Ast, G. Gordos, and T. Henk, "A New Degree of Freedom in ATM Network Dimensioning: Optimizing the Logical Configuration," IEEE J. Selected Areas in Comm., vol. 13,
no. 7, pp. 1,199-1,205, Sept. 1995.
[41] J. Kurose and R. Simha, “A Microeconomic Approach to Optimal Resource Allocation in Distributed Computer Systems,” IEEE Trans. Computers, vol. 38, no. 5, May 1989.
[42] S.H. Low and P.P Varaiya, "A New Approach to Service Provisioning in ATM Networks," IEEE Trans. Networking, vol. 1, no. 5, pp. 547-553, Oct. 1993.
[43] Y.A. Korilis, A.A. Lazar, and A. Orda, "Architecting Noncooperative Networks," IEEE J. Selected Areas in Comm., vol. 13, no. 7, pp. 1,241-1,251, Sept. 1995.
[44] D. Menasce and V. Almeida, "Cost-Performance Analysis of Heterogeneity in Supercomputer Architectures," Proc. IEEE/ACM Supercomputing '90, pp. 169-177, 1990.
[45] V. Bharadwaj, D. Ghose, V. Mani, and T.G. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems.Los Alamitos, Calif.: IEEE CS Press, 1996.
Index Terms:
Bus network, computer utility, cost, divisible load, load sharing.
Jeeho Sohn, Thomas G. Robertazzi, Serge Luryi, "Optimizing Computing Costs Using Divisible Load Analysis," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 3, pp. 225-234, March
1998, doi:10.1109/71.674315
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/td/1998/03/l0225-abs.html","timestamp":"2014-04-19T00:26:07Z","content_type":null,"content_length":"64660","record_id":"<urn:uuid:2371456e-474b-4963-a7c8-4621c3d2c3fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EIA Resistor Values Explained
January 12th, 2009 by Jeff
Have you ever wondered why standard 5% resistors have strange values, like 330 and 470 Ohms, instead of nice round numbers like 300 or 500 Ohms?
It turns out that standard resistor values form a preferred number series defined by the EIA. 5% values are part of a standard called E24. The standard is based on a geometric series – each value
is approximately 1.1 times the previous one in the set.
This scheme ensures that the resistance values are separated by an amount approximately equal to twice their tolerance. Since a 5% tolerance resistor could actually be plus or minus 5% of the
nominal value, the E24 range spaces the values by 10%. That way, where the tolerance range of one value leaves off, the next higher value picks up, with the smallest possible overlap or gaps in
For example, 330 Ohms + 5% = 347 Ohms. The next highest E24 value is 360 ohms, and 360 Ohms – 5% = 342 Ohms. There is a small overlap of 5 ohms because the values don’t follow the geometric series
exactly (due to rounding to the nearest 10 Ohms). Spacing resistances significantly closer than their tolerance range would be silly – a 330 Ohm resistor could in reality be larger than a resistor
marked 335 Ohms if both resistors had a 5% tolerance.
Here is a chart of the E24 resistor values between 100 Ohms and 1k:
As you can see in the chart, E24 values are nicely spaced between 100 and 1k Ohms. Below 100 Ohms or above 1k, the series simply repeats. The name E24 comes from the fact that there are 24 values
per decade of resistance.
Other EIA standards define the values for other tolerance ranges. Here is E96, commonly used with 1% resistors:
In this case, each value is 2% larger than the previous value, yielding 96 values per decade!
It’s nice to know the range of possible resistor values when you are designing circuits. This quickly answers the question of whether you can use 573.25 Ohms in your circuit. (No. Well, not
easily.) There are lots of EIA tables online, including some that are colorful and some that can be printed and stuck on your wall.
The EIA values are also part of IEC standard 60063, so you may see them referred to as EIA or IEC resistance values, just to make things more confusing, but the values are the same.
Tags: EIA, Electronics, IEC, resistance, standards
Very interesting and makes sense, thanks for posting this.
One thing you might also want to discuss is the formula for obtaining the resistor value:
R = 10 ^ (n / b)
Where ^ is exponentiation, and b is the series number (24 for 5%, 96 for 1%). What is especially interesting about this formula is that it really helps calculate standard resistor ratios. These are
used all the time for voltage dividers, gain setting equations in op-amp circuits, and so forth.
R1/R2 = [10^(n1/b)] / [10^(n2/b)] = 10^[(n1-n2)/b]
b * log (R1/R2) = n1-n2 = delta n
If we know the ratio we want, we can solve for delta n. This is a very useful result. For our resistor ratio, we can pick any resistor. Now the other resistor is (delta n) steps higher in the series.
Using this trick you can change the impedance of the resistor divider to be whatever you want without changing the divider ratio.
The resistor values are equally spaced on a log scale — the resistor values make a straight line on a log plot. Using Eric’s formula,
log(R) = n/b
Where b is the series number (24 for 5%, 96 for 1%) and n is the number in the series.
Eric – Thank you for providing the formulas to solve for R and the resistor ratio – I have to admit that I am guilty of using trial and error to find the same results – this is much more
straightforward. Important to note that the R value in the first equation may not exactly match a given value in the series but will be within a few ohms.
Jason – You are correct – any geometric series will be a straight line on a log scale since R1/R2 is a constant.
|
{"url":"http://mightyohm.com/blog/2009/01/eia-resistor-values-explained/","timestamp":"2014-04-19T06:56:38Z","content_type":null,"content_length":"53741","record_id":"<urn:uuid:6936ba35-c1b6-4f9b-8ab7-727b6e20d383>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
July 2, 2007, 1:17 pm
The Big Picture of abstract algebra?
without the use of a required textbook. One of the difficult, and good, things this approach imposes on me as the professor is that I cannot rely on the book to provide structure and order to the
course. I have to do this myself. Before I can do any realistic planning, I first have to decide what I am going to cover and the order in which I am going to (try to) do it. And before I can do
that, I have to face some questions that professors are surprisingly able to sidestep when using a textbook, namely: What is this course about? What themes unify, and therefore motivate, the
material? And what are the core issues and questions that this course attempts to address?
Far too often, students can take a course in college or high school and make good…
July 5, 2007, 11:49 am
Advice for effective studying
Scott Young has an outstanding article at Lifehack.org today on 10 tips to study smart and save time. These three tips from the list are related to each other and offer very good advice that most
students, especially new college students, never hear:
• Leave No Islands – When you read through a textbook, every piece of information should connect with something else you have learned. Fast learners do this automatically, but if you leave islands
of information, you won’t be able to reach them during a test.
• Test Your Mobility – A good way to know you haven’t linked enough is that you can’t move between concepts. Open up a word document and start explaining the subject you are working with. If you
can’t jump between sections, referencing one idea to help explain another, you won’t be able to think through the connections during a test.
• Find Patterns – Look for patterns in information…
January 20, 2010, 10:31 am
Courses and "something extra"
Some of the most valuable courses I took while I was in school were so because, in addition to learning a specific body of content (and having it taught well), I picked up something extra along the
way that turned out to be just as cool or valuable as the course material itself. Examples:
• I was a psychology major at the beginning of my undergraduate years and made it into the senior-level experiment design course as a sophomore. In that course I learned how to use SPSS (on an
Apple IIe!). That was an “extra” that I really enjoyed, perhaps moreso than the experiment I designed. (I wish I still knew how to use it.)
• In my graduate school differential geometry class (I think that was in 1995), we used Mathematica to plot torus knots and study their curvature and torsion. Learning Mathematica and how to use it
for mathematical investigations were the “something extra” that I took from the …
February 4, 2010, 10:14 pm
12 videos for getting LaTeX into the hands of students
There seem to be two pieces of technology that all mathematicians and other technical professionals use, regardless of how technophobic they might be: email, and \(\LaTeX\). There are ways to typeset
mathematical expressions out there that have a more shallow learning curve, but when it comes to flexibility, extendability, and just the sheer aesthetic quality of the result, \(\LaTeX\) has no
rival. Plus, it’s free and runs on every computing platform in existence. It even runs on WordPress.com blogs (as you can see here) and just made its entry into Google Documents in miniature form as
Google Docs’ equation editor. \(\LaTeX\) is not going anywhere anytime soon, and in fact it seems to be showing up in more and more places as the typesetting system of choice.
But \(\LaTeX\) gets a bad rap as too complicated for normal people to use. It seems to be something people learn …
March 21, 2010, 7:32 pm
Calculus reform's next wave
There’s a discussion going on right now in the Project NExT email list about calculus textbooks, the merits/demerits of the Stewart Calculus textbook, and where — if anywhere — the “next wave” of
calculus reform is going to come from. I wrote the following post to the group, and I thought it would serve double-duty fairly well as a blog post. So… here it is:
I’d like to add my $0.02 worth to this discussion just because (1) I’m a longtime Stewart Calculus user, having used the first edition (!) when I was an undergrad and having taught out of it for my
entire career, and (2) I’m also a fairly consistent critic of Stewart’s calculus and of textbooks in general.
I try to see textbooks from the viewpoints of my students. From that vantage point, I unfortunately find very little to say in favor of Stewart’s franchise of books, including the current edition,
all of the…
May 15, 2010, 11:41 am
The semester in review
I’ve made it to the end of another semester. Classes ended on Friday, and we have final exams this coming week. It’s been a long and full semester, as you can see by the relative lack of posting
going on here since around October. How did things go?
Well, first of all I had a record course load this time around — four different courses, one of which was the MATLAB course that was brand new and outside my main discipline; plus an independent
study that was more like an undergraduate research project, and so it required almost as much prep time from me as a regular course.
The Functions and Models class (formerly known as Pre-calculus) has been one of my favorites to teach here, and this class was no exception. We do precalculus a bit differently here, focusing on
using functions as data modeling …
Search Casting Out Nines
□ The Chronicle Blog Network, a digital salon sponsored by The Chronicle of Higher Education, features leading bloggers from all corners of academe. Content is not edited, solicited, or
necessarily endorsed by The Chronicle. More on the Network...
Casting Out Nines through your favorite RSS reader: SUBSCRIBE
|
{"url":"http://chronicle.com/blognetwork/castingoutnines/category/math/linear-algebra/page/2/","timestamp":"2014-04-17T21:30:44Z","content_type":null,"content_length":"78967","record_id":"<urn:uuid:bfa8d9bb-71d8-41e6-8059-714c4e0c55ad>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From point-wise to essential supremum of a set of real-valued measurable functions
up vote 0 down vote favorite
For an uncountable collection of uncountable sets of real-valued random variables (i.e. measurable with respect to a $\sigma$-algebra) $\{S_i\}_{i\in I}$, with
$\inf \left(\bigcap_{i\in I}S_i \right)= \sup\{\inf S_i|{i\in I}\}$
I want to to show
$\mathrm{ess}\inf \left(\bigcap_{i\in I}S_i \right)= \mathrm{ess}\sup\{\mathrm{ess}\inf S_i|{i\in I}\}$
I tried something in analogy to the proof of existence of the essential supremum, but failed.
It would be great to get some help on this. Does it hold and if so, how prove it and if not, why not?
This is a cross-posting from this question on math.stackexchange
Edit: I change the notation. No the all infima/suprema are understood to be pointwise suprema of a set of functions. And the essential suprema/infima of a set of functions should be read like on this
The indexed intersection is defined as usual: $\bigcap_{i\in I}S_i = \{x|\forall i\in I: x\in S_i\}$
add comment
2 Answers
active oldest votes
Okay. Give $[0,1]$ Lebesgue measure. For each $t \in [0,1]$ let $S_t = \{1_{\{t\}}\} \cup \{a\cdot 1_{[0,1]}: a \geq 1\}$.
Then $\bigcap S_t = \{a\cdot 1_{[0,1]}: a \geq 1\}$ and its inf is the function $1_{[0,1]}$. For each $t$ the inf of $S_t$ is the function $1_{\{t\}}$, and the sup of these is also the
function $1_{[0,1]}$. So your premise holds.
up vote 2
down vote The essential inf of $\bigcap S_t$ is also the function $1_{[0,1]}$ but the essential inf of each $S_t$ is the zero function and their essential sup is again the zero function. So the
accepted conclusion fails.
Is this really what you meant?
Nice, yes that's what he's asking (I think). – Rabee Tourky Feb 21 '13 at 18:33
thanks for the (embarassingly easy) example. What I am actually after: I want to find out the conditions for the second equation. I thought, solving the question for the first
equation would help. Here, a sufficient condition seems to be that the sets are upper sets. But I have no clue how to show the same for "almost surely upper sets". The greatest
obstacle is, that I do not know how to handle the set of measurable functions ordered by $\le$ (a.s.), because it fails to be the simple product order. What properties does it share
with the product order? (e.g. complete distributivity: No, ...) – Johannes Feb 22 '13 at 11:35
1 If the $S_i$ are upper sets, then the equation you are asking for IS complete distributivity. – Nik Weaver Feb 22 '13 at 13:16
add comment
The question is not clear. Are you asking whether the complete distributive law holds in (probably the unit ball of) $L^\infty(X,\mu)$? The answer is no: complete distributivity is
characteristic of atomic measure spaces. An easy way to see this is to use the fact that a complete lattice is completely distributive iff it has the property that for all $c$ and $d$ with
up vote $c \not\geq d$ there exist $c' \not\leq c$ and $d' \not\geq d$ such that every element of the lattice lies above $c'$ or below $d'$. I refer you to Theorem 5.3.5 of my book Lipschitz
1 down Algebras for a proof. Let $A$ be a positive measure set that contains no atoms, find $B \subset A$ with $0 < \mu(B) < \mu(A)$, and take $c = \chi_B$ and $d = \chi_{A\setminus B}$.
Thanks for your answer. Are the equations in my questions equivalent to the complete distributive law? I ultimately want to know, under what conditions for $S_i$ the equation for the ess
inf/sup holds. After I found a condition for the inf/sup equation I am now trying to transfer it to the ess inf/sup and do not know, if this makes sense at all. – Johannes Feb 16 '13 at
What order relation are you talking about? Product order for real-valued functions? – Johannes Feb 16 '13 at 11:54
How can a $c′$ exists, such that every element of the lattice lies above it, if also $c′ \nleq c$? Doesn't "above" mean $c′\le c$? – Johannes 0 secs ago – Johannes Feb 16 '13 at 12:20
@Johannes: (1) it's hard for me to tell exactly what the equations in your question are supposed to mean. I am guessing. (2) $f \leq g$ if $f(x) \leq g(x)$ except on a set of measure zero.
(3) every element of the lattice either lies above $c'$ or lies below $d'$. – Nik Weaver Feb 16 '13 at 18:53
ok. I tried to make the notation more clear. Is it understandable now? – Johannes Feb 21 '13 at 11:38
add comment
Not the answer you're looking for? Browse other questions tagged measure-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/121926/from-point-wise-to-essential-supremum-of-a-set-of-real-valued-measurable-functio","timestamp":"2014-04-19T15:13:18Z","content_type":null,"content_length":"65503","record_id":"<urn:uuid:a1854ea9-a367-419b-aa3e-328d4bb8c14f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - why is knowing the total charge on the conductors enough?
how do you prove that the electric field is determined uniquely from knowing the total charge on a conductor (just the outline of the proof).
You don't - you also need the charge distribution.
That will be determined by the properties of the setup.
See Maxwell's equations.
|
{"url":"http://www.physicsforums.com/showpost.php?p=4223144&postcount=2","timestamp":"2014-04-18T03:08:57Z","content_type":null,"content_length":"7931","record_id":"<urn:uuid:f92b9b25-ac84-4180-90ae-8b71175eaac1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Do You Write an Equation for a Vertical Line?
You may be able to guess that vertical lines are lines that go straight up and down, but did you know that all vertical lines have the same slope? In this tutorial, learn all about vertical lines
including their slope and what the equation of a vertical line looks like!
|
{"url":"http://www.virtualnerd.com/algebra-1/linear-equation-analysis/intercept/horizontal-vertical-lines/vertical-line-example","timestamp":"2014-04-17T07:09:30Z","content_type":null,"content_length":"25755","record_id":"<urn:uuid:ccfd9eb9-6a85-4916-bf47-44fa647db10b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
|