aluable English Janet M. Noyes Matthew Bransby Institution of Engineering and Technology Number of Pages: 315 Published: 2002-05-01 List price: $124.00 The aim of this book is to provide state-of-the-art information on various aspects of human-machine interaction and human-centred issues encountered in the control room setting. As industrial processes have become more automated, there is increasing concern about the performance of the people who control these systems. Human error is increasingly cited as the cause of accidents across many sectors of industry. This book is written primarily by engineers, for engineers involved with human factor issues. Based on a successful multidisciplinary conference on the subject, and illustrated with usef D. Lindsley The Institution of Engineering and Technology Number of Pages: 240 Published: 2000-01-01 List price: $90.00 Intended as a practical guide to the design, installation, operation and maintenance of the systems used for measuring and controlling boilers and heat-recovery steam-generators used in land and marine power plants and in process industries.Also available:Industrial Digital Control Systems, 2nd Edition - ISBN 9780863411373Flexible Robot Manipulators: Modelling, simulation and control - ISBN 9780863414480The Institution of Engineering and Technology is one of the world’s leading professional societies for the engineering and technology community. The IET publishes more than 100 new Wook Hyun Kwon Soo Hee Han Number of Pages: 380 Published: 2005-12-31 List price: $99.00 Receding Horizon Control introduces the essentials of a successful feedback strategy that has emerged in many industrial fields: the process industries in particular. Receding horizon control (RHC) has a number of advantages over other types of control: easier computation than steady-state optimal control; greater adaptability to parametric changes than infinite horizon control; better tracking than PID and good constraint handling among others. The text builds understanding starting with optimal controls for simple linear systems and working through constrained systems to nonlinear cases. Dan Huang Sing Kiong Nguang Number of Pages: 162 Published: 2009-06-11 List price: $109.00 Robust Control for Uncertain Networked Control Systems with Random Delays addresses the problem of analysis and design of networked control systems when the communication delays are varying in a random fashion. The random nature of the time delays is typical for commercially used networks, such as a DeviceNet (which is a controller area network) and Ethernet network. The main technique used in this book is based on the Lyapunov-Razumikhin method, which results in delay-dependent controllers. The existence of such controllers and fault estimators are given in terms of the solvability of Birkhäuser Boston Number of Pages: 684 Published: 1990-01-01 List price: $165.00 J. William Helton Matthew R. James Society for Industrial Mathematics Number of Pages: 355 Published: 1999-12 List price: $91.50 H-infinity control originated from an effort to codify classical control methods, where one shapes frequency response functions to meet certain objectives. H-infinity control underwent tremendous development in the 1980s and made considerable strides toward systematizing classical control. This book addresses the next major issue of how this extends to nonlinear systems. At the core of nonlinear control theory lie two partial differential equations (PDEs). 
One is a first-order evolution equation called the information state equation, which constitutes the dynamics of the controller. One can Stuart Bennett Institution of Electrical Engineers Number of Published: 1993-12-01 List price: $49.00 Following his book on the origin of control engineering (1800-1930 (see separate entry), the author now traces development through the critical period 1930-1955, widely identified as the period of ’classical’ control theory. In the 1930s basic automatic control devices were developed and used in process industries, as were servos for the control of aircraft and ships and amplifiers for the telephone system and early computers etc. During the war many disparate ideas were brought together for the development of aircraft tracking and response systems -- leading to classical control t
{"url":"http://www.ccebook.org/list/control/","timestamp":"2014-04-19T19:50:50Z","content_type":null,"content_length":"31890","record_id":"<urn:uuid:cefe2358-0ada-4642-88d0-cea651e51493>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory Topology_ZF_1 This file is a part of IsarMathLib - a library of formalized mathematics for Isabelle/Isar. Copyright (C) 2005 - 2008 Slawomir Kolodynski This program is free software; Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR theory Topology_ZF_1 imports Topology_ZF text{*In this theory file we study separation axioms and the notion of base and subbase. Using the products of open sets as a subbase we define a natural topology on a product of two topological spaces. *} section{*Separation axioms.*} text{*Topological spaces cas be classified according to certain properties called "separation axioms". In this section we define what it means that a topological space is $T_0$, $T_1$ or $T_2$.*} text{*A topology on $X$ is $T_0$ if for every pair of distinct points of $X$ there is an open set that contains only one of them. *} isT0 ("_ {is T⇩[0]}" [90] 91) where "T {is T⇩[0]} ≡ ∀ x y. ((x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y) --> (∃U∈T. (x∈U ∧ y∉U) ∨ (y∈U ∧ x∉U)))" text{* A topology is $T_1$ if for every such pair there exist an open set that contains the first point but not the second.*} isT1 ("_ {is T⇩[1]}" [90] 91) where "T {is T⇩[1]} ≡ ∀ x y. ((x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y) --> (∃U∈T. (x∈U ∧ y∉U)))" text{* A topology is $T_2$ (Hausdorff) if for every pair of points there exist a pair of disjoint open sets each containing one of the points. This is an important class of topological spaces. In particular, metric spaces are Hausdorff.*} isT2 ("_ {is T⇩[2]}" [90] 91) where "T {is T⇩[2]} ≡ ∀ x y. ((x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y) --> (∃U∈T. ∃V∈T. x∈U ∧ y∈V ∧ U∩V=0))" text{*If a topology is $T_1$ then it is $T_0$. We don't really assume here that $T$ is a topology on $X$. Instead, we prove the relation between isT0 condition and isT1. *} lemma T1_is_T0: assumes A1: "T {is T⇩[1]}" shows "T {is T⇩[0]}" proof - from A1 have "∀ x y. x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y --> (∃U∈T. x∈U ∧ y∉U)" using isT1_def by simp then have "∀ x y. x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y --> (∃U∈T. x∈U ∧ y∉U ∨ y∈U ∧ x∉U)" by auto then show "T {is T⇩[0]}" using isT0_def by simp text{*If a topology is $T_2$ then it is $T_1$.*} lemma T2_is_T1: assumes A1: "T {is T⇩[2]}" shows "T {is T⇩[1]}" proof - { fix x y assume "x ∈ \<Union>T" "y ∈ \<Union>T" "x≠y" with A1 have "∃U∈T. ∃V∈T. x∈U ∧ y∈V ∧ U∩V=0" using isT2_def by auto then have "∃U∈T. x∈U ∧ y∉U" by auto } then have "∀ x y. x ∈ \<Union>T ∧ y ∈ \<Union>T ∧ x≠y --> (∃U∈T. x∈U ∧ y∉U)" by simp then show "T {is T⇩[1]}" using isT1_def by simp text{*In a $T_0$ space two points that can not be separated by an open set are equal. Proof by contradiction.*} lemma Top_1_1_L1: assumes A1: "T {is T⇩[0]}" and A2: "x ∈ \<Union>T" "y ∈ \<Union>T" and A3: "∀U∈T. (x∈U <-> y∈U)" shows "x=y" proof - { assume "x≠y" with A1 A2 have "∃U∈T. 
x∈U ∧ y∉U ∨ y∈U ∧ x∉U" using isT0_def by simp with A3 have False by auto } then show "x=y" by auto section{*Bases and subbases.*} text{*Sometimes it is convenient to talk about topologies in terms of their bases and subbases. These are certain collections of open sets that define the whole topology.*} text{*A base of topology is a collection of open sets such that every open set is a union of the sets from the base.*} IsAbaseFor (infixl "{is a base for}" 65) where "B {is a base for} T ≡ B⊆T ∧ T = {\<Union>A. A∈Pow(B)}" text{* A subbase is a collection of open sets such that finite intersection of those sets form a base.*} IsAsubBaseFor (infixl "{is a subbase for}" 65) where "B {is a subbase for} T ≡ B ⊆ T ∧ {\<Inter>A. A ∈ FinPow(B)} {is a base for} T" text{*Below we formulate a condition that we will prove to be necessary and sufficient for a collection $B$ of open sets to form a base. It says that for any two sets $U,V$ from the collection $B$ we can find a point $x\in U\cap V$ with a neighboorhod from $B$ contained in $U\cap V$.*} SatisfiesBaseCondition ("_ {satisfies the base condition}" [50] 50) "B {satisfies the base condition} ≡ ∀U V. ((U∈B ∧ V∈B) --> (∀x ∈ U∩V. ∃W∈B. x∈W ∧ W ⊆ U∩V))" text{*A collection that is closed with respect to intersection satisfies the base condition.*} lemma inter_closed_base: assumes "∀U∈B.(∀V∈B. U∩V ∈ B)" shows "B {satisfies the base condition}" proof - { fix U V x assume "U∈B" and "V∈B" and "x ∈ U∩V" with assms have "∃W∈B. x∈W ∧ W ⊆ U∩V" by blast } then show ?thesis using SatisfiesBaseCondition_def by simp text{*Each open set is a union of some sets from the base.*} lemma Top_1_2_L1: assumes "B {is a base for} T" and "U∈T" shows "∃A∈Pow(B). U = \<Union>A" using assms IsAbaseFor_def by simp text{* Elements of base are open. *} lemma base_sets_open: assumes "B {is a base for} T" and "U ∈ B" shows "U ∈ T" using assms IsAbaseFor_def by auto; text{*A base defines topology uniquely.*} lemma same_base_same_top: assumes "B {is a base for} T" and "B {is a base for} S" shows "T = S" using assms IsAbaseFor_def by simp; text{*Every point from an open set has a neighboorhood from the base that is contained in the set.*} lemma point_open_base_neigh: assumes A1: "B {is a base for} T" and A2: "U∈T" and A3: "x∈U" shows "∃V∈B. V⊆U ∧ x∈V" proof - from A1 A2 obtain A where "A ∈ Pow(B)" and "U = \<Union>A" using Top_1_2_L1 by blast; with A3 obtain V where "V∈A" and "x∈V" by auto; with `A ∈ Pow(B)` `U = \<Union>A` show ?thesis by auto; text{* A criterion for a collection to be a base for a topology that is a slight reformulation of the definition. The only thing different that in the definition is that we assume only that every open set is a union of some sets from the base. The definition requires also the opposite inclusion that every union of the sets from the base is open, but that we can prove if we assume that $T$ is a topology.*} lemma is_a_base_criterion: assumes A1: "T {is a topology}" and A2: "B ⊆ T" and A3: "∀V ∈ T. ∃A ∈ Pow(B). V = \<Union>A" shows "B {is a base for} T" proof - from A3 have "T ⊆ {\<Union>A. A∈Pow(B)}" by auto; moreover have "{\<Union>A. A∈Pow(B)} ⊆ T" fix U assume "U ∈ {\<Union>A. A∈Pow(B)}" then obtain A where "A ∈ Pow(B)" and "U = \<Union>A" by auto; with `B ⊆ T` have "A ∈ Pow(T)" by auto; with A1 `U = \<Union>A` show "U ∈ T" unfolding IsATopology_def by simp; ultimately have "T = {\<Union>A. 
A∈Pow(B)}" by auto; with A2 show "B {is a base for} T" unfolding IsAbaseFor_def by simp; text{*A necessary condition for a collection of sets to be a base for some topology : every point in the intersection of two sets in the base has a neighboorhood from the base contained in the intersection.*} lemma Top_1_2_L2: assumes A1:"∃T. T {is a topology} ∧ B {is a base for} T" and A2: "V∈B" "W∈B" shows "∀ x ∈ V∩W. ∃U∈B. x∈U ∧ U ⊆ V ∩ W" proof - from A1 obtain T where D1: "T {is a topology}" "B {is a base for} T" by auto then have "B ⊆ T" using IsAbaseFor_def by auto with A2 have "V∈T" and "W∈T" using IsAbaseFor_def by auto with D1 have "∃A∈Pow(B). V∩W = \<Union>A" using IsATopology_def Top_1_2_L1 by auto then obtain A where "A ⊆ B" and "V ∩ W = \<Union>A" by auto then show "∀ x ∈ V∩W. ∃U∈B. (x∈U ∧ U ⊆ V ∩ W)" by auto text{*We will construct a topology as the collection of unions of (would-be) base. First we prove that if the collection of sets satisfies the condition we want to show to be sufficient, the the intersection belongs to what we will define as topology (am I clear here?). Having this fact ready simplifies the proof of the next lemma. There is not much topology here, just some set theory.*} lemma Top_1_2_L3: assumes A1: "∀x∈ V∩W . ∃U∈B. x∈U ∧ U ⊆ V∩W" shows "V∩W ∈ {\<Union>A. A∈Pow(B)}" let ?A = "\<Union>x∈V∩W. {U∈B. x∈U ∧ U ⊆ V∩W}" show "?A∈Pow(B)" by auto from A1 show "V∩W = \<Union>?A" by blast text{*The next lemma is needed when proving that the would-be topology is closed with respect to taking intersections. We show here that intersection of two sets from this (would-be) topology can be written as union of sets from the topology.*} lemma Top_1_2_L4: assumes A1: "U⇩[1] ∈ {\<Union>A. A∈Pow(B)}" "U⇩[2] ∈ {\<Union>A. A∈Pow(B)}" and A2: "B {satisfies the base condition}" shows "∃C. C ⊆ {\<Union>A. A∈Pow(B)} ∧ U⇩[1]∩U⇩[2] = \<Union>C" proof - from A1 A2 obtain A⇩[1] A⇩[2] where D1: "A⇩[1]∈ Pow(B)" "U⇩[1] = \<Union>A⇩[1]" "A⇩[2] ∈ Pow(B)" "U⇩[2] = \<Union>A⇩[2]" by auto let ?C = "\<Union>U∈A⇩[1].{U∩V. V∈A⇩[2]}" from D1 have "(∀U∈A⇩[1]. U∈B) ∧ (∀V∈A⇩[2]. V∈B)" by auto with A2 have "?C ⊆ {\<Union>A . A ∈ Pow(B)}" using Top_1_2_L3 SatisfiesBaseCondition_def by auto moreover from D1 have "U⇩[1] ∩ U⇩[2] = \<Union>?C" by auto ultimately show ?thesis by auto text{*If $B$ satisfies the base condition, then the collection of unions of sets from $B$ is a topology and $B$ is a base for this topology.*} theorem Top_1_2_T1: assumes A1: "B {satisfies the base condition}" and A2: "T = {\<Union>A. A∈Pow(B)}" shows "T {is a topology}" and "B {is a base for} T" proof - show "T {is a topology}" proof - have I: "∀C∈Pow(T). \<Union>C ∈ T" proof - { fix C assume A3: "C ∈ Pow(T)" let ?Q = "\<Union> {\<Union>{A∈Pow(B). U = \<Union>A}. U∈C}" from A2 A3 have "∀U∈C. ∃A∈Pow(B). U = \<Union>A" by auto then have "\<Union>?Q = \<Union>C" using ZF1_1_L10 by simp moreover from A2 have "\<Union>?Q ∈ T" by auto ultimately have "\<Union>C ∈ T" by simp } thus "∀C∈Pow(T). \<Union>C ∈ T" by auto moreover have "∀U∈T. ∀ V∈T. U∩V ∈ T" proof - { fix U V assume "U ∈ T" "V ∈ T" with A1 A2 have "∃C.(C ⊆ T ∧ U∩V = \<Union>C)" using Top_1_2_L4 by simp then obtain C where "C ⊆ T" and "U∩V = \<Union>C" by auto with I have "U∩V ∈ T" by simp } then show "∀U∈T. ∀ V∈T. 
U∩V ∈ T" by simp ultimately show "T {is a topology}" using IsATopology_def by simp from A2 have "B⊆T" by auto with A2 show "B {is a base for} T" using IsAbaseFor_def by simp text{*The carrier of the base and topology are the same.*} lemma Top_1_2_L5: assumes "B {is a base for} T" shows "\<Union>T = \<Union>B" using assms IsAbaseFor_def by auto text{*If $B$ is a base for $T$, then $T$ is the smallest topology containing $B$. lemma base_smallest_top: assumes A1: "B {is a base for} T" and A2: "S {is a topology}" and A3: "B⊆S" shows "T⊆S" fix U assume "U∈T" with A1 obtain B⇩[U] where "B⇩[U] ⊆ B" and "U = \<Union>B⇩[U]" using IsAbaseFor_def by auto with A3 have "B⇩[U] ⊆ S" by auto with A2 `U = \<Union>B⇩[U]` show "U∈S" using IsATopology_def by simp text{*If $B$ is a base for $T$ and $B$ is a topology, then $B=T$.*} lemma base_topology: assumes "B {is a topology}" and "B {is a base for} T" shows "B=T" using assms base_sets_open base_smallest_top by blast section{*Product topology*} text{*In this section we consider a topology defined on a product of two sets.*} text{*Given two topological spaces we can define a topology on the product of the carriers such that the cartesian products of the sets of the topologies are a base for the product topology. Recall that for two collections $S,T$ of sets the product collection is defined (in @{text "ZF1.thy"}) as the collections of cartesian products $A\times B$, where $A\in S, B\in T$.*} "ProductTopology(T,S) ≡ {\<Union>W. W ∈ Pow(ProductCollection(T,S))}" text{*The product collection satisfies the base condition.*} lemma Top_1_4_L1: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "A ∈ ProductCollection(T,S)" "B ∈ ProductCollection(T,S)" shows "∀x∈(A∩B). ∃W∈ProductCollection(T,S). (x∈W ∧ W ⊆ A ∩ B)" fix x assume A3: "x ∈ A∩B" from A2 obtain U⇩[1] V⇩[1] U⇩[2] V⇩[2] where D1: "U⇩[1]∈T" "V⇩[1]∈S" "A=U⇩[1]×V⇩[1]" "U⇩[2]∈T" "V⇩[2]∈S" "B=U⇩[2]×V⇩[2]" using ProductCollection_def by auto let ?W = "(U⇩[1]∩U⇩[2]) × (V⇩[1]∩V⇩[2])" from A1 D1 have "U⇩[1]∩U⇩[2] ∈ T" and "V⇩[1]∩V⇩[2] ∈ S" using IsATopology_def by auto then have "?W ∈ ProductCollection(T,S)" using ProductCollection_def by auto moreover from A3 D1 have "x∈?W" and "?W ⊆ A∩B" by auto ultimately have "∃W. (W ∈ ProductCollection(T,S) ∧ x∈W ∧ W ⊆ A∩B)" by auto thus "∃W∈ProductCollection(T,S). (x∈W ∧ W ⊆ A ∩ B)" by auto text{*The product topology is indeed a topology on the product.*} theorem Top_1_4_T1: assumes A1: "T {is a topology}" "S {is a topology}" "ProductTopology(T,S) {is a topology}" "ProductCollection(T,S) {is a base for} ProductTopology(T,S)" "\<Union> ProductTopology(T,S) = \<Union>T × \<Union>S" proof - from A1 show "ProductTopology(T,S) {is a topology}" "ProductCollection(T,S) {is a base for} ProductTopology(T,S)" using Top_1_4_L1 ProductCollection_def SatisfiesBaseCondition_def ProductTopology_def Top_1_2_T1 by auto then show "\<Union> ProductTopology(T,S) = \<Union>T × \<Union>S" using Top_1_2_L5 ZF1_1_L6 by simp text{*Each point of a set open in the product topology has a neighborhood which is a cartesian product of open sets.*} lemma prod_top_point_neighb: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "U ∈ ProductTopology(T,S)" and A3: "x ∈ U" shows "∃V W. 
V∈T ∧ W∈S ∧ V×W ⊆ U ∧ x ∈ V×W" proof - from A1 have "ProductCollection(T,S) {is a base for} ProductTopology(T,S)" using Top_1_4_T1 by simp; with A2 A3 obtain Z where "Z ∈ ProductCollection(T,S)" and "Z ⊆ U ∧ x∈Z" using point_open_base_neigh by blast; then obtain V W where "V ∈ T" and "W∈S" and" V×W ⊆ U ∧ x ∈ V×W" using ProductCollection_def by auto; thus ?thesis by auto; text{*Products of open sets are open in the product topology.*} lemma prod_open_open_prod: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "U∈T" "V∈S" shows "U×V ∈ ProductTopology(T,S)" proof - from A1 have "ProductCollection(T,S) {is a base for} ProductTopology(T,S)" using Top_1_4_T1 by simp; moreover from A2 have "U×V ∈ ProductCollection(T,S)" unfolding ProductCollection_def by auto; ultimately show "U×V ∈ ProductTopology(T,S)" using base_sets_open by simp; text{*Sets that are open in th product topology are contained in the product of the carrier.*} lemma prod_open_type: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "V ∈ ProductTopology(T,S)" shows "V ⊆ \<Union>T × \<Union>S" proof - from A2 have "V ⊆ \<Union> ProductTopology(T,S)" by auto with A1 show ?thesis using Top_1_4_T1 by simp text{*Suppose we have subsets $A\subseteq X, B\subseteq Y$, where $X,Y$ are topological spaces with topologies $T,S$. We can the consider relative topologies on $T_A, S_B$ on sets $A,B$ and the collection of cartesian products of sets open in $T_A, S_B$, (namely $\{U\times V: U\in T_A, V\in S_B\}$. The next lemma states that this collection is a base of the product topology on $X\times Y$ restricted to the product $A\times B$. lemma prod_restr_base_restr: assumes A1: "T {is a topology}" "S {is a topology}" "ProductCollection(T {restricted to} A, S {restricted to} B) {is a base for} (ProductTopology(T,S) {restricted to} A×B)" proof -; let ?\<B> = "ProductCollection(T {restricted to} A, S {restricted to} B)" let ?τ = "ProductTopology(T,S)" from A1 have "(?τ {restricted to} A×B) {is a topology}" using Top_1_4_T1 topology0_def topology0.Top_1_L4 by simp; moreover have "?\<B> ⊆ (?τ {restricted to} A×B)" fix U assume "U ∈ ?\<B>" then obtain U⇩[A] U⇩[B] where "U = U⇩[A] × U⇩[B]" and "U⇩[A] ∈ (T {restricted to} A)" and "U⇩[B] ∈ (S {restricted to} B)" using ProductCollection_def by auto; then obtain W⇩[A] W⇩[B] where "W⇩[A] ∈ T" "U⇩[A] = W⇩[A] ∩ A" and "W⇩[B] ∈ S" "U⇩[B] = W⇩[B] ∩ B" using RestrictedTo_def by auto; with `U = U⇩[A] × U⇩[B]` have "U = W⇩[A]×W⇩[B] ∩ (A×B)" by auto; moreover from A1 `W⇩[A] ∈ T` and `W⇩[B] ∈ S` have "W⇩[A]×W⇩[B] ∈ ?τ" using prod_open_open_prod by simp; ultimately show "U ∈ ?τ {restricted to} A×B" using RestrictedTo_def by auto; moreover have "∀U ∈ ?τ {restricted to} A×B. ∃C ∈ Pow(?\<B>). U = \<Union>C" fix U assume "U ∈ ?τ {restricted to} A×B" then obtain W where "W ∈ ?τ" and "U = W ∩ (A×B)" using RestrictedTo_def by auto; from A1 `W ∈ ?τ` obtain A⇩[W] where "A⇩[W] ∈ Pow(ProductCollection(T,S))" and "W = \<Union>A⇩[W]" using Top_1_4_T1 IsAbaseFor_def by auto; let ?C = "{V ∩ A×B. 
V ∈ A⇩[W]}" have "?C ∈ Pow(?\<B>)" and "U = \<Union>?C" proof - { fix R assume "R ∈ ?C" then obtain V where "V ∈ A⇩[W]" and "R = V ∩ A×B" by auto; with `A⇩[W] ∈ Pow(ProductCollection(T,S))` obtain V⇩[T] V⇩[S] where "V⇩[T] ∈ T" and "V⇩[S] ∈ S" and "V = V⇩[T] × V⇩[S]" using ProductCollection_def by auto; with `R = V ∩ A×B` have "R ∈ ?\<B>" using ProductCollection_def RestrictedTo_def by auto; } then show "?C ∈ Pow(?\<B>)" by auto; from `U = W ∩ (A×B)` and `W = \<Union>A⇩[W]` show "U = \<Union>?C" by auto; thus "∃C ∈ Pow(?\<B>). U = \<Union>C" by blast; ultimately show ?thesis by (rule is_a_base_criterion); text{*We can commute taking restriction (relative topology) and product topology. The reason the two topologies are the same is that they have the same base.*} lemma prod_top_restr_comm: assumes A1: "T {is a topology}" "S {is a topology}" "ProductTopology(T {restricted to} A,S {restricted to} B) = ProductTopology(T,S) {restricted to} (A×B)" proof - let ?\<B> = "ProductCollection(T {restricted to} A, S {restricted to} B)" from A1 have "?\<B> {is a base for} ProductTopology(T {restricted to} A,S {restricted to} B)" using topology0_def topology0.Top_1_L4 Top_1_4_T1 by simp; moreover from A1 have "?\<B> {is a base for} ProductTopology(T,S) {restricted to} (A×B)" using prod_restr_base_restr by simp; ultimately show ?thesis by (rule same_base_same_top); text{*Projection of a section of an open set is open.*} lemma prod_sec_open1: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "V ∈ ProductTopology(T,S)" and A3: "x ∈ \<Union>T" shows "{y ∈ \<Union>S. 〈x,y〉 ∈ V} ∈ S" proof - let ?A = "{y ∈ \<Union>S. 〈x,y〉 ∈ V}" from A1 have "topology0(S)" using topology0_def by simp moreover have "∀y∈?A.∃W∈S. (y∈W ∧ W⊆?A)" fix y assume "y ∈ ?A" then have "〈x,y〉 ∈ V" by simp with A1 A2 have "〈x,y〉 ∈ \<Union>T × \<Union>S" using prod_open_type by blast hence "x ∈ \<Union>T" and "y ∈ \<Union>S" by auto from A1 A2 `〈x,y〉 ∈ V` have "∃U W. U∈T ∧ W∈S ∧ U×W ⊆ V ∧ 〈x,y〉 ∈ U×W" by (rule prod_top_point_neighb) then obtain U W where "U∈T" "W∈S" "U×W ⊆ V" "〈x,y〉 ∈ U×W" by auto with A1 A2 show "∃W∈S. (y∈W ∧ W⊆?A)" using prod_open_type section_proj by auto ultimately show ?thesis by (rule topology0.open_neigh_open) text{*Projection of a section of an open set is open. This is dual of @{text "prod_sec_open1"} with a very similar proof.*} lemma prod_sec_open2: assumes A1: "T {is a topology}" "S {is a topology}" and A2: "V ∈ ProductTopology(T,S)" and A3: "y ∈ \<Union>S" shows "{x ∈ \<Union>T. 〈x,y〉 ∈ V} ∈ T" proof - let ?A = "{x ∈ \<Union>T. 〈x,y〉 ∈ V}" from A1 have "topology0(T)" using topology0_def by simp moreover have "∀x∈?A.∃W∈T. (x∈W ∧ W⊆?A)" fix x assume "x ∈ ?A" then have "〈x,y〉 ∈ V" by simp with A1 A2 have "〈x,y〉 ∈ \<Union>T × \<Union>S" using prod_open_type by blast hence "x ∈ \<Union>T" and "y ∈ \<Union>S" by auto from A1 A2 `〈x,y〉 ∈ V` have "∃U W. U∈T ∧ W∈S ∧ U×W ⊆ V ∧ 〈x,y〉 ∈ U×W" by (rule prod_top_point_neighb) then obtain U W where "U∈T" "W∈S" "U×W ⊆ V" "〈x,y〉 ∈ U×W" by auto with A1 A2 show "∃W∈T. (x∈W ∧ W⊆?A)" using prod_open_type section_proj by auto ultimately show ?thesis by (rule topology0.open_neigh_open)
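To make the definitions above concrete outside of Isar, here is a small Python sketch (an added illustration, not part of IsarMathLib; the function names are made up) that, on a finite carrier, checks the base condition of SatisfiesBaseCondition and then verifies that the unions of subfamilies of the base form a topology, in the spirit of theorem Top_1_2_T1.

from itertools import chain, combinations

def unions_of(base):
    # T = {Union(A) : A in Pow(B)}, the candidate topology generated by the base.
    sets = [frozenset(s) for s in base]
    opens = set()
    for r in range(len(sets) + 1):
        for combo in combinations(sets, r):
            opens.add(frozenset(chain.from_iterable(combo)))
    return opens

def satisfies_base_condition(base):
    # For all U, V in B and every x in U∩V there is W in B with x in W and W ⊆ U∩V.
    for U in base:
        for V in base:
            inter = set(U) & set(V)
            for x in inter:
                if not any(x in W and set(W) <= inter for W in base):
                    return False
    return True

def is_topology(T, X):
    # Finite check: contains the empty set and the carrier, closed under unions and intersections.
    T = {frozenset(t) for t in T}
    if frozenset() not in T or frozenset(X) not in T:
        return False
    closed_inter = all(frozenset(set(U) & set(V)) in T for U in T for V in T)
    closed_union = all(frozenset(set(U) | set(V)) in T for U in T for V in T)
    return closed_inter and closed_union

# A base on X = {1, 2, 3}; every pairwise intersection has a base neighbourhood,
# so the unions of its subfamilies should form a topology (cf. Top_1_2_T1).
B = [{1}, {2}, {1, 2}, {1, 2, 3}]
T = unions_of(B)
print(satisfies_base_condition(B))   # True
print(is_topology(T, {1, 2, 3}))     # True

For a finite collection, closure under binary unions and intersections already gives closure under arbitrary unions, which is why the check only loops over pairs.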
{"url":"http://www.nongnu.org/isarmathlib/IsarMathLib/Topology_ZF_1.html","timestamp":"2014-04-21T10:18:19Z","content_type":null,"content_length":"69561","record_id":"<urn:uuid:566fb4df-55b5-4bd2-acb2-0377992e150b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010
Re: a simple plotting question
• To: mathgroup at smc.vnet.net
• Subject: [mg114471] Re: a simple plotting question
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Sun, 5 Dec 2010 21:51:54 -0500 (EST)
On 12/4/10 at 6:14 AM, sahserkan at hotmail.com (martinez) wrote:
>How do I plot the following function of two variables? (not 3-d
>g[y_]=y^3; h[z_]=z^2+8
>Plot[f[y,z],{y+z,0,3}]?? or
>I mean the x-axis is divided into two parts, so x is y and z
>dependent, so how do I plot it?
You have defined a function that maps every point of a plane to a value. This is inherently a 3-D problem and cannot be handled by Plot. If you don't want to use Plot3D, there are other functions such as ContourPlot or DensityPlot which will work for this type of problem. Or, you can fix the value of one of the parameters, thereby reducing the problem to a 2-D problem which can be handled by Plot.
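A rough Python/matplotlib analogue of that advice (the quoted message never shows how f combines g and h, so f(y, z) = g(y) + h(z) = y^3 + z^2 + 8 is assumed here purely for illustration):

import numpy as np
import matplotlib.pyplot as plt

g = lambda y: y**3
h = lambda z: z**2 + 8
f = lambda y, z: g(y) + h(z)   # assumed combination, not given in the post

y = np.linspace(0, 3, 200)
z = np.linspace(0, 3, 200)
Y, Z = np.meshgrid(y, z)

# Analogue of ContourPlot/DensityPlot: show f over the whole (y, z) plane.
plt.contourf(Y, Z, f(Y, Z), levels=20)
plt.colorbar(label="f(y, z)")
plt.xlabel("y")
plt.ylabel("z")

# Analogue of fixing one parameter to get back to an ordinary 2-D Plot: f(y, z = 1).
plt.figure()
plt.plot(y, f(y, 1.0))
plt.xlabel("y")
plt.ylabel("f(y, 1)")
plt.show()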
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00150.html","timestamp":"2014-04-21T12:40:14Z","content_type":null,"content_length":"25678","record_id":"<urn:uuid:ecd96bc8-ac4d-4e05-ac5d-d02985ba2ce0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Taylor series formulation by Di Paola and Falsone
B. Ordinary differential equation (ODE) formulation through Marcus mapping
C. Equivalence between the * integral and Marcus integral
D. General formulations

A. Algorithm and its convergence analysis
B. Numerical results

A. Computational strategy
B. The first law of thermodynamics in an overdamped Langevin equation
C. Heat measurement formula

A. Algorithm construction
B. Tau-leaping condition
C. Efficiency analysis

A. Random motion near two parallel walls
B. Langevin equation with double well potential
C. Langevin equation with periodic forcing
D. High dimensional case with multiplicative noise
{"url":"http://scitation.aip.org/content/aip/journal/jcp/138/10/10.1063/1.4794780","timestamp":"2014-04-16T04:48:16Z","content_type":null,"content_length":"91812","record_id":"<urn:uuid:6179e867-269b-4783-9eb8-f37c1446715a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Functions for Which All Points Are Local Extrema
Real Analysis Exchange
Ehrhard Behrends, Stefan Geschke, and Tomasz Natkaniec
Let $X$ be a connected separable linear order, a connected separable metric space, or a connected, locally connected complete metric space. We show that every continuous function $f:X\to\mathbb R$ with the property that every $x\in X$ is a local maximum or minimum of $f$ is in fact constant. We provide an example of a compact connected linear order $X$ and a continuous function $f:X\to\mathbb R$ that is not constant and yet every point of $X$ is a local minimum or maximum of $f$.
Article information: Real Anal. Exchange, Volume 33, Number 2 (2007), 467-470. First available: 18 December 2008.
Subject classification: Primary 26A15: Continuity and related questions (modulus of continuity, semicontinuity, discontinuities, etc.) {For properties determined by Fourier coefficients, see 42A16; for those determined by approximation properties, see 41A25, 41A27}; 54C30: Real-valued functions [See also 26-XX].
Keywords: local extremum; continuous function.
Citation: Behrends, Ehrhard; Geschke, Stefan; Natkaniec, Tomasz. Functions for Which All Points Are Local Extrema. Real Analysis Exchange 33 (2007), no. 2, 467-470. http://projecteuclid.org/euclid.rae/
• M. R. Wojcik, problem session, $34^{th}$ Winter School in Abstract Analysis, Lhota nad Rohanovem, Czech Republic (2006).
{"url":"http://projecteuclid.org/euclid.rae/1229619424","timestamp":"2014-04-21T07:07:52Z","content_type":null,"content_length":"30794","record_id":"<urn:uuid:9eed2e61-7215-4603-b6c4-fbf05db99454>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Short R script to plot effect sizes (Cohen’s d) and shade overlapping area
April 23, 2012 By Kristoffer Magnusson
Introduction to effect sizes
Many times you read in a study that “x and y were significantly different, p < .05”, which is another way of saying that “assuming that the null hypothesis is true, the probability of getting the observed value simply by chance alone is less than 0.05”. But that’s not really that interesting, though, is it? Say you are reading an intervention study comparing a treatment group to a control group: I bet you are more interested in finding out the amount of difference between the groups, rather than the chances of the differences popping up under the null hypothesis. Luckily it’s getting more and more common to also report effect sizes in addition to p-values. Effect sizes, in this case, are metrics that represent the amount of difference between two sample means. One of the most common effect size measures in psychology is Cohen’s d, or the standardized mean difference. As you can see by the name, it’s a measure of the standardized difference between two means. Commonly Cohen’s d is categorized into 3 broad categories: 0.2–0.3 represents a small effect, ~0.5 a medium effect, and 0.8 and above a large effect. What that means is that with two samples with a standard deviation of 1, the mean of group 1 is 0.8 sd away from the other group’s mean if Cohen’s d = 0.8. That might sound very intuitive to some, but I find it’s more explanatory to present different d values visually, which is really easy to do in R statistical software.
Some quick R code to visualize Cohen’s d
The thing I actually wanted to try out here was to shade the overlapping area of the two distributions. It turned out to be pretty easy to do in R.

library(ggplot2)

# Standardized Mean Difference (Cohen's d)
ES <- 0.8
# get the second mean depending on the value of ES, from d = (u1 - u2)/sd
mean1 <- ES*1 + 1
# create x sequence
x <- seq(1 - 3*1, mean1 + 3*1, .01)
# generate normal dist #1
y1 <- dnorm(x, 1, 1)
# put in data frame
df1 <- data.frame("x" = x, "y" = y1)
# generate normal dist #2
y2 <- dnorm(x, mean1, 1)
# put in data frame
df2 <- data.frame("x" = x, "y" = y2)
# get y values under overlap
y.poly <- pmin(y1, y2)
# put in data frame
poly <- data.frame("x" = x, "y" = y.poly)
# Cohen's U3, proportion of control > 50th perc. treatment
u3 <- 1 - pnorm(1, mean1, 1)
u3 <- round(u3, 3)

# plot with ggplot2
ggplot(df1, aes(x, y, color="treatment")) +
  # add line for treatment group
  geom_line(size=1) +
  # add line for control group
  geom_line(data=df2, aes(color="control"), size=1) +
  # shade overlap
  geom_polygon(aes(color=NULL), data=poly, fill="red", alpha=I(4/10), show_guide=F) +
  # add vlines for group means
  geom_vline(xintercept = 1, linetype="dotted") +
  geom_vline(xintercept = mean1, linetype="dotted") +
  # add plot title
  opts(title=paste("Visualizing Effect Sizes (Cohen's d = ", ES, "; U3 = ", u3, ")", sep="")) +
  # change colors and legend annotation
  scale_colour_manual(values = c("treatment" = "black", "control" = "red")) +
  # remove axis labels
  ylab(NULL) + xlab(NULL)

And some plots of the different effect size values
A “large” effect size really looks insignificant compared to the ridiculously large effect size reported by Clark et al.
(2006) in their study Cognitive Therapy Versus Exposure and Applied Relaxation in Social Phobia: A Randomized Controlled Trial.
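As a closed-form check on the shaded region (a standard identity for equal-variance normal curves, not something the script itself computes): with Φ the standard normal CDF, Cohen's U3 equals Φ(d) and the overlapping area equals 2Φ(−|d|/2). For d = 0.8 this gives U3 = Φ(0.8) ≈ 0.79, agreeing with the u3 value the script obtains from pnorm, and an overlap of 2Φ(−0.4) ≈ 0.69.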
{"url":"http://www.r-bloggers.com/short-r-script-to-plot-effect-sizes-cohens-d-and-shade-overlapping-area/","timestamp":"2014-04-19T07:24:57Z","content_type":null,"content_length":"42343","record_id":"<urn:uuid:6796b4b2-2460-4ea2-975c-197abfb95deb>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Featured | Democratic Firms This is a preprint of a paper developing three themes, capital structure, active learning, and spinoffs, with special attention to the Mondragon cooperatives. This paper is an introduction to property theory including the invisible hand mechanism which handles the initiation and termination of property rights in an on-going private property market economy. The Fundamental Theorem is that when Hume’s conditions of no involuntary transfers and no breached contracts are fulfilled, then the Lockean principle of people getting the fruits of their labor, i.e., imputing legal responsibility in accordance with de facto responsibility is satisfied. The major application is to the current system of a private property market economy based on the renting of persons, i.e., the employment contract. This is a preprint of the paper forthcoming in the Journal of Economic Issues in Sept. 2014. Featured | Development The theme of parallel experimentation is used to recast and pull together dynamic and pluralistic theories in economics, political theory, philosophy of science, and social learning. This is a preprint of a paper developing three themes, capital structure, active learning, and spinoffs, with special attention to the Mondragon cooperatives. Featured | Property Theory This paper shows that implicit assumptions about the numeraire good in the Kaldor-Hicks efficiency-equity analysis involve a “same-yardstick” fallacy. This paper is an introduction to property theory including the invisible hand mechanism which handles the initiation and termination of property rights in an on-going private property market economy. The Fundamental Theorem is that when Hume’s conditions of no involuntary transfers and no breached contracts are fulfilled, then the Lockean principle of people getting the fruits of their labor, i.e., imputing legal responsibility in accordance with de facto responsibility is satisfied. The major application is to the current system of a private property market economy based on the renting of persons, i.e., the employment contract. This is a preprint of the paper forthcoming in the Journal of Economic Issues in Sept. 2014. Featured | Quantum Mechanics The problem of interpreting quantum mechanics (QM) is essentially the problem of making sense out of an objectively indefinite reality–that is described mathematically by partitions. Our sense-making strategy is implemented by developing the mathematics of partitions at the connected conceptual levels of sets and vector spaces. Set concepts are transported to (complex) vector spaces to yield the mathematical machinery of full QM, and the complex vector space concepts of full QM are transported to the set-like vector spaces over ℤ₂ to yield the rather fulsome pedagogical model of quantum mechanics over sets or QM/sets. This is an introductory treatment of partition logic which also shows the extension to logical information theory and the possible killer application to quantum mechanics. Featured | Mathematics The problem of interpreting quantum mechanics (QM) is essentially the problem of making sense out of an objectively indefinite reality–that is described mathematically by partitions. Our sense-making strategy is implemented by developing the mathematics of partitions at the connected conceptual levels of sets and vector spaces. 
Set concepts are transported to (complex) vector spaces to yield the mathematical machinery of full QM, and the complex vector space concepts of full QM are transported to the set-like vector spaces over ℤ₂ to yield the rather fulsome pedagogical model of quantum mechanics over sets or QM/sets. This is an introductory treatment of partition logic which also shows the extension to logical information theory and the possible killer application to quantum mechanics.
{"url":"http://www.ellerman.org/","timestamp":"2014-04-20T15:52:04Z","content_type":null,"content_length":"31825","record_id":"<urn:uuid:3b1291d4-8ff1-40ae-bb67-fab23772ff7f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparison of SOM point densities based on different criteria - IEEE Transactions on Neural Networks "... This article describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the Self-Organizing Map (SOM) algorithm. As the feature vectors for the documents we use statistical representations of their vocabularies. The m ..." Cited by 204 (14 self) Add to MetaCart This article describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the Self-Organizing Map (SOM) algorithm. As the feature vectors for the documents we use statistical representations of their vocabularies. The main goal in our work has been to scale up the SOM algorithm to be able to deal with large amounts of high-dimensional data. In a practical experiment we mapped 6,840,568 patent abstracts onto a 1,002,240-node SOM. As the feature vectors we used 500-dimensional vectors of stochastic figures obtained as random projections of weighted word histograms. Keywords Data mining, exploratory data analysis, knowledge discovery, large databases, parallel implementation, random projection, Self-Organizing Map (SOM), textual documents. I. Introduction A. From simple searches to browsing of self-organized data collections Locating documents on the basis of keywords and simple search expressions is a c... , 2000 "... The self-organizing map (SOM) is an excellent tool in exploratory phase of data mining. It projects input space on prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, to facilitate quant ..." Cited by 159 (1 self) Add to MetaCart The self-organizing map (SOM) is an excellent tool in exploratory phase of data mining. It projects input space on prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, to facilitate quantitative analysis of the map and the data, similar units need to be grouped, i.e., clustered. In this paper, different approaches to clustering of the SOM are considered. In particular, the use of hierarchical agglomerative clustering and partitive clustering using-means are investigated. The two-stage procedure---first using SOM to produce the prototypes that are then clustered in the second stage---is found to perform well when compared with direct clustering of the data and to reduce the computation time. - Neural Networks , 2003 "... We study the application of Self-Organizing Maps for the analyses of remote sensing spectral images. Advanced airborne and satellite-based imaging spectrometers produce very high-dimensional spectral signatures that provide key information to many scientific inves- tigations about the surface and at ..." Cited by 15 (12 self) Add to MetaCart We study the application of Self-Organizing Maps for the analyses of remote sensing spectral images. Advanced airborne and satellite-based imaging spectrometers produce very high-dimensional spectral signatures that provide key information to many scientific inves- tigations about the surface and atmosphere of Earth and other planets. These new, so- phisticated data demand new and advanced approaches to cluster detection, visualization, and supervised classification. 
In this article we concentrate on the issue of faithful topological mapping in order to avoid false interpretations of cluster maps created by an SOM. We describe several new extensions of the standard SOM, developed in the past few years: the Growing Self-Organizing Map, magnification control, and Generalized Relevance Learning Vector Quantization, and demonstrate their effect on both low-dimensional traditional multi-spectral imagery and 200-dimensional hyperspectral imagery. , 2006 "... We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches ca ..." Cited by 8 (5 self) Add to MetaCart We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior comparing both neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case. 1 , 2004 "... Self-organizing maps (SOM) are widely used for their topology preservation property: neighboring input vectors are quantized (or classified) either on the same location or on neighboring ones on a predefined grid. SOM are also widely used for their more classical vector quantization property. We show in this paper that using SOM instead o ..." Cited by 7 (4 self) Add to MetaCart Self-organizing maps (SOM) are widely used for their topology preservation property: neighboring input vectors are quantized (or classified) either on the same location or on neighboring ones on a predefined grid. SOM are also widely used for their more classical vector quantization property. We show in this paper that using SOM instead of the more classical simple competitive learning (SCL) algorithm drastically increases the speed of convergence of the vector quantization process. This fact is demonstrated through extensive simulations on artificial and real examples, with special SOM (fixed and decreasing neighborhoods) and SCL algorithms. 2003 Elsevier B.V. All rights reserved. Keywords: Self-organizing maps; Vector quantization; Convergence speed; Acceleration 1. Vector quantization (VQ) is a widely used tool in many data analysis fields. It consists in replacing a continuous distribution by a finite set of quantizers while minimizing a predefined distortion criterion. Vector quantization may be used in clustering or classification tasks, where the aim is to determine groups (clusters) of data sharing common properties. It can also be used in data compression, where the aim is ... Corresponding author. Tel.: +32-10-47-25-51; fax: +32-10-47-25-98. - IEEE Trans. Neural Net , 2007 "... Abstract—In this paper, we examine the scope of validity of the explicit self-organizing map (SOM) magnification control scheme of Bauer et al. (1996) on data for which the theory does not guarantee success, namely data that are n-dimensional, n ≥ 2, and whose components in the different dimensions ar ..."
Cited by 7 (5 self) Add to MetaCart Abstract—In this paper, we examine the scope of validity of the explicit self-organizing map (SOM) magnification control scheme of Bauer et al. (1996) on data for which the theory does not guarantee success, namely data that are n-dimensional, n 2, and whose components in the different dimensions are not statistically independent. The Bauer et al. algorithm is very attractive for the possibility of faithful representation of the probability density function (pdf) of a data manifold, or for discovery of rare events, among other properties. Since theoretically unsupported data of higher dimensionality and higher complexity would benefit most from the power of explicit magnification control, we conduct systematic simulations on “forbidden ” data. For the unsupported =2 cases that we investigate, the simulations show that even n though the magnification exponent achieved achieved by magnification control is not the same as the desired desired, achieved systematically follows desired with a slowly increasing positive offset. We show that for simple synthetic higher dimensional data information, theoretically optimum pdf matching ( achieved =1) can be achieved, and that negative magnification has the desired effect of improving the detectability of rare classes. In addition, we further study theoretically unsupported cases with real data. Index Terms—Data mining, high-dimensional data, map magnification, self-organizing maps (SOMs). "... This chapter provides an overview on the self-organised map (SOM) in the context of manifold mapping. It first reviews the background of the SOM and issues on its cost function and topology measures. Then its variant, the visualisation induced SOM (ViSOM) proposed for preserving local metric on the ..." Cited by 4 (0 self) Add to MetaCart This chapter provides an overview on the self-organised map (SOM) in the context of manifold mapping. It first reviews the background of the SOM and issues on its cost function and topology measures. Then its variant, the visualisation induced SOM (ViSOM) proposed for preserving local metric on the map, is introduced and reviewed for data visualisation. The relationships among the SOM, ViSOM, multidimensional scaling, and principal curves are analysed and discussed. Both the SOM and ViSOM produce a scaling and dimension-reduction mapping or manifold of the input space. The SOM is shown to be a qualitative scaling method, while the ViSOM is a metric scaling and approximates a discrete principal curve/surface. Examples and applications of extracting data manifolds using SOM-based techniques are presented. "... For many years, artificial neural networks (ANNs) have been studied and used to model information processing systems based on or inspired by biological neural structures. They not only can provide solutions with improved performance when compared with traditional problem-solving methods, but ..." Cited by 1 (0 self) Add to MetaCart For many years, artificial neural networks (ANNs) have been studied and used to model information processing systems based on or inspired by biological neural structures. They not only can provide solutions with improved performance when compared with traditional problem-solving methods, but , 2008 "... 1 Self Organizing Map algorithm and distortion measure We study the statistical meaning of the minimization of distortion measure and the relation between the equilibrium points of the SOM algorithm and the minima of distortion measure. 
If we assume that the observations and the map lie in an compac ..." Add to MetaCart 1 Self Organizing Map algorithm and distortion measure We study the statistical meaning of the minimization of distortion measure and the relation between the equilibrium points of the SOM algorithm and the minima of distortion measure. If we assume that the observations and the map lie in an compact Euclidean space, we prove the strong consistency of the map which almost minimizes the empirical distortion. Moreover, after calculating the derivatives of the theoretical distortion measure, we show that the points minimizing this measure and the equilibria of the Kohonen map do not match in general. We illustrate, with a simple example, how this occurs. "... We examine the scope of validity of the explicit SOM magnification control scheme of Bauer, Der, and Herrmann [1], on data for which the theory does not guarantee success, namely data that are n-dimensional, n ≥ 2 and whose components in the different dimensions are not statistically independent. Th ..." Add to MetaCart We examine the scope of validity of the explicit SOM magnification control scheme of Bauer, Der, and Herrmann [1], on data for which the theory does not guarantee success, namely data that are n-dimensional, n ≥ 2 and whose components in the different dimensions are not statistically independent. The Bauer et al. algorithm is very attractive for the possibility of faithful representation of the pdf of a data manifold, or for discovery of rare events, among other properties. Since theoretically unsupported data of higher dimensionality and higher complexity would benefit most from the power of explicit magnification control, we conduct systematic simulations on “forbidden ” data. For the unsupported n = 2 cases that we investigate the simulations show that even though the magnification exponent αachieved achieved by magnification control is not the same as the desired αdesired, αachieved systematically follows αdesired with a slowly increasing positive offset. We show that for simple synthetic higher-dimensional data information theoretically optimum pdf matching (α achieved = 1) can be achieved, and that negative magnification has the desired effect of improving the detectability of rare classes. In addition we further study theoretically unsupported cases with real data.
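None of the cited algorithms are reproduced here, but as a point of reference for the abstracts above, a minimal self-organizing map update loop looks roughly like the following Python/NumPy sketch (plain SOM only: no growing map, magnification control, or relevance learning; all parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=20, n_iter=5000, lr0=0.5, sigma0=3.0):
    # Minimal 1-D SOM: units on a line, Gaussian neighbourhood, decaying rate and width.
    dim = data.shape[1]
    w = rng.uniform(data.min(), data.max(), size=(n_units, dim))   # prototype vectors
    grid = np.arange(n_units)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        frac = t / n_iter
        lr = lr0 * (1 - frac)                      # learning-rate decay
        sigma = sigma0 * (1 - frac) + 0.5          # neighbourhood-width decay
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))       # best-matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood function
        w += lr * h[:, None] * (x - w)             # move the BMU and its neighbours toward x
    return w

# Toy data: a noisy 2-D ring; the 1-D map should unfold along it.
theta = rng.uniform(0, 2 * np.pi, 1000)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(1000, 2))
prototypes = train_som(data)
print(prototypes[:5])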
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=272637","timestamp":"2014-04-21T16:07:04Z","content_type":null,"content_length":"38595","record_id":"<urn:uuid:77dd262e-b758-4fe4-8df6-55bcf5b65d9a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Beverly, MA Calculus Tutor
Find a Beverly, MA Calculus Tutor
...Please note, - I am available after 6pm on weekdays and during the weekends. - Tutoring takes place at MIT in a quiet study space equipped with whiteboards. Visit me at, http://www.wyzant.com/ Tutors/engineeringhelp I frequently tutor calculus, and I have studied it from many angles. In high school, I scored 5/5 on both AP Calculus AB and BC examinations. 8 Subjects: including calculus, physics, SAT math, differential equations
...I have an undergraduate degree from Harvard University in Computer Science. Computer Science is the art/science of creating computer programs. There are a large number of topics, like networks, graphics, operating systems, data structures, databases, etc. 19 Subjects: including calculus, physics, geometry, SAT math
...I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in materials physics at Harvard, have completed a postdoc in g... 16 Subjects: including calculus, physics, geometry, biology
...I have taught math for an SAT prep company. I teach the necessary concepts in order to obtain the answers and also how to use more efficient and quicker methods. SAT preparation requires lots of practice and I offer a study schedule based on the amount of time which remains until the exam and my assessment of the student's level. 24 Subjects: including calculus, chemistry, physics, statistics
...Most of my tutoring experience was with physics and calculus material. There are a number of aspects of algebra, geometry, trigonometry, and pre-calculus that I have continued to use and are fundamental to calculus and other advanced mathematics. I have continued to work with teens as I have coached about 10 years in youth sports. 10 Subjects: including calculus, physics, geometry, algebra 2
{"url":"http://www.purplemath.com/Beverly_MA_Calculus_tutors.php","timestamp":"2014-04-18T11:02:34Z","content_type":null,"content_length":"24068","record_id":"<urn:uuid:9d26f414-908f-48f9-abfc-9075f959e560>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Fallacy of Propositional Logic
Type: Formal Fallacy
Propositional logic is a system which deals with the logical relations that hold between propositions taken as a whole, and those compound propositions which are constructed from simpler ones with truth-functional connectives. For instance, consider the following proposition:
Today is Sunday and it's raining.
This is a compound proposition containing the simpler propositions:
• Today is Sunday.
• It's raining.
Moreover, the connective "and" which joins them is truth-functional, that is, the truth-value of the compound proposition is a function of the truth-values of its components. The truth-value of a conjunction, that is, a compound proposition formed with "and", is true if both of its components are true, and false otherwise. Propositional logic studies the logical relations which hold between propositions as a result of truth-functional combinations; for instance, the example conjunction implies "today is Sunday". There are a number of other truth-functional connectives in English in addition to conjunction, and the ones most frequently studied in propositional logic are negation ("not"), disjunction ("or"), the conditional ("if-then"), and the biconditional ("if and only if").
Since a validating argument form is one in which it is impossible for the premisses to be true and the conclusion false, you can use the truth-functions to determine which forms in propositional logic are validating. For instance, the earlier example involving conjunction is an instance of the following argument form:
p and q. Therefore, p.
This form is validating because, no matter what propositions we put for p and q, if the premiss is true, then both p and q will be true, which means that the conclusion will also be true. Thus, to show that a propositional argument form is non-validating, all that you have to do is find an argument of that form which has true premisses and a false conclusion.
Robert Audi (General Editor), The Cambridge Dictionary of Philosophy, 1995.
This discussion of propositional logic is by necessity brief, since I am only trying to give the minimal background required to understand the subfallacies above. For a lengthier explanation of propositional logic, see the following:
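The truth-table test just described is mechanical enough to write down directly; here is an added Python sketch (not part of the original page) that confirms "p and q, therefore p" is validating, while affirming the consequent, used here as a contrasting example, is not:

from itertools import product

def validating(premises, conclusion, variables):
    # Validating iff no assignment makes every premiss true and the conclusion false.
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # this row of the truth table is a counterexample
    return True

# "p and q. Therefore, p." -- validating.
print(validating([lambda e: e['p'] and e['q']], lambda e: e['p'], ['p', 'q']))   # True

# Affirming the consequent: "if p then q; q; therefore p" -- not validating.
implies = lambda a, b: (not a) or b
print(validating([lambda e: implies(e['p'], e['q']), lambda e: e['q']],
                 lambda e: e['p'], ['p', 'q']))                                   # False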
{"url":"http://www.fallacyfiles.org/propfall.html","timestamp":"2014-04-17T18:23:57Z","content_type":null,"content_length":"7780","record_id":"<urn:uuid:022fcade-665e-42c9-80d1-b8c4fdeec359>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
minimization problem (linear programming)
A veterinarian mixes two types of animal food: Food 1 and Food 2. Each unit of Food 1 costs 200 and contains 40 grams of fat, 30 grams of protein and 1200 calories. Each unit of Food 2 costs 250 and contains 80 grams of fat, 60 grams of protein, and 1600 calories. Suppose the vet wants each unit of the final product to yield not more than 360 grams of fat, at least 240 grams of protein and at least 9600 calories. How many grams of each type of ingredient should the vet use to minimize his cost?
*I've defined the variables and constraints, but I'm not so sure about them. The problem is I can't find the feasible region and the optimal mix when the lines are put on a graph. Thanks ahead!
** Let x1 = Food 1; x2 = Food 2
Objective function: minimize Z = 200x1 + 250x2
1. 40x1 + 80x2 ≤ 360
2. 30x1 + 60x2 ≥ 240
3. 1200x1 + 1600x2 ≥ 9600
4. non-negativity constraints (x1, x2 ≥ 0)
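Not from the thread, but one quick way to check the graphical solution is to hand the same model to a linear-programming solver; a sketch using SciPy's linprog (the "greater than or equal to" rows are negated because linprog only accepts "less than or equal to" constraints):

from scipy.optimize import linprog

# Cost per unit of Food 1 and Food 2.
c = [200, 250]

# linprog expects A_ub @ x <= b_ub, so the >= constraints are multiplied by -1.
A_ub = [[ 40,    80],     # fat:      40x1 + 80x2     <= 360
        [-30,   -60],     # protein:  30x1 + 60x2     >= 240
        [-1200, -1600]]   # calories: 1200x1 + 1600x2 >= 9600
b_ub = [360, -240, -9600]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)     # optimal amounts of Food 1 and Food 2, and the minimum cost

Working the corner points by hand, the binding constraints are the fat and calorie limits, which meet at x1 = 6 and x2 = 1.5 for a cost of 1575, so that is the value the solver should report.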
{"url":"http://mathhelpforum.com/math-topics/116458-minimization-problem-linear-programming.html","timestamp":"2014-04-16T13:52:50Z","content_type":null,"content_length":"44761","record_id":"<urn:uuid:9e962301-dbae-41af-8d78-247d423fcb35>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Critical System Cascading Collapse Assessment for Determining the Sensitive Transmission Lines and Severity of Total Loading Conditions
Mathematical Problems in Engineering, Volume 2013 (2013), Article ID 965628, 10 pages
Research Article
^1Faculty of Electrical Engineering, Universiti Teknologi MARA, 13500 Pulau Pinang, Malaysia
^2Faculty of Electrical Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia
^3Advanced Power Solutions Sdn. Bhd., Worldwide Business Centre, Jalan Tinju 13/50, 40000 Shah Alam, Selangor, Malaysia
Received 4 April 2013; Revised 19 June 2013; Accepted 7 July 2013
Academic Editor: Yang Tang
Copyright © 2013 Nur Ashida Salim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a computationally accurate technique for determining the estimated average probability of a system cascading collapse, considering the effect of hidden failure in a protection system. This includes an accurate calculation of the probability of hidden failure, since it has a significant effect on the estimated average probability of system cascading collapse. The estimated average probability of a system cascading collapse is then used to determine the severe loading conditions contributing to a higher risk of system cascading collapse. This information is important because it will assist the utility in determining the maximum level of increase in the system loading condition before a critical power system cascading collapse occurs. Furthermore, the initial tripping of sensitive transmission lines contributing to a critical system cascading collapse can also be determined using the proposed method. Based on the results obtained in this study, selecting an accurate probability of hidden failure is very important because it affects the estimated average probability of a system cascading collapse. A comparative study with other techniques has been carried out to verify the effectiveness of the proposed method in determining sensitive transmission lines.

1. Introduction

In recent years, many blackouts have occurred around the world; one of the most recent happened in India on the 30th and 31st of July 2012 and affected over 620 million people throughout the country, estimated to be about 9% of the world's population [1]. According to the report on the grid disturbance in India [2], the major factor leading to the initiation of the grid disturbances on the 30th and 31st of July 2012 was the weak interconnection between regions. The system was further undermined by a number of transmission line outages connecting the western and northern regions of India. A similar situation occurred in the Arizona-Southern California outages on the 8th of September 2011, where the disconnection of a single transmission line was the main cause of the system disturbance that left approximately 2.7 million people without electrical power [3]. Other major blackouts caused by system cascading collapse have been reported in [4, 5]. Normally, cascading outages of transmission lines are the main reason that leads to a large system blackout [6, 7].
A cascading outage is a sequence of multiple dependent component outages that occur successively in a power system. A cascading outage is usually caused by an initial failure of a transmission line that propagates into a widespread system outage. Various techniques have been used to perform the analysis of system cascading collapse. Dobson et al. [5] applied the branching process model to determine the distribution of cascading outages for a given initial failure. Carreras et al. [7] use the OPA model to identify the overloaded transmission lines with high probability. By applying the OPA model, the proposed technique is capable of recognizing the critical lines which contribute to a high probability of cascading collapse. A hybrid approach was proposed by Chen et al. [8] to study the structural vulnerability of power networks, using the topological structure and the vulnerability of networks under attacks and system failures. This technique takes into account the power flow equations and the effect of hidden failure of a protection system. Shi et al. [9] perform the analysis of cascading collapse by considering the hidden failure effect of a protection relay. Wang et al. [10] use the optimal power flow (OPF) technique to identify chains of events occurring in a system cascading outage. The system cascading collapse is performed by taking into account the total power loss reduction, which is obtained through the optimum dispatch of active and reactive power generation. Wang et al. [11] use the fault chain theory to perform a system cascading collapse initiated by a transmission line failure. The power flow at several lines is used to derive a predictive index, which is then used to instigate subsequent tripping of transmission lines under non-faulty or normal conditions. The process of system cascading collapse is halted once system instability occurs during the subsequent tripping of transmission lines. Finally, the number of line trippings is used to derive a vulnerability index for identifying the sensitive initial line trippings leading to severe system cascading collapse. The literature review above shows that it is important to study the effect of cascading collapse because of its significant impact on a power system. The blackout reports discussed earlier reveal that the main factor leading to a system cascading collapse is the initial tripping of a transmission line. For that reason, it is crucially important to identify the sensitive transmission lines and also the severity of total loading conditions that could initiate system disturbances. Therefore, in the proposed study of cascading collapse, the analysis is carried out to study the impact of different values of hidden failure probability on the estimated average probability of cascading collapse, the severity of total loading conditions, and the sensitive transmission lines. Hidden failure is the main cause of the occurrence of cascading outages which could lead to cascading collapse of a power system. A hidden failure is an unobserved deficiency of a protection system that remains undetected until it is exposed by an unusual system operating condition. In this study, the determination of sensitive transmission lines and the severity of total loading conditions is performed based on the criticality of the system cascading collapse; the overall procedure can be found in Section 3.
The IEEE-RTS79 and IEEE-RTS96 are used as the case studies to validate the effectiveness of the proposed approach in the assessment of system cascading collapse. The assessment of cascading events needs to be conducted regularly in power system operation and planning so that the power system can be protected from disastrous events. Therefore, it is important for the utility and power system planner to identify the severe total loading conditions and the sensitive transmission lines that will cause a significant system cascading collapse.

2. Probability of Exposed Line Incorrect Tripping Caused by Hidden Failure

When a transmission line trips or disconnects from a system, there is a significant probability that the lines connected to either end of the disconnected transmission line might incorrectly trip due to misoperation of their protection relays. These further line trippings are known as hidden failures because they do not become noticeable until they appear at the neighboring lines exposed by the initial line tripping. Relay protection hidden failure is one of the main reasons leading to a system cascading collapse. The probability of exposed line tripping caused by hidden failure should be calculated accurately because of its significant impact on the assessment of system cascading collapse, severe total loading conditions, and sensitive transmission lines. As mentioned in Section 1, the hidden failure caused by defective tripping of an exposed line normally starts with an initial tripping of an overloaded or faulty line. An initial component tripping may result in cascaded tripping by affecting the neighboring components, which becomes a contributing factor in spreading the disturbance and finally causes the whole system to collapse. Hidden failures cannot be detected during normal system operating conditions. However, when a fault or overload occurs, it exposes the neighboring lines and causes unnecessary outages of other equipment. The hidden failure leading to an exposed line tripping is selected according to the probability of incorrect tripping curve shown in Figure 1 [12]. Each exposed line has its own load-dependent probability of incorrect tripping, specifically modeled as an increasing function of the line loading seen by the line protective relay. In Figure 1, the probability takes the same base value for a line loading equal to or lower than the line limit, and then increases linearly with the increase in line loading, based on the power flow results, until the loading reaches 1.4 of the line limit. Based on the excerpt taken from the NERC report, there were 400 cascading collapse events caused by the hidden failure of a line protective relay in the 16 years from 1984 to 1999 [12]. This means that the probability of occurrence of one exposed line tripping event due to hidden failure is very small but cannot be neglected because of its catastrophic effect on the power system. The probability of one exposed line tripping event due to hidden failure can be estimated from the historical record as

(1) probability of one exposed line tripping event = (total number of cascading collapse events due to hidden failure) / (total number of years × 365 days × 24 hours × 60 minutes × 60 seconds).

For that reason, using the historical information [8], the probability of an exposed line tripping event due to hidden failure is given by

(2) probability ≈ 400 / (16 × 365 × 24 × 60 × 60) ≈ 8 × 10^−7.

The probability then increases linearly until it reaches 1, that is, at 1.4 of the line limit.
The probability remains unchanged at 1 for line loading above 1.4 of the line limit [13]. It is therefore important to calculate accurately the exposed line tripping probability caused by the hidden failure, since this will assist the utility in making accurate power system planning decisions.

3. Determination of Sensitive Transmission Lines and Severe Total Loading Condition Based on the Critical Cascading Collapse

This section discusses the procedure used to analyze the probability of cascading collapse, taking into consideration three different case studies of hidden failure probability, namely 8 × 10^−7, 1 × 10^−12, and 1 × 10^−2 [13]. For each case study, the results for the average probability of cascading collapse are used to further analyze the severe total loading condition and also to identify the sensitive transmission lines that would cause critical cascading collapse of a power system. The proposed algorithm shown in Figure 2 begins with an initial tripping event of a transmission line. Simultaneously, the power flow solution is performed taking into account a 10% increase of the total system loading condition. Then, the probability of incorrect tripping is calculated for each exposed line connected adjacent to the tripped line. Random tripping is performed on the exposed lines whose probability of incorrect tripping is higher than the selected value of 8 × 10^−7. In this case study, the value 8 × 10^−7 is obtained from the historical data of transmission line tripping events caused by the effect of hidden failure, as discussed in Section 2. It is used as a benchmark and compared with the other case studies, at the lower end of 1 × 10^−12 and the higher end of 1 × 10^−2, in order to observe the impact on the probability of cascading collapse, severe total loading condition, and sensitive transmission lines. Simultaneously, the conditional probability of tripping is calculated [12] using (3). This process is repeated until there is no exposed line left on which to perform the random tripping event. Then, (4) is used to calculate the tripping event probability by considering all of the conditional probabilities of tripping. For the selected initial line tripping, the simulation is repeated a specified number of times in order to obtain the average probability of cascading collapse, which is calculated using (5). Then, the entire simulation is repeated until the last initial line tripping has been reached. The average probability of cascading collapse is collected for all of the initial line trippings. Equation (6) is then used to calculate the estimated average probability of cascading collapse for identifying the sensitive transmission lines contributing to a critical system cascading collapse. These values are ranked in descending order to identify the sensitive transmission lines at initial tripping. The sensitive transmission lines are obtained by referring to the initial line trippings that lead to a sudden increase of the estimated average probability. Then, the estimated average probability of system cascading collapse, which is used to identify the criticality of system cascading collapse with respect to a severe total loading condition, is calculated using (7). These values are arranged in descending order to identify the severity of total loading conditions that lead to a critical system cascading collapse. The severity of total loading conditions is determined based on the changes of total loading condition that lead to a significant increase of the estimated average probability. 4.
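As a rough, simplified illustration of the simulation loop just described (my sketch, not the authors' code), the snippet below implements the load-dependent incorrect-tripping curve of Figure 1 (flat at the base probability up to the line limit, rising linearly to 1 at 1.4 times the limit) and a Monte Carlo average over repeated random-tripping trials. The power-flow solution, the exposed-line bookkeeping, and equations (3) to (7) are not reproduced; a list of per-line loadings simply stands in for post-contingency power-flow results.

```python
import random

# base probability of an incorrect trip due to hidden failure,
# reconstructed from the 400 recorded events over 16 years (Section 2)
P_HF = 400 / (16 * 365 * 24 * 60 * 60)   # approximately 8e-7

def incorrect_trip_probability(loading, limit, p_base=P_HF):
    """Load-dependent incorrect-tripping probability of an exposed line:
    flat at p_base up to the line limit, linear up to 1.0 at 1.4x the limit."""
    ratio = loading / limit
    if ratio <= 1.0:
        return p_base
    if ratio >= 1.4:
        return 1.0
    return p_base + (ratio - 1.0) / 0.4 * (1.0 - p_base)

def cascade_trial(exposed_loadings, limit=1.0):
    """One random trial: count how many exposed lines trip incorrectly."""
    return sum(random.random() < incorrect_trip_probability(l, limit)
               for l in exposed_loadings)

def average_over_trials(exposed_loadings, n_trials=1000):
    """Monte Carlo average over repeated random-tripping trials."""
    return sum(cascade_trial(exposed_loadings) for _ in range(n_trials)) / n_trials

# toy post-contingency loadings of three exposed lines (fractions of their limits)
print(average_over_trials([0.9, 1.2, 1.5]))
```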
Results and Discussion This section will discuss the estimated average probability of system cascading collapse, and , that takes into account the consequence of hidden failure in a relay protection system. The total loading condition is increased by 10% while maintaining a constant power factor at all buses. An initial tripping event of a transmission line is performed for every increase of total loading condition. The simulation is repeated for 1000 times in order to obtain the . Further analysis was performed on the results of and based on three different cases of hidden failure which are = 8 × 10^−7, = 1 × 10^−12, and = 1 × 10^−2. The = 8 × 10^−7 represents the value of probability of hidden failure obtained in accordance with the historical information of a line tripping event caused by the hidden failure. Meanwhile, = 1 × 10^−2 used in this analysis is obtained from [9]. Furthermore, the analysis is also performed by considering the = 1 × 10^−12 as an additional case study which is lower than the value of = 8 × 10^−7 obtained based on the real data. This analysis is performed in order to observe its significant impact on the system cascading collapse when is smaller than the actual value which is obtained on the historical information. The IEEE-RTS79 and IEEE-RTS96 are used as the case studies to verify the effectiveness and robustness of the proposed approach considered in the cascading collapse assessment. The data for each system can be found in [14, 15], respectively. 4.1. Determination of Sensitive Transmission Lines due to the Effect of Hidden Failure The estimated average probability of cascading collapse is used to identify the sensitive initial lines tripping which would have high tendency to cause a critical system cascading collapse. By referring to the three case studies of conducted on the IEEE-RTS79 and IEEE-RTS96, the sensitive transmission lines are obtained by referring to a significantly large value of which indicates the criticality of a system cascading collapse. The = 8 × 10^−7, = 1 × 10^−12, and = 1 × 10^−2 are the three case studies of probability of exposed line tripping event due to the hidden failure. The results of the sensitive transmission lines for the IEEE-RTS79 and IEEE-RTS96 are shown in Tables 1 and 2, respectively. Table 1 shows the results of the sensitive transmission lines obtained based on the critical system cascading collapse for the IEEE-RTS79. For the three case studies of , the was ranked in descending order to identify the sensitive transmission lines that will lead to a critical system cascading collapse. From the results obtained, it was found that the sensitive transmission line 12-13, line 14–16, and line 12–23 provide significantly large value of compared to other lines in the system and this can also be observed in Figure 3. In Figure 3, the initial tripping of sensitive transmission line 12-13 and line 14–16 leading to a sudden increase of which implies that the system is experiencing a critical cascading collapse. Therefore, major precaution should be given to circumvent from the disconnection of the three sensitive transmission lines which will lead to a critical system cascading collapse. Besides that, the = 8 × 10^−7 yields the highest value of for all initial lines tripping compared to the other two probabilities which are = 1 × 10^−12 and = 1 × 10^−2. 
This implies that the actual information of cascading collapse events that caused by the protection system hidden failure is important to be taken into account in the calculation because it will give a significant difference and an accurate result of . Table 2 represents the results of sensitive transmission lines obtained from the analysis of cascading collapse for the IEEE-RTS96. For the case studies of = 1 × 10^−12 and = 8 × 10^−7, the initial tripping of sensitive transmission line 318–223, line 112-113, and line 113–123 yields a significantly high value of , compared to the initial tripping of other transmission lines. The effect of = 1 × 10^−12, = 8 × 10^−7, and = 1 × 10^−2 can also be observed in Figure 4 whereby critical cascading collapse may occur due to a rapid increase of caused by the initial tripping of the three sensitive transmission lines. In Figure 4, the initial tripping of sensitive transmission line 318–223, line 112-113, and line 113–123 leads to a sudden increase of and these are referring to the three case studies of . For the case study of = 1 × 10^−2, even though the three sensitive lines are not in a congenial sequence compared to the case studies of = 1 × 10^−12 and = 8 × 10^−7, it still needs to be given more attention because disconnection on any of the sensitive transmission lines may cause critical cascading collapse of the system. Based on the results of the sensitive transmission lines obtained in this paper, the proposed approach of cascading collapse is an important and useful method that can be used by the utility which will facilitate them in providing the best planning decision to prevent the critical cascading collapse from happening. 4.2. Criticality of System Cascading Collapse due to the Effect of Hidden Failure and Severe Total Loading Condition In this section, the estimated average probability of cascading collapse is used in obtaining the severity of total loading condition which may lead to a critical system cascading collapse. From the three different case studies of hidden failure which are = 8 × 10^−7, = 1 × 10^−12, and = 1 × 10^−2, the results for the are calculated and it is tabulated in Tables 3 and 4 corresponding to the IEEE-RTS79 and IEEE-RTS96, respectively. By referring to Table 3, the is also obtained in accordance with the total loading condition increased from 150% to 240% for the IEEE-RTS79. The results shown in Table 3 are also depicted in Figure 5. In conjunction to the three case studies of hidden failure, the varies as the total loading condition increased by 10%. In addition, an upward trend of is quite significant for all the three case studies when the total loading condition increased above 170%. Moreover, the highest value of is obtained when the total loading condition increased above 230%. This information can be useful to the utility for estimating the maximum allowable level of total loading condition before the system is afflicted with the highest risk of cascading collapse. There are no significant difference between the obtained on the = 8 × 10^−7 and = 1 × 10^−12. For an example, the results of = 0.50190445 and = 0.50233278 are relatively similar and these are referring to the = 1 × 10^−12 and = 8 × 10^ −7, respectively, obtained at 240% increase of total loading condition. Therefore, any value that is lower than = 1 × 10^−12 will produce similar result of as in the case study of = 8 × 10^−7. However, the twovalues are providing the results of with higher risk compared to the obtained based on = 1 × 10^−2. 
This indicates that it is important to choose the correct value of the hidden failure probability in order to obtain an accurate estimated average probability of system cascading collapse. Table 4 illustrates the results obtained by increasing the total loading condition from 130% to 220% for the IEEE-RTS96. For this case study, the analysis of cascading collapse was performed based on the hidden failure probabilities of 8 × 10^−7, 1 × 10^−12, and 1 × 10^−2. The results tabulated in Table 4 are also illustrated in Figure 6. It is obvious that the results are rather small, possibly due to a stable system condition assisted by a large number of generating units and transmission lines. Even though the estimated average probability is relatively small, it cannot be ignored because its impact on the power system could be disastrous. A rapid increase can be seen for all three case studies when the total loading condition is increased beyond 200%. It continues to increase significantly until it reaches its highest value at 220% of total loading condition. This information is useful for identifying which total loading condition contributes to the criticality of a system cascading collapse. From the three case studies, the values 8 × 10^−7 and 1 × 10^−12 indicate higher risk compared with the results obtained for 1 × 10^−2. This points out that it is important to select an accurate value of the hidden failure probability in order to obtain an accurate estimated average probability of system cascading collapse. A detailed analysis of cascading collapse has been carried out on the IEEE-RTS79 and IEEE-RTS96 taking into consideration the three different values of the probability of hidden failure, namely 8 × 10^−7, 1 × 10^−12, and 1 × 10^−2. From the results obtained, it is important to determine an accurate value of the probability of hidden failure, since this will significantly affect the estimated average probability of cascading collapse and the criticality of system cascading collapse with respect to a severe total loading condition.

4.3. Performance Comparison between the Cascading Collapse Methods

A comparative study was performed on the sensitive transmission lines determined by the proposed method and by the fault chain theory discussed in [11]. It is worth mentioning that an initial tripping of a sensitive transmission line will instigate a critical system cascading collapse. The robustness of both methods in determining the sensitive transmission lines is tested on the IEEE 14-bus system, which comprises 20 transmission lines, 5 generating units, and 11 load buses. The total load of the system is 259 MW and the total generation capacity is 272 MW. The process involved in the proposed method is comparatively similar to the fault chain theory, in that the subsequent tripping of transmission lines is executed until system instability occurs. Eventually, major tripping of transmission lines may leave the total generation capacity inadequate for the total load demand, and this will lead to system instability. The disadvantage of the fault chain theory is that it does not consider the incorrect line tripping caused by protection relay hidden failure. This may yield inaccurate results for the sensitive transmission lines. Table 5 and Figure 7 present the results of initial line trippings ranked in descending order according to the estimated average probability and the vulnerability index determined by the proposed method and the fault chain theory [11], respectively.
It is obvious that the initial transmission line trippings were ranked differently by the two methods. Hence, the sensitive transmission lines obtained will differ between the two methods. The proposed method with hidden failure identifies line 4-5 as the most sensitive transmission line, which differs from lines 1-2 and 2-4 identified as most sensitive by the fault chain theory. From the results, the proposed method identifies line 4-5 as the most sensitive transmission line based on the largest value of the estimated average probability compared with the other lines in the system. On the other hand, the fault chain theory identifies lines 1-2 and 2-4 as the most sensitive based on the largest values of the vulnerability index compared with the rest of the transmission lines. However, the proposed method with hidden failure gives a clearer picture, with results that are larger and more explicit than the vulnerability index values determined by the fault chain theory. The proposed method with hidden failure also makes it easier to identify the sensitive transmission lines, namely those whose estimated average probability is at least 50%. Therefore, lines 4-5, 2-3, 2-4, 1-2, and 9-14 are considered sensitive according to the proposed method with hidden failure. This implies that the inclusion of hidden failure in the proposed method provides more accurate results for the sensitive transmission lines than the fault chain theory, which does not consider hidden failure. In particular, the historical information on incorrect line tripping caused by hidden failure is the main contribution that improves the performance of the proposed method in providing more accurate results for the sensitive transmission lines. According to [16], cascaded tripping of transmission lines is the main cause of system blackouts, which may disrupt the economic and social life of a nation. As a result, it is crucially important for the utility and power system planner to identify the sensitive transmission lines accurately in order to avoid the disastrous impact of a system cascading collapse.

5. Conclusions

The escalating number of critical cascading collapses in recent years has revealed an urgent need for new techniques in system planning and operation. A critical system cascading collapse can be triggered by just an initial tripping event of a transmission line. This paper has discussed the estimated average probability of cascading collapse, evaluated by assuming that each transmission line has a different load-dependent probability of incorrect tripping due to hidden failure. In this paper, the estimated average probability of cascading collapse was determined based on three different case studies of hidden failure probability. The results have shown that it is imperative to select an accurate probability of hidden failure, as it has a significant effect on the estimated average probability of cascading collapse. The evaluation of critical cascading events due to hidden failure should be carried out periodically in power system operation and planning in order to protect the power system from catastrophic events. The estimated average probability of cascading collapse was also analyzed to determine the sensitive initial transmission line trippings and the severity of the total loading condition.
For that reason, precautions and necessary actions should be taken by the utility and power system planner to ensure that severe total loading conditions do not occur and that sensitive transmission lines are well preserved and maintained, in order to avoid the catastrophic impact of a system cascading collapse. Comparison with the fault chain theory has shown that the proposed method with hidden failure provides more accurate results for the sensitive transmission lines.

Nomenclature
• Probability of exposed line incorrect tripping caused by hidden failure
• Total number of cascading collapse events due to hidden failure
• Total number of years when the events of cascading collapse occur
• Total number of days in a year, that is, 365 days
• Total number of hours in a day, that is, 24 hours
• Total number of minutes in an hour, that is, 60 minutes
• Total number of seconds in a minute, that is, 60 seconds
• Conditional probability of tripping in state
• Product of tripping events probability
• Probability of exposed transmission line encountering the random tripping event in state
• Probability of the exposed transmission line not encountering the random tripping event in state
• Total number of system states at initial tripping
• Total number of iterations to perform the random tripping
• Total number of steps for the increase of total loading condition
• Total number of transmission lines in the system
• Estimated average probability of cascading collapse used to identify the sensitive transmission lines contributing to a critical system cascading collapse
• Estimated average probability of system cascading collapse used to identify the criticality of system cascading collapse prior to a severe total loading condition

Acknowledgments
The authors would like to thank the Research Management Institute (RMI), Universiti Teknologi MARA, Malaysia, and the Ministry of Higher Education (MOHE), Malaysia, through research grant 600-RMI/ERGS 5/3 (18/2012) for the financial support of this research. The authors would also like to express their sincere gratitude to Professor Mahmud Fotuhi Firuzabad from Sharif University of Technology, Iran, for his continuous support in completing this research.

References
1. H. Pidd, “India blackouts leave 700 million without power,” http://www.guardian.co.uk/world/2012/jul/31/india-blackout-electricity-power-cuts.
2. A. S. Bakshi, A. Velayutham, S. C. Srivastava et al., “Report of the Enquiry Committee on Grid Disturbance in Northern Region on 30th July 2012 and in Northern, Eastern & North-Eastern Region on 31st July 2012,” New Delhi, India, 2012.
3. FERC and NERC, “Arizona-Southern California Outages on September 8, 2011: Causes and Recommendations,” 2012.
4. M. Vaiman, K. Bell, Y. Chen et al., “Risk assessment of cascading outages: methodologies and challenges,” IEEE Transactions on Power Systems, vol. 27, no. 2, pp. 631–641, 2012.
5. I. Dobson, J. McCalley, and C. C. Liu, Fast Simulation, Monitoring and Mitigation of Cascading Failure, Power Systems Engineering Research Center, 2010.
6. I. Dobson, “Estimating the propagation and extent of cascading line outages from utility data with a branching process,” IEEE Transactions on Power Systems, vol. 27, no. 4, pp. 2146–2155, 2012.
7. B. A. Carreras, D. E. Newman, and I. Dobson, “Determining the vulnerabilities of the power transmission system,” in Proceedings of the 45th Hawaii International Conference on System Sciences (HICSS '12), pp. 2044–2053, January 2012.
8. G. Chen, Z. Y. Dong, D. J. Hill, G. H. Zhang, and K. Q. Hua, “Attack structural vulnerability of power grids: a hybrid approach based on complex networks,” Physica A, vol. 389, no. 3, pp. 595–603, 2010.
9. Z. Shi, L. Shi, Y. Ni, L. Yao, and M. Bazargan, “Identifying chains of events during power system cascading failure,” in Proceedings of the Asia-Pacific Power and Energy Engineering Conference (APPEEC '11), March 2011.
10. S.-P. Wang, A. Chen, C.-W. Liu, C.-H. Chen, and J. Shortle, “Rare-event splitting simulation for analysis of power system blackouts,” in Proceedings of the IEEE Power and Energy Society General Meeting, July 2011.
11. A. Wang, Y. Luo, G. Tu, and P. Liu, “Vulnerability assessment scheme for power system transmission networks based on the fault chain theory,” IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 442–450, 2011.
12. J. Chen, J. S. Thorp, and I. Dobson, “Cascading dynamics and mitigation assessment in power system disturbances via a hidden failure model,” International Journal of Electrical Power and Energy Systems, vol. 27, no. 4, pp. 318–326, 2005.
13. N. A. Salim, M. M. Othman, I. Musirin, and M. S. Serwan, “Cascading collapse assessment considering hidden failure,” in Proceedings of the 1st International Conference on Informatics and Computational Intelligence (ICI '11), pp. 318–323, December 2011.
14. C. Grigg and P. Wong, “The IEEE reliability test system-1996: a report prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee,” IEEE Transactions on Power Systems, vol. 14, no. 3, pp. 1010–1020, 1999.
15. P. M. Subcommittee, “IEEE reliability test system,” IEEE Transactions on Power Apparatus and Systems, vol. 98, no. 6, pp. 2047–2054, 1979.
16. M. Bruch, V. Munch, M. Aichinger, M. Kuhn, M. Weymann, and G. Schmid, “Power blackout risks. Risk management options. Emerging risk initiative—position paper,” November 2011.
{"url":"http://www.hindawi.com/journals/mpe/2013/965628/","timestamp":"2014-04-19T13:35:43Z","content_type":null,"content_length":"189825","record_id":"<urn:uuid:025347c8-d870-465d-b703-34cffb1bb7aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Singularities or discontinuities

#1 (May 26th 2009, 01:54 PM, Junior Member): OK, I understand how to do it, but I'm confused about when a singularity is removable and when it is non-removable. Here is the question: Find any singularities or discontinuities in the following functions:
(answer: non-removable singularity at x = 3, removable singularity at x = -3, put f(x) = -1/6)
(answer: removable singularity at x = π/2, put f(π/2) = -1)

#2 (02:03 PM): A discontinuity is removable if the limit exists. Remember, continuity means lim x->a f(x) = f(a), which implies:
1. f(a) is defined
2. lim x->a f(x) exists
3. 1 = 2
For example, lim as x->-3 = -1/6, so if we define f(-3) = -1/6 all conditions are met. lim as x->3 DNE, so condition 2 fails and the discontinuity cannot be removed.

#3 (02:15 PM, Junior Member): Why is the lim x->3 = -1/6?!? I found it to be -infinity.

#4 (02:18 PM): Re-read the post. I said lim x->-3 = -1/6 and lim x->3 DNE.

#5 (02:18 PM): The simple idea is that if $f(x)$ is undefined at a point $x_0$ but the limit at the point $\lim_{x \to x_0}f(x)=c < \infty$ exists, then we can extend the function to the value c at $x_0$, i.e. $f(x_0)=c$. Edit: Geez, I am really late.

#6 (02:21 PM, Junior Member): I found the same answer!? Where do you subtract the limit?!

#7 (02:25 PM): I'm not sure what you mean by "subtract the limit"?

#8 (02:26 PM, Junior Member): Hmm, it's OK, I have understood now. Thanks a lot!
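The functions themselves are not shown in the thread as scraped, but the stated answers are consistent with, for example, f(x) = (x + 3)/(x^2 - 9) for the first part and g(x) = cos(x)/(x - pi/2) for the second. Treating those purely as illustrative guesses, the limits behind the answers can be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x')

f = (x + 3) / (x**2 - 9)         # illustrative guess for the first function
g = sp.cos(x) / (x - sp.pi / 2)  # illustrative guess for the second function

print(sp.limit(f, x, -3))         # -1/6 -> removable: define f(-3) = -1/6
print(sp.limit(f, x, 3, '+'))     # oo   -> non-removable singularity at x = 3
print(sp.limit(f, x, 3, '-'))     # -oo
print(sp.limit(g, x, sp.pi / 2))  # -1   -> removable: define g(pi/2) = -1
```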
{"url":"http://mathhelpforum.com/calculus/90608-singularities-discontinuities.html","timestamp":"2014-04-17T14:36:19Z","content_type":null,"content_length":"50121","record_id":"<urn:uuid:c9f81c34-d80c-45b5-a240-47dc27985fee>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
harmonic function on surface

Dear all, I am looking for reference books about real-valued harmonic functions on complete Riemannian surfaces; do you have any reference in mind about this? I found some books about harmonic functions on the plane (but mainly discussing conformal maps) or about general harmonic maps (which is too general to possess nice properties). They do not fit my need... Precisely, I would like to understand the zero set (for example, is it a one dimensional manifold?), and the existence and linearity of such functions. Thanks for your help in advance!
dg.differential-geometry ca.analysis-and-odes

The zero set will have prong singularities, like the zeroes of $Re(z^n)$ or $Im(z^n)$, at least. Meeks used to carry around a book by Rick Schoen that had lots of information about harmonic maps on Riemann surfaces. I tried to find it but it's been too long ago to remember. Lots of theorems about minimal surfaces are proved via arguments with harmonic mappings, as the coordinate functions are harmonic in the conformal structure underlying the pulled-back Riemannian metric. – Charlie Frohman Aug 10 '11 at 13:46
Thank you Charlie for the examples. (I will try to find the book of R. Schoen.) – Chih-Wei Chen Aug 12 '11 at 14:36

1 Answer

Two remarks:
1. the notion of harmonic function on a surface is conformally invariant, so if your question is local then it is about harmonic functions on $C$.
2. on a surface, a harmonic function is the same as the real part of a holomorphic function, see wikipedia. So you can reformulate your question on zeros as a question on the inverse image of the real line by a holomorphic function defined on (a subset of) $C$.

Thank you Jean-Marc, I had found these two remarks but I cannot figure out more "explicit" properties from them in general. However, I think they will be helpful when certain particular cases are discussed. Thanks again. – Chih-Wei Chen Aug 12 '11 at 14:34
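A standard local example behind Charlie Frohman's comment (my illustration, not part of the thread) shows why the zero set can fail to be a one dimensional manifold at critical points of the function:

```latex
% In a local conformal coordinate z = x + iy, take the harmonic function
%   u(x,y) = Re(z^n) = r^n cos(n*theta).
% For n = 2 this is u = x^2 - y^2, whose zero set is the union of the two
% lines y = x and y = -x: four "prongs" meeting at the origin. In general
% the zero set of Re(z^n) consists of 2n rays through 0, so it is a
% 1-manifold only away from the critical points of u.
\[
  u(x,y) = \operatorname{Re}\,(x+iy)^{n}, \qquad \Delta u = 0, \qquad
  u^{-1}(0) = \{\, r e^{i\theta} : \cos(n\theta) = 0 \,\} \cup \{0\}.
\]
```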
{"url":"http://mathoverflow.net/questions/72569/harmonic-function-on-surface/72655","timestamp":"2014-04-16T20:13:51Z","content_type":null,"content_length":"52680","record_id":"<urn:uuid:5e37bc56-a910-416b-9057-fb4009f0564c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
TCSS 343 Notes Directory of all slides and notes Note: Slides/notes are usually modified versions of those provided by the author of this book (Levitin) or of the previous book I used (Goodrich). Also note that most of the class is done on the board (not via slides), so slides are sometimes not polished in terms of formatting. • Week 1: Introductory analysis slides; Web archive of good loop invariants link. • Week 2: BigOh, Brute Force, Divide and Conquer slides. • Week 3: RecurrenceTree method, analysis and recurrence equation practice sheets: a newer one , and an older one. Also, a midterm1 review sheet is now available. Closest pair notes. • Week 4: Dynamic Programming and Knapsack slides.notes on Binary Tree bounding . Convex Hull problem not covered this quarter, QuickHull Animation, Convex Hull notes. • Week 5: More dynamic programming • Week 6: Graphs and Floyd-Warshall slides. Also Decrease & Conquer, DFS, BFS slides. • Week 7: Midterm 2 review sheet. Also here are some dynamic programming practice problems; dynamicpractice1 (with solutions in solution directory), and dynamicpractice2. • Week 8: Greedy Algorithm slides (including Prim's algorithm). • Week 9: Transform and conquer slides (including heapsort). Heapsort animation. Strongly connected components slides. • Week 10: Lower bounds slides, Final Review sheet
{"url":"http://courses.washington.edu/tcss343/slides/slides.shtml","timestamp":"2014-04-16T10:22:26Z","content_type":null,"content_length":"4858","record_id":"<urn:uuid:bbae80d6-9a7b-47cd-8408-f14539320afe>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
maths tutor in Cheetwood | Gumtree Other • Maths and Statistics Tutor 3 days ago I am a PhD student at Manchester University. I offer tuition in areas of mathematics and statistics, at either the high school or university level. In addition to tutoring, I am also able... Rusholme, Manchester • Maths Tutor Very Competitive prices starting with £15 per lesson 5 days ago I am a teacher with 15 year experience teaching in a high school and tutoring privately, I offer Maths tuition to students studying for Primary Sats, GCSE, IGCSE and AS/A2 Maths exams. I... Didsbury, Manchester • Experienced Maths & Physics Tutor. GCSE, A-level and Undergrad. 8 days ago Hello, I am an experienced maths and physics tutor who has helped A-level and undergraduates in their exam preparation. I am a patient teacher and like to convey a greater understanding... Trafford, Manchester • Manchester Tutor, Physics, Maths, Chemistry, Bio 9 days ago I am a Phd Scholar at the University of Manchester and offer home tuitions in Maths, Physics, Chemistry, Bio, English, General Science and Urdu languages contact me at 0746 0500 282 Thanks Cheetham Hill, Manchester • Maths Tutors in Didsbury 12 days ago Maths Doctor offers quality, personalised maths tuition to suit you or your child. Dedicated to boosting grades and improving confidence in maths, we provide professional, tailor-made... Didsbury, Manchester • Private Tutors in Huddersfield - English & Maths Tuition 12 days ago We are two experienced tutors, both educated to postgraduate level, offering tuition in Maths, English and ESOL in Marsden, Huddersfield. Tuition is Chemistry and Biology is also... Huddersfield, Marsden • Maths Tutor prices starting with £15 per lesson, Primary Sats, GCSE, IGCSE and A Level 12 days ago I am a qualified teacher who holds PGCE, MSc and BSc in Mathematics, and currently teaching in a high school. I offer Maths tuition from an experienced and qualified tutor to students... Didsbury, Manchester • English tuition primary and secondary - maths ks1-3- home tutor 19 days ago I am currently a secondary school teacher in Manchester. I specialise in teaching both Language and literature from key stage 2 to 4. I work within intervention and specialise in ensuring... • Maths Tutor, All Levels Uinversity A Level, IGCSE, GCSE, Sats and Primary 19 days ago I am a qualified teacher with PGCE, MSc and BSc in mathematics and currently teaching in a high school. I offer Maths tuition from an experienced tutor to students studying for Primary... Didsbury, Manchester • Maths tuition, physics tuition, science tuition. Professional & friendly lady tutor in Bolton area. 21 days ago A Star Science and Maths Tutoring Bolton Specialists in maths and physics tutoring from age 11 to degree level and dual science to AS level. GCSE/AS/A2 Degree qualified. Enhanced CRB... • GCSE and AS/A Level Maths and Physics tutor - including Further Maths 21 days ago I am a BSc (Hons) Physics graduate from the University of Manchester, and am currently taking some time out of an MSc in Nuclear Science and Technology from the same institution. I worked... Didsbury, Manchester • Maths One-to-One Tutor Competitive Prices for AS/A2l, IGCSE, GCSE and Sats 26 days ago I hold a BSc, MSc, and PGCE in Mathematics and currently teaching in a high school I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats,... Didsbury, Manchester • Maths Tutor, KS3 and GCSE 1 month ago A full-time Maths Teacher in Manchester. 
Completed the Teach First Graduate scheme. Available for tuition at weekends or late afternoons/ evenings during the week. South or central... Trafford, Manchester • Maths A Level,IGCSE, GCSE, and Sats Private Tutor 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE, IGCSE and AS/A2 Maths exams. I have BSc, MSc and PGCE in Maths, and Currently... Didsbury, Manchester • Maths Tutor for Sats, GCSE AQA Edexcel IGCSE, A Level 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE and AS/A2 Maths exams. I have 15 years experience of teaching students at all... Didsbury, Manchester • Experienced Maths Tutor 1 month ago QTS Qualified Maths tutor available in Manchester area. Available weekday evenings after 5.30. Over 7 years experience teaching in schools including KS2 SATS preparation and KS3. From £20... • Maths Tutor in Oldham for GCSE, A-Level, Further Maths, STEP, and University Level Students. 1 month ago I am an experienced freelance maths tutor living in Oldham. I have a masters degree in maths and am able to tutor O Level GCSE, A-Level, Further Maths, STEP, and some 1st year and 2nd... Tamworth Street, Oldham • All Examination boards A Level, GCSE, IGCSE and Sats Maths Tutor 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE, IGCSE and AS/A2 Maths exams. I have 15 years experience of teaching students at... Didsbury, Manchester • Maths Tutor South Manchester and Cheshire 1 month ago Hi, my name is Kevin. I'm a friendly and approachable full time Maths tutor with a patient and empathetic attitude. I studied Maths at Uni after gaining 11 A grades at GCSE and 4 A grades... • KS1 KS2 KS3 GSCE A-Level Maths Chemistry Physics Biology Science ICT Tutor 1 month ago Hi I`m Aqib, а 3rd year medicаl student аt the University оf Liverpооl. I`m аn energetic, enthusiаstic, friendly аnd very аpprоаchаble persоn. I hugely enjоy wоrking with peоple in grоups... Trafford, Manchester • Maths Tutor OCR, AQA, Edexcel, A Level, GCSE, IGCSE and Sats 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE, IGCSE and AS/A2 Maths exams. I have BSc, MSc and PGCE in Maths, and Currently... Didsbury, Manchester • Tutor required for Maths and History GCSE 1 month ago A tutor required for tutions for GCSE History and Maths. Please message me with the rates and availability . Many thanks Whalley Range, Manchester • QTS Qulaified Maths Tutor 1 month ago I have been teaching for over 8 years and have in depth experience teaching Year 6 KS2 SATS. I can arrange to tutor around the Manchester City centre area. I am available evenings and... • Maths Tutor Guarantee Success (A Level, GCSE and Sats) 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE, IGCSE and AS/A2 Maths exams. I have BSc, MSc and PGCE in Maths, and Currently... Didsbury, Manchester • A Level,IGCSE, GCSE and Sats Maths Tutor 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE and AS/A2 Maths exams. I have BSc, MSc and PGCE in Mathematics, and currently... Didsbury, Manchester • Maths Tutor 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE , IGCSE and AS/A2 Maths exams. 
I have 15 years experience of teaching students at... Didsbury, Manchester • Maths Tutor in the Salford area 1 month ago I graduated from Lancaster University in July 2013 with BSc First Class Honors in Mathematics. I am currently completing my PGCE year through Manchester Metropolitan University to become... Salford, Manchester • Medical student- GCSE maths tutor 1 month ago Hellо, I аm а friendly 21 yeаr оld, currently in my third yeаr оf Medicine аt the University оf Manchester. I hаve а lоt оf experience оf wоrking with children аnd аdоlescents, аnd being... • Female maths tutor in Bolton (Wanted) 1 month ago looking for a local female maths tutor based on Bolton for an ongoing help for our little girl who is KS2 age 10 if you are local maths student at uni with a favourable hourly rate then... • Maths A Level, GCSE and Sats with an experienced Tutor 1 month ago I offer Maths tuition from an experienced and qualified tutor to students studying for Primary Sats, GCSE and AS/A2 Maths exams. I have BSc, MSc and PGCE in Mathematics, and currently... Didsbury, Manchester
{"url":"http://www.gumtree.com/other-tuition-lesson-services/cheetwood/maths+tutor","timestamp":"2014-04-16T07:49:17Z","content_type":null,"content_length":"143762","record_id":"<urn:uuid:2c4f88ee-3f9f-4e48-b272-a9f49773bae4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Conspiracy Numbers and Caching for Searching And/Or Trees and Theorem-Proving - Artificial Intelligence , 1991 "... In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a devel ..." Cited by 162 (10 self) Add to MetaCart In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems, a... - Artificial Intelligence , 1994 "... We introduce a technique for analyzing the behavior of sophisticated A.I. search programs working on realistic, large-scale problems. This approach allows us to predict where, in a space of problem instances, the hardest problems are to be found and where the fluctuations in difficulty are greatest. ..." Cited by 73 (8 self) Add to MetaCart We introduce a technique for analyzing the behavior of sophisticated A.I. search programs working on realistic, large-scale problems. This approach allows us to predict where, in a space of problem instances, the hardest problems are to be found and where the fluctuations in difficulty are greatest. Our key insight is to shift emphasis from modelling sophisticated algorithms directly to modelling a search space that captures their principal effects. We compare our model’s predictions with actual data on real problems obtained independently and show that the agreement is quite good. By systematically relaxing our underlying modelling assumptions we identify their relative contribution to the remaining error and then remedy it. We also discuss further applications of our model and suggest how this type of analysis can be generalized to other kinds of A.I. problems. Chapter 1 - In AAAI National Conference , 1993 "... Best-first search algorithms require exponential memory, while depth-first algorithms require only linear memory. On graphs with cycles, however, depth-first searches do not detect duplicate nodes, and hence may generate asymptotically more nodes than best-first searches. We present a technique for ..." Cited by 37 (3 self) Add to MetaCart Best-first search algorithms require exponential memory, while depth-first algorithms require only linear memory. On graphs with cycles, however, depth-first searches do not detect duplicate nodes, and hence may generate asymptotically more nodes than best-first searches. We present a technique for reducing the asymptotic complexity of depth-first search by eliminating the generation of duplicate nodes. 
The automatic discovery and application of a finite state machine (FSM) that enforces pruning rules in a depth-first search has significantly extended the power of search in several domains. We have implemented and tested the technique on a grid, the Fifteen Puzzle, the Twenty-Four Puzzle, and two versions of Rubik's Cube. In each case, the effective branching factor of the depth-first search is reduced, reducing the asymptotic time complexity. Introduction---The Problem Search techniques are fundamental to artificial intelligence. Best-first search algorithms such as breadthfirst se... - Journal of Automated Reasoning, 1999 "... Goal-sensitive resolution methods, such as Model Elimination, have been observed to have a higher degree of search redundancy than model-search methods. Therefore, resolution methods have not been seen in high performance propositional satisfiability testers. A method to reduce search redundancy in g ..." Cited by 12 (3 self) Add to MetaCart Goal-sensitive resolution methods, such as Model Elimination, have been observed to have a higher degree of search redundancy than model-search methods. Therefore, resolution methods have not been seen in high performance propositional satisfiability testers. A method to reduce search redundancy in goal-sensitive resolution methods is introduced. The idea at the heart of the method is to attempt to construct a refutation and a model simultaneously and incrementally, based on sub-search outcomes. The method exploits the concept of "autarky", which can be informally described as a "self-sufficient" model for some clauses, but which does not affect the remaining clauses of the formula. Incorporating this method into Model Elimination leads to an algorithm called Modoc. Modoc is shown, both analytically and experimentally, to be faster than Model Elimination by an exponential factor. Modoc, unlike Model Elimination, is able to find a model if it fails to find a refutation, essentially by combining autarkies. Unlike the pruning strategies of most refinements of resolution, autarky-related pruning does not prune any successful refutation; it only prunes attempts that ultimately will be unsuccessful; consequently, it will not force the underlying Modoc search to find an unnecessarily long refutation. To prove correctness and other properties, a game characterization of refutation search is introduced, which demonstrates
However, the autarky-related processing is integrated with the refutation search, and can greatly improve the efficiency of that search even when a refutation does exist. Unlike the pruning strategies of most refinements of resolution, autarky-related pruning does not prune any successful refutation; it only prunes attempts that ultimately will be unsuccessful; - In Proceedings of the First International Conference on AI Planning Systems , 1992 "... In this paper we present several domain-independent search optimizations and heuristics that have been developed in a totally-ordered nonlinear planner in prodigy. We also describe the extension of the system into a full hierarchical planner with the ability to search among the different levels of a ..." Cited by 7 (5 self) Add to MetaCart In this paper we present several domain-independent search optimizations and heuristics that have been developed in a totally-ordered nonlinear planner in prodigy. We also describe the extension of the system into a full hierarchical planner with the ability to search among the different levels of abstraction. We analyze and illustrate the performance of the system with its different search capabilities in a few domains. , 1996 "... Resolution has not been an effective tool for deciding satisfiability of propositional CNF formulas, due to explosion of the search space, particularly when the formula is satisfiable. A new pruning method is described, which is designed to eliminate certain refutation attempts that cannot succeed. ..." Cited by 6 (3 self) Add to MetaCart Resolution has not been an effective tool for deciding satisfiability of propositional CNF formulas, due to explosion of the search space, particularly when the formula is satisfiable. A new pruning method is described, which is designed to eliminate certain refutation attempts that cannot succeed. The method exploits the concept of "autarky", which was introduced by Monien and Speckenmeyer. New forms of lemma creation are also introduced, which eliminate the need to carry out refutation attempts that must succeed. The resulting algorithm, called "Modoc", is a modification of propositional model elimination. Informally, an autarky is a "self-sufficient" model for some clauses, but which does not affect the remaining clauses of the formula. Whereas Monien and Speckenmeyer's work was oriented toward finding a model, our method has as its primary goal to find a refutation in the style of model elimination. However, Modoc finds a model if it fails to find a refutation, essentially by combi... , 1997 "... . Our research has been motivated by the task of forming a solution subgraph which satisfies given constraints. The problem is represented by an A=O graph. Our approach is to apply a suitably modified technique of dependency-directed backtracking. We present our formulation of the standard chronolog ..." Cited by 1 (1 self) Add to MetaCart . Our research has been motivated by the task of forming a solution subgraph which satisfies given constraints. The problem is represented by an A=O graph. Our approach is to apply a suitably modified technique of dependency-directed backtracking. We present our formulation of the standard chronological backtracking algorithm in Prolog. Based on it, we have developed an enhanced algorithm which makes use of special heuristic knowledge. It involves also the technique of node marking. 
We have gathered experience with the prototype Prolog implementation of the algorithm in applying it to (one step of) the problem of building a software configuration. Our experience shows that Prolog programming techniques offer a considerable flexibility in implementing the above outlined tasks. Keywords. A=O-graph, non-chronological backtrack, Prolog 1 PROBLEM AREA AND GOAL Many problems to which artificial intelligence techniques are often applied can be described as constraint satisfaction problems. We... , 1992 "... PRODIGY is a general-purpose problem-solving architecture that serves as a basis for research in planning, machine learning, apprentice-type knowledge-refinement interfaces, and expert systems. This document is a manual for the latest version of the PRODIGY system, PRODIGY4.0, and includes descripti ..." Add to MetaCart PRODIGY is a general-purpose problem-solving architecture that serves as a basis for research in planning, machine learning, apprentice-type knowledge-refinement interfaces, and expert systems. This document is a manual for the latest version of the PRODIGY system, PRODIGY4.0, and includes descriptions of the PRODIGY representation language, control structure, user interface, abstraction module, and other features. The tutorial style is meant to provide the reader with the ability to run PRODIGY and make use of all the basic features, as well as gradually learning the more esoteric aspects of PRODIGY4.0. 1 This research was sponsored by the Avionics Laboratory, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U. S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, Arpa Order No. 7597. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official polici... "... Best-first search algorithms require exponential memory, while depth-first algorithms require only linear memory. On graphs with cycles, however, depth-first searches do not detect duplicate nodes, and hence may generate asymptotically more nodes than best-first searches. We present a technique for ..." Add to MetaCart Best-first search algorithms require exponential memory, while depth-first algorithms require only linear memory. On graphs with cycles, however, depth-first searches do not detect duplicate nodes, and hence may generate asymptotically more nodes than best-first searches. We present a technique for reducing the asymptotic complexity of depth-first search by eliminating the generation of duplicate nodes. The automatic discovery and application of a finite state machine (FSM) that enforces pruning rules in a depth-first search, has significantly extended the power of search in several domains. We have implemented and tested the technique on a grid, the Fifteen Puzzle, the
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1569167","timestamp":"2014-04-18T12:48:37Z","content_type":null,"content_length":"39565","record_id":"<urn:uuid:b0cb9cfd-58e0-4a39-ae47-f5607296effd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a classification of surfaces (smooth and projective) over an arbitrary field?

Is there a classification of surfaces (smooth and projective) over an arbitrary field, whether using the approach of Enriques or not? Thanks. P.S. By arbitrary I mean the field may not be algebraically closed, or even perfect; as far as I know, a variety over a perfect field behaves much like one over an algebraically closed field. So is there a treatment of the non-perfect case? Thanks.

What is a smooth surface over an arbitrary field? – Kofi Feb 24 '12 at 14:39
I mean a smooth surface X/k if the structure morphism X to k is smooth – stjc Feb 26 '12 at 10:12

3 Answers

One of the main subtleties in trying to classify surfaces over non-algebraically closed fields is that there are minimal surfaces which become non-minimal over the algebraic closure. As an example I will focus on the case that I know best, that of (geometrically) rational surfaces. Over an algebraically closed field, it is well-known that the only such minimal surfaces are $\mathbb{P}^2$ and the rational ruled surfaces $\mathbb{F}_n$ for $n \geq 0$. If the field is not algebraically closed, then things are a lot more complicated. It is a theorem of Iskovskikh that a minimal rational surface over a perfect field is one of the following types:
• $\mathbb{P}^2$.
• A smooth quadric $X \subset \mathbb{P}^3$ with $\mathrm{Pic}(X) = \mathbb{Z}$.
• A Del Pezzo surface $X$ with $\mathrm{Pic}(X) = \mathbb{Z}K_X$; here $K_X$ denotes the canonical divisor.
• A conic bundle $f : X \to C$ over a rational curve $C$, with $\mathrm{Pic}(X) = \mathbb{Z} \oplus \mathbb{Z}$.
In particular conic bundles form a very large family and can have arbitrarily many (geometrically) degenerate fibres. If you want to learn more about this result, I heartily recommend the notes "Rational surfaces over nonclosed fields" by Brendan Hassett, which can be found on his webpage.

You probably want to work over an algebraically closed field, at least initially. For surfaces in positive characteristic, have a look at these very nice notes by Christian Liedtke: Algebraic Surfaces in Positive Characteristic.
thanks a lot, I didn't make the question clear, but I want to know surface over non algebraic closed field in particular:) – stjc Feb 26 '12 at 10:37

Try Wikipedia http://en.wikipedia.org/wiki/Enriques%E2%80%93Kodaira_classification: they say that the classification was begun by Mumford, and completed by Mumford and Bombieri, and they give references. They say "it is similar to the characteristic 0 projective case, except there are a few extra types of surface in characteristics 2 and 3."
{"url":"http://mathoverflow.net/questions/89395/is-there-a-classification-of-surfacesmooth-and-projective-over-arbitrary-field?sort=newest","timestamp":"2014-04-18T08:28:43Z","content_type":null,"content_length":"61058","record_id":"<urn:uuid:5f4489c9-f415-4dbf-87d3-e7e6d897afdf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Untitled Document 50-Minute Talks: Joel Hass (University of California, Davis) Title: Discretizing area and energy, and applications Abstract: There have been several approaches to computing discrete, or combinatorial, analogs of the energy of a map between two manifolds. We introduce a simplicial energy that is closely connected to a discretized area associated to a simplicial complex. We will discuss applications to both mathematical and applied problems. This is joint work with Peter Scott. Sa'ar Hersonsky (University of Georgia) Title: Boundary Value Problems on Planar Graphs and Flat Surfaces with integer conical singularities Abstract: We are given a cellular decomposition of a planar, bounded domain and its boundary, where each 2-cell is either a triangle or a quadrilateral. From these data and a conductance function we construct a canonical pair (S, f ) where S is a special type of a genus (m-1 ) singular flat surface, tiled by rectangles and f is an energy preserving mapping from the edges of the decomposition onto S. In this lecture, we will employ a Dirichlet-Neumann boundary value problem. Feng Luo (Rutgers University) Title: A dilogarithm identity on the moduli space of curves Abstract: Given any closed hyperbolic surface of a fixed genus, we establish an identity involving dilogarithm of lengths of simple closed geodesics in all embedded pairs of pants and one-holed tori in the surface. This is a joint work with Ser Peow Tan. William Minicozzi (Johns Hopkins University) Title: Mean Curvature Flow Abstract: I will describe joint work with Toby Colding on singularities of mean curvature flow, including the dynamics near a singularity. Igor Rivin (Temple University) Title: Conformal matching Abstract: We will describe some of the mathematics involved in finding the best conformal map between two densities (mostly in low dimensions). Boris Springborn (Universität Bonn) Title: Discrete conformal maps and ideal hyperbolic polyhedra Abstract: A straightforward discretization of the concept of conformal change of metric leads to a surprisingly rich theory of discrete conformal maps. I will explain some of the salient features of this theory and its connection with hyperbolic polyhedra. This elucidates the relationship with the theory of circle packings, and it leads to a variant of discrete conformal equivalence that allows mapping to the hyperbolic plane. This is joint work with Alexander Bobenko, Ulrich Pinkall, and Peter Schröder 25-Minutes Talks: Richard Bamler (Princeton University) Title: Stability of symmetric spaces of noncompact type under Ricci flow Abstract: We establish stability results for symmetric spaces of noncompact type under Ricci flow, i.e. we will show that any small perturbation of the symmetric metric is flown back to the original metric under an appropriately rescaled Ricci flow. It will be important for us which smallness assumptions we have to impose on the initial perturbation. We will find that as long as the symmetric space does not contain any hyperbolic or complex hyperbolic factor, we don't have to assume any decay on the perturbation. Furthermore, in the hyperbolic and complex hyperbolic case, we show stability under a very weak assumption on the initial perturbation. This will generalize a result obtained by Schulze, Schnürer and Simon in the hyperbolic case. The proofs of those results make use of an improved L1 -decay estimate for the heat kernel in vector bundles as well as elementary geometry of negatively curved spaces. 
Jacob Bernstein (Stanford University) Title: A Variational Characterizaton of the Catenoid Abstract: We show that the catenoid is the unique surface of least area within a geometrically natural class of minimal surfaces. The proof relies on a technique involving the Weierstrass representation used by Osserman and Schiffer to show the sharp isoperimetric inequality for minimal annuli. Ian Biringer (Yale University) Title: Extending pseudo-Anosov maps into handlebodies Abstract: Let f be a pseudo-Anosov homeomorphism of the boundary S of a handlebody. We show how the attracting lamination of f determines whether (a power of) f extends into the handlebody. The proof rests on an analysis of the accumulation points of a certain sequence of representations from the fundamental group of S into PSL (2, C). Joint work with Jesse Johnson and Yair Minsky. Christine Breiner (Massachusetts Institute of Technology) Title: Symmetries of genus-g helicoids Abstract: Every embedded genus-1 helicoid possesses an orientation preserving isometry. In this talk we outline how to extend this result to genus-g helicoids that have a hyperelliptic underlying conformal structure. The proof relies on the existence of a non-trivial biholomorphic involution as well as an understanding of the weak asymptotic geometry of genus-g helicoids. This is joint work with J. Bernstein. William Breslin (University of Michigan) Title: Short geodesics and Heegaard surfaces in hyperbolic 3-manifolds Abstract: I will discuss how fat Margulis tubes, bounded area sweepouts, the Rubinstein-Scharlemann graphic, and thin position can be used to show that short geodesics in hyperbolic 3-manifolds are isotopic into strongly irreducible Heegaard surfaces. Will Cavendish (Princeton University) Title: On the Growth of the Weil-Peterson Diameter of Moduli Space Abstract: The Weil-Petersson metric on Teichmuller space is a negatively curved Kähler metric that relates in interesting ways to hyperbolic geometry in dimensions 2 and 3. Though this metric is incomplete, its completion is a CAT(0) metric space on which the mapping class group acts co-compactly, and the quotient of this completion by the mapping class group is the Deligne-Mumford compactification of moduli space Baris Coskunuzer (Koc University) Title: Generic uniqueness of area minimizing disks for extreme curves Abstract: In this talk, we will give a sketch of the proof of the following statement: For a generic nullhomotopic simple closed curve C in the boundary of a compact, orientable, mean convex 3-manifold M with trivial second homology, there is a unique area minimizing disk D embedded in M where the boundary of D is C. The same statement is also true for absolutely area minimizing surfaces, too. Steven Frankel (California Institute of Technology) Title: Closed Orbits of Quasigeodesic Flows Abstract: We discuss quasigeodesic flows on hyperbolic 3-manifolds. Danny Calegari has shown that the orbit space of such a flow comes with a pair of decompositions reminiscent of the pair of transverse laminations that we'd get in the pseudo-Anosov case. The fundamental group acts on the orbit space preserving this structure and this can be used to construct an action on a circle at infinity. We use this to translate some properties of the flow to properties the circle action. In particular, we give sufficient conditions for finding closed orbits in the flow. This is part of a conjectural proof that every quasigeodesic flow on a closed hyperbolic manifold has closed orbits. 
David Futer (Temple University) Title: The geometry of unknotting tunnels Abstract: Given a 3-manifold M, with boundary a union of tori, an unknotting tunnel for M is an arc τ from the boundary back to the boundary, such that the complement of τ in M is a genus-2 handlebody. Fifteen years ago, Colin Adams asked a series of questions about how the topological data of an unknotting tunnel fits into the hyperbolic structure on M. For example: is τ isotopic to a geodesic? Can it be arbitrarily long, relative to a maximal cusp neighborhood? Does τ appear as an edge in the canonical polyhedral decomposition? Although the most general versions of these questions are still open today, I will describe fairly complete answers in the case where M is created by a ``generic'' Dehn filling. As an application, there is an explicit family of knots in S3 whose tunnels are arbitrarily long. This is joint work with Daryl Cooper and Jessica Purcell. Stephen Kleene (Massachusetts Institute of Technology) Title: Embedded and immersed MCF self-shrinkers Abstract: (Joint work with N. Kapouleas and N. M. Moller). We present examples of embedded and Immersed MCF self-shrinkers, and discuss relevant gluing and doubling constructions. Nam Le (Columbia University) Title: Blow-up rate of the mean curvature during the mean curvature flow Abstract: In this talk, we will show that at the first singular time of any compact, Type I mean curvature flow, the mean curvature blows up at the same rate as the second fundamental form. For the mean curvature flow of surfaces, we obtain similar result provided that the Gaussian density is less than two. Our proofs are based on continuous rescaling and the classification of self-shrinkers. We show that all notions of singular sets defined in A. Stone (A density function and the structure of singularities of the mean curvature flow. Calc. Var. Partial Differential Equations {2} (1994), no. 4, 443--480.) coincide for any Type I mean curvature flow, thus generalizing the result of Stone who established that for any mean convex Type I Mean curvature flow. This talk is based joint work with Natasa Sesum. Ovidiu Munteanu (Columbia University) Title: Rigidity theorems for complete noncompact manifolds Abstract: I will talk about certain characterizations of the hyperbolic and complex hyperbolic spaces by their bottom of spectrum of the Laplace operator on functions. Andy Sanders (Maryland) Title: Closed minimal immersions in quasi-Fuchsian 3-manifolds Abstract: I will quickly review some of the known facts about closed minimal surfaces in quasi-Fuchsian 3-manifolds and explain how the Jacobi operator associated to the minimal immersion plays a particularly crucial role in understanding how the minimal surfaces vary when the ambient quasi- Fuchsian metric is varied in the deformation space of quasi-Fuchsian metrics. Hongbin Sun (Princeton University) Title: Degree ±1 self-maps and self-homeomorphisms on S3-manifolds Abstract: We determine which 3-manifolds supporting S3 geometry admit adegree 1 or -1 self-map that does not homotopic to a self-homeomorphism. By Mostow Rigidity theorem, Waldhausen's theorem and the result in this paper, we can answer the same question for all prime 3-manifolds. Trnkova, Maria Title: Hyperbolic Exceptional Manifolds Abstract: An exceptional manifold is a closed hyperbolic manifold which does not have a shortest geodesic with an embedded tube of radius ln(3)/2. These manifolds arise in the proof of the homotopy rigidity theorem proved by D. Gabai, R. 
Meyerhoff and N. Thurston. The authors made several conjectures about the exceptional manifolds most of which have been proved. In my talk I will present an improved version of the conjecture and show that some exceptional manifolds non-trivially cover manifolds. The proof is based on the results obtained by programs Snap and SnapPy. Lu Wang (Massachusetts Institute of Technology) Title: Uniqueness of Self-Shrinkers of Mean Curvature Flow Abstract: In this talk, we will discuss the uniqueness of selfshrinking ends of mean curvature flow in 3-dimension Euclidean space, given fixed asymptotic behaviours and its relation with the classification problem of non-compact complete self-shrinkers. Conan Wu (Princeton University) Title: Volume preserving extensions and ergodicity of Anosov diffeomorphisms Abstract: Given a C1 self-diffeomorphism of a compact subset in ℝn, from Whitney's extension theorem we know exactly when does it C1 extend to ℝn. How about volume preserving extensions? It is a classical result that any volume preserving Anosov di ffeomorphism of regularity C1+ɛ is ergodic. The question is open for C1. In 1975 Rufus Bowen constructed an (non-volume-preserving) Anosov map on the 2-torus with an invariant positive measured Cantor set. Various attempts have been made to make the construction volume preserving. By studying the above extension problem we conclude, in particular, that the Bowen-type mapping on positive measured Cantor sets can never be volume preservingly extended to the torus. This is joint work with Charles Pugh and Amie Wilkinson. Tian Yang (Rutgers University) Title: A Deformation of Penner's Coordinate of the Decorated Teichmüller Space Abstract: We find a one-parameter family of coordinates {Ψh}hϵℝ which is a deformation of Penner's coordinate of the decorated Teichmüller space of an ideally triangulated punctured surface (S, T) of negative Euler characteristic. If h ≥ 0, the decorated Teichmüller space in the Ψh coordinate becomes an explicit convex polytope P(T) independent of h; and if h < 0, the decorated Teichmüller space becomes an explicit bounded convex polytope Ph(T) so that Ph(T) ⊂ Ph'(T) if h <h'. As a consequence, Bowditch-Epstein and Penner's cell decomposition of the decorated Teichmüller space is reproduced
{"url":"http://web.math.princeton.edu/conference/frggeometry2011/talks.html","timestamp":"2014-04-17T21:48:20Z","content_type":null,"content_length":"17568","record_id":"<urn:uuid:363e2332-4ad7-476a-911d-9f0e4fec963a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: AMS Chelsea Publishing 2008; 192 pp; hardcover Volume: 366 ISBN-10: 0-8218-4426-1 ISBN-13: 978-0-8218-4426-7 List Price: US$36 Member Price: US$32.40 Order Code: CHEL/366.H This classic book, originally published in 1968, is based on notes of a year-long seminar the authors ran at Princeton University. The primary goal of the book was to give a rather complete presentation of algebraic aspects of global class field theory, and the authors accomplished this goal spectacularly: for more than 40 years since its first publication, the book has served as an ultimate source for many generations of mathematicians. In this revised edition, two mathematical additions complementing the exposition in the original text are made. The new edition also contains several new footnotes, additional references, and historical comments. Graduate students and research mathematicians interested in number theory. "This new edition of the famous Artin-Tate notes on class field theory is a must-have, even for those who already have a copy of the original. This is a classic, a book that has inspired a generation of number theorists. It's hard going but deep, insightful, and essential." -- MAA Reviews • Preliminaries • The first fundamental inequality • Second fundamental inequality • Reciprocity law • The existence theorem • Connected component of idèle classes • The Grunwald-Wang theorem • Higher ramification theory • Explicit reciprocity laws • Group extensions • Abstract class field theory • Weil groups • Bibliography
{"url":"http://ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-366-H","timestamp":"2014-04-19T23:31:36Z","content_type":null,"content_length":"15894","record_id":"<urn:uuid:ea09bcf8-5157-454e-98ef-c0696f9f9523>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Subtotals is much more flexible than I put in my previous post about Subtotals. I kept it compact because that is what I need to suit my needs in the work that I perform. Here I would like to expand – literally – the Subtotal function to show some other options that might help you do the work that you need to do. I will start with a basic file; here it is before I do anything to it: Now as before, go to the Data tab on the ribbon, and select the Subtotal Button: Once you have that open, I will select At each change in Shipment Number, the Sum Function, and select subtotals for Discount, Net Amount, and Gross Amounts. That way I will know the total cost of each shipment in my file. See the results below. Now let’s add some other functions to the list to show you what can be done. Again select the Subtotal button, now, to add some other functions don’t forget to “Unclick” the check box beside Replace current subtotals. If you forget to uncheck this box, your new subtotals will show up and all your previous subtotals will disappear. So make sure you uncheck that box! While you are unchecking boxes, uncheck the previous Add subtotal to selections, in our case the Discount, Net Amount, and Gross Amount have all been unchecked. Once that is unchecked you can add functions, I want to add count of Package Quantity. Here are the results of that Subtotal: I highlighted the first Subtotal results in green; the Count Subtotal I just completed in yellow. I now have the totals of the costs and the number of Packages in each shipment. Now let’s add a subtotal for another column without deleting our existing subtotals. We will repeat our steps for setting up a subtotal, I am adding in the average of the weight, destination code and discount. Here is what we get (the newest is highlighted in blue.) As you can see all of my subtotals are here for easy access and use. By clicking the numbers in the top left I can reduce my file so that all I am looking at is my subtotals, I have highlighted all of my subtotals to make it easy to see what I have done. There are other functions that you can use within the Subtotal Function such as: Max, Min, Product, Count Numbers, Standard Deviation, Standard Deviation for the Population, Variance, and Variance for the Population. I am not going into these in this blog, but this should give you some ideas to start experimenting with. Play around with some different data and functions to see what you come up with. That is after all the best way to learn new things, experiment with your data. See also 7 Responses to "Applying multiple Subtotals to your Excel table" 1. Exactly what I needed. Thank you!!! 2. Very helpful, took care of my work need handily. THANKS, 3. Is there a way to check off all columns in one shot, other than each column one at a time. I have a YTD report that have 36 columns I have to check off one at a time when I do the subtotal and it would be great to just “check all”, then take out about 5 columns, rather than check off 36. Any one know how to do that or if possible? Thank you. Scott Eaton 4. Thank you so much! It was the tips I was looking for!! 5. Very helpful 6. Very helpful. 7. thanks. Post a comment
{"url":"http://www.ablebits.com/office-addins-blog/2011/11/17/multiple-excel-subtotals/","timestamp":"2014-04-20T08:14:22Z","content_type":null,"content_length":"29327","record_id":"<urn:uuid:aeafaf15-d617-4196-a822-d8e37b0df50b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of results: 22 1 What is the organelles that package and stor proteins (2 words) ? Wednesday, November 11, 2009 at 4:45pm by Alsha I have a Question about The Merchant of Venice By william shakspear. What are Some promises made in the stor??? Please help! Sunday, March 4, 2012 at 8:16pm by Josh THE MEAN PRICE OF DIGITAL CAMERAS AT AN ELECTRONIS STOR IS $224, WITH A STANDARD DEVIATION OF $8. RANDOM SAMPLES OF SIZE 36 ARE DRAWN FROM THIS POPULATION AND THE MEAN OF EACH SAMPLE IS DETERMINED. Thursday, November 4, 2010 at 11:19pm by caintgetrite a. Each occupation has varying ages; occupation is not a continuous variable. b. r cannot be > 1.00. c. They are not continuous variables. Thursday, September 6, 2012 at 7:16pm by PsyDAG Stor 155 rent - quantitative cable - categorical pets - categorical bedrooms - quantitative distance - quantitative Saturday, August 25, 2012 at 6:36pm by Damon Stor 155 Think of the mean as a fulcrum (balance point) of a distribution. To balance, the weights and distances from the fulcrum must be equal, therefore subtracting one side from the other will always = 0 (assuming that you did your calculations correctly). Thursday, August 30, 2012 at 1:38pm by PsyDAG Stor 155 Think of the mean as a fulcrum (balance point) of a distribution. To balance, the weights and distances from the fulcrum must be equal, therefore subtracting one side from the other will always = 0 (assuming that you did your calculations correctly). Thursday, August 30, 2012 at 1:38pm by PsyDAG Stor 155 Use the definition of the mean to show the sum of the deviation of the observations from their mean is always zero. This is one reason why variance and standard deviation use squared deviations. Thursday, August 30, 2012 at 1:38pm by ami Stor 155 The median worth is the number in the middle when all net worths are arranged from lowest to highest. The median is the average. This tells me that there are many households with very high net worth. Tuesday, August 28, 2012 at 12:41pm by Ms. Sue The clothing stor had a sale for 75% off most of its shoes.Malik selected a pair of shoes that regulary sell for 80. What is the sale of Malik's new shoes. Monday, February 6, 2012 at 5:18pm by Nora Stor 155 a report on the assets of american households says that the median net worth of the U.S. families is $120,300. The mean worth of these families is $556,300. What explain the difference between these two measures of center? Tuesday, August 28, 2012 at 12:41pm by ami Suppose that you flip a fair coin (P(H)=P(T)=1 2 ) three times and you record if it landed on heads, H, or tails, T. (a) What is the sample space of this experiment? What is the probability of each event? (b) [1 pt] Let X be the number of times that you observe heads. What ... Saturday, September 29, 2012 at 6:18pm by ami Stor 155 In computing the median income of any group, some federal agencies omit all members of the group who had no income. Give an example to show that the reported median income of a group can go down even though the group becomes economically better off. Is this true of the mean ... Thursday, August 30, 2012 at 12:49pm by ami Stor 155 a data set lists apartments available for students to rent. Information provided includes the monthly rent, whether or not cable is included free of charge, whether or not pets are allowed, the number of bedrooms, and the distance to the campus. Describe the cases in the data ... Saturday, August 25, 2012 at 6:36pm by ami Each of the following statements contain a blunder. 
Explain in each case what is wrong. a) There is a high correlation between the age of American workers and their occupation. b) we found a high correlation (r=1.19) between students rating of faculty teaching and ratings made... Thursday, September 6, 2012 at 7:16pm by ami Here is a simple way to create a random variable X that has mean μ and stan- dard deviation σ: X takes only the two values μ−σ and μ+σ, eachwith probability 0.5. Use the definition of the mean and variance for discrete random variables to ... Thursday, October 4, 2012 at 1:15pm by ami It would be quite risky for you to insure the life of a 25-year-old friend . There is a high probability that your friend would live and you would gain $875 in premiums. But if he were to die, you would lose almost $100,000. Explain carefully why selling insurance is not risky... Thursday, October 4, 2012 at 1:13pm by ami In the second list, listing all possible outcomes there is one way to get three heads p(3h) = 1/8 three ways of getting two heads p(2h) = 3/8 three ways of getting 1 head p(1h) = 3/8 one way of getting 0 heads p (0h) = 1/8 let's look at a binomial distribution where p(h) = 1/2... Saturday, September 29, 2012 at 6:18pm by Damon Twitter: Suppose that the population proportion of Internet users whosay they use Twitter or a similar service to post updates about themselves or to see updates about others is 19%. Think about selecting random samples from a population in which 19% are Twitter users. a) ... Monday, September 17, 2012 at 7:13pm by ami Stressed syllables Here is a list of words now can someone help me find the stressed syllables: 1. diff(icult) 2. (imag)ine 3. (fam)iliar 4. relative 5. compan(ions) 6. (won)derful 7. impat(ience) 8. (coun)tryside 9. for(gott)en 10. univ(erse) 11. un(iver)sal 12. (acc)ident 13. (acc)idental 14. ... Monday, February 19, 2007 at 9:49am by Holly Recall the Prisoners problem from class. We have three prisoners (A, B, and C) on death row. The Governor will pardon one of the prisoners, and he tells the Warden who he chose, but the warden is not allowed to reveal who was pardoned. Lets switch things up some, Prisoner C ... Saturday, September 29, 2012 at 6:53pm by ami english - What is true love These articles might be helpful. I hope these article will be helpful. A LINE ON LIFE 2/9/97 Love with Style What is love? Is love the same for everyone? There are many different ways of viewing love, but none of them "tell the whole story." From one perspective, sociologist ... Thursday, February 9, 2012 at 10:49pm by PsyDAG
{"url":"http://www.jiskha.com/search/index.cgi?query=Stor","timestamp":"2014-04-19T10:30:50Z","content_type":null,"content_length":"13544","record_id":"<urn:uuid:b0083118-1d77-43f8-8887-ac969006c6ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Heisenberg double
Given two bialgebras $A$ and $B$ in Hopf pairing $\lt, \gt$ (i.e. making comultiplication on one transposed to multiplication on the other, and vice versa), one defines a left Hopf action $\triangleright$ of $B$ on $A$ by the formula $b\triangleright a = \sum \lt b, a_{(2)}\gt a_{(1)} = (\lt,\gt \otimes \id)(b\otimes \tau\Delta_A(a))$ and forms the Heisenberg double corresponding to these data as the crossed product algebra ("smash product") $A\sharp B$ associated to this Hopf action $\triangleright$.
For example, if $A = S(V)$ is the symmetric (Hopf) algebra on a finite-dimensional vector space $V$, and $B$ its algebraic dual $(S(V))^*\cong \hat{S}(V^*)$, considered as its dual topological Hopf algebra, the result is the Weyl algebra of regular differential operators, completed with respect to the filtration corresponding to the degree of differential operator. If $B$ is just the finite dual of $S(V)$, which is a usual Hopf algebra, then there is of course no completion.
• J.-H. Lu, On the Drinfeld double and the Heisenberg double of a Hopf algebra, Duke Math. J. 74 (1994) 763–776.
In the following paper there is an example showing that the Heisenberg double $A^*\sharp A$ has a structure of a Hopf algebroid over $A^*$; moreover $A^*$ can be replaced by any module algebra over the Drinfel'd double $D(A)$:
{"url":"http://ncatlab.org/nlab/show/Heisenberg+double","timestamp":"2014-04-17T15:27:48Z","content_type":null,"content_length":"20024","record_id":"<urn:uuid:2281b2fa-d684-4cc9-bf12-52386cda6c98>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Pythagorean theorem is applicable for ________.
2. What is the length of the side PR of ΔPQR in the figure?
3. What is the length of the side PR in the right triangle?
4. What is the length of the third side of the triangle in the figure?
5. What are the values of x and y in the triangle?
6. State which of the following measures will form a right triangle. (i) 6, 8, 10 (ii) 5, 7, 74 (iii) 2, 3, 5
7. One side of a right triangle is 3 ft and the length of its hypotenuse is 4 ft. Find the length of the other side.
8. What is the height of the ΔABC, if D is the midpoint of AB, AB = 8 in. and AC = 5 in.?
9. What is the diameter of the circle with center O, if AC = 6 ft and AB = 7 ft?
10. What is the measure of the side of the square, if the diagonal of the square is 8 feet?
11. Mike walked diagonally across a square garden of side 20 ft from one corner to the opposite corner. How far did he walk?
12. If one side of a right triangle is two times the other and the length of the hypotenuse is 15 ft, then what are the measures of the two sides?
13. Find the length of the diagonal of the rectangle, if the length and the width of the rectangle are 6 ft and 3 ft.
14. The hypotenuse of a right triangle is 5 units and the other two sides are in the ratio 3:4. What is the length of the shortest side?
15. The length and the width of a rectangular field are 28 feet and 21 feet respectively. A diagonal walkway is made from one end to the opposite end. What is the length of the walkway?
16. The length and the width of a rectangular football court are 20 feet and 15 feet respectively. A diagonal walkway is made from one end to the opposite end. What is the length of the walkway?
17. 'The hypotenuse is the shortest side in a right triangle.' State whether the statement is true or false.
18. Which of the following is true for the right triangle?
19. State which of the following measures will form a right triangle. (i) 9, 12, 15 (ii) 2, 4, 20 (iii) 3, 4, 6
20. One side of a right triangle is 6 cm and the length of its hypotenuse is 8 cm. Find the length of the other side.
21. Which of the figures is a right triangle?
22. What is the diameter of the circle with center O, if AC = 5 in and AB = 6 in?
23. What is the measure of the side of the square, if the diagonal of the square is 6 feet?
24. What is the value of x in the triangle?
25. What is the length of the third side of the triangle in the figure?
26. What is the height of the ΔABC, if D is the midpoint of AB, AB = 16 in. and AC = 10 in.?
27. What is the sum of the sides AB and AC of the triangle?
28. The lengths p, q and r of the sides of a right triangle satisfy the condition p^2 - q^2 = r^2. What is the length of the hypotenuse?
29. Gary walked diagonally across a square garden of side 25 ft from one corner to the opposite corner. How far did he walk?
30. The hypotenuse of a right triangle is 10 units and the other two sides are in the ratio 3:4. What is the length of the shortest side?
31. Find the length of the diagonal of the rectangle, if the length and the width of the rectangle are 6 m and 3 m.
32. If one side of a right triangle is two times the other and the length of the hypotenuse is 25 ft, then what are the measures of the two sides?
33. Find the lengths of AB and BC of the right triangle.
34. If, in an isosceles right triangle, the length of the hypotenuse is c and the length of one of the sides is a, then c = ________.
35. The length and the width of a rectangular field are 20 feet and 15 feet respectively. A diagonal walkway is made from one end to the opposite end. What is the length of the walkway?
36. The length and the width of a rectangular field are 24 feet and 18 feet respectively. A diagonal walkway is made from one end to the opposite end. What is the length of the walkway?
37. The radius of the circle is 6 inches. Find the length of the sides of the square using the Pythagorean theorem.
38. Find the length of RT in the triangle RST.
39. State which of the following measures will form a right triangle. I. 6, 8, 10 II. 2, 4, 20 III. 2, 3, 5
40. State whether the lengths 12, 13 and 14 are sides of a right triangle.
41. State which of the following measures will form a right triangle. (i) 3, 4, 5 (ii) 4, 6, 10 (iii) 1, 2, 4
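For reference, here is one worked check using the Pythagorean relation, based on the triples appearing in the questions above (the arithmetic is shown only as an illustration): 6^2 + 8^2 = 36 + 64 = 100 = 10^2, so the lengths 6, 8, 10 do form a right triangle; by contrast, 2^2 + 3^2 = 4 + 9 = 13, which is not equal to 5^2 = 25, so the lengths 2, 3, 5 do not.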
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdedxkjeme&.html","timestamp":"2014-04-17T18:25:16Z","content_type":null,"content_length":"83609","record_id":"<urn:uuid:09c71fc3-1db0-4b45-913b-204599bdbe6e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Haskell/YAHT/Type basics From Wikibooks, open books for an open world Haskell uses a system of static type checking. This means that every expression in Haskell is assigned a type. For instance 'a' would have type Char, for "character." Then, if you have a function which expects an argument of a certain type and you give it the wrong type, a compile-time error will be generated (that is, you will not be able to compile the program). This vastly reduces the number of bugs that can creep into your program. Furthermore, Haskell uses a system of type inference. This means that you don't even need to specify the type of expressions. For comparison, in C, when you define a variable, you need to specify its type (for instance, int, char, etc.). In Haskell, you needn't do this -- the type will be inferred from context. If you want, you certainly are allowed to explicitly specify the type of an expression; this often helps debugging. In fact, it is sometimes considered good style to explicitly specify the types of outermost functions. Both Hugs and GHCi allow you to apply type inference to an expression to find its type. This is done by using the :t command. For instance, start up your favorite shell and try the following: Prelude> :t 'c' 'c' :: Char This tells us that the expression 'c' has type Char (the double colon :: is used throughout Haskell to specify types). Simple Types[edit] There are a slew of built-in types, including Int (for integers, both positive and negative), Double (for floating point numbers), Char (for single characters), String (for strings), and others. We have already seen an expression of type Char; let's examine one of type String: Prelude> :t "Hello" "Hello" :: String You can also enter more complicated expressions, for instance, a test of equality: Prelude> :t 'a' == 'b' 'a' == 'b' :: Bool You should note that even though this expression is false, it still has a type, namely the type Bool. Bool is short for Boolean (pronounced "boo-lee-uhn") and has two possible values: True and False. You can observe the process of type checking and type inference by trying to get the shell to give you the type of an ill-typed expression. For instance, the equality operator requires that the type of both of its arguments are of the same type. We can see that Char and String are of different types by trying to compare a character to a string: Prelude> :t 'a' == "a" ERROR - Type error in application *** Expression : 'a' == "a" *** Term : 'a' *** Type : Char *** Does not match : [Char] The first line of the error (the line containing "Expression") tells us the expression in which the type error occurred. The second line tells us which part of this expression is ill-typed. The third line tells us the inferred type of this term and the fourth line tells us what it needs to have matched. In this case, it says that type Char doesn't match the type [Char] (a list of characters -- a string in Haskell is represented as a list of characters). As mentioned before, you can explicitly specify the type of an expression using the :: operator. For instance, instead of "a" in the previous example, we could have written ("a"::String). In this case, this has no effect since there's only one possible interpretation of "a". However, consider the case of numbers. You can try: Prelude> :t 5 :: Int 5 :: Int Prelude> :t 5 :: Double 5 :: Double Here, we can see that the number 5 can be instantiated as either an Int or a Double. What if we don't specify the type? 
Prelude> :t 5 5 :: Num a => a Not quite what you expected? What this means, briefly, is that if some type a is an instance of the Num class, then type of the expression 5 can be of type a. If that made no sense, that's okay for now. In Section Classes we talk extensively about type classes (which is what this is). The way to read this, though, is to say "a being an instance of Num implies a." Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error: 1. 'h':'e':'l':'l':'o':[] 2. [5,'a'] 3. (5,'a') 4. (5::Int) + 10 5. (5::Int) + (10::Double) Polymorphic Types[edit] Haskell employs a polymorphic type system. This essentially means that you can have type variables, which we have alluded to before. For instance, note that a function like tail doesn't care what the elements in the list are: Prelude> tail [5,6,7,8,9] Prelude> tail "hello" Prelude> tail ["the","man","is","happy"] This is possible because tail has a polymorphic type: [$\alpha$] -> [$\alpha$]. That means it can take as an argument any list and return a value which is a list of the same type. The same analysis can explain the type of fst: Prelude> :t fst fst :: (a,b) -> a Here, GHCi has made explicit the universal quantification of the type values. That is, it is saying that for all types a and b, fst is a function from (a,b) to a. Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error: 1. snd 2. head 3. null 4. head . tail 5. head . head Type Classes[edit] We saw last section some strange typing having to do with the number five. Before we delve too deeply into the subject of type classes, let's take a step back and see some of the motivation. In many languages (C++, Java, etc.), there exists a system of overloading. That is, a function can be written that takes parameters of differing types. For instance, the canonical example is the equality function. If we want to compare two integers, we should use an integer comparison; if we want to compare two floating point numbers, we should use a floating point comparison; if we want to compare two characters, we should use a character comparison. In general, if we want to compare two things which have type $\alpha$, we want to use an $\alpha-compare$. We call $\alpha$ a type variable since it is a variable whose value is a type. In general, type variables will be written using the first part of the Greek alphabet: $\alpha, \beta, \gamma, \delta, ...$. Unfortunately, this presents some problems for static type checking, since the type checker doesn't know which types a certain operation (for instance, equality testing) will be defined for. There are as many solutions to this problem as there are statically typed languages (perhaps a slight exaggeration, but not so much so). The one chosen in Haskell is the system of type classes. Whether this is the "correct" solution or the "best" solution of course depends on your application domain. It is, however, the one we have, so you should learn to love it. Equality Testing[edit] Returning to the issue of equality testing, what we want to be able to do is define a function == (the equality operator) which takes two parameters, each of the same type (call it $\alpha$), and returns a boolean. But this function may not be defined for every type; just for some. Thus, we associate this function == with a type class, which we call Eq. 
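As a rough preview (instance declarations are covered properly in the section on Classes), here is a minimal hand-written sketch of how a new type might join the Eq class by supplying its own ==; the Color type and its constructors are invented purely for illustration:

data Color = Red | Green | Blue

instance Eq Color where
    Red   == Red   = True
    Green == Green = True
    Blue  == Blue  = True
    _     == _     = False

With this in place, an expression like Red == Blue type checks and evaluates to False, while comparing a Color to a Char would still be rejected at compile time, just as in the Char-versus-String example above.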
If a specific type $\alpha$ belongs to a certain type class (that is, all functions associated with that class are implemented for $\alpha$), we say that $\alpha$ is an instance of that class. For instance, Int is an instance of Eq since equality is defined over integers. The Num Class[edit] In addition to overloading operators like ==, Haskell has overloaded numeric constants (i.e., 1, 2, 3, etc.). This was done so that when you type in a number like 5, the compiler is free to say 5 is an integer or floating point number as it sees fit. It defines the Num class to contain all of these numbers and certain minimal operations over them (addition, for instance). The basic numeric types (Int, Double) are defined to be instances of Num. We have only skimmed the surface of the power (and complexity) of type classes here. There will be much more discussion of them in Section Classes, but we need some more background before we can get there. Before we do that, we need to talk a little more about functions. The Show Class[edit] Another of the standard classes in Haskell is the Show class. Types which are members of the Show class have functions which convert values of that type to a string. This function is called show. For instance show applied to the integer 5 is the string "5"; show applied to the character 'a' is the three-character string "'a'" (the first and last characters are apostrophes). show applied to a string simply puts quotes around it. You can test this in the interpreter: Prelude> show 5 Prelude> show 'a' Prelude> show "Hello World" "\"Hello World\"" The reason the backslashes appear in the last line is because the interior quotes are "escaped", meaning that they are part of the string, not part of the interpreter printing the value. The actual string doesn't contain the backslashes. Some types are not instances of Show; functions for example. If you try to show a function (like sqrt), the compiler or interpreter will give you some cryptic error message, complaining about a missing instance declaration or an illegal class constraint. Function Types[edit] In Haskell, functions are first class values, meaning that just as 1 or 'c' are values which have a type, so are functions like square or ++. Before we talk too much about functions, we need to make a short diversion into very theoretical computer science (don't worry, it won't be too painful) and talk about the lambda calculus. Lambda Calculus[edit] The name "Lambda Calculus", while perhaps daunting, describes a fairly simple system for representing functions. The way we would write a squaring function in lambda calculus is: $\lambda x . x*x$, which means that we take a value, which we will call $x$ (that's what $\lambda x .$ means) and then multiply it by itself. The $\lambda$ is called "lambda abstraction." In general, lambdas can only have one parameter. If we want to write a function that takes two numbers, doubles the first and adds it to the second, we would write: $\lambda x \lambda y . 2*x+y$. When we apply a value to a lambda expression, we remove the outermost $\lambda$ and replace every occurrence of the lambda variable with the value. For instance, if we evaluate $(\lambda x . x*x) 5$, we remove the lambda and replace every occurrence of $x$ with $5$, yielding $(5*5)$ which is $25$. 
In fact, Haskell is largely based on an extension of the lambda calculus, and these two expressions can be written directly in Haskell (we simply replace the $\lambda$ with a backslash(\) and the $.$ with (->); also we don't need to repeat the lambdas; and, of course, in Haskell we have to give them names if we're defining functions): square = \x -> x*x f = \x y -> 2*x + y You can also evaluate lambda expressions in your interactive shell: Prelude> (\x -> x*x) 5 Prelude> (\x y -> 2*x + y) 5 4 We can see in the second example that we need to give the lambda abstraction two arguments, one corresponding to x and the other corresponding to y. Higher-Order Types[edit] "Higher-Order Types" is the name given to those types whose elements are functions. The type given to functions mimicks the lambda calculus representation of the functions. For instance, the definition of square gives $\lambda x . x*x$. To get the type of this, we first ask ourselves what the type of x is. Say we decide x is an Int. Then, we notice that the function square takes an Int and produces a value x*x. We know that when we multiply two Ints together, we get another Int, so the type of the results of square is also an Int. Thus, we say the type of square is Int -> Int. We can apply a similar analysis to the function f above(\x y -> 2*x + y). The value of this function (remember, functions are values) is something which takes a value x and produces a new value, which takes a value y and produces 2*x+y. For instance, if we take f and apply only one number to it, we get $(\lambda x \lambda y . 2x+y) 5$ which becomes our new value $\lambda y . 2(5)+y$, where all occurrences of $x$ have been replaced with the applied value, $5$. So we know that f takes an Int and produces a value of some type, of which we're not sure. But we know the type of this value is the type of $\lambda y . 2(5)+y$. We apply the above analysis and find out that this expression has type Int -> Int. Thus, f takes an Int and produces something which has type Int -> Int. So the type of f is Int -> (Int -> Int). The parentheses are not necessary; in function types, if you have $\alpha \rightarrow \beta \rightarrow \gamma$ it is assumed that $\beta \rightarrow \gamma$ is grouped. If you want the other way, with $\alpha \rightarrow \beta$ grouped, you need to put parentheses around them. This isn't entirely accurate. As we saw before, numbers like 5 aren't really of type Int, they are of type Num a => a. We can easily find the type of Prelude functions using ":t" as before: Prelude> :t head head :: [a] -> a Prelude> :t tail tail :: [a] -> [a] Prelude> :t null null :: [a] -> Bool Prelude> :t fst fst :: (a,b) -> a Prelude> :t snd snd :: (a,b) -> b We read this as: "head" is a function that takes a list containing values of type "a" and gives back a value of type "a"; "tail" takes a list of "a"s and gives back another list of "a"s; "null" takes a list of "a"s and gives back a boolean; "fst" takes a pair of type "(a,b)" and gives back something of type "a", and so on. Saying that the type of fst is (a,b) -> a does not necessarily mean that it simply gives back the first element; it only means that it gives back something with the same type as the first element. We can also get the type of operators like + and * and ++ and :; however, in order to do this we need to put them in parentheses. In general, any function which is used infix (meaning in the middle of two arguments rather than before them) must be put in parentheses when getting its type. 
Prelude> :t (+) (+) :: Num a => a -> a -> a Prelude> :t (*) (*) :: Num a => a -> a -> a Prelude> :t (++) (++) :: [a] -> [a] -> [a] Prelude> :t (:) (:) :: a -> [a] -> [a] The types of + and * are the same, and mean that + is a function which, for some type a which is an instance of Num, takes a value of type a and produces another function which takes a value of type a and produces a value of type a. In short hand, we might say that + takes two values of type a and produces a value of type a, but this is less precise. The type of ++ means, in shorthand, that, for a given type a, ++ takes two lists of as and produces a new list of as. Similarly, : takes a value of type a and another value of type [a] (list of as) and produces another value of type [a]. That Pesky IO Type[edit] You might be tempted to try getting the type of a function like putStrLn: Prelude> :t putStrLn putStrLn :: String -> IO () Prelude> :t readFile readFile :: FilePath -> IO String What in the world is that IO thing? It's basically Haskell's way of representing that these functions aren't really functions. They're called "IO Actions" (hence the IO). The immediate question which arises is: okay, so how do I get rid of the IO. In brief, you can't directly remove it. That is, you cannot write a function with type IO String -> String. The only way to use things with an IO type is to combine them with other functions using (for example), the do notation. For example, if you're reading a file using readFile, presumably you want to do something with the string it returns (otherwise, why would you read the file in the first place). Suppose you have a function f which takes a String and produces an Int. You can't directly apply f to the result of readFile since the input to f is String and the output of readFile is IO String and these don't match. However, you can combine these as: main = do s <- readFile "somefile" let i = f s putStrLn (show i) Here, we use the arrow convention to "get the string out of the IO action" and then apply f to the string (called s). We then, for example, print i to the screen. Note that the let here doesn't have a corresponding in. This is because we are in a do block. Also note that we don't write i <- f s because f is just a normal function, not an IO action. Note: putStrLn (show i) can be simplified to print i if you want. Explicit Type Declarations[edit] It is sometimes desirable to explicitly specify the types of some elements or functions, for one (or more) of the following reasons: Some people consider it good software engineering to specify the types of all top-level functions. If nothing else, if you're trying to compile a program and you get type errors that you cannot understand, if you declare the types of some of your functions explicitly, it may be easier to figure out where the error is. Type declarations are written separately from the function definition. For instance, we could explicitly type the function square as in the following code (an explicitly declared type is called a type signature): square :: Num a => a -> a square x = x*x These two lines do not even have to be next to each other. However, the type that you specify must match the inferred type of the function definition (or be more specific). In this definition, you could apply square to anything which is an instance of Num: Int, Double, etc. 
However, if you knew apriori that square were only going to be applied to value of type Int, you could refine its type square :: Int -> Int square x = x*x Now, you could only apply square to values of type Int. Moreover, with this definition, the compiler doesn't have to generate the general code specified in the original function definition since it knows you will only apply square to Ints, so it may be able to generate faster code. If you have extensions turned on ("-98" in Hugs or "-fglasgow-exts" in GHC(i)), you can also add a type signature to expressions and not just functions. For instance, you could write: square (x :: Int) = x*x which tells the compiler that x is an Int; however, it leaves the compiler alone to infer the type of the rest of the expression. What is the type of square in this example? Make your guess then you can check it either by entering this code into a file and loading it into your interpreter or by asking for the type of the expression: Prelude> :t (\(x :: Int) -> x*x) since this lambda abstraction is equivalent to the above function declaration. Functional Arguments[edit] In the section on Lists we saw examples of functions taking other functions as arguments. For instance, map took a function to apply to each element in a list, filter took a function that told it which elements of a list to keep, and foldl took a function which told it how to combine list elements together. As with every other function in Haskell, these are well-typed. Let's first think about the map function. Its job is to take a list of elements and produce another list of elements. These two lists don't necessarily have to have the same types of elements. So map will take a value of type [a] and produce a value of type [b]. How does it do this? It uses the user-supplied function to convert. In order to convert an a to a b, this function must have type a -> b. Thus, the type of map is (a -> b) -> [a] -> [b], which you can verify in your interpreter with ":t". We can apply the same sort of analysis to filter and discern that it has type (a -> Bool) -> [a] -> [a]. As we presented the foldr function, you might be tempted to give it type (a -> a -> a) -> a -> [a] -> a, meaning that you take a function which combines two as into another one, an initial value of type a, a list of as to produce a final value of type a. In fact, foldr has a more general type: (a -> b -> b) -> b -> [a] -> b. So it takes a function which turn an a and a b into a b, an initial value of type b and a list of as. It produces a b. To see this, we can write a function count which counts how many members of a list satisfy a given constraint. You can of course use filter and length to do this, but we will also do it using foldr: module Count import Char count1 p l = length (filter p l) count2 p l = foldr (\x c -> if p x then c+1 else c) 0 l The functioning of count1 is simple. It filters the list l according to the predicate p, then takes the length of the resulting list. On the other hand, count2 uses the initial value (which is an integer) to hold the current count. For each element in the list l, it applies the lambda expression shown. This takes two arguments, c which holds the current count and x which is the current element in the list that we're looking at. It checks to see if p holds about x. If it does, it returns the new value c+1, increasing the count of elements for which the predicate holds. If it doesn't, it just returns c, the old count. 
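As a quick check, here is a hypothetical interpreter session (the prompt and the inputs are made up for illustration) assuming the Count module above has been loaded:

Count> count1 even [1,2,3,4,5]
2
Count> count2 even [1,2,3,4,5]
2
Count> count2 (> 3) [1,2,3,4,5,6]
3

Both definitions give the same answers; the difference is that count2 never builds the intermediate filtered list that count1 passes to length.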
Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error: 1. \x -> [x] 2. \x y z -> (x,y:z:[]) 3. \x -> x + 5 4. \x -> "hello, world" 5. \x -> x 'a' 6. \x -> x x 7. \x -> x + x Data Types[edit] Tuples and lists are nice, common ways to define structured values. However, it is often desirable to be able to define our own data structures and functions over them. So-called "datatypes" are defined using the data keyword. For instance, a definition of a pair of elements (much like the standard, built-in pair type) could be: data Pair a b = Pair a b Let's walk through this code one word at a time. First we say "data" meaning that we're defining a datatype. We then give the name of the datatype, in this case, "Pair." The "a" and "b" that follow "Pair" are unique type parameters, just like the "a" is the type of the function map. So up until this point, we've said that we're going to define a data structure called "Pair" which is parameterized over two types, a and b. Note that you can't have Pair a a = Pair a a — in this case write Pair a = Pair a a. After the equals sign, we specify the constructors of this data type. In this case, there is a single constructor, "Pair" (this doesn't necessarily have to have the same name as the type, but in the case of a single constructor it seems to make more sense). After this pair, we again write "a b", which means that in order to construct a Pair we need two values, one of type a and one of type b. This definition introduces a function, Pair :: a -> b -> Pair a b that you can use to construct Pairs. If you enter this code into a file and load it, you can see how these are constructed: Datatypes> :t Pair Pair :: a -> b -> Pair a b Datatypes> :t Pair 'a' Pair 'a' :: a -> Pair Char a Datatypes> :t Pair 'a' "Hello" :t Pair 'a' "Hello" Pair 'a' "Hello" :: Pair Char [Char] So, by giving Pair two values, we have completely constructed a value of type Pair. We can write functions involving pairs as: pairFst (Pair x y) = x pairSnd (Pair x y) = y In this, we've used the pattern matching capabilities of Haskell to look at a pair and extract values from it. In the definition of pairFst we take an entire Pair and extract the first element; similarly for pairSnd. We'll discuss pattern matching in much more detail in the section on Pattern matching. 1. Write a data type declaration for Triple, a type which contains three elements, all of different types. Write functions tripleFst, tripleSnd and tripleThr to extract respectively the first, second and third. 2. Write a datatype Quadruple which holds four elements. However, the first two elements must be the same type and the last two elements must be the same type. Write a function firstTwo which returns a list containing the first two elements and a function lastTwo which returns a list containing the last two elements. Write type signatures for these functions. Multiple Constructors[edit] We have seen an example of the data type with one constructor: Pair. It is also possible (and extremely useful) to have multiple constructors. Let us consider a simple function which searches through a list for an element satisfying a given predicate and then returns the first element satisfying that predicate. What should we do if none of the elements in the list satisfy the predicate? 
A few options are listed below: • Raise an error • Loop indefinitely • Write a check function • Return the first element • $...$ Raising an error is certainly an option (see the section on Exceptions to see how to do this). The problem is that it is difficult/impossible to recover from such errors. Looping indefinitely is possible, but not terribly useful. We could write a sister function which checks to see if the list contains an element satisfying a predicate and leave it up to the user to always use this function first. We could return the first element, but this is very ad-hoc and difficult to remember; and what if the list itself is empty? The fact that there is no basic option to solve this problem simply means we have to think about it a little more. What are we trying to do? We're trying to write a function which might succeed and might not. Furthermore, if it does succeed, it returns some sort of value. Let's write a datatype: data Maybe a = Nothing | Just a This is one of the most common datatypes in Haskell and is defined in the Prelude. Here, we're saying that there are two possible ways to create something of type Maybe a. The first is to use the nullary constructor Nothing, which takes no arguments (this is what "nullary" means). The second is to use the constructor Just, together with a value of type a. The Maybe type is useful in all sorts of circumstances. For instance, suppose we want to write a function (like head) which returns the first element of a given list. However, we don't want the program to die if the given list is empty. We can accomplish this with a function like: firstElement :: [a] -> Maybe a firstElement [] = Nothing firstElement (x:xs) = Just x The type signature here says that firstElement takes a list of as and produces something with type Maybe a. In the first line of code, we match against the empty list []. If this match succeeds (i.e., the list is, in fact, empty), we return Nothing. If the first match fails, then we try to match against x:xs which must succeed. In this case, we return Just x. For our findElement function, we represent failure by the value Nothing and success with value a by Just a. Our function might look something like this: findElement :: (a -> Bool) -> [a] -> Maybe a findElement p [] = Nothing findElement p (x:xs) = if p x then Just x else findElement p xs The first line here gives the type of the function. In this case, our first argument is the predicate (and takes an element of type a and returns True if and only if the element satisfies the predicate); the second argument is a list of as. Our return value is maybe an a. That is, if the function succeeds, we will return Just a and if not, Nothing. Another useful datatype is the Either type, defined as: data Either a b = Left a | Right b This is a way of expressing alternation. That is, something of type Either a b is either a value of type a (using the Left constructor) or a value of type b (using the Right constructor). 1. Write a datatype Tuple which can hold one, two, three or four elements, depending on the constructor (that is, there should be four constructors, one for each number of arguments). Also provide functions tuple1 through tuple4 which take a tuple and return Just the value in that position, or Nothing if the number is invalid (i.e., you ask for the tuple4 on a tuple holding only two 2. 
Based on our definition of Tuple from the previous exercise, write a function which takes a Tuple and returns either the value (if it's a one-tuple), a Haskell-pair (i.e., ('a',5)) if it's a two-tuple, a Haskell-triple if it's a three-tuple or a Haskell-quadruple if it's a four-tuple. You will need to use the Either type to represent this. Recursive Datatypes[edit] We can also define recursive datatypes. These are datatypes whose definitions are based on themselves. For instance, we could define a list datatype as: data List a = Nil | Cons a (List a) In this definition, we have defined what it means to be of type List a. We say that a list is either empty (Nil) or it's the Cons of a value of type a and another value of type List a. This is almost identical to the actual definition of the list datatype in Haskell, except that uses special syntax where [] corresponds to Nil and : corresponds to Cons. We can write our own length function for our lists as: listLength Nil = 0 listLength (Cons x xs) = 1 + listLength xs This function is slightly more complicated and uses recursion to calculate the length of a List. The first line says that the length of an empty list (a Nil) is $0$. This much is obvious. The second line tells us how to calculate the length of a non-empty list. A non-empty list must be of the form Cons x xs for some values of x and xs. We know that xs is another list and we know that whatever the length of the current list is, it's the length of its tail (the value of xs) plus one (to account for x). Thus, we apply the listLength function to xs and add one to the result. This gives us the length of the entire list. Write functions listHead, listTail, listFoldl and listFoldr which are equivalent to their Prelude twins, but function on our List datatype. Don't worry about exceptional conditions on the first two. Binary Trees[edit] We can define datatypes that are more complicated than lists. Suppose we want to define a structure that looks like a binary tree. A binary tree is a structure that has a single root node; each node in the tree is either a "leaf" or a "branch." If it's a leaf, it holds a value; if it's a branch, it holds a value and a left child and a right child. Each of these children is another node. We can define such a data type as: data BinaryTree a = Leaf a | Branch (BinaryTree a) a (BinaryTree a) In this datatype declaration we say that a BinaryTree of as is either a Leaf which holds an a, or it's a branch with a left child (which is a BinaryTree of as), a node value (which is an a), and a right child (which is also a BinaryTree of as). It is simple to modify the listLength function so that instead of calculating the length of lists, it calculates the number of nodes in a BinaryTree. Can you figure out how? We can call this function treeSize. The solution is given below: treeSize (Leaf x) = 1 treeSize (Branch left x right) = 1 + treeSize left + treeSize right Here, we say that the size of a leaf is $1$ and the size of a branch is the size of its left child, plus the size of its right child, plus one. 1. Write a function elements which returns the elements in a BinaryTree in a bottom-up, left-to-right manner (i.e., the first element returned is the left-most leaf, followed by its parent's value, followed by the other child's value, and so on). The result type should be a normal Haskell list. 2. Write a foldr function treeFoldr for BinaryTrees and rewrite elements in terms of it (call the new one elements2). 3. 
Write a foldl function treeFoldl for BinaryTrees and rewrite elements in terms of it (call the new one elements3). Enumerated Sets[edit] You can also use datatypes to define things like enumerated sets, for instance, a type which can only have a constrained number of values. We could define a color type: data Color = Red | Orange | Yellow | Green | Blue | Purple | White | Black This would be sufficient to deal with simple colors. Suppose we were using this to write a drawing program, we could then write a function to convert between a Color and a RGB triple. We can write a colorToRGB function, as: colorToRGB Red = (255,0,0) colorToRGB Orange = (255,128,0) colorToRGB Yellow = (255,255,0) colorToRGB Green = (0,255,0) colorToRGB Blue = (0,0,255) colorToRGB Purple = (255,0,255) colorToRGB White = (255,255,255) colorToRGB Black = (0,0,0) If we wanted also to allow the user to define his own custom colors, we could change the Color datatype to something like: data Color = Red | Orange | Yellow | Green | Blue | Purple | White | Black | Custom Int Int Int -- R G B components And add a final definition for colorToRGB: colorToRGB (Custom r g b) = (r,g,b) The Unit type[edit] A final useful datatype defined in Haskell (from the Prelude) is the unit type. Its definition is: data () = () The only true value of this type is (). This is essentially the same as a void type in a language like C or Java and will be useful when we talk about IO in the chapter Io. We'll dwell much more on data types in the sections on Pattern matching and Datatypes. Continuation Passing Style[edit] There is a style of functional programming called "Continuation Passing Style" (also simply "CPS"). The idea behind CPS is to pass around as a function argument what to do next. I will handwave through an example which is too complex to write out at this point and then give a real example, though one with less motivation. Consider the problem of parsing. The idea here is that we have a sequence of tokens (words, letters, whatever) and we want to ascribe structure to them. The task of converting a string of Java tokens to a Java abstract syntax tree is an example of a parsing problem. So is the task of parsing English sentences (though the latter is extremely difficult, even for native English users parsing sentences from the real world). Suppose we're parsing something like C or Java where functions take arguments in parentheses. But for simplicity, assume they are not separated by commas. That is, a function call looks like myFunction(x y z). We want to convert this into something like a pair containing first the string "myFunction" and then a list with three string elements: "x", "y" and "z". The general approach to solving this would be to write a function which parses function calls like this one. First it would look for an identifier ("myFunction"), then for an open parenthesis, then for zero or more identifiers, then for a close parenthesis. One way to do this would be to have two functions: parseFunction :: [Token] -> Maybe ((String, [String]), [Token]) parseIdentifier :: [Token] -> Maybe (String, [Token]) The idea would be that if we call parseFunction, if it doesn't return Nothing, then it returns the pair described earlier, together with whatever is left after parsing the function. Similarly, parseIdentifier will parse one of the arguments. 
If it returns Nothing, then it's not an argument; if it returns Just something, then that something is the argument paired with the rest of the What the parseFunction function would do is to parse an identifier. If this fails, it fails itself. Otherwise, it continues and tries to parse an open parenthesis. If that succeeds, it repeatedly calls parseIdentifier until that fails. It then tries to parse a close parenthesis. If that succeeds, then it's done. Otherwise, it fails. There is, however, another way to think about this problem. The advantage to this solution is that functions no longer need to return the remaining tokens (which tends to get ugly). Instead of the above, we write functions: parseFunction :: [Token] -> ((String, [String]) -> [Token] -> a) -> ([Token] -> a) -> a parseIdentifier :: [Token] -> (String -> [Token] -> a) -> ([Token] -> a) -> a Let's consider parseIdentifier. This takes three arguments: a list of tokens and two continuations. The first continuation is what to do when you succeed. The second continuation is what to do if you fail. What parseIdentifier does, then, is try to read an identifier. If this succeeds, it calls the first continuation with that identifier and the remaining tokens as arguments. If reading the identifier fails, it calls the second continuation with all the tokens. Now consider parseFunction. Recall that it wants to read an identifier, an open parenthesis, zero or more identifiers and a close parenthesis. Thus, the first thing it does is call parseIdentifier. The first argument it gives is the list of tokens. The first continuation (which is what parseIdentifier should do if it succeeds) is in turn a function which will look for an open parenthesis, zero or more arguments and a close parethesis. The second continuation (the failure argument) is just going to be the failure function given to parseFunction. Now, we simply need to define this function which looks for an open parenthesis, zero or more arguments and a close parethesis. This is easy. We write a function which looks for the open parenthesis and then calls parseIdentifier with a success continuation that looks for more identifiers, and a "failure" continuation which looks for the close parenthesis (note that this failure doesn't really mean failure -- it just means there are no more arguments left). I realize this discussion has been quite abstract. I would willingly give code for all this parsing, but it is perhaps too complex at the moment. Instead, consider the problem of folding across a list. We can write a CPS fold as: cfold' f z [] = z cfold' f z (x:xs) = f x z (\y -> cfold' f y xs) In this code, cfold' takes a function f which takes three arguments, slightly different from the standard folds. The first is the current list element, x, the second is the accumulated element, z, and the third is the continuation: basically, what to do next. We can write a wrapper function for cfold' that will make it behave more like a normal fold: cfold f z l = cfold' (\x t g -> f x (g t)) z l We can test that this function behaves as we desire: CPS> cfold (+) 0 [1,2,3,4] CPS> cfold (:) [] [1,2,3] One thing that's nice about formulating cfold in terms of the helper function cfold' is that we can use the helper function directly. 
This enables us to change, for instance, the evaluation order of the fold very easily: CPS> cfold' (\x t g -> (x : g t)) [] [1..10] CPS> cfold' (\x t g -> g (x : t)) [] [1..10] The only difference between these calls to cfold' is whether we call the continuation before or after constructing the list. As it turns out, this slight difference changes the behavior for being like foldr to being like foldl. We can evaluate both of these calls as follows (let f be the folding function): cfold' (\x t g -> (x : g t)) [] [1,2,3] ==> cfold' f [] [1,2,3] ==> f 1 [] (\y -> cfold' f y [2,3]) ==> 1 : ((\y -> cfold' f y [2,3]) []) ==> 1 : (cfold' f [] [2,3]) ==> 1 : (f 2 [] (\y -> cfold' f y [3])) ==> 1 : (2 : ((\y -> cfold' f y [3]) [])) ==> 1 : (2 : (cfold' f [] [3])) ==> 1 : (2 : (f 3 [] (\y -> cfold' f y []))) ==> 1 : (2 : (3 : (cfold' f [] []))) ==> 1 : (2 : (3 : [])) ==> [1,2,3] cfold' (\x t g -> g (x:t)) [] [1,2,3] ==> cfold' f [] [1,2,3] ==> (\x t g -> g (x:t)) 1 [] (\y -> cfold' f y [2,3]) ==> (\g -> g [1]) (\y -> cfold' f y [2,3]) ==> (\y -> cfold' f y [2,3]) [1] ==> cfold' f [1] [2,3] ==> (\x t g -> g (x:t)) 2 [1] (\y -> cfold' f y [3]) ==> cfold' f (2:[1]) [3] ==> cfold' f [2,1] [3] ==> (\x t g -> g (x:t)) 3 [2,1] (\y -> cfold' f y []) ==> cfold' f (3:[2,1]) [] ==> [3,2,1] In general, continuation passing style is a very powerful abstraction, though it can be difficult to master. We will revisit the topic more thoroughly later in the book. 1. Test whether the CPS-style fold mimics either of foldr and foldl. If not, where is the difference? 2. Write map and filter using continuation passing style.
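For reference, the example calls above and in the previous section produce exactly the values the evaluation traces suggest; a small self-contained script (expected results in the comments) makes the foldr-like versus foldl-like behaviour easy to verify. The definitions are copied from the text; only the type signatures are added here.

-- CPS fold from the text, with explicit types.
cfold' :: (a -> b -> (b -> b) -> b) -> b -> [a] -> b
cfold' _ z []     = z
cfold' f z (x:xs) = f x z (\y -> cfold' f y xs)

cfold :: (a -> b -> b) -> b -> [a] -> b
cfold f z l = cfold' (\x t g -> f x (g t)) z l

main :: IO ()
main = do
  print (cfold (+) 0 [1, 2, 3, 4 :: Int])                   -- 10
  print (cfold (:) [] [1, 2, 3 :: Int])                     -- [1,2,3]
  print (cfold' (\x t g -> x : g t) [] [1 .. 10 :: Int])    -- [1,2,3,4,5,6,7,8,9,10] (foldr-like)
  print (cfold' (\x t g -> g (x : t)) [] [1 .. 10 :: Int])  -- [10,9,8,7,6,5,4,3,2,1] (foldl-like)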
{"url":"http://en.wikibooks.org/wiki/Haskell/YAHT/Type_basics","timestamp":"2014-04-17T04:03:28Z","content_type":null,"content_length":"92082","record_id":"<urn:uuid:43ea9e80-f5b4-4493-b357-4c8ddf961c85>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Sense = Preview Document = Member Document = Pin to Pinterest Printable flashcards with colorful dandelions to show each number 1-10. Printable flashcards with dandelions in order from 1-10. Blank cards. Addition and subtraction practice and assessment. Includes; practice worksheets, in and out boxes, and matching game. This penguin theme unit is a great way to practice counting and adding to 20. This 21 page unit includes; tracing numbers, cut and paste, finding patterns, ten frame activity, in and out boxes and much more! CC: Math: K.CC.B.4 • Set of numbers 1-20 in a penguin shape. Great to use for bulletin boards. • This chart has a variety of uses for your counting and place value lessons. Students can color the penguins for patterning, cut and paste in order, and use for addition and subtraction. • This set includes penguins with a number in each shape, up to twenty. Also, included are math symbols: plus sign, minus sign and equals sign. Print off as many as you need for counting, adding or subtracting. Print off two copies to create a matching game. Interactive .notebook activity that asks students to represent a number (1-10) in five different ways; numberal, ten frame, tally marks, picture and number line. Common Core: Math: K.CC.3, K.CC.4 This document visually represents how numbers can be classified as greater or least and provides practice for placing numbers in correct order. This worksheets shows how numbers can be reprsented visually as greater or least. Provides practice for placing numbers in correct order. Place the numbers in order from least to greatest. Visually represents how numbers can be classified. These posters are used to show students 4 differnt ways a number of objects can be represented; numeral, word form, ten frame and domino pattern. Common Core Math: K.CC.3, K.CC.4 These posters are used to show students 4 different ways a number of objects can be represented; numeral, word form, ten frame and domino pattern. This supports students learning numbers based on the Common Core Standards for kindgarten. Common Core Math: K.CC.3, K.CC.4 Smart Notebook file: This is a fun and colorful interactive activity with an aquarium theme providing practice for number patterns. Audio feedback. A set of five worksheets of numbers from 20-0 with missing numbers to be written in spaces by students. • Four pumpkins to a page, with numbers 1-12. Includes minus, add, and equal signs. Room to draw pumpkin faces. One hundred large, color, numbered, heart-shaped tiles to use with "Heart Counting Board." Tie-in to Valentine's and 100s Day. "Circle the number of elephants you see on each line." Counting up to 5. "Count the number of frogs you see on each line." Counting up to 5. Match the words to the correct number of teddy bears. Match the numerals to the correct number of teddy bears. Match the number words to the numerals (open 4). Match the numerals to the correct number of teddy bears. (Open 4) Match the number word/numeral cards to themselves or to the bears. Match the number word/numeral cards to themselves or to the bears. Cut out math counters (available on abcteach) and glue the correct number into each box. • Match the circle with the flowers to the section on the caterpillar with the corresponding number. • Match the game pieces to the correct numbers on the caterpillar game board to solve the math facts. • Match the game pieces with the correct circles on the caterpillar game board to solve the math facts. • Count the flowers on each circle. 
Match the circle to the section on the caterpillar with the corresponding number. Count the spots on each ladybug. Match the ladybug to the flower with the corresponding number. Count the spots on each ladybug. Match the ladybug to the flower with the corresponding number. • [member-created using abctools] Match numerals with the number word. Easy matching game. Can be used on a bulletin board.
{"url":"http://www.abcteach.com/directory/prek-early-childhood-mathematics-number-sense-2999-4-1","timestamp":"2014-04-16T16:39:20Z","content_type":null,"content_length":"158169","record_id":"<urn:uuid:75e32fac-7791-4aec-9d1f-5e339c265ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the best unbiased estimator of 1/b of gamma dist. May 5th 2009, 06:19 PM #1 May 2009 Hi guys. I have a question and I hope someone can help me out Let X1,.....Xn be a random sample from gamma(a,b) with a known. Find the best unbiased estimator of 1/b Waiting for your response as soon as you can Thanks in advance Read this: Best Unbiased Estimators 1 I need to know how you're writing your gamma density. Sometimes the b is in the numerator, sometimes it in the denomiator. 2 is best=min variance? Hence UMVUE Thank you so much guys for speedy response. Let X₁. X₂, X₃,…, Xn be a random sample from gamma(α, Β) with α known. Find the best unbiased estimator of 1/ Β Note 1-α = alpha, Β=beta 2- Yes the best unbiased estimatore is the same as UMVUE I cannot help you until I see how you write your density. I said that yesterday, nor do I know what '1-α = alpha, Β=beta' means. Last edited by matheagle; May 6th 2009 at 04:48 PM. Hi all This is the function OK, I'll work with that, BUT that is not how most people write the gamma density. I place beta in the denomiator. The likelihood function is $L(\beta )={\beta^{n\alpha}\over \bigl(\Gamma (\alpha )\bigr)^n}\bigl(\Pi_{i=1}^nx_i\bigr)^{\alpha-1}e^{-\beta\sum_{i=1}^nx_i}$. Thus with $\alpha$ known we have $\sum_{i=1}^nx_i$ suficient for $\beta$. And since $E\biggl(\sum_{i=1}^nX_i\biggr)={n\alpha\over \beta}$ we have ${\sum_{i=1}^nX_i\over n\alpha}$ UMVUE for ${1\over \beta}$. I could have done this yesterday, but the mean of my gamma's is $\alpha\beta$ and getting an unbiased estimator of $1/\beta$ is a lot harder in that case. And I wasn't going to do this until I knew how you were writing your density. Last edited by matheagle; May 6th 2009 at 04:48 PM. OK, I'll work with that, BUT that is not how most people write the gamma density. I place beta in the denomiator. The likelihood function is $L(\beta )={\beta^{n\alpha}\over \bigl(\Gamma (\alpha )\bigr)^n}\bigl(\Pi_{i=1}^nx_i\bigr)^{\alpha-1}e^{-\beta\sum_{i=1}^nx_i}$. Thus with $\alpha$ known we have $\sum_{i=1}^nx_i$ suficient for $\beta$. And since $E\biggl(\sum_{i=1}^nX_i\biggr)={n\alpha\over \beta}$ we have ${\sum_{i=1}^nX_i\over n\alpha}$ UMVUE for ${1\over \beta}$. I could have done this yesterday, but the mean of my gamma's is $\alpha\beta$ and getting an unbiased estimator of $1/\beta$ is a lot harder in that case. And I wasn't going to do this until I knew how you were writing your density. I met the same problem with this question. if the the mean of gamma's is $\alpha\beta$. How to get an unbiased estimator of $1/\beta$? Anyone can help? THANKS~ It can be shown that Sum(Xi) is not only sufficient but complete for beta (as w(beta) of exponential function contains an open set), so try 1/sum(xi) to estimate 1/beta. Lehmann-Scheffe tells us that an unbiased estimator that is a function of a complete statistic is the best unbiased estimator. It can be shown that, assuming iid xi, Sum(xi) is distributed as Gamma(n*alpha,beta). Given the above, let Y=(1/sum(xi)). Y is distributed as an inverted gamma(n*alpha, 1/beta) with mean=(1/beta)/(n*alpha-1). Thus, E(1/sum(xi)) = E(Y) = (1/beta)/(n*alpha-1), which is obviously a biased estimator of 1/beta. Now let T = (n*alpha - 1)/sum(xi) be the unbiased estimator of 1/beta which is a function of a complete statistic. Thus, T is the best unbiased estimator of 1/beta, but it can be shown that it does not attain the lower Cramer-Rao bound. I see. Thanks for the info. If the pdf can be written like this, it'd be so easy. 
Like, we can show that the density belongs to an exponential family, so $\sum_{i=1}^n X_i$ is a complete sufficient statistic for $\beta$. Then we can use the MLE to get an estimator $\widehat{1/\beta}$. Then, to show that $\widehat{1/\beta}$ is unbiased, we calculate its mean and get $1/\beta$, so it is an unbiased estimator. So we can conclude that $\widehat{1/\beta}$ is the best unbiased estimator. Is that right?
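Putting the two parameterizations used in this thread side by side (this only restates results already given above, it is not a new derivation): if the density is written with the rate in the exponent, $f(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$, then $E\bigl(\sum_{i=1}^n X_i\bigr)=\frac{n\alpha}{\beta}$, so $\frac{\sum_{i=1}^n X_i}{n\alpha}$ is unbiased for $\frac{1}{\beta}$ and, being a function of the complete sufficient statistic $\sum_{i=1}^n X_i$, it is the UMVUE. If instead the mean is $\alpha\beta$ (scale parameterization), then $\sum_{i=1}^n X_i\sim\mathrm{Gamma}(n\alpha,\beta)$ and $E\bigl(\tfrac{1}{\sum_i X_i}\bigr)=\frac{1}{\beta(n\alpha-1)}$ for $n\alpha>1$, so $T=\frac{n\alpha-1}{\sum_{i=1}^n X_i}$ is the unbiased function of the complete sufficient statistic and hence the UMVUE of $\frac{1}{\beta}$ by Lehmann-Scheffe.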
{"url":"http://mathhelpforum.com/advanced-statistics/87705-find-best-unbiased-estimator-1-b-gamma-dist.html","timestamp":"2014-04-17T11:35:51Z","content_type":null,"content_length":"78961","record_id":"<urn:uuid:57c2ca61-0cd9-4189-b376-9db9ea8cfc63>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
L_2-norm representation

Let $$ f^{\alpha}_+(x)=\frac{1}{\Gamma(\alpha+1)}\sum_{k\ge 0}(-1)^k{\alpha+1 \choose k}(x-k)^{\alpha}_+, $$ where $\alpha > -\frac 12$. I am wondering if one can get a nice representation of the $L^2$-norm of the function $f^{\alpha}_+(x)$, namely $$ \int_{-\infty}^{\infty}(f^{\alpha}_+(x))^2dx. $$ (Here $y^{\alpha}_+=\max(0, y)^{\alpha}$ is a one-sided power function). Thank you.

Tags: fa.functional-analysis, normalization, banach-spaces

Comments: "I fixed the Latex." – Ulrich Pennig, May 16 '12. "Cross-posted to MSE: math.stackexchange.com/questions/145188/…" – Yemon Choi, Jun 8 '12.
{"url":"http://mathoverflow.net/questions/97141/l-2-norm-representation","timestamp":"2014-04-16T11:10:33Z","content_type":null,"content_length":"47869","record_id":"<urn:uuid:f5d5ad53-c1d9-423d-a19f-fc6a3b4312d2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Mc Cook, IL Algebra 2 Tutor Find a Mc Cook, IL Algebra 2 Tutor ...Proper time management and attention to detail are the keys to a high score. This requires effortful engagement with the material and some open mindedness on the part of the student. The tutors job is the provide the student with the strategies that will help them overcome an obstacles. 24 Subjects: including algebra 2, calculus, physics, GRE ...While in Baltimore I began working with MERIT to help tutor some of the city's most promising youth. I worked specifically to develop and implement results driven SAT math lessons geared towards raising overall SAT scores. We have seen great success in the program and it continues to change the game for those students that are able to be a part of it. 20 Subjects: including algebra 2, physics, geometry, algebra 1 I am a certified Illinois Math teacher who taught Algebra, Pre-Algebra, Geometry, and Algebra II in a Chicago High School. I also taught Computer Applications and Computer Science including AP Computer Science. I left teaching to become a corporate trainer at several of Chicago’s largest law firms. 14 Subjects: including algebra 2, geometry, algebra 1, GED ...Topics included set theory, graph theory, combinatorics. Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are common in discrete math. Other topics in which... 22 Subjects: including algebra 2, calculus, geometry, statistics ...I give students additional practice problems to ensure that they understand the concepts, and I have the student explain the concepts back to me. I also help students prepare for the ACT. I provide my own materials, including actual ACT exams from previous years. 21 Subjects: including algebra 2, reading, study skills, algebra 1 Related Mc Cook, IL Tutors Mc Cook, IL Accounting Tutors Mc Cook, IL ACT Tutors Mc Cook, IL Algebra Tutors Mc Cook, IL Algebra 2 Tutors Mc Cook, IL Calculus Tutors Mc Cook, IL Geometry Tutors Mc Cook, IL Math Tutors Mc Cook, IL Prealgebra Tutors Mc Cook, IL Precalculus Tutors Mc Cook, IL SAT Tutors Mc Cook, IL SAT Math Tutors Mc Cook, IL Science Tutors Mc Cook, IL Statistics Tutors Mc Cook, IL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Argo, IL algebra 2 Tutors Brookfield, IL algebra 2 Tutors Countryside, IL algebra 2 Tutors Forest View, IL algebra 2 Tutors Hodgkins, IL algebra 2 Tutors La Grange Park algebra 2 Tutors La Grange, IL algebra 2 Tutors Lyons, IL algebra 2 Tutors Mccook, IL algebra 2 Tutors North Riverside, IL algebra 2 Tutors Riverside, IL algebra 2 Tutors Summit Argo algebra 2 Tutors Summit, IL algebra 2 Tutors Western, IL algebra 2 Tutors Willow Springs, IL algebra 2 Tutors
{"url":"http://www.purplemath.com/Mc_Cook_IL_Algebra_2_tutors.php","timestamp":"2014-04-18T00:36:20Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:cf567521-bc8a-44a4-8bc3-670e418d32a0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Collected Works 2001; 1198 pp; hardcover Volume: 16 ISBN-10: 0-8218-0688-2 ISBN-13: 978-0-8218-0688-3 List Price: US$216 Member Price: US$172.80 Order Code: CWORKS/16 A lead figure in twentieth century noncommutative algebra, S. A. Amitsur's contributions are wide-ranging and enduring. This volume collects almost all of his work. The papers are organized into broad topic areas: general ring theory, rings satisfying a polynomial identity, combinatorial polynomial identity theory, and division algebras. Included are essays by the editors on Amitsur's work in these four areas and a biography of Amitsur written by A. Mann. This volume makes a fine addition to any mathematics book collection. Graduate students and research mathematicians interested in ring theory, combinatorial algebra, and number theory. "These volumes are a collection of short and very readable papers, which many working algebraists will want to own; they should certainly form part of every mathematics library." -- Zentralblatt MATH Part 1 General ring theory • Amitsur and ring theory • A generalization of a theorem on linear differential equations • A general theory of radicals. I. Radicals in complete lattices • A general theory of radicals. II. Radicals in rings and bicategories • A general theory of radicals. III. Applications • Algebras over infinite fields • Radicals of polynomial rings • Invariant submodules of simple rings • Derivations in simple rings • The radical of field extensions • Countably generated division algebras over nondenumerable fields • Commutative linear differential operators • Rings with a pivotal monomial • On the semi-simplicity of group algebras • Derived functors in abelian categories • Remarks on principal ideal rings • Generalized polynomial identities and pivotal monomials • Rings with involution • Rings of quotients and Morita contexts • Nil radicals. Historical notes and some new results • On rings of quotients • Recognition of matrix rings II Rings satisfying a polynomial identity • Amitsur and PI rings • Nil PI-rings • An embedding of PI-rings • On rings with identities • The \(T\)-ideals of the free ring • A generalization of Hilbert's Nullstellensatz • Groups with representations of bounded degree II • Jacobson-rings and Hilbert algebras with polynomial identities • Nil semi-groups of rings with a polynomial identity • Rational identities and applications to algebra and geometry • Prime rings having polynomial identities with arbitrary coefficients • Identities in rings with involutions • A noncommutative Hilbert basis theorem and subrings of matrices • Embeddings in matrix rings • Some results on rings with polynomial identities • A note on PI-rings • On universal embeddings in matrix rings • Polynomial identities and Azumaya algebras • Polynomial identities • Central embeddings in semi-simple rings • Polynomials over division rings • Prime ideals in PI-rings • Finite-dimensional representations of PI algebras • GK-dimensions of corners and ideals • Contributions of PI theory to Azumaya algebras • Finite-dimensional representation of PI algebras, II • Algebras over infinite fields, revisited • Acknowledgement Part 2 Combinatoiral polynomial identity theory • Amitsur and combinatorial P.I. 
theory • Minimal identities for algebras • Remarks on minimal identities for algebras • The identities of PI-rings • Identities and generators of matrix rings • Identities and linear dependence • On a central identity for matrix rings • Alternating identities • PI-algebras and their cocharacters • The sequence of codimensions of PI-algebras Division algebras • Amitsur and division algebras • Contributions to the theory of central simple algebras • La représentation d'algèbres centrales simples • Construction d'algèbres centrales simples sur des corps de caractéristique zéro • Non-commutative cyclic fields • Differential polynomials and division algebras • Generic splitting fields of central simple algebras • Finite subgroups of division rings • Some results on central simple algebras • On arithmetic functions • Simple algebras and cohomology groups of arbitrary fields • Some results on arithmetic functions • Finite dimensional central divison algebras • Homology groups and double complexes for arbitrary fields • On a lemma in elementary proofs of the prime number theorem • Complexes of rings • On central division algebras • The generic division rings • Generic abelian crossed products and \(p\)-algebras • Division algebras of degree 4 and 8 with involution • On the characteristic polynomial of a sum of matrices • Generic splitting fields. Brauer groups in ring theory and algebraic geometry. • Extension of derivations to central simple algebras • Kummer subfields of Malcev-Neumann division algebras • Symplectic modules • Totally ramified splitting fields of central simple algebras over Henselian fields • Galois splitting fields of a universal division algebra • Elements of reduced trace 0 • Finite-dimensional subalgebras of division rings • Acknowledgment
{"url":"http://ams.org/bookstore?fn=20&arg1=cworksseries&ikey=CWORKS-16","timestamp":"2014-04-20T20:41:32Z","content_type":null,"content_length":"20706","record_id":"<urn:uuid:bc46c0b4-0ea8-4038-bd57-8f1d6d3e44cd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Brief Summary of Each Supplement Progress of Theoretical Physics Supplement No. 181 Realization of Symmetry in the ERG Approach to Quantum Field Theory By Y. Igarashi, K. Itoh and H. Sonoda The exact renormalization group (ERG) has been applied for many years to a variety of fields, including field theory and critical phenomena. It has gained the reputation as a practical method of non-perturbative approximations. In this review we explain the ERG formulation of field theory emphasizing the following two aspects: &nbsp 1) &nbsp how to construct the continuum limit of a field theory, &nbsp 2) &nbsp how to introduce continuous symmetry. We complement the general theory with many but mostly perturbative examples. In the ERG formulation of field theory, a theory is defined through the Wilson action or equivalently the effective average action. We first introduce the two types of actions and explain their relationship. In this review we mainly discuss the Wilson action because of the relative ease of incorporating symmetry with it. We then proceed to such topics as renormalizability, continuum limits, and "composite operators"; the last of which are defined via flow equations and play an important role in the realization of symmetry. Ordinarily, regularization of a field theory with a momentum cutoff may conflict with the symmetry of the theory, especially local gauge symmetry. Using ERG, however, any continuous symmetry can be realized with no compromise. This situation is analogous to the realization of chiral symmetry on a lattice. We devote the second half of the review to the realization of continuous symmetry via the Ward-Takahashi identity or the quantum master equation in the antifield formalism. We elucidate the general theory with concrete examples. We have written the review with a sincere hope that the exact renormalization group would become part of the shared knowledge among all the practitioners of field theory. Copyright © 2012 Progress of Theoretical Physics
{"url":"http://www2.yukawa.kyoto-u.ac.jp/~web.ptp/supple/sup.181-eng.html","timestamp":"2014-04-19T17:08:46Z","content_type":null,"content_length":"4962","record_id":"<urn:uuid:29f8a7bc-b4e1-441f-8824-3d0149e4d42c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
How many different 5-person teams can be formed from a group Author Message How many different 5-person teams can be formed from a group [#permalink] 22 Jun 2010, 18:14 15% (low) Question Stats: Joined: 22 Dec 2009 (01:00) correct Posts: 40 14% (01:20) Followers: 0 Kudos [?]: 4 [0], given: 13 based on 155 sessions How many different 5-person teams can be formed from a group of x individuals? (1) If there had been x + 2 individuals in the group, exactly 126 different 5-person teams could have been formed. (2) If there had been x + 1 individuals in the group, exactly 56 different 3-person teams could have been formed. The answer is D and I know how to figure out now but is there any trick to know each question is sufficient without actual compute? cuz its time consuming until I found out e.g. 1) is 9!/5!4! thanks in advance Spoiler: OA Re: Manhattan Q/DS [#permalink] 25 Jun 2010, 04:41 This post received Expert's post gmatJP wrote: HI AbhayPrasanna.. Thanks for the reply... I understood the first half but I dont follow your explanation of 'the only non-unique solution will be for the "r" in nCr eg. (8C3 = 8C5) but for n not equal to m, nCr can never be equal to mCr' ... How do you know (x+2) C 5 = 126 is computable praveenism wrote: I agree with GmatJP...but I think we need to assume that whatever is given in the options is assumed to be true.So the x has to have an integral value. But my concern is..what if the polynomial equation so formed of degree 5 have multiple solutions. In that case, the options fails to answer the situation. @Abhayprasanna : The logic is correct and infact I also choose D going by the same logic. The point here is the following: Suppose we are told that there are 10 ways to choose x people out of 5. What is . So we cannot determine single numerical value of . Note that in some cases we'll be able to find , as there will be only one solution for it, but generally when we are told that there are ways to choose out of there will be (in most cases) two solutions of But if we are told that there are 10 ways to choose 2 out of x , then there will be only one value of possible --> Bunuel \frac{x(x-1)}{2!}=10 Math Expert --> Joined: 02 Sep 2009 x(x-1)=20 Posts: 17283 --> Followers: 2869 x=5 Kudos [?]: 18334 [8] , . given: 2345 In our original question, statement (1) says that there are 126 ways to choose 5 out of --> there will be only one value possible for , so we can find . Sufficient. Just to show how it can be done: . Basically we have that the product of five consecutive integers ( ) equal to some number ( ) --> only one such sequence is possible, hence even though we have the equation of 5th degree it will have only one positive integer solution Statement (2) says that there are 56 ways to choose 3 out of --> there will be only one value possible for , so we can find . Sufficient. . Again we have that the product of three consecutive integers ( ) equal to some number ( ) --> only one such sequence is possible, hence even though we have the equation of 3rd degree it will have only one positive integer solution Hope it helps. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. 
NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests AbhayPrasanna Re: Manhattan Q/DS [#permalink] 22 Jun 2010, 20:31 Manager 5 Joined: 03 May 2010 This post received Posts: 89 We need a unique value. WE 1: 2 yrs - Oilfield Service 1. (x+2) C 5 = 126. There is only one possible value for x+2 that would yield a value of 126. Don't bother trying to find out what it is. Remember, the only non-unique solution will be for the "r" in nCr eg. (8C3 = 8C5) but for n not equal to m, nCr can never be equal to mCr. Followers: 9 2. (x+1) C 3 = 56 Again, you should be able to see that there can be only one value of x+1 that would yield a value of 56. Why bother finding out what the value is? As long as Kudos [?]: 52 [5] , we have an equation in one variable, we can find a value. given: 7 Re: Manhattan Q/DS [#permalink] 25 Jun 2010, 22:29 This post received Bunuel supplied an awesome and exhaustive mathematical algebraic explanation. Perhaps it will be of benefit to review the concept verbally as well. Kaplan GMAT Instructor 8C5 = 8C3 because everytime we pull a subgroup of 5 objects from the bigger group of 8, we can see that we are also "setting aside" a subgroup of 3. Likewise, everytime we Joined: 21 Jun 2010 pull out a subgroup of 3, we also set aside a subgrup of 5. So, the number of ways we can pull out 5 object subgroups must be the same as the number of ways we can pull out 3 object subgroups. Posts: 75 But if there are 126 ways to pull 5 objects from a big group "x + 2", then "x+2" must be just one value. If it were not, then it would imply that increasing or decresing the Location: Toronto size of the big group doesn't necessarily affect how many ways you can pull out a smaller subgroup--surely an absurd conclusion. Absurd because clearly there are more ways to pull 5 objects out of a set of 100 than out of a set of, say, 10. Followers: 20 Kudos [?]: 92 [2] , given: 2 Kaplan Teacher in Toronto Prepare with Kaplan and save $150 on a course! Kaplan Reviews Re: Manhattan Q/DS [#permalink] 22 Jun 2010, 21:27 HI AbhayPrasanna.. Thanks for the reply... Joined: 22 Dec 2009 I understood the first half but I dont follow your explanation of 'the only non-unique solution will be for the "r" in nCr eg. (8C3 = 8C5) but for n not equal to m, nCr can Posts: 40 never be equal to mCr' ... Followers: 0 How do you know (x+2) C 5 = 126 is computable Kudos [?]: 4 [0], given: 13 praveenism Re: Manhattan Q/DS [#permalink] 24 Jun 2010, 01:50 Manager I agree with GmatJP...but I think we need to assume that whatever is given in the options is assumed to be true.So the x has to have an integral value. Joined: 28 Feb 2010 But my concern is..what if the polynomial equation so formed of degree 5 have multiple solutions. In that case, the options fails to answer the situation. 
Posts: 176 @Abhayprasanna : The logic is correct and infact I also choose D going by the same logic. WE 1: 3 (Mining _________________ Followers: 3 Invincible... "The way to succeed is to double your error rate." Kudos [?]: 18 [0], "Most people who succeed in the face of seemingly impossible conditions are people who simply don't know how to quit." given: 33 Re: How many different 5-person teams can be formed from a group [#permalink] 06 Apr 2012, 07:07 Hello to all, Joined: 03 Dec 2010 Can any1 pls explain the calculation in a bit more simplier way. I didn't get how, Posts: 22 3Cx+1 =56 and how, 5Cx+2 = 126 ? Followers: 0 Kudos [?]: 1 [0], given: 0 Math Expert Re: How many different 5-person teams can be formed from a group [#permalink] 06 Apr 2012, 07:28 Joined: 02 Sep 2009 Expert's post Posts: 17283 Followers: 2869 Kudos [?]: 18334 [0], given: 2345 Re: How many different 5-person teams can be formed from a group [#permalink] 20 Aug 2012, 02:39 Can you please confirm if my understanding is correct? (x+1)!= 3 consecutive numbers (x-1) x (x+1) (x+3)!= Basically 7 consecutive numbers i.e. (x-3) (x-2) (x-1) x (x+1) (x+2) (x+3) (x+2)!= 5 consecutive numbers i.e. (x-2) (x-1) x (x+1) (x+2) Bunuel wrote: priyalr wrote: Hello to all, Can any1 pls explain the calculation in a bit more simplier way. I didn't get how, 3Cx+1 =56 and how, 5Cx+2 = 126 ? # of ways to pick rajareena k Manager objects out of Joined: 07 Sep 2011 n Posts: 64 distinct objects is Location: United States C^k_n=\frac{n!}{(n-k)!*k!} Concentration: . Strategy, International Business # of ways to pick 3 people out of x+1 people is GMAT 1: 640 Q39 V38 C^3_{(x+1)}=\frac{(x+1)!}{(x+1-3)!*3!}=\frac{(x+1)!}{(x-2)!*3!} WE: General Management . Now, since (Real Estate) Followers: 3 Kudos [?]: 24 [0], given: 3 (x-2)! get reduced and we'll have: . We are told that this equals to 56: You can apply similar logic to Hope it's clear. Re: How many different 5-person teams can be formed from a group [#permalink] 22 Aug 2012, 13:23 manjeet1972 wrote: Can you please confirm if my understanding is correct? (x+1)!= 3 consecutive numbers (x-1) x (x+1) Joined: 05 Mar 2012 (x+3)!= Basically 7 consecutive numbers i.e. (x-3) (x-2) (x-1) x (x+1) (x+2) (x+3) (x+2)!= 5 consecutive numbers i.e. (x-2) (x-1) x (x+1) (x+2) Posts: 67 Schools: Tepper '15 Followers: 0 Plug in numbers yourself. let x = 4, so you have (4+1)! = 5! = 5 * 4 * 3 * 2 * 1. But if x = 219739218731927 then you have a lot more terms. Kudos [?]: 2 [0], given: 7 Instead, try to compare things logically. Say you have (x+1)! / (x-4)! ...now what happens normally for simplifying factorials? If you have 11!/6! then it's 11 * 10 * 9 * 8 * 7. You go down by 1 term until you hit the denominator and remove. Same concept. So now (x+1)! = (x+1) * x * (x-1) * (x-2) * (x-3) * (x-4) ...until you reach the end. But since we know the denominator is (x-4)! then we only have to go up to (x+1) * x * (x-1) * (x-2) * (x-3) on the numerator. Once you simplify the factorials, you can line them up like how Bunuel did and match it with the expanded form of (5!)(126) which would take a while to flat out compute but you just need to determine if it's solvable. Re: How many different 5-person teams can be formed from a group [#permalink] 22 Aug 2012, 21:38 rajareena Good explanation. Through practice I will learn. Manager Injuin wrote: Joined: 07 Sep 2011 manjeet1972 wrote: Posts: 64 Bunuel, Location: United States Can you please confirm if my understanding is correct? 
Concentration: (x+1)!= 3 consecutive numbers (x-1) x (x+1) Strategy, International (x+3)!= Basically 7 consecutive numbers i.e. (x-3) (x-2) (x-1) x (x+1) (x+2) (x+3) Business (x+2)!= 5 consecutive numbers i.e. (x-2) (x-1) x (x+1) (x+2) GMAT 1: 640 Q39 V38 WE: General Management (Real Estate) Plug in numbers yourself. let x = 4, so you have (4+1)! = 5! = 5 * 4 * 3 * 2 * 1. But if x = 219739218731927 then you have a lot more terms. Followers: 3 Instead, try to compare things logically. Say you have (x+1)! / (x-4)! ...now what happens normally for simplifying factorials? If you have 11!/6! then it's 11 * 10 * 9 * 8 * 7. You go down by 1 term until you hit the denominator and remove. Same concept. Kudos [?]: 24 [0], given: 3 So now (x+1)! = (x+1) * x * (x-1) * (x-2) * (x-3) * (x-4) ...until you reach the end. But since we know the denominator is (x-4)! then we only have to go up to (x+1) * x * (x-1) * (x-2) * (x-3) on the numerator. Once you simplify the factorials, you can line them up like how Bunuel did and match it with the expanded form of (5!)(126) which would take a while to flat out compute but you just need to determine if it's solvable.[/quote] Math Expert Re: How many different 5-person teams can be formed from a group [#permalink] 30 Jun 2013, 23:46 Joined: 02 Sep 2009 Expert's post Posts: 17283 Followers: 2869 Kudos [?]: 18334 [0], given: 2345 gmatclubot Re: How many different 5-person teams can be formed from a group [#permalink] 30 Jun 2013, 23:46
{"url":"http://gmatclub.com/forum/how-many-different-5-person-teams-can-be-formed-from-a-group-96244.html?kudos=1","timestamp":"2014-04-17T01:26:09Z","content_type":null,"content_length":"235721","record_id":"<urn:uuid:0ac04259-8404-4ee0-bd36-ec0ba8481e42>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Read an EKG Strip EKG paper is a grid where time is measured along the horizontal axis. • Each small square is 1 mm in length and represents 0.04 seconds. • Each larger square is 5 mm in length and represents 0.2 seconds. Voltage is measured along the vertical axis. • 10 mm is equal to 1mV in voltage. • The diagram below illustrates the configuration of EKG graph paper and where to measure the components of the EKG wave form Heart rate can be easily calculated from the EKG strip: • When the rhythm is regular, the heart rate is 300 divided by the number of large squares between the QRS complexes. □ For example, if there are 4 large squares between regular QRS complexes, the heart rate is 75 (300/4=75). • The second method can be used with an irregular rhythm to estimate the rate. Count the number of R waves in a 6 second strip and multiply by 10. □ For example, if there are 7 R waves in a 6 second strip, the heart rate is 70 (7x10=70). Instant Feedback: On a typical EKG grid, 5 small squares, or 1 large square, represent 0.20 seconds of time RnCeus Homepage | Course catalog | Discount prices | Login | Nursing jobs | Help
{"url":"http://www.rnceus.com/ekg/ekghowto.html","timestamp":"2014-04-21T08:00:17Z","content_type":null,"content_length":"6422","record_id":"<urn:uuid:977ced4c-53d8-4067-bc35-e2a4592c1f1a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Portfolio Management Software - Xignite Hit Calculation Examples from Xand : Joel York - 31st December 1969 The opinions expressed by this blogger and those providing comments are theirs alone, this does not reflect the opinion of Automated Trader or any employee thereof. Automated Trader is not responsible for the accuracy of any of the information supplied by this article. One of the most common financial applications of Xignite on-demand market data is portfolio management software. Portfolio management software is used by a wide range of Xignite customers including asset managers, wealth managers, hedge funds, financial advisers, broker-dealers and publishers of online portfolio websites. Each portfolio management software customer has unique requirements that vary across asset classes and update frequencies, however, the hit calculation required to select the right Xignite Web services subscription plans is essentially the same. This is the third post in a blog series on hit calculation that will provide detailed example calculations for portfolio management software applications. The first post and second post in this series provided a comprehensive hit calculation tutorial as well as a general hit calculation spreadsheet. The examples in this post are also available in this sample portfolio management software hit calculation spreadsheet. Single Hit Data Block for Portfolio Management Software Every portfolio management software application will access a number of Xignite Web services covering a variety of asset classes (XigniteQuotes, XigniteFunds, XigniteFutures, XigniteOptions, XigniteMetals, etc.) to obtain the current price of each asset in each portfolio. Because Xignite Web services count hits uniformly as one hit per price quote, we can dramatically simplify the hit calculation using the single symbol, single hit data block described in the previous blog post in this hit calculation series, i.e., the number of symbols requested per month is equal to the number of hits per month. Update Frequencies for Portfolio Management Software As pointed out in the previous post in this series, the most important factor in determining usage is almost always the update frequency, or the number of requests to refresh the data each month. Monthly Hits = # Symbols per Request x # Requests per Month Therefore, we're going to break our portfolio management software examples into three application scenarios according to update frequency. • End-of-Day Portfolio Accounting System • Real-Time Portfolio Management Software • Online Portfolio Website Each of these update scenarios has a unique trigger event: day, cache refresh, and page view respectively, giving the following hit calculation formulas for each respective scenario. Monthly Hits = # Symbols per Day x # Days per Month Monthly Hits = # Symbols per Cache Refresh x # Cache Refreshes per Month Monthly Hits = # Symbols per Page View x # Page Views per Month In the sections that follow, we'll use the formulas above to estimate the usage for each update frequency scenario above. End-of-Day Portfolio Accounting System - Hit Calculation The trick to calculating hits for portfolio management software is estimating the request population, or number of symbols per request, which for an end-of-day portfolio accounting system is given by the following formula. Request Population = # Symbols per Day (by Asset Class) Depending on the particular assets held in each portfolio, the result looks something like the table below. 
Asset Class   Total Symbols per Day   Total Asset Allocation   Monthly Hits Estimate   Plan Max Monthly Hits
Stocks        3,000                   60%                      60,000                  60,000
Funds         1,000                   20%                      20,000                  60,000
Futures       250                     5%                       5,000                   6,000
Options       250                     5%                       5,000                   6,000
Metals        500                     10%                      10,000                  60,000

End-of-Day Portfolio Accounting System Hit Calculation

The table above is a simple tally of all symbols across all portfolios managed by the portfolio accounting system. Each symbol requires one price each day (equal to one hit). In this particular example, we've assumed that the portfolio accounting system will run its valuation routine twenty working days each month, hence the hits are equal to twenty times the number of symbols per day.

60,000 Monthly Stock Hits = 3,000 Symbols per Day x 20 Days per Month

Xignite subscription plans are generally available in increments of 600, 6,000, 60,000, and so on, therefore the subscription plan is determined by rounding up to the maximum number of hits available in each subscription plan.

Real-Time Portfolio Management Software - Hit Calculation

The hit calculation for real-time portfolio management software is essentially similar to that for the end-of-day portfolio accounting system above, with a straightforward change to the update request frequency. To minimize response time, it makes sense for real-time portfolio management software to cache every symbol in every portfolio using the relevant Xignite Web services, and then serve individual portfolios from within the software application. Using the same symbol count as the example above and assuming a cache refresh rate of 5 updates per minute, the hit calculation for stocks is as follows.

48,000 Cache Refreshes per Month = 5 refreshes/min x 60 mins/hr x 8 hrs/day x 20 days/mo
144,000,000 Monthly Stock Hits = 3,000 Symbols per Refresh x 48,000 Refreshes per Month

Extending the calculation above to the other asset classes gives the table below as the complete hit calculation for our real-time portfolio management software.

Asset Class   Total Symbols per Refresh   Total Asset Allocation   Monthly Hits Estimate   Plan Max Monthly Hits
Stocks        3,000                       60%                      144,000,000             Custom
Funds         1,000                       20%                      20,000                  60,000
Futures       250                         5%                       12,000,000              60,000,000
Options       250                         5%                       12,000,000              60,000,000
Metals        500                         10%                      24,000,000              60,000,000

Real-Time Portfolio Management Software Hit Calculation

As mentioned previously, the most important factor in determining usage is almost always the update frequency. Compared to our end-of-day portfolio accounting system, our real-time portfolio management software requires several orders of magnitude more hits (with the exception of funds, which only have end-of-day NAV prices, assuming ETFs are included in stocks). In fact, since most Xignite standard subscription plans top out at 60 million hits, this particular application would require a custom quote for the real-time stock Web service (e.g., XigniteRealTime, XigniteGlobalRealTime, XigniteBATSLastSale, or XigniteNASDAQLastSale).

Online Portfolio Website - Hit Calculation

While it's possible to architect an online portfolio website to provide either end-of-day or real-time updates like the respective portfolio accounting system or portfolio management software examples above, it is more common to update each website visitor's portfolio on demand each time a visitor requests a page displaying the portfolio. In this case, the relevant trigger event is a page view, and the associated request population is given by the following formula.
Request Population = # Symbols per Page View (By Asset Class)

The number of symbols per page view is simply the average number of symbols in a single portfolio. If we use the same total symbol count as in the two previous examples, and further assume that there are 1,000 registered users of our online portfolio website that view their portfolios twice a day on average, then the calculation will look something like the table below.

Asset Class   Average Symbols per Page View   Average Portfolio Asset Allocation   Monthly Hits Estimate   Plan Max Monthly Hits
Stocks        60                              60%                                  3,600,000               6,000,000
Funds         20                              20%                                  1,200,000               6,000,000
Futures       5                               5%                                   300,000                 600,000
Options       5                               5%                                   300,000                 600,000
Metals        10                              10%                                  600,000                 600,000

Online Portfolio Website Hit Calculation

The average number of symbols per page view (symbols in a single portfolio) would normally be based on the specific portfolio holdings of the particular online portfolio website, but for this example we've used a little sleight of hand to make the calculation for our online portfolio website example as similar to the earlier end-of-day portfolio accounting system and real-time portfolio software examples. Here we've assumed 50% overlap in portfolio assets across all users, so to get the average symbols per page view we simply divided the total number of symbols in the previous examples by 1,000 users and then multiplied by 2 to adjust for overlapping assets. For example, the average number of stock symbols per page view = average number of stocks in a single portfolio = 60 = 2 x 3,000 / 1000. The total number of hits is then determined by multiplying the number of symbols per page view by the total number of page views per month as in the formula above.

60,000 Page Views per Month = 1,000 users x 2 views/day/user x 30 days/month
3,600,000 Monthly Stock Hits = 60 Symbols per Page View x 60,000 Page Views per Month

It is worth noting that the number of hits for funds could be significantly reduced by caching the end-of-day NAVs instead of serving them on-demand. These three common examples demonstrate a number of recurring concepts that can be used for similar hit calculations.

• Single Hit, Single Symbol Data Block (one hit per symbol)
• Average Request Population (average symbols per portfolio or page view)
• Common Event Triggers (EOD Updates vs. Real-time Caching vs. On-Demand Page Views)

In addition, they clearly demonstrate that update frequency is the most important factor in determining total hit usage. Future posts in this series will provide more hit calculation examples for the most common financial applications.
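To make the arithmetic in the three scenarios easy to reproduce, here is a small illustrative script; it is not an official Xignite tool, and the symbol counts, refresh rates, and user counts are simply the example figures quoted above.

# Monthly Hits = symbols per request x requests per month (one hit per symbol).
def monthly_hits(symbols_per_request, requests_per_month):
    return symbols_per_request * requests_per_month

# End-of-day accounting: one request per symbol per trading day, 20 days/month.
eod_stock_hits = monthly_hits(3000, 20)                # 60,000

# Real-time caching: 5 refreshes/min x 60 min x 8 hr x 20 days = 48,000 refreshes.
refreshes = 5 * 60 * 8 * 20
realtime_stock_hits = monthly_hits(3000, refreshes)    # 144,000,000

# On-demand website: 1,000 users x 2 views/day x 30 days = 60,000 page views.
page_views = 1000 * 2 * 30
website_stock_hits = monthly_hits(60, page_views)      # 3,600,000

print(eod_stock_hits, realtime_stock_hits, website_stock_hits)

Running this reproduces the stock rows of the three tables, which again shows that the choice of trigger event (day, cache refresh, page view) dominates the total.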
{"url":"http://www.automatedtrader.net/headlines/70781/portfolio-management-software--xignite-hit-calculation-examples","timestamp":"2014-04-19T22:06:27Z","content_type":null,"content_length":"34200","record_id":"<urn:uuid:28cb2398-04ef-4652-9e30-e338e39b2994>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Rounding Negative Numbers

Date: 05/23/2007 at 17:32:59
From: Paul
Subject: Rounding negative numbers

I was wondering what -0.5 is rounded to the nearest integer? My first impression was -1, but when we round 0.5 we round up, which would suggest -0.5 should be rounded to 0. Can you even round negatives? I asked a few mathematics graduates and did not get a convincing response, with both 0 and -1 being given. I thought that possibly you round the magnitude of the number and then deal with the negative, giving -1. However, we round 2.5, 1.5, 0.5 upwards, so to follow the pattern we would have to round -0.5 to 0.

Date: 05/24/2007 at 10:36:00
From: Doctor Vogler
Subject: Re: Rounding negative numbers

Hi Paul,

Thanks for writing to Dr. Math. The answer to your question is that there are many different ways of rounding. Some include

(1) floor: round down to the next integer
    x-1 < floor(x) <= x
(2) ceiling: round up to the next integer
    x <= ceiling(x) < x+1
(3) truncate: round toward zero
    trunc(x) = floor(x) when x >= 0
    trunc(x) = ceiling(x) when x <= 0
(4) antitruncate: round away from zero (this is more rare)
    antitrunc(x) = ceiling(x) when x >= 0
    antitrunc(x) = floor(x) when x <= 0

Then there are also the more familiar methods of rounding to the "nearest integer." But the trouble is that when you have half of an odd number, then there are two integers that are equally "near," so the above definition is ambiguous. I think that the most common method for resolving this confusion is to round halves up to the next integer:

(5) round(x) = floor(x + 1/2)

But you could also argue that this rule was placed so that you only have to look one decimal digit past the decimal point, and so you want 1.5 to round to the same number as 1.59. By this logic, you would also want -1.5 to round just like -1.59, in which case you might switch conventions when you have a negative number:

(6) round(x) = floor(x + 1/2) when x >= 0
    round(x) = ceiling(x - 1/2) when x <= 0

Of course, there is no reason why you can't do the opposite of either of those methods:

(7) round(x) = ceiling(x - 1/2)
(8) round(x) = ceiling(x - 1/2) when x >= 0
    round(x) = floor(x + 1/2) when x <= 0

And in more special cases, you might prefer to do something else, like round down if the fraction is less than 1/3 and round up if the fraction is at least 1/3, as in

(9) round(x) = floor(x + 2/3)

The bottom line is that "round" is not as well-defined a term (mathematically) as the floor (sometimes called "greatest integer") and ceiling functions are, and you can choose to use whichever definition works best for you. But my personal opinion, when no more precise definition is given, would be to use definition (5), despite my argument for definition (6).

If you have any questions about this or need more help, please write back, and I will try to offer further suggestions.

- Doctor Vogler, The Math Forum

Date: 05/24/2007 at 15:06:55
From: Paul
Subject: Thank you (Rounding negative numbers)

Thanks for the help.
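As a quick illustration of how the conventions above differ on Paul's example, here is a short Python sketch; the numbering in the comments follows Dr. Vogler's list, and math.floor/math.ceil are the standard library functions.

import math

def round_half_up(x):               # definition (5): floor(x + 1/2)
    return math.floor(x + 0.5)

def round_half_away_from_zero(x):   # definition (6)
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def round_half_down(x):             # definition (7): ceiling(x - 1/2)
    return math.ceil(x - 0.5)

def round_half_toward_zero(x):      # definition (8)
    return math.ceil(x - 0.5) if x >= 0 else math.floor(x + 0.5)

for x in (0.5, -0.5, 1.5, -1.5):
    print(x, round_half_up(x), round_half_away_from_zero(x),
          round_half_down(x), round_half_toward_zero(x))

# Note: Python's built-in round() uses yet another convention
# (round half to even), so round(0.5) == 0 and round(-0.5) == 0.

Under definition (5), -0.5 rounds to 0; under definition (6) it rounds to -1, which is exactly the ambiguity Paul ran into.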
{"url":"http://mathforum.org/library/drmath/view/71202.html","timestamp":"2014-04-16T17:40:38Z","content_type":null,"content_length":"8145","record_id":"<urn:uuid:4cf76a6a-054b-4b46-8b22-72d53af443c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
How many cubic inches are in 6.2 liter engine? The cubic inch is a unit of measurement for volume in the Imperial units and United States customary units systems. It is the volume of a cube with each of its three dimensions (length, width, and depth) being one inch long. The cubic inch and the cubic foot are still used as units of volume in the United States, although the common SI units of volume, the liter, milliliter, and cubic meter, are also used, especially in manufacturing and high technology. Chevrolet El Camino was a coupe utility vehicle produced by the Chevrolet division of General Motors between 1959–60 and 1964-87. Introduced in the 1959–1960 model years in response to the success of the Ford Ranchero, its first run lasted only one year. Production resumed for the 1964–1977 model years based on the Chevelle platform, and continued for the 1978–1987 model years based on the GM G-body platform. Although based on corresponding Chevrolet car lines, the vehicle is classified and titled in North America as a truck. GMC's badge engineered El Camino variant, the Sprint, was introduced for the 1971 model year. Renamed Caballero in 1978, it was also produced through the 1987 model year. In Spanish, El Camino means "the road". A disaster is a natural or man-made (or technological) hazard resulting in an event of substantial extent causing significant physical damage or destruction, loss of life, or drastic change to the environment. A disaster can be ostensively defined as any tragic event stemming from events such as earthquakes, floods, catastrophic accidents, fires, or explosions. It is a phenomenon that can cause damage to life and property and destroy the economic, social and cultural life of people. In contemporary academia, disasters are seen as the consequence of inappropriately managed risk. These risks are the product of a combination of both hazard/s and vulnerability. Hazards that strike in areas with low vulnerability will never become disasters, as is the case in uninhabited regions. Related Websites:
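For reference, the conversion the question asks about is straightforward arithmetic: 1 inch is exactly 2.54 cm, so 1 cubic inch is 16.387064 cm^3 and 1 litre is 1000 cm^3. A minimal sketch (purely illustrative, not tied to any particular engine's nominal displacement figure):

# 1 in = 2.54 cm exactly, so 1 cubic inch = 2.54**3 = 16.387064 cm^3.
CM3_PER_CUBIC_INCH = 2.54 ** 3

def litres_to_cubic_inches(litres):
    return litres * 1000 / CM3_PER_CUBIC_INCH

print(round(litres_to_cubic_inches(6.2), 1))  # about 378.3 cubic inches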
{"url":"http://answerparty.com/question/answer/how-many-cubic-inches-are-in-6-2-liter-engine","timestamp":"2014-04-19T22:08:11Z","content_type":null,"content_length":"24940","record_id":"<urn:uuid:16994556-cc2c-4b5f-a0b1-0e1210260bd4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
proof of Nielsen-Schreier theorem and Schreier index formula

While there are purely algebraic proofs of the Nielsen-Schreier theorem, a much easier proof is available through geometric group theory. Let $G$ be a group which is free on a set $X$. Any group acts freely on its Cayley graph, and the Cayley graph of $G$ is a $2|X|$-regular tree, which we will call $\mathcal{T}$. If $H$ is any subgroup of $G$, then $H$ also acts freely on $\mathcal{T}$ by restriction. Since groups that act freely on trees are free, $H$ is free. Moreover, we can obtain the rank of $H$ (the size of the set on which it is free). If $\mathcal{G}$ is a finite graph, then $\pi_{1}(\mathcal{G})$ is free of rank $1-\chi(\mathcal{G})$, where $\chi(\mathcal{G})$ denotes the Euler characteristic of $\mathcal{G}$. Since $H\cong\pi_{1}(H\backslash\mathcal{T})$, the rank of $H$ is $1-\chi(H\backslash\mathcal{T})$. If $H$ is of finite index $n$ in $G$, then $H\backslash\mathcal{T}$ is finite, and $\chi(H\backslash\mathcal{T})=n\chi(G\backslash\mathcal{T})$. Of course $1-\chi(G\backslash\mathcal{T})$ is the rank of $G$. Substituting, we obtain the Schreier index formula: $\operatorname{rank}(H)-1 = n\,(\operatorname{rank}(G)-1)$.

After a couple of PMs undoubtedly thinking the other was an idiot, mathprof and I have stumbled on to a weird bug -- The word "graph" in this entry links to the correct phrase in page images mode, but a different entry in html mode. Or is this a "feature"? Is there an easy fix?

I'm not sure how easy of a fix this is, but a fix does exist. Namely, you can replace each occurrence of "graph" that links to where you do not want it to by \PMlinkname{graph}{Graph}. I would not recommend doing this in places like in the phrase "Cayley graph", which seems to link to where you would want it to in html mode. This is the best that I can do.

Actually, a better idea (in my humble opinion) would be to replace the phrase "finite graph" with \PMlinkname{finite graph}{Graph}.
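To make the index formula in the proof above concrete, here is a tiny sanity-check script; the particular ranks and indices are just sample values, not taken from the entry.

def schreier_rank(rank_G, index):
    # Schreier index formula: rank(H) - 1 = [G : H] * (rank(G) - 1)
    return index * (rank_G - 1) + 1

# e.g. an index-2 subgroup of the free group of rank 2 is free of rank 3,
# and an index-12 subgroup of a rank-3 free group is free of rank 25.
print(schreier_rank(2, 2))   # 3
print(schreier_rank(3, 12))  # 25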
{"url":"http://planetmath.org/ProofOfNielsenSchreierTheoremAndSchreierIndexFormula","timestamp":"2014-04-16T22:13:57Z","content_type":null,"content_length":"61078","record_id":"<urn:uuid:b2786ad7-d810-463f-a887-8ef104197add>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
June 2000 The Qualifications and Curriculum Authority (QCA) has announced that Mathematics A-levels are going to be made more difficult, to correct a slide in standards over the past decade. Maths is apparently the only subject in which standards have fallen, and efforts are required to restore confidence in it. The implicit suggestion that students currently studying for maths A-levels have "got it easy", and that their A-level qualification won't really be worth the paper it's written on, will not be received well by those students, or their teachers. After all, maths is still thought of as a "difficult" subject, with higher intellectual demands than many others. And few teachers want it to be made even more difficult and therefore even less attractive to potential students. Nevertheless, there is no doubt that the A-level course does not cover the same material in the same depth as it did many years ago. This has caused severe problems for University maths courses, who are now having to teach what they regard as A-level material to their freshers. In particular, many A-level students are much worse at algebraic manipulation than was the case in the past, because they don't get as much practice as they used to. Consequently, the amount of maths which Universities can get through in a three year BA course is now less than it used to be. So there are problems on all sides. What solution is there? After all, we don't want to put off people who want to study Chemistry, Biology, Economics, Business Studies, Psychology, Engineering, Computing (or a host of other subjects for which maths is not a prerequisite) at University from taking maths A-level: a proper understanding of mathematical concepts is extremely useful for the technical parts of those subjects at University. Without an understanding of maths, scientists can make fundamental mistakes and not even realise it. However, the fact that those scientists haven't had lots of practice at algebraic manipulation is not in itself a problem. Perhaps the answer lies in creating a new A-level, aimed at those who want to use maths but who don't need to be experts in the more technical aspects of mathematics. Perhaps it could be called Mathematics for Scientists. We shall suggest it to the QCA! About the author Dr. Robert Hunt is the Editor of Plus Magazine.
{"url":"http://plus.maths.org/content/editorial-1","timestamp":"2014-04-21T10:01:49Z","content_type":null,"content_length":"21179","record_id":"<urn:uuid:237fa0d6-8364-42a3-803e-e0b91bf83041>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Writing Digits' printed from http://nrich.maths.org/

Why do this problem?
This problem would make a good starter for those children who are working on writing and ordering the numbers from $1$ to $20$, and might be suitable for use in conjunction with a number line.

Possible approach
You could display a number line and ask the group to count out loud along it with you. You could then show them a number line just numbered up to $5$, as in the picture. Encourage the children to count along this number line, filling in the blanks as they go. Then, introduce the problem orally and ask learners to work in pairs on the challenge, with their own number line to help. It may be necessary to clarify the meaning of 'digit' if the children are not familiar with this term.

Key questions
What do we mean by 'digit'? Which is the tenth digit? How do you know?

Possible extension
Pose similar questions for the children, such as "If Lee wrote all the numbers from $1$ to $20$, how many digits would she have written altogether?" "How many sevens has Lee written? What digit did she write after each of these sevens?"

Possible support
Children could be encouraged to count carefully along a number line.
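For the extension questions above, a quick check (purely illustrative) of the counts involved:

# Count the digits written when listing the numbers 1 to 20.
total_digits = sum(len(str(n)) for n in range(1, 21))
print(total_digits)  # 31: nine 1-digit numbers plus eleven 2-digit numbers

# Count how many times the digit 7 appears (in 7 and 17).
sevens = sum(str(n).count("7") for n in range(1, 21))
print(sevens)  # 2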
{"url":"http://nrich.maths.org/161/note?nomenu=1","timestamp":"2014-04-19T12:28:49Z","content_type":null,"content_length":"4691","record_id":"<urn:uuid:ea2a57d5-a9e7-4936-ad95-c8135550a5d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Children's understanding of probability - Report Available Replies: 0 Children's understanding of probability - Report Available Posted: Aug 16, 2012 1:11 PM The Nuffield Foundation (England) just launched a report that written by Terezinha Nunes and Peter Bryant about children's understanding of probability. It is a review of research with a broad coverage of children's understanding and does not focus on teaching. The link to the review is: Information provided by Terezinha Nunes. Children's understanding of probability The Foundation has published Children's Understanding of Probability, a literature review by Professors Peter Bryant and Terezinha Nunes from the University of Oxford. In this review, the authors identify four 'cognitive demands' made on children when learning about probability, and examine evidence in each of these areas: randomness, the sample space, comparing and quantifying probabilities, and correlations. They draw together international evidence, from the early years through to adulthood, and highlight studies that are of particular relevance to teaching. They also identify areas that have been relatively neglected and would benefit from further research, particularly from fully evaluated intervention projects. Research using computer microworlds has shown that by the age of about ten, many children realise that there is an association between randomness and fairness, and that randomisation can be an effective way of ensuring fair allocations. Both children and adults find it particularly difficult to understand the independence of successive events in a random situation. When tossing a coin for example, people often predict that a run of heads makes it more likely that the next toss will be tails (the 'negative recency' effect), or that the previous run of heads makes it more likely that heads will be the next result too (the 'positive recency' Sample space Children can have particular difficulty with problems where some outcomes are equiprobable and others are not. For example, in throwing two dice at the same time, there are 36 possible equiprobable outcomes (1,1; 1,2; 1,3 etc.). But, if you record the result in terms of the sum of the two numbers thrown, there are only 11 possible outcomes and they are not equiprobable. Understanding the sample space means children have to construct an exhaustive list of alternative, and uncertain, possibilities, but there is currently no research on their ability to do this. Quantifying probability Children understand proportions as ratios before they understand them as fractions, suggesting that children would learn about probabilities more easily if they are initially introduced as ratios. Children and adults are much more likely to work out conditional probabilities correctly if the basic information is given as absolute numbers rather than as decimal fractions. Correlational thinking depends on children realising that the way to work out whether an association is random or not is to consider the relative amount of confirming and disconfirming evidence. When they use simple intuitive reasoning they often fall prey to a confirmation bias; they pay more attention to the confirming than to the disconfirming evidence. 
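The two-dice example in the sample space section above is easy to check by direct enumeration; here is a small sketch (illustrative only, not part of the report):

from collections import Counter

# Enumerate the 36 equally likely outcomes of throwing two dice
# and tally how often each sum occurs.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

print(len(sums))          # 11 possible sums (2 through 12)
print(sums[2], sums[7])   # 1 way to make 2, but 6 ways to make 7
print(sums[7] / 36)       # so P(sum = 7) = 1/6, not 1/11

This makes the point in the review explicit: the 36 underlying outcomes are equiprobable, but the 11 possible sums are not.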
Recommendations for further research The authors make two main recommendations in relation to further research. Firstly that researchers should take advantage of research designs that have been successful in research on other aspects of children's intellectual development. In particular, the combined use of intervention and longitudinal methods to study the links between the four aspects of probability. Secondly, they recommend that more attention is paid to the great amount of related data that exists on other aspects of cognitive development. Probability makes a number of different cognitive demands and most of these demands are shared with other aspects of cognitive development about which we know a great deal. Probability is an intensive quantity, but so are density and temperature for example. Many people doing research on probability have not paid attention to research on these related topics, and have missed out on potentially valuable information. Intervention study The Foundation is now funding the authors to undertake a large-scale controlled intervention study of the teaching of probability to 9-to-10-year-olds [2]. See also at http://www.nuffieldfoundation.org/print/3626 Download Children's understanding of probability from the website Summary report [3] Full report [4] Similar projects Probability intervention study >> [2] Mathematics education >> [5] Source URL: [1] http://www.nuffieldfoundation.org/news [5] http://www.nuffieldfoundation.org/mathematics-education-0 Jerry P. Becker Dept. of Curriculum & Instruction Southern Illinois University 625 Wham Drive Mail Code 4610 Carbondale, IL 62901-4610 Phone: (618) 453-4241 [O] (618) 457-8903 [H] Fax: (618) 453-4244 E-mail: jbecker@siu.edu
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2395933","timestamp":"2014-04-20T05:49:28Z","content_type":null,"content_length":"21433","record_id":"<urn:uuid:d0eec308-ab1c-4f81-bc22-f00897ac40e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
What is 30mb in kb? You asked: What is 30mb in kb? 31457 7/25 Kilobytes the capacity 31457 7/25 Kilobytes Assuming you meant • the capacity 30 mebibytes and Kilobyte (derived from the SI prefix "kilo-", meaning 1,000), the unit of digital information storage equal to either 1,000 bytes (10) or 1,024 bytes (2), depending on context Did you mean? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/what_is_30mb_in_kb","timestamp":"2014-04-17T22:21:28Z","content_type":null,"content_length":"58762","record_id":"<urn:uuid:8d4769dc-9aec-4b2e-8297-751791d2aabf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Dositey.com printable worksheets 1. Answers: a. There are 28 students altogether. b. They have 26 siblings altogether. c. 10 students do not have any siblings (brothers or sisters). 2. Answers: a. He could save 1 dollar and 40 cents in one week. b. He could save 6 dollars in one month. c. With his grandfather's money Mark could save 7 dollars altogether in one month. 3. Answers: a. There are 5 fruits in each basket. b. There are 15 fruits altogether. c. There are 9 apples altogether.
{"url":"http://www.dositey.com/2008/worksheet/mulprob3a.htm","timestamp":"2014-04-21T09:37:02Z","content_type":null,"content_length":"2117","record_id":"<urn:uuid:4827a050-4fc1-49e5-a5a8-29e145f06d82>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
CBCL Memos (1993 - 2004)Object Detection in Images by ComponentsA Note on Support Vector Machines DegeneracySupport Vector Machines: Training and ApplicationsAn Equivalence Between Sparse Approximation and Support Vector Machines http://hdl.handle.net/1721.1/5462 2014-04-18T09:01:18Z http://hdl.handle.net/1721.1/7293 Object Detection in Images by Components Mohan, Anuj In this paper we present a component based person detection system that is capable of detecting frontal, rear and near side views of people, and partially occluded persons in cluttered scenes. The framework that is described here for people is easily applied to other objects as well. The motivation for developing a component based approach is two fold: first, to enhance the performance of person detection systems on frontal and rear views of people and second, to develop a framework that directly addresses the problem of detecting people who are partially occluded or whose body parts blend in with the background. The data classification is handled by several support vector machine classifiers arranged in two layers. This architecture is known as Adaptive Combination of Classifiers (ACC). The system performs very well and is capable of detecting people even when all components of a person are not found. The performance of the system is significantly better than a full body person detector designed along similar lines. This suggests that the improved performance is due to the components based approach and the ACC data classification structure. 1999-08-11T00:00:00Z http://hdl.handle.net/1721.1/7291 A Note on Support Vector Machines Degeneracy Rifkin, Ryan; Pontil, Massimiliano; Verri, Alessandro When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions with all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by ${f w} = {f 0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unboundedspolyhedron, which we characterize in terms of its extreme points and rays. 1999-08-11T00:00:00Z http:// hdl.handle.net/1721.1/7290 Support Vector Machines: Training and Applications Osuna, Edgar; Freund, Robert; Girosi, Federico The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, its relationship with SRM, and its geometrical insight, are discussed in this paper. Training a SVM is equivalent to solve a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. 
When the number of data points exceeds few thousands the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVM's over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of, and also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVM's, we present preliminary results we obtained applying SVM to the problem of detecting frontal human faces in real images. 1997-03-01T00:00:00Z http://hdl.handle.net/1721.1/7289 An Equivalence Between Sparse Approximation and Support Vector Machines Girosi, Federico In the first part of this paper we show a similarity between the principle of Structural Risk Minimization Principle (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem. 1997-05-01T00:00:00Z
{"url":"http://dspace.mit.edu/feed/rss_1.0/1721.1/5462","timestamp":"2014-04-18T09:01:18Z","content_type":null,"content_length":"7145","record_id":"<urn:uuid:ea7d1375-20de-473b-af27-fd7dcbca4697>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Equality of terms containing associative-commutative functions and commutative binding operators is isomorphism complete , 1992 "... In computer science we speak of implementing a logic; this is done in a programming language, such as Lisp, called here the implementation language. We also reason about the logic, as in understanding how to search for proofs; these arguments are expressed in the metalanguage and conducted in the me ..." Cited by 57 (15 self) Add to MetaCart In computer science we speak of implementing a logic; this is done in a programming language, such as Lisp, called here the implementation language. We also reason about the logic, as in understanding how to search for proofs; these arguments are expressed in the metalanguage and conducted in the metalogic of the object language being implemented. We also reason about the implementation itself, say to know it is correct; this is done in a programming logic. How do all these logics relate? This paper considers that question and more. We show that by taking the view that the metalogic is primary, these other parts are related in standard ways. The metalogic should be suitably rich so that the object logic can be presented as an abstract data type, and it must be suitably computational (or constructive) so that an instance of that type is an implementation. The data type abstractly encodes all that is relevant for metareasoning, i.e., not only the term constructing functions but also the... , 1992 "... An improved method to retrieve a library function via its Hindley/Milner type is described. Previous retrieval systems have identified types that are isomorphic in any Cartesian closed category (CCC), and have retrieved library functions of types that are either isomorphic to the query, or have ..." Cited by 18 (0 self) Add to MetaCart An improved method to retrieve a library function via its Hindley/Milner type is described. Previous retrieval systems have identified types that are isomorphic in any Cartesian closed category (CCC), and have retrieved library functions of types that are either isomorphic to the query, or have instances that are. Sometimes it is useful to instantiate the query too, which requires unification modulo isomorphism. Although unifiability modulo CCCisomorphism is undecidable, it is decidable modulo linear isomorphism, that is, isomorphism in any symmetric monoidal closed (SMC) category. We argue that the linear isomorphism should retrieve library functions almost as well as CCC-isomorphism, and we report experiments with such retrieval from the Lazy ML library. When unification is used, the system retrieves too many functions, but sorting by the sizes of the unifiers tends to place the most relevant functions first. R'esum'e Ce papier pr'esente une nouvelle m'ethode pour la re... "... The first order isomorphism problem is to decide whether two nonrecursive types using product- and function-type constructors, are isomorphic under the axioms of commutative and associative products, and currying and distributivity of functions over products. We show that this problem can be solved ..." Cited by 7 (0 self) Add to MetaCart The first order isomorphism problem is to decide whether two nonrecursive types using product- and function-type constructors, are isomorphic under the axioms of commutative and associative products, and currying and distributivity of functions over products. We show that this problem can be solved in 2 is the input size. 
This result improves upon the space bounds of the best previous algorithm. We also describe an time algorithm for the linear isomorphism problem, which does not include the distributive axiom, whereby improving upon the time of the best previous algorithm for this problem.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=202531","timestamp":"2014-04-16T09:03:48Z","content_type":null,"content_length":"18351","record_id":"<urn:uuid:6e9bcb24-70b6-4a6c-a4a3-1da12f641abc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about Spring compression 1. 91526 Question about Spring compression (See attached file for full problem description with all symbols) Show your work and the equations that you used. A block of mass slides along a frictionless table with a speed of 10 m/s. Directly in front of it, and moving in the same direction, is a block of mass moving at 3.0m/s. A massless spring with spring constant is attached to the near side of (the side facing ). When the blocks collide, what is the maximum compression of the spring? (Hint: At the moment of maximum compression of the spring, the two blocks move as one. Find the velocity by noting that the collision is completely inelastic at this point.)
{"url":"https://brainmass.com/physics/conservation-of-energy/91526","timestamp":"2014-04-18T19:05:27Z","content_type":null,"content_length":"29180","record_id":"<urn:uuid:93cf7193-379d-47a1-a6be-2014398a8ad9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-point functions 3.8 Two-point functions To understand the nature of the possible excitations at the phase transition, one needs to study correlation functions in the vicinity of ^4 [115], using a scalar field propagator to determine Some further data (for [37 , 34 ]. These authors simply used the lattice distance [37], the measure was taken to be of the form
{"url":"http://relativity.livingreviews.org/Articles/lrr-1998-13/articlesu25.html","timestamp":"2014-04-20T20:58:04Z","content_type":null,"content_length":"7532","record_id":"<urn:uuid:f4a8026d-83ec-495c-ba18-7ab4041be9ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
eponymous functions from the SKI calculus npm install ski Want to see pretty graphs? Log in now! 14 downloads in the last week 46 downloads in the last month eponymous functions from the SKI calculus $ npm install ski var ski = require(ski) var S = ski.S var K = ski.K var I = ski.I var log = console.log.bind(console) var tenner = S(K, log, 10) // 10 logged to console // tenner === 10 var truth = ski.K(true) // => true // => 5 The module is also split into files, so you can use commonjs path syntax to only load the function(s) you need: var S = require('ski/s') var K = require('ski/k') var I = require('ski/i') descriptions adapted from the wikipedia article: I = function (x), the identify function I returns its argument: I(x) => x K = function (x), the constant function K, when applied to any argument x, yields a one-argument constant function Kx , which, when applied to any argument, returns x: K(x) => (y) => x S = function (x, y, z), the substitution function S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More S(x, y, z) === x(z)((yz)) CC 0 (public domain)
{"url":"https://www.npmjs.org/package/ski","timestamp":"2014-04-19T02:08:56Z","content_type":null,"content_length":"8238","record_id":"<urn:uuid:8266a98a-0eaf-4a62-830a-ba1d7c573d35>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
NORMAL_MAP texgen and texture matrix [Archive] - OpenGL Discussion and Help Forums 01-31-2001, 12:40 PM In an effort to do simple nonphotorealistic lighting I am trying to use the NORMAL_MAP_ARB texture coordinate generation mode and a texture matrix to do the dot product with light direction. the result of the dot product then indexes a 1-dimensional texture map. I have found that if I only enable NORMAL_MAP texgen on the S and T coordinates, everything seems to work all right (at least initially). However, when I turn on texgen for R, suddenly all the texture coordinates I get (regardless of the texture matrix I use) are screwed up (I think they are all .5 or something). I was unable to find any explainable cause for this. Even using an identity matrix (which should map only the incoming S coordinate to the outgoing S coordinate) saw the texture coordinates being screwed up merely by enabling or disabling texgen on R. Having given up on the full texgen for a bit I started looking at mapping the -1 to 1 range of the S values being generated to the 0 to 1 range of the texture map. Clearly the first row of the texture matrix should be: .5 0 0 .5 to achieve this, but I quickly found that using .5 0 0 k for any value of k always gave the same results. .5 0 k 0 gave odd results that almost implied that texgen on R was already being performed ( it perturbed the generated coordinates in strange ways as k increased). Finally I found that by turning off texgen on T and using .5 .5 0 0 as the first row of the texture matrix yielded texture coorinates ranging from 0 for completely left-pointing normals and 1 for completely right-pointing normals. This, however, makes no sense whatsoever, as it implies that the T texture coordinate defaulted to 1 after I turned off texgen for it. Does the number of dimensions in the texture being used (1D versus 2D versus 3D and cube map) have any effect on the behaviour of the texture matrix, or the default values of particular texture coordinates? I doubt this, as the problems with using texgen on R persisted even when I used a 2D texture instead. I am working on a Radeon DDR 64MB, latest drivers. Any help at all would be appreciated.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-139701.html","timestamp":"2014-04-16T04:42:37Z","content_type":null,"content_length":"5598","record_id":"<urn:uuid:29fae1ae-a90d-4d59-8530-3cc113b7b7dd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
show that the sum = sqrt(2) Last edited by Bruno J.; June 10th 2010 at 11:24 AM. That's the sum $1+\frac{1}{4}+\sum^\infty_{n=2}\frac{1\cdot 3\cdot\ldots\cdots (2n-1)}{4\cdot (4\cdot 2)\cdot\ldots\cdot(4n)}$ . Let's try now to understand the quotient in that infinite series a little better: $\frac{1\cdot 3\cdot\ldots\cdot (2n-1)}{4\cdot (4\cdot 2)\cdot\ldots\cdot (4n)}=\frac{1\cdot 2\cdot 3\cdot 4\cdot\ldots\cdot (2n)}{[4^n\cdot n!][2\cdot 4\cdot\ldots\cdot (2n)]}$$=\frac {(2n)!}{n!4^n\cdot n! 2^n}$$=\frac{(2n)!}{(n!)^22^{3n}}$$=\frac{(2n)!}{2^{2n}(n!)^2}\,\left(\frac{1}{\sqrt{ 2}}\right)^{2n}$ . There's a reason why we wrote the last mess: We know (hopefully!) that $ \arcsin x=\sum^\infty_{n=0}\frac{(2n)!}{2^n(n!)^2(2n+1)}\, x^{2n+1}$ , with convergence radius $|x|<1$ . Well, derivate both sides of this and get: $\frac{1}{\sqrt{1-x^2}}=\sum^\infty_{n=0}\frac {(2n)!}{2^{2n}(n!)^2}\ ,x^{2n}$ , and now input $x=\frac{1}{\sqrt{2}}$ , do a little algebra and get your result. Tonio Ps. This is, imho, a problem that fits the Challenge Problems Section! Another way to view the series is that it is the expansion by the Binomial Theorem of $\left(1 - \frac{1}{2} \right)^{-1/2}$.
{"url":"http://mathhelpforum.com/number-theory/148565-show-sum-sqrt-2-a.html","timestamp":"2014-04-17T00:52:11Z","content_type":null,"content_length":"42275","record_id":"<urn:uuid:700a9acd-aa22-4d70-a125-a8db50dcc06c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: RAW files can be compressed without losing information. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50c38d2ce4b066f22e1094ef","timestamp":"2014-04-18T13:52:56Z","content_type":null,"content_length":"44112","record_id":"<urn:uuid:564adda2-5e95-4141-8a09-1d94e5a2b6b5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Items by White, Katrin Number of items: 16. Clarke, J., White, K. A. J. and Turner, K., 2013. Approximating optimal controls for networks when there are combinations of population-level and targeted measures available : chlamydia infection as a case-study. Bulletin of Mathematical Biology, 75 (10), pp. 1747-1777. Ward, Z. and White, K. A. J., 2012. Impact of latently infected cells on strain archiving within HIV hosts. Bulletin of Mathematical Biology, 74 (9), pp. 1985-2003. Clarke, J., White, K.A.J. and Turner, K., 2012. Exploring short-term responses to changes in the control strategy for chlamydia trachomatis. Computational and Mathematical Methods in Medicine, 2012, Brown, V. L. and White, K. A. J., 2011. The role of optimal control in assessing the most cost-effective implementation of a vaccination programme: HPV as a case study. Mathematical Biosciences, 231 (2), pp. 126-134. Hartfield, M., White, K. A. J. and Kurtenbach, K., 2011. The role of deer in facilitating the spatial spread of the pathogen Borrelia burgdorferi. Theoretical Ecology, 4 (1), pp. 27-36. Paulley, Y., Delgado-Charro, M. B. and White, K. A. J., 2011. Modelling the formation of a lithium reservoir in the skin; an alternative non-invasive strategy for drug sampling? Therapeutic Drug Monitoring, 33 (4), p. 556. Paulley, Y., Delgado-Charro, M. B. and White, K. A. J., 2010. Modelling formation of a drug reservoir in the stratum corneum and its impact on drug monitoring using reverse iontophoresis. Computational and Mathematical Methods in Medicine, 11 (4), pp. 353-368. Brown, V. and White, K., 2010. The HPV vaccination strategy: could male vaccination have a significant impact? Computational and Mathematical Methods in Medicine, 11 (3), pp. 223-237. Ward, Z. D., White, K. A. J. and van Voorn, G. A. K., 2009. Exploring the impact of target cell heterogeneity on HIV loads in a within-host model. Epidemics, 1 (3), pp. 168-174. Whittle, A., Lenhart, S. and White, K. A. J., 2008. Optimal control of gypsy moth populations. Bulletin of Mathematical Biology, 70 (2), pp. 398-411. White, K. A. J. and Gilligan, C. A., 2006. The role of initial inoculum on epidemic dynamics. Journal of Theoretical Biology, 242 (3), pp. 670-682. White, S. M. and White, K. A. J., 2005. Applications of biological control in resistant host-pathogen systems. Mathematical Medicine and Biology - A Journal of the Ima, 22 (3), pp. 227-245. White, S. M. and White, K. A. J., 2005. Relating coupled map lattices to integro-difference equations: dispersal-driven instabilities in coupled map lattices. Journal of Theoretical Biology, 235 (4), pp. 463-475. Gudelj, I. and White, K. A. J., 2004. Spatial heterogeneity, social structure and disease dynamics of animal populations. Theoretical Population Biology, 66 (2), pp. 139-149. Gudelj, I., White, K. A. J. and Britton, N. F., 2004. The effects of spatial movement and group interactions on disease dynamics of social animals. Bulletin of Mathematical Biology, 66 (1), pp. Beardmore, I. and White, K. A. J., 2001. Spreading disease through social groupings in competition. Journal of Theoretical Biology, 212 (2), pp. 253-269.
{"url":"http://opus.bath.ac.uk/view/person_id/636.html","timestamp":"2014-04-16T16:02:23Z","content_type":null,"content_length":"24301","record_id":"<urn:uuid:cc118aad-86b1-4c84-bd50-a7f7d5e4868a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 742 To treat a burn on your hand, you decide to place an ice cube on the burned skin. The mass of the ice cube is 10.9 g, and its initial temperature is -12.0 °C. The water resulting from the melted ice reaches the temperature of your skin, 29.6 °C. How much heat is absorb... IF I RAN 3/8 OF A MILE HOW MUCH FURTHER WOULD I HAVE TO RUN TO MAKE A MILE At the end of the current year, $19,900 of fees have been earned but not billed to clients. a. What is the adjustment to record the accrued fees? Indicate each account affected, whether the account is increased or decreased, and the amount of the increase or decrease. b. If th... Kathy scored 9 points. that was 18% of her teams score. How many points did the team score? a 2250 kg car is traveling at 20 m/s to the west. the force required to stop the car at 7890 N to the east. How long will it take the car to stop? How far would the car move before stopping? A train is moving parallel and adjacent to a highway with a constant speed of 27 m/s. Ini- tially a car is 29 m behind the train, traveling in the same direction as the train at 38 m/s and accelerating at 2 m/s2. What is the speed of the car just as it passes the train? Answer... A student performs a ballistic pendulum experiment using an apparatus similar to that shown in the figure. Initially the bullet is fired at the block while the block is at rest (at its lowest swing point). After the bullet hits the block, the block rises to its highest positio... There are 100 questions from a SAT test, and they are all multiple choice with possible answers of a,b,c,d,e for each question only one answer is correct. Find the mean number of correct answers for those who make random guesses for all 100 questions? A.5/6 B.5/8 C.? An astronaut mass has a mass of 100kg She recedes from her spacecraft using spurts of gas from a small unit on her back. If the force generated by the gas spurt is 50N calculate her acceleration At the end of the current year, $19,900 of fees have been earned but not billed to clients. a. What is the adjustment to record the accrued fees? Indicate each account affected, whether the account is increased or decreased, and the amount of the increase or decrease. b. If th... 3. A man leaves his home by car and travels 5 km on a road running due East. The driver then turns left and travels 2 km due north to a junction where he joins a road that goes North West and drives a further 2 km. Find his displacement from home by then. A 0.5 kg soccer ball is kicked with a force of 50 Newtons for 0.2 sec. The ball was at rest before the kick. What is the speed of the soccer ball after the kick? A 3kg ball is accelerated from rest to a speed of 10 m/s. What is the ball's change in momentum? A net force of 100 newtons is applied to a 20 kg cart that is already moving at 3 meters per second. The final speed of the cart was 8 m/s. For how long was the force applied? science... PLEASE HELP! Solids have a fixed shape and volume. Their molecules are locked in place. The molecules constantly vibrate, but they cannot move or switch places with other molecules. As a result, a solid retains its shape and size. A 1,400kg car is traveling in a straight line. Its momentum is equal to 2,000 kg. What is the velocity of the car? A beach ball rolling in a straight line toward you at a speed of 0.5 m/sec. Its momentum is 0.25kg m/sec. What is the mass of the beach ball? An 8kg bowling ball is rolling in a straight line toward you. If its momentum is 16kg m/sec, how fast is it traveling? 
Stephanie casts a shadow of 1.2m and she is 1.8m tall. A wind turbine casts a shadow of 10m at the same time that Stephanie measured her shadow. Draw a diagram of this situation and then calculate how tall the wind turbine is. world geography In Iceland and the British Isles, the ocean creates a __________________ . a. mild temperatures c. hot and humid temperatures b. cold and wet temperatures d. hot and dry temperatures Algebra 1 Algebra 1 6y^-3 Can you show steps on how to do this Algebra 1 HELP I am not sure how to do these 2 problems can someone show me. Simplify the expression Write answers using exponents. 8^6/8^4*8^2 4^5*1/4^2 <---is by the 4 only Describe a business situation, other than what has already been selected by fellow students or selected from the team assignment, where mean and standard deviation can be used in decision making. Describe how calculation of mean and standard deviation can help in making a deci... A 5 kg block is hanging from the top of an elevator if the elevator is accelerating upward at a rate of 3m/s/s, what is the tension in the rope that the block is hanging from? How do you get 105? At the ruins of Caesarea, archaeologists discovered a huge hydraulic concrete block with a volume of 945 cubic meters. The block's dimensions are x meters high by 12x - 15 meters long by 12x - 21 meters wide. What is the height of the block? I have no idea how to do this. ... Im stuck on these few questions that seems to be getting me no where : -Determine the number of mmoles of HCl that did not react with the anatacid. c HCl = 0.1812 c NaOH = 0.1511 volume of HCl added : 75 volume of NaOH added: 29.23,19.58,33.3 - mmoles of HCl neutralized by the... algebra 2 What is the absolute value if 10-7i? in order ti increase self-esteem, experts say you should a:praise any behavior without reservation b:acknowledge completed tasks, especially those that are difficult c:avoid tasks that are easily completed d:set high standards, especially in areas that are not relevant to your... A Boeing 787 is initially moving down the runway at 3.0 m/s preparing for takeoff. The pilot pulls on the throttle so that the engines give the plane a constant acceleration of 1.7 m/s2. The plane then travels a distance of 1900 m down the runway before lifting off. How long d... The Graph above shows the speed of a car traveling in a straight line as a function of time. The car accelerates uniformly and reaches a speed Vb of 4.00 m/s in 8.00 seconds. Calculate the distance traveled by the car from a time of 1.20 to 4.40 seconds. Early childhood I need to see an written example of a summary report on a child for preschool that covers the seven domain. 1.Personal and social development 2. Language and Literacy 3. Mathematical Thinking. 4. Scientific Thinking. 5. Social Studies. 6. the Arts. 7.Physical development, Heal... If Ginny took out a loan for 21,000 in 8/2011 at 6.5% and wants to pay off in 9/2013, and has been making monthly payments of $239.00 how much will she be charged to pay it all? Please help. Thanks how can i rephrase this so that both parts are parallel Lots of my friends think that Prince's music is old, but I think it's not only excellent but also an innovation. Business and Finance Jeff Jones earns $1,200 per week. He is married and claims four withholding allowances. The FICA rate is as follows: Social Security rate is 6.2% on $97,500; Medicare rate is 1.45%. To date his cumulative wages are $6,000. Each paycheck, his employer also deducts $42.50 for he... 
When a not-for-profit facility receives a contribution from a member of the community, the cost of the capital is inconsequential when deciding how to use the contribution, because it is, in effect, free money. Nelda is fearful of public places and has managed to stay "at home" for the last three years. She and her therapist have developed a strategy. During the first week, she will go to the front door and open it and look outside. If she can remain calm, the goal for the ... ok thank you It had been a confusing afternoon which Martin was embarrassed to share with anyone. About one o'clock, he suddenly realized that he was in a strange place with strange people who called him by name. Martin found himself in a town he had never heard of and wondered where h... Marvin collects things, not cars, or baseball cards, or even beer bottles. Marvin collects everything. He cannot seem to stop picking up "stuff" from the street or dumpsters. If he attempts to pass by a target for collection, he is overwhelmed with tension and must r... use a half-angle identity to find the exact value of tan 105 degrees. Media/Social Science What is the theory of Habermas? How would you explain the theory of communicative rationality? How do you think the thoughts of Habermas apply to the real world? public speaking for communication to take place there has to be? How do you weight probabilities to add up to 1? Example: Chance of rain .25 Chance of person wearing purple hat .1 Chance of person wearing red hat .05 Chance of person wearing black hat .2 Chance of person wearing yellow hat .1 Chance of person wearing white hat .3 So, You ru... choose a product or service category that is at each stage of the product lifecycle--one each for introduction, growth, maturity, and decline. Explain your selection Propose strategies for effectively marketing each product or service you selected. Mr foster places 11 garden lights along an 80 meter path. Each light is placed an equal distance apart. How far away is the sixth garden light from the second? John has a loan of $15,000. How much will his monthly payment be at 6.8% over a 10-year term Algebra II Part 2 Use a graphing calculator to solve the equation -3 cos t=1 in the interval from 0 to 2p.Round to the nearest hundredth algebra II Part 2 The equation h=7 cos{pi/3 t} models the height h in centimeters after t seconds of a weight attached to the end of a spring that has been stretched an then released. (C).Find the times at which the weight is at a height of 1 cm,of 3 cm,and of 5 cm below the rest position for t... Algebra II Part 2 Verify the identity cot{0-pi/2=-tan 0 Algebra II Part 2 Use an angle sum identity to verify the identity cos 2 0=2 cos 0-1 Algebra II Part 2 Use a graphing calculator to solve the equation -3 cos t=1 in the interval from 0 to 2 p.Round your answer to the nearest hundredth Algebra II Part 2 The equation h=7 cos{pi/3 t} models the height h in centimetres after t seconds of a weight attached to the end of a spring that has been stretched and then released. Find the times at which the weight is at a height of 1 cm,of 3 cm,and of 5 cm below the rest position for the ... A worker sits at one end of a 183-N uniform rod that is 2.80 m long. A weight of 107 N is placed at the other end of the rod. The rod is balanced when the pivot is 0.670 m from the worker. Calculate the weight of the worker Chemistry pH What is the pH of a solution that is .60 M H2SO4 (Sulfuric acid) and 1.7 M HCOOH (formic acid). 
Calculus 1 Find the antiderivative of f(x)=5e^x-2secxtanx showing steps please The current circulation of a particular magazine is 3,000 copies per week. The editor projects a growth rate of g(t) = 4 + 5t^2/3 copies per week after t weeks. a. Find the circulation function based on this projection. b. Find the circulation in 2 years. What is the amount of work produced when 1.0 g of H2O (l) is boiled at atmospheric pressure? Assume the liquid has no volume. Social Studies who was the american commander responsible for catching the philippines Geometry Part 2 At a track meet,50 people ran the 100- meter dash.2 people finished in 11 seconds,5 people finished in 12 seconds,8 people finished in 13 seconds,10 people finished in 14 seconds,21 people finished in 15 seconds,2 people finished in 16 seconds,and 2 people finished in 17 secon... Geometry Part 2 Janine made a cylindrical vase in which the sum of the lateral area and area of one base was about 3000 square centimetres.The vase had a height of 50 centimetres .Find the radius of the vase.Explain your method you would use to find the radius . Geometry Part 2 State whether the transformation of a Preimage and Image circles appears to be a rigid motion?Explain Geometry Part 2 A forest ranger spots a fire from a 28- foot tower.The angle of depression from the tower to the fire is 11.To the nearest foot,how far is the fire from the base of the tower?Show the steps you use to find the solution. Geometry Part 2 A highway makes an angle of 6 with the horizontal.This angle is maintained for a horizontal distance of 5 miles.To the nearest hundredth of a mile, how high does the highway rise in this 5- mile section?Show the steps you use to find the distance. Geometry Part 2 A triangle has side lengths 10,15,and 7.Is the triangle acute,obtuse,or right?Explain. you run 100m in 25's. if you later run the same distance in less time , does your average speed increase or decrease? explain. Calculate volume of 6MHCL required to prepare a 250 ml of a 0.15MHCL solution?? Calculate volume of 6MHCL required to prepare a 250 ml of a 0.15MHCL solution?? Calculate the mass of water that will be produced when 5.0 grams of H2 is reacted with excess O2. Calculate the mass of water that will be produced when 5.0 grams of H2 is reacted with excess O2. Haha thanks and cause i forgot to multiply -3x+2 by 5 Thanks for the help and reiny no thats not the denominator its 2x/5 - 3x+2 greater than or equal to x-1 2x/5 - 3x +2 >= ( greater than or equal) x-1 My answer was x is less than or equal to 7.5 but im not sure computer science You are required to use the Account class to simulate an ATM machine. Create ten accounts in an array with id 0, 1, 2 ...9, and initial balance $50. The system prompts the user to enter an id. If the id is entered incorrectly, ask the user to enter a correct id. Once an id is ... The rate of change of an investment account earning continuous compound interest is given by dA/dt=kA where k is a positive constant. The initial account value was $2500. At the end of the third year, the account value was $4200. Find the particular solution to the differentia... Que te gusta hacer? My choices are (A).A mi tambien (B).No,no me gusta. (C).No,me gusta Tampico. 
(D).Me gusta montar en monopatin Algebra 1Part2 Divide 3n^2-n/n^2-1/n^2/n+1 Algebra 1 Part 2 Divide 3n^2-n/n^2-1/n^2/n+1 Algebra 1Part2 Divide 3n^2-n/n^2-1/n^2/n+1 Algebra 1 Part2 Simplify the radical expressions: (1).sqrt75+sqrt3 (2).sqrt7(sqrt14+sqrt3) Algebra 1 Part 2 Can you please show me how to write this because I don't follow what your saying Algebra 1 Part 2 Simplify the radical expressions: (1).sqrt75+sqrt3. (2).sqrt7(sqrt14+sqrt3 Algebra 1 Part 2 Simplify the rational expressions.State any excluded values:(1).3x-6/x-2 (2).x-2/x^2+3x-10 Algebra 1Part 2 Divide 3n^2-n/n^2-1/n^2/n+1 Algebra 1Part 2 Can't re-enter because that's how it 's written. Algebra 1Part 2 Divide 3n^2-n/n^2-1/n^2/n+1 Algebra 1 Part 2 What are the minimum,first quartile,median,third Quartile,and maximum of the data set?5 55,62,61,54,68,72,59,61,70. Algebra 1,Part 2 Would the mean be 62.44 and the range be -18 Algebra 1,Part 2 As cars passed a checkpoint ,the following speeds were clocked and recorded. Speed (mph):55,62,61,54,68,72,59,61,70) Find the mean and the range of the data set . What is the factor (x to the third power + 4x to the second power)(2x+8)? The period of a pendulum can be found by using the formula,where L is length of the pendulum in feet and P is the period in seconds.Find the period of a 15-foot pendulum. Round your answer to the nearest hundreth of a second. algebra 1 What is the prime factorization of 20? What is the prime factorization of 20? HCA 220 A recent survey found that 70% of all adults over 50 wear glasses for driving. You randomly select 30 adults over 50, and ask if he or she wears glasses. Decide whether you can use the normal distribution to approximate the binomial distribution. If so, find the mean and stand... An airline reports that if has been experiencing a 15% rate of no-shows on advanced reservations. Among 150 advanced reservations, find the probability that there will be fewer than 20 no-shows. Its D if a sample of a distribution of sample size 26 has a range of 56, then what would be an estimate of the sample standard deviation Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=karen","timestamp":"2014-04-16T10:49:58Z","content_type":null,"content_length":"28744","record_id":"<urn:uuid:1185ae37-6908-43e0-8409-263bd50771f2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Notation for upperbound power sets.
There is a standard notation $\mathrm{ZF}[n]$ for Zermelo Fraenkel set theory with the power set axiom restricted to saying the set of natural numbers has $n$ successive power sets $\beth_0,\dots,\beth_n$. Is there a similarly standard notation for the extension of $\mathrm{ZF}[n]$ by an axiom saying every set has an hereditary embedding in $\beth_n$? set-theory proof-theory notation
I had not known that ZF[$n$] is standard notation for this theory. In fact, if someone had used this notation without any explanation, my first guess would be that it means ZF with replacement limited to $\Sigma_n$ formulas. – Andreas Blass Feb 23 '13 at 14:07
I suspect the notation $\mathrm{ZF}[n]$ comes from Harvey Friedman, in the context of saying that theory has the strength of order n+2 arithmetic. That is where I got it. I have seen it, likely on FOM, but I can't search it by Google since Google refuses to believe I want the square brackets! It gives me pages where virtually every variant possible of ZF by dropping some axiom has been named with a 0 somehow. – Colin McLarty Feb 23 '13 at 14:35
1 Answer
In modern set theory $H(\kappa)$ denotes the collection of sets $X$ such that $X$ is hereditarily of cardinality strictly less than $\kappa$. So, the statement you are asking for can be conveniently expressed as $V=H(\kappa)$, where $\kappa=\beth_{n}^+$.
That could work as a notation. But it seems odd to invoke cardinal successor in a context where we do not have choice, and especially to invoke the cardinal successor of a set, $\beth_n$, which the theory proves cannot have one. – Colin McLarty Feb 23 '13 at 12:57
{"url":"http://mathoverflow.net/questions/122548/notation-for-upperbound-power-sets","timestamp":"2014-04-19T12:45:39Z","content_type":null,"content_length":"53767","record_id":"<urn:uuid:3f9cc41b-53fe-41d6-aa7c-04333fa4308a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Suppose P and Q are equivalent sets and n(P)=17. What is the maximum number of elements in P u Q? What is the minimum? What is the maximum number of elements in P n Q? What is the minimum? Any help is appreciated! Thanks!
I think the maximum of n(P u Q) = 34 (the case where they do not overlap at all, i.e. they are separated), and the minimum is 17 (when P = Q, since the union always contains P). The max of n(P n Q) = 17, the min = 0.
Yes. For example if P = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17} and Q = {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q} then n(PuQ) = 34. But if P = Q then n(PnQ) = 17.
Thanks a lot, for both "works" and check my work.
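A compact editorial check of the bounds discussed above (not part of the original thread): since $P \subseteq P\cup Q$ and $n(P)=n(Q)=17$,
$17 \le n(P\cup Q) \le 34, \qquad 0 \le n(P\cap Q) \le 17,$
with the extremes attained at $P=Q$ (union 17, intersection 17) and at disjoint $P$, $Q$ (union 34, intersection 0). In particular, the minimum of $n(P\cup Q)$ is 17 rather than 0, because the union always contains all of $P$.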
{"url":"http://openstudy.com/updates/51195faee4b0e554778c7398","timestamp":"2014-04-17T15:56:40Z","content_type":null,"content_length":"32691","record_id":"<urn:uuid:93587799-0f12-4a35-82cb-3893f5ecc476>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
How does the work of a pure mathematician impact society? up vote 35 down vote favorite First, I will explain my situation. In my University most of the careers are doing videos to explain what we do and try to attract more people to our careers. I am in a really bad position, because the people who are in charge of the video want me to explain what a pure mathematician does and how it helps society. They want practical examples, and maybe naming some companies that work with pure mathematicians, and what they do in those companies. All this in only 5 or 10 minutes, so I think that the best I can do is give an example. Another reason that I am in a bad position: In my University we have the career "Mathematical engineering" and they do mostly applications and some research in numerical analysis and optimization. I know that pure mathematics is increasing its importance in society every year. Many people in my country think that mathematics has stagnated over time and now only engineers develop science. I think that the most practical thing I can do is give some examples of what we are doing with mathematics today (since 2000). If some of you can help me, I need the following: 1. A subject in mathematics that does not appear in (*). Preferably dynamical systems, logic, algebraic geometry, functional analysis, p-adic analysis or partial differential equations. 2. A research topic in that subject. 3. Practical applications of that research and the institution that made the application. Extra 1. If you know an institution (not a University) that contracts with pure mathematicians and you know what they do there, please tell me also. Extra 2. If you have a very good short phrase explaining "what a mathematician does" or "how mathematics helps society" I will appreciate it too. Thanks in advance. soft-question advice Riffing on Ricardo's comment (which used to be here), if I was in your position I'd mention instances of past useless pure mathematics that became very useful only years (sometimes a lot of 15 years) after their invention. Examples include the RSA code (basically no applications before internet commerce), whatever kind of Fourier transform drives MRI machines, the least squares method and asteroid orbit prediction, the kind of algebraic geometry now being applied in chemistry and biology, all of theoretical computer science (invented before computers, mostly) ... – Gunnar Magnusson Jul 19 '13 at 3:13 6 ... the point being that pure mathematics is one of those things that doesn't benefit society today. But if you give it a couple of decades, very limited parts of it might be quite useful to society tomorrow. – Gunnar Magnusson Jul 19 '13 at 3:14 [This is a comment I removed earlier.] This is just a short comment to relay my honest, yet useless, opinion. It does not help in the least with your predicament. It appears that impact 10 assessments like the one you describe are increasingly required for guaranteeing funding for research in mathematics. However, this feels like a losing battle, as most contemporary mathematicians will recognize that modern pure mathematics is not widely geared towards applications. Any wider societal impact, if it comes, will likely be rare, far in the future, and completely unpredictable. 
– Ricardo Andrade Jul 19 '13 at 3:32 6 crosspost math.stackexchange.com/questions/447063/… – Will Jagy Jul 19 '13 at 3:34 7 In case the need for extended meta-discussion shoudl arise, please use meta.mathoverflow.net/questions/511/… – quid Jul 19 '13 at 20:53 show 12 more comments closed as primarily opinion-based by Yemon Choi, David White, Bill Johnson, Mark Sapir, Andy Putman Jul 19 '13 at 22:18 Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise.If this question can be reworded to fit the rules in the help center, please edit the question. 15 Answers active oldest votes Turing's thesis and the development of the modern computer. John von Neumann's contribution to the war effort (and that of Turing and Hilton, for example). Fractals and modern computer imaging. Wavelets and data compression. Topological data analysis. x-ray diffraction. Number theory and cryptography. Linear programming, control theory, telecommunications, and up vote quadro-copters. Differential geometry and automotive design. Maybe it is currently out of favor, but the NSA is the largest employer of mathematicians in the world. To expand upon that 17 down thought, the general data analysis that occurs in government and industry that helps determine buying habits, consumer behavior, and other large scale human behaviors. Many of the applications you list end up getting represented as a graph. In my opinion graph theory is the most applicable area of pure math, and it certainly doesn't fall under the OP's (*). So I'm voting +1 here because it's the closest of all the answers to just saying "graph theory." The vast majority of data analysis involves some kind of graph algorithm which was originally developed for publication in a pure math journal. Scheduling and logistics are another application which would have fit in your answer. Logistics firms hire plenty of mathematicians. Microsoft Research too. – David White Jul 19 '13 at 20:41 8 Are you claiming graph theory has had more applications than analysis or Linear algebra? The whole of engineering and most of physics uses real and functional analysis and tonnes of linear algebra. – Piyush Grover Jul 19 '13 at 20:49 add comment Here are two recent speakers from our departments lecture series on applied mathematics (the Wing Lectures at U. Rochester). We've had a lot of great lecturers, but these two stick out to me as having an impact on society. Adrien Treuille -- he works in computer graphics, and his research purely in this areas includes algorithms realistic modeling of crowds, and real time fluid mechanics (e.g. with basically no delay, they can add digital trails of flames behind a race cars on TV). Also, he can construct initial data to make (digital) smoke form specified shapes at specified times. But, what's even cooler, is that he's collaborated with biologists on an interactive game (called FoldIt) for finding the optimal conformation of proteins. Players get points for moving the protein into better conformations. Humans are way better at this than computers, and they've actually published papers based on conformations discovered by players; in one case, the answer they found had eluded scientists working in the field for at least 10 years! They call this "crowd source science". 
They've also created another game for engineering shapes with RNA, and the players of that game have discovered things as well. up vote Gunnar Carlsson -- he is an algebraic topologist who originally worked in K-theory, but shifted to using algebraic topology to understand data. Specifically he was one of (the?) pioneers of 17 down persistent homology, which is a way of using topology to understand data. The really great thing about it is that it discovers structure for you--instead of fitting the data to a model, vote persistent homology discovers the model for you. For instance, they used PH to find the most commonly occurring 9-bit patterns in black and white images, which in theory would allow better image compression that JPEG (but in practical, JPEG is highly optimized, so it would take a lot of work to benefit from this discovery). In another example, they analyzed genomic information from cancer patients and linked it up with survival information; PH discovered the threshold for the expression of a certain gene at which survival drops significantly. Besides discovering structure in data, PH is also able to incorporate data collected at different times and from different sources (perhaps a reflection that the method is "metric free" --I'm not sure about that). A good reference to check is their paper in Nature. Also, Tony DeRose from Pixar gave a really great talk on the methods they've developed in computer graphics. I don't remember the details so well (maybe because I was distracted by the entertaining presentation :) so here's a link to a good article. I wa going to mention Topological Data Analysis which is very pure in its background and intuitions but is very useful in new areas of data analysis. Brendan beat me to it in mentioning 5 Gunnar Carlsson and his team. This is exciting modern maths and deserves being talked about to general audiences. There is also a paper on the discovery of new classes of breast cancer using TDA. Check out Topological Data Analysis with a web search. – Tim Porter Jul 19 '13 at 13:40 add comment The National Academies of Sciences recent report, The Mathematical Sciences in 2025 (NAS link here), has a chapter entitled "Connections Between the Mathematical Sciences and Other Fields," and details many such connections, as indicated below: up vote 16 down vote The whole report could be useful to you. It can be read online free. add comment General Relativity is always one of my primary examples how really advanced math is important in daily life: Who would have though in 19th century that Differential geometry and Mikowski Geometry would be important to describe our universe? However, GPS does not work without this. up vote 10 down vote And in the moment, things get further, as the Standard model of particle physics is a differential geometric model. 2 Well, Gauss seemed to think so, didn't he? – Marius Kempe Jul 19 '13 at 11:43 Well, he just constructed this as an example of a negatively curved space; but I would consider this as a pure academic interest in the first place. – Kofi Jul 19 '13 at 12:02 I'm not an expert, but isn't General Relativity an example of theoretical physics? It involved maths, but Einstein was not a "pure" mathematician - is the theory of General Relativity considered a work of "pure" math? – mindplay.dk Jul 19 '13 at 13:38 1 No, of course not, but it needed a lot of work on the side of pure mathematics to give the necessary foundations. 
I would say it was pure math until it happened to be relevant for Physics; this might be an example why you can say "who knows what this might be good for" when working on some seemingly abstract topic. – Kofi Jul 19 '13 at 15:16 I'm under the impression that Gauss seriously considered the possibility of the universe 'being' a non-Euclidean geometry... – Marius Kempe Jul 19 '13 at 19:25 show 1 more comment Personally I prefer the term "theoretical" mathematics rather than "pure." But just to give you an answer to Extra 1: Microsoft Research employs mathematicians trained in theoretical fields: for example topologists (e.g. Mike Freedman, Zhenghan Wang, Kevin Walker) at Microsoft Station Q in Santa Barbara are collaborating with physicists in an attempt to realize up vote 8 topological quantum computers. down vote add comment Compressive sensing and time frequency analysis might fit your bill. Suppose you have a signal (say a recording of someone's speech) that you want to transmit. In order to do this, you break the signal into a discrete set of pieces in such a way that from the pieces alone you can reconstruct the entire signal. Then you can send the discrete pieces of information and the receiver can reconstruct the signal. up vote 7 Abstractly, this amounts to asking "When can you reconstruct a function $f \in L^2(\mathbb{R}^d)$ if all you know are the values of $f$ on some lattice, or other discrete set?" You need down vote some conditions on $f,$ and some conditions on the point set, but surprisingly such a reconstruction is possible. Although I have largely billed it as an applied problem, there are many rich connections to areas of pure math, starting with Fourier and time frequency analysis, but leading into dynamics and operator algebras as well. add comment Linear algebra might be useful here. A classical research topic in linear algebra is the spectrum of matrices with non-negative entries (a matrix could be described maybe as an abstraction of geometric operations like rotations, reflections etc). Google's PageRank algorithm has been widely described, and is heavily based on these kind of aspects of linear up vote 7 algebra. down vote add comment Maybe not quite what you asked for, but the report Measuring the Economic Benefits of Mathematical Science Research in the UK (2.5MB) http://www.cms.ac.uk/files/Submissions/ up vote 5 down article_EconomicBenefits.pdf may be relevant. There are a few good pure maths examples in there. add comment See: http://www.whydomath.org/ Plenty of pure mathematics keeps turning into "applied" mathematics as time goes by. That is the gist of how math impacts society. I would stay away from the more philosophical answers such as "Math helps you to think logically" etc., since your audience would probably not appreciate that. Some of the more apparent uses of high level mathematics: 1). Theoretical advances in control theory that lets us fly UAVs, supersonic planes etc: Lockheed Martin/Boeing/etc. often employ mathematicians (pure and applied) to work on control aspects up vote 4 down vote 2). Theoretical advances in communication theory that has revolutionized the telecom business: Companies such as Qualcomm,Samsung etc. employ many PhDs who work in theory. 3). Pure Mathematicians such Edward Belbruno (NASA/Princeton) helped devise new ways of interplanetary travel using exotic properties of the three-body problem. add comment An applicable area of research that relates to PDE is inverse problems, for example the Calderon problem (see e.g. 
http://www.math.washington.edu/~gunther/publications/Papers/ calderoniprevised.pdf ). Applications include medical imaging (electrical impedance tomography), discovering oil under the sea, and finding cracks in concrete blocks. up vote 1 down vote Fields of mathematics related to inverse problems include PDE, microlocal analysis, integral transforms and Bayesian statistics. There is also numerical analysis. Medical imaging and imaging concrete blocks are under current research at least in universities of Helsinki and eastern Finland, respectively. add comment Clifford algebras have a wide range of applications inside and outside of Mathematics, including Differential Geometry, Computer Vision, Robotics, Theoretical Physics, Computer Science. up vote 1 See for instance: http://www.amazon.com/Geometric-Computing-Clifford-Algebras-Applications/dp/3642074421 down vote add comment Of course there are many companies hiring pure mathematicians. But they do not work as pure mathematicians, they become programmers, consultants, or statisticians… That is the truth up vote 1 down and prospective students deserve to know it. 1 I'm afraid that the truth is not acceptable in the circumstances of user80586. – André Henriques Jul 19 '13 at 22:09 Maybe, maybe not, it depends. Who wants to use the video? For which occasions? – The User Jul 19 '13 at 22:16 add comment One example is Latin squares and its application in design experiment. You can search in the web for more information about it and its application in everywhere. Another example is Line Group. The line groups describe the symmetry of quasi-one dimensional crystals. For more details, you can see the book: "Line Groups in Physics". Also, You can see the very interesting book with name: up vote 1 down vote " I' explosion des Mathematiques " that is in French language. Finally, I give a sweet example that maybe your audiences love it. The number of divisors of a small number is very important. it is a pure number theoretic problem. But, suppose a company want to packing its production like as chocolate or biscuit. They prefer to use number 4, 6 or 12. Why? for better sharing the chocolates or biscuits in a package among some 1 I don't know about this --- 'mint slice' biscuits here always used to have (still?) 13 in a packet, perhaps precisely to make sharing interesting. – Scott Morrison♦ Jul 20 '13 at 7:46 Dear Scott, I do not know 13 (and also mint slice), I know 12+1 and since 1 is very small rather than 12, you can drop one biscuit. – Shahrooz Jul 20 '13 at 22:01 add comment From What is Pure Mathematics? : Finance and cryptography are current examples of areas to which pure mathematics is applied in significant ways. up vote 0 down vote While Pure Mathematics may not have immediate, direct applications, there are those that graduate with Pure Math degrees that do various kind of work given the skills developed in completed the courses, at least at the U. of Waterloo where I studied Combinatorics & Optimization and Computer Science. add comment I'm not a mathematician (more of a general science geek) but I ran across this question by chance, and I feel that mathematician John Nash deserves a mention. 
up vote -3 He had important direct and indirect positive impact on society, both through his mathematical models of game theory and the role of money in society, as well as aspects of evolutionary down vote psychology, where he personally demonstrated that an individual with severe mental illness (paranoid schizophrenic) can function and make useful contributions to society. 2 The last part (about his personal circumstance) is utterly irrelevant to the question at hand, which asks about "the work of a pure mathematician". Also, game theory did not begin with Nash, nor is his fixed-point result the last word – Yemon Choi Jul 19 '13 at 18:38 While I'm sure his work was neither the first nor the last word on game theory, the question was how said word impacts society - if you think personal circumstance is of no relevance to society (?!) then please accept my humble apologies, and I shall go ahead and delete my reply promptly. – mindplay.dk Jul 24 '13 at 20:41 add comment Not the answer you're looking for? Browse other questions tagged soft-question advice or ask your own question.
{"url":"http://mathoverflow.net/questions/137114/how-does-the-work-of-a-pure-mathematician-impact-society","timestamp":"2014-04-17T04:05:51Z","content_type":null,"content_length":"122014","record_id":"<urn:uuid:11fe43c2-e91f-4649-800c-dadcd8758356>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Number theory, divisibility by 22
September 12th 2006, 07:01 AM: Hi guys, I'm taking an advanced algebra course and this is one of the questions in my assignment. Q: Decide which of the following integers are divisible by 22. I don't know if I can just use long division to do it or whether I have to apply the divisibility rule for 11. Thanks for your help. :)
September 12th 2006, 07:07 AM: Divisibility by 22 is equivalent to divisibility by 2 and by 11. Note all the given numbers are even, so check which of them are divisible by 11. How? Check the alternating sum of the digits: compute it for each of the given integers and see which sums are divisible by 11.
September 12th 2006, 07:41 AM: Just dividing by 11 also is not rocket science.
September 12th 2006, 04:30 PM: The formal definition of divisibility is that "a divides b" means there exists an integer k such that b = ka. Thus, if we take k = 0 it answers the question.
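A small illustrative script (not part of the original thread) applying the rule just described, namely that divisibility by 22 means divisibility by both 2 and 11, with the 11-test done via the alternating digit sum:

def divisible_by_22(n: int) -> bool:
    """Divisibility by 22 = divisibility by 2 and by 11 (alternating-sum test)."""
    if n % 2 != 0:                       # must be even
        return False
    digits = [int(d) for d in str(abs(n))]
    # alternating sum of the digits, starting from the least significant digit
    alt = sum(d if i % 2 == 0 else -d for i, d in enumerate(reversed(digits)))
    return alt % 11 == 0                 # divisible by 11 iff the alternating sum is

# sanity check against direct division
for n in (22, 110, 121, 484, 506, 1234):
    assert divisible_by_22(n) == (n % 22 == 0)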
{"url":"http://mathhelpforum.com/number-theory/5444-number-theory-divisibility-22-a-print.html","timestamp":"2014-04-21T16:02:20Z","content_type":null,"content_length":"8569","record_id":"<urn:uuid:d1d28769-a111-48e4-aa64-9d3948d89a49>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Bitwise OR Aggregate Function May 31 2008 Bitwise OR Aggregate Function A few months ago I was working on an application which used bits to store some flags in one number. In this approach the bit value in the number indicates whether a flag is on (1) or off (0). In my case a flag was tied with a column of a front-end table. If the flag was off then the column didn’t appear on the front-end. flag : flag[3] flag[2] flag[1] flag[0] state : off on off on bit value: 0 1 0 1 0101[2] (binary) = 5[10] (decadic), so the number 5 will be stored in the database. In this case, the columns tied with the flags flag[0] and flag[2] will appear on the front-end and the columns tied with the flags flag[1] and flag[3] won’t. We can use the BITAND function to find out whether the flag is set or not. The flag flag[i] in the number X is set iff BITAND(X, 2^i) = 1. I had to merge some tables vertically, in that application. The appearance of a column in the merged table depended on the result of the bitwise OR operations. If the result of the operation table[1] _flag[i] OR table[2]_flag[i] … OR table[n]_flag[i] was equal to 1, then the column column[i] appeared in the merged table. I had to find all columns visible in the merged table. There is no BITOR function in the Oracle database for the integer types. But it can be easily implemented: BITOR(N[1], N[2]) = N[1] + N[2] – BITAND(N[1], N[2]). This is common approach to compute the bitwise OR. The idea is simple: In exactly one (arbitrary) number, we have to unset those bits, which are set in both numbers and then simply use the addition operation. OR 0101 + 0101 + 0100 ---- ---- ---- We unset the last bit in exactly one number, because it is the only one common bit for both numbers: BITAND(1001, 0101) = 1 The next step to be done is implementation of bitwise OR aggregate function using user-defined aggregates interface: CREATE OR REPLACE TYPE bitor_impl AS OBJECT bitor NUMBER, STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT bitor_impl) RETURN NUMBER, MEMBER FUNCTION ODCIAggregateIterate(SELF IN OUT bitor_impl, VALUE IN NUMBER) RETURN NUMBER, MEMBER FUNCTION ODCIAggregateMerge(SELF IN OUT bitor_impl, ctx2 IN bitor_impl) RETURN NUMBER, MEMBER FUNCTION ODCIAggregateTerminate(SELF IN OUT bitor_impl, returnvalue OUT NUMBER, flags IN NUMBER) RETURN NUMBER CREATE OR REPLACE TYPE BODY bitor_impl IS STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT bitor_impl) RETURN NUMBER IS ctx := bitor_impl(0); RETURN ODCIConst.Success; END ODCIAggregateInitialize; MEMBER FUNCTION ODCIAggregateIterate(SELF IN OUT bitor_impl, VALUE IN NUMBER) RETURN NUMBER IS SELF.bitor := SELF.bitor + VALUE - bitand(SELF.bitor, VALUE); RETURN ODCIConst.Success; END ODCIAggregateIterate; MEMBER FUNCTION ODCIAggregateMerge(SELF IN OUT bitor_impl, ctx2 IN bitor_impl) RETURN NUMBER IS SELF.bitor := SELF.bitor + ctx2.bitor - bitand(SELF.bitor, ctx2.bitor); RETURN ODCIConst.Success; END ODCIAggregateMerge; MEMBER FUNCTION ODCIAggregateTerminate(SELF IN OUT bitor_impl, returnvalue OUT NUMBER, flags IN NUMBER) RETURN NUMBER IS returnvalue := SELF.bitor; RETURN ODCIConst.Success; END ODCIAggregateTerminate; The last step is definition of the bitwise OR aggregate function. This definition is tied with the object bitor_impl, that implements the aggregate function. I implemented the ODCIAggregateMerge method in this object, therefore I can allow parallel execution by using clause PARALLEL_ENABLE. 
CREATE OR REPLACE FUNCTION bitoragg(x IN NUMBER) RETURN NUMBER AGGREGATE USING bitor_impl; The aggregate function in action: SQL> DROP TABLE bitor_test; Table dropped SQL> CREATE TABLE bitor_test(table_name varchar2(31), flags number); Table created SQL> INSERT INTO bitor_test VALUES ('table1', 5); -- 0101 1 row inserted SQL> INSERT INTO bitor_test VALUES ('table2', 1); -- 0001 1 row inserted SQL> INSERT INTO bitor_test VALUES ('table3', 9); -- 1001 1 row inserted SQL> INSERT INTO bitor_test VALUES ('table4', 12); -- 1100 1 row inserted SQL> COMMIT; Commit complete SQL> SELECT bitoragg(flags) FROM bitor_test; Simple calculation shows us the correctness of this result: 5[10] OR 1[10] OR 9[10] OR 12[10] = 0101[2] OR 0001[2] OR 1001[2] OR 1100[2] = 1101[2] = 13[10] Tags: SQL Holger Bartnick says: July 24th, 2008 2:44 pm thanks a lot. this is exactly what i needed. Andrei Latyshau says: December 16th, 2008 8:12 pm Great Article! thank you very much for info! Jefwork says: November 16th, 2010 11:28 pm Yes, 2 years later, this helps me a lot. Now looking for a bitcount function to count the number of On-bits in a given number. Radino’s blog » Bitcount aggregate function says: November 17th, 2010 2:27 am [...] a blog post after ages . Recently I received a new feedback on my two years old article on bitwise or aggregate function, where a visitor is looking for a bitcount aggregate function. This was a nice exercise for me. [...] Mark Mitchell says: April 15th, 2011 6:48 pm a few months later and another satisfied blog reader. thank you. Scott says: August 26th, 2011 4:02 am We found this and used it to great success. Thanks much! Georg Egger says: April 9th, 2012 2:20 pm This made my day! Thanks a lot! Jorge says: July 15th, 2012 5:09 pm Right what i needed! Toxeh says: November 19th, 2012 2:25 pm Not work correctly if any value is null You need replace SELF.bitor := SELF.bitor + VALUE – BITAND (SELF.bitor, VALUE); SELF.bitor := SELF.bitor + NVL(VALUE,0) – BITAND (SELF.bitor, NVL(VALUE,0)); Anton Pryamostanov says: March 11th, 2013 11:44 am This was extremely usefull – allowed to replace 1 big bitor with 30 arguments (each being max(bit) over partition – thus 30 windows) with 1 aggregate. bedorlan says: May 9th, 2013 5:50 pm Exactly what i needed. Thanks!
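For readers outside Oracle, the same aggregate logic can be sketched in a few lines of Python (an illustrative port, not part of the original post): each row is folded into the running value with the identity a OR b = a + b - (a AND b), exactly as in ODCIAggregateIterate, and None values are treated like the NVL fix suggested in the comments.

from functools import reduce

def bitor_pair(a: int, b: int) -> int:
    # the identity used in the PL/SQL body: remove the common bits once, then add
    return a + b - (a & b)

def bitoragg(values):
    """Aggregate bitwise OR over an iterable; None behaves like NVL(value, 0)."""
    return reduce(bitor_pair, (v if v is not None else 0 for v in values), 0)

print(bitoragg([5, 1, 9, 12]))   # 13, matching the SELECT bitoragg(flags) example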
{"url":"http://radino.eu/2008/05/31/bitwise-or-aggregate-function/","timestamp":"2014-04-20T06:05:32Z","content_type":null,"content_length":"38770","record_id":"<urn:uuid:579fc5e6-c3a3-49cc-8d08-a259322ecf3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Two Step Linear Equations Video | MindBites
Solving Two Step Linear Equations
About this Lesson • Type: Video Tutorial • Length: 8:57 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 83 MB • Posted: 07/23/2010
This lesson is part of the following series: Basic Beginning Algebra and Geometry Series (30 lessons, $44.55) Basic Algebra Help 3.5 Hours (14 lessons, $24.75)
When you first get started in Algebra, one of the first things you learn to do is solve equations. This video solves equations that require two steps. It shows you how to know which step to do first and how to check whether your answer is correct. Equations in Algebra get much longer and more complicated than these; this is where you start so that you have the foundation to move ahead. If you are absent from school when your teacher explains this process, this video will help you catch up quickly. Whew! That's a good feeling. Being lost and getting behind is no fun!
About this Author (30 lessons): Welcome! I'm so glad you are here! Math help is here for you when you need it. I believe that using these Algebra and Geometry videos will help you understand the basics of Algebra and Geometry. Some students try very hard and still struggle to pass math. They start off strong but things quickly begin to fall apart. That happens as soon as the student becomes lost. Teenagers who find themselves in this position often let it "get away from them" before they seek help. Because Math is always a class of stepping stones, it rarely gets better without help. I urge you to seek help from your child's teacher first. Always. These Algebra and Geometry videos can help too. You can watch...
Lesson Outline: Very basic Algebra video for the beginning equation solver. No equations have more than two steps. This is the place to start if you are just learning algebra.
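A representative two-step equation of the kind the lesson covers (an editorial example, not taken from the video): to solve $3x + 5 = 20$, undo the addition first and then the multiplication, giving $3x = 15$ and so $x = 5$; checking, $3(5) + 5 = 20$.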
{"url":"http://www.mindbites.com/lesson/8394-solving-two-step-linear-equations","timestamp":"2014-04-21T02:03:55Z","content_type":null,"content_length":"50455","record_id":"<urn:uuid:26987511-7e7e-4355-b5ad-91ab46f5f9c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
need help factoring... can someone please help me (answer provided) May 7th 2008, 09:19 PM #1 May 2008 need help factoring... can someone please help me (answer provided) My teacher gave us the answer to this problem but I am having trouble showing my work. $<br /> 5m^3(m^3-1)^2-3m^5(m^3-1)^3<br />$ when i work it out step by step i get I pretty much get stuck right there... the answer i should get is: $<br /> m^3(m-1)^2(m^2+m+1)^2(5+3m^2-3m)^5$ If anyone can figure out how could you list the steps that come to the conclusion? I can deduct what is going on without grammatical explanation, I just need to see the steps. the answer doesn't seem to be correct, the highest power of m in the answer is 19 while the highest power of m in the question is 16 your working is correct to where it is now but i just don't see how you are going to get the answer provided by your teacher Thank you so much for your reply. ugh, now i am really pissed. :/ i spent like an hour and a half trying to figure that problem out. worst part about it is that the back of the book has that solution listed also. I did notice thought that i typed one part incorrectly. the answer should be: rather than: do you see any way that this may make a difference? Thank you so much for your reply. ugh, now i am really pissed. :/ i spent like an hour and a half trying to figure that problem out. worst part about it is that the back of the book has that solution listed also. I did notice thought that i typed one part incorrectly. the answer should be: ${\color{red} m^3(m-1)^2(m^2+m+1)^2(5+3m^2-3m^5)}$ rather than: do you see any way that this may make a difference? Better ! Notice that the degree of the red thing is 3+2+4+5=14 My teacher gave us the answer to this problem but I am having trouble showing my work. $<br /> 5m^3(m^3-1)^2-3m^5(m^3-1)^3 <br />$<< the degree is max(3+3, 5+9)=14 so it's ok when i work it out step by step i get I pretty much get stuck right there... the answer i should get is: $<br /> m^3(m-1)^2(m^2+m+1)^2(5+3m^2-3m)^5$ If anyone can figure out how could you list the steps that come to the conclusion? I can deduct what is going on without grammatical explanation, I just need to see the steps. I've pointed out in red the important things May 7th 2008, 10:15 PM #2 Mar 2008 http://en.wikipedia.org/wiki/Malaysia now stop asking me where is malaysia... May 8th 2008, 01:44 PM #3 May 2008 May 8th 2008, 01:51 PM #4
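An editorial worked route to the corrected answer quoted above: factor out the greatest common factor $m^3(m^3-1)^2$ first, so that
$5m^3(m^3-1)^2-3m^5(m^3-1)^3 = m^3(m^3-1)^2\left[5-3m^2(m^3-1)\right] = m^3(m^3-1)^2(5+3m^2-3m^5)$,
and since $m^3-1=(m-1)(m^2+m+1)$ this equals $m^3(m-1)^2(m^2+m+1)^2(5+3m^2-3m^5)$, whose total degree is 14, consistent with the degree check in the replies.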
{"url":"http://mathhelpforum.com/algebra/37595-need-help-factoring-can-someone-please-help-me-answer-provided.html","timestamp":"2014-04-18T12:25:10Z","content_type":null,"content_length":"44891","record_id":"<urn:uuid:1d55a804-75d3-4bf9-bf2d-75eee00cba2b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
ks :: (Fractional t, Integral a1, Ord t) => (a -> t) -> a1 -> [a] -> (t, t, t)Source Kolmogorov-Smirnov statistic for a set of data relative to a (continuous) distribution with the given CDF. Returns 3 common forms of the statistic: (K+, K-, D), where K+ and K- are Smirnov's one-sided forms as presented in Knuth's Semi-Numerical Algorithms (TAOCP, vol. 2) and D is Kolmogorov's undirected version. In particular, • K+ = sup(x -> F_n(x) - F(x)) * K- = sup(x -> F(x) - F_n(x)) * D = sup(x -> abs(F_n(x) - F(x))) ksTest :: (Floating a1, RealFrac a1, Factorial a1, Unbox a1) => (a -> a1) -> [a] -> a1Source ksTest cdf xs Computes the probability of a random data set (of the same size as xs) drawn from a continuous distribution with the given CDF having the same Kolmogorov statistic as xs. The statistic is the greatest absolute deviation of the empirical CDF of XS from the assumed CDF cdf. If the data were, in fact, drawn from a distribution with the given CDF, then the resulting p-value should be uniformly distributed over (0,1]. data KS d a whereSource KS distribution: not really a standard mathematical concept, but still a nice conceptual shift. KS n d is the distribution of a random variable constructed as a list of n independent random variables of distribution d. The corresponding CDF instance implements the K-S test for such lists. For example, if xs is a list of length 100 believed to contain Beta(2,5) variates, then cdf (KS 100 (Beta 2 5)) is the K-S test for that distribution. (Note that if length xs is not 100, then the result will be 0 because such lists cannot arise from that KS distribution. Somewhat arbitrarily, all lists of "impossible" length are grouped at the bottom of the ordering encoded by the CDF instance.) The KS test can easily be applied recursively. For example, if d is a Distribution of interest and you have 100 trials each with 100 data points, you can test it by calling cdf (KS 100 (KS 100 d)). KS :: !Int -> !(d a) -> KS d [a] Distribution d a => Distribution (KS d) [a] CDF d a => CDF (KS d) [a] Eq (d a) => Eq (KS d [a]) Show (d a) => Show (KS d [a])
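A short numerical sketch of the three statistics defined above (illustrative Python, independent of this Haskell package): for sorted data the suprema are attained at the sample points, so K+ = max_i(i/n - F(x_(i))), K- = max_i(F(x_(i)) - (i-1)/n), and D = max(K+, K-).

import numpy as np

def ks_statistics(cdf, xs):
    """Return (K+, K-, D) for data xs against a continuous CDF, per the definitions above."""
    x = np.sort(np.asarray(xs, dtype=float))
    n = len(x)
    F = cdf(x)                          # model CDF at the sorted sample points
    i = np.arange(1, n + 1)
    k_plus = np.max(i / n - F)          # sup of (empirical - model)
    k_minus = np.max(F - (i - 1) / n)   # sup of (model - empirical)
    return k_plus, k_minus, max(k_plus, k_minus)

# example: 100 uniform variates tested against the U(0,1) CDF
rng = np.random.default_rng(0)
print(ks_statistics(lambda t: np.clip(t, 0.0, 1.0), rng.random(100)))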
{"url":"http://hackage.haskell.org/package/ks-test-0.1/docs/Math-Statistics-KSTest.html","timestamp":"2014-04-18T18:23:23Z","content_type":null,"content_length":"8512","record_id":"<urn:uuid:afb5e869-5035-4a25-a818-01729a0f0494>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Unsteady-State Evaporation in an Infinite Tube This Demonstration shows the unsteady-state evaporation of a liquid in bulk of vapor in an infinite tube, as governed by the following equation: , . Here is the diffusion coefficient, is the time, is the mole fraction of compound in the gaseous phase, is the nondimensional media molar velocity, and is the vertical position in the tube. The boundary conditions are at , at . The problem is solved here using a shooting technique. R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena , 2nd ed., New York: John Wiley and Sons. D. S. Sophianopoulos and P. G. Asteris, "Interpolation Based Numerical Procedure for Solving Two-Point Nonlinear Boundary Value Problems," International Journal of Nonlinear Sciences and Numerical Simulations (1), 2004 pp. 67–78.
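The governing equation and boundary conditions did not survive extraction above, so only the named solution strategy can be illustrated: a shooting technique for a two-point boundary value problem. The sketch below is generic Python with a placeholder right-hand side, not the Demonstration's actual evaporation model.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# placeholder two-point BVP:  y'' = f(z, y, y'),  y(0) = 1,  y(L) = 0
def rhs(z, state):
    y, dy = state
    return [dy, -0.5 * y]              # stand-in right-hand side only

def shoot(slope, L=1.0):
    sol = solve_ivp(rhs, (0.0, L), [1.0, slope])
    return sol.y[0, -1]                # value of y at z = L for this trial slope

# adjust the unknown initial slope until the far boundary condition y(L) = 0 is met
slope = brentq(lambda s: shoot(s), -10.0, 10.0)
print("initial slope that satisfies the far boundary condition:", slope)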
{"url":"http://demonstrations.wolfram.com/UnsteadyStateEvaporationInAnInfiniteTube/","timestamp":"2014-04-16T21:53:33Z","content_type":null,"content_length":"44832","record_id":"<urn:uuid:dfc158e1-c212-4e79-9817-3f8b0457d1d3>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Bachelor of Science in Mathematics
A degree in Mathematics adds up to success in a variety of careers. The mathematics discipline covers a very diverse spectrum of topics including: • Algebra • Analysis • Statistics • Applied Mathematics
Because of this, the Bachelor of Science degree program allows for a great deal of customization for students. After completing the Calculus sequence (I, II and III), a few Computational Science courses, a Physics Lab, and an Introduction to Advanced Mathematics course, our majors can choose from a large selection of interesting and varied classes to complete their degree. More information can be found here.
Expected Outcomes
A Mathematics major can lead to a wide variety of interesting and challenging careers. Past graduates are employed as statisticians, actuaries, analysts, and engineers for major companies. Others work in the fields of finance, computers, or even cryptography. Many choose to continue their education and have been very successful in graduate school. More information can be found here.
Our mathematics majors are very active in and around the university. They • are involved in the Mathematics Club, which hosts game nights and invited speakers. • have the opportunity to work one-on-one with professors on in-depth research projects. • often become paid tutors for mathematics classes. • can join honorary societies like Pi Mu Epsilon. • travel to conferences to meet other mathematics majors and watch or give presentations. • compete in individual and team mathematics competitions.
Mathematics majors can also apply to receive one of our many dedicated scholarships. More information can be found here.
{"url":"http://www.clarion.edu/457733/","timestamp":"2014-04-17T01:00:26Z","content_type":null,"content_length":"17510","record_id":"<urn:uuid:a91d0f8d-7a6b-404e-abdc-282d13aacd24>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Log-Likelihood Ratio Calculation for Iterative Decoding on Rayleigh Fading Channels Using Padé Approximation Journal of Applied Mathematics Volume 2013 (2013), Article ID 970126, 10 pages Research Article Log-Likelihood Ratio Calculation for Iterative Decoding on Rayleigh Fading Channels Using Padé Approximation Department of Management Science, Faculty of Engineering, Tokyo University of Science, 1–3 Kagurazaka, Shinjuku-ku, Tokyo 162–8601, Japan Received 22 March 2013; Accepted 9 July 2013 Academic Editor: D. R. Sahu Copyright © 2013 Gou Hosoya and Hiroyuki Yashima. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Approximate calculation of channel log-likelihood ratio (LLR) for wireless channels using Padé approximation is presented. LLR is used as an input of iterative decoding for powerful error-correcting codes such as low-density parity-check (LDPC) codes or turbo codes. Due to the lack of knowledge of the channel state information of a wireless fading channel, such as uncorrelated fiat Rayleigh fading channels, calculations of exact LLR for these channels are quite complicated for a practical implementation. The previous work, an LLR calculation using the Taylor approximation, quickly becomes inaccurate as the channel output leaves some derivative point. This becomes a big problem when higher order modulation scheme is employed. To overcome this problem, a new LLR approximation using Padé approximation, which expresses the original function by a rational form of two polynomials with the same total number of coefficients of the Taylor series and can accelerate the Taylor approximation, is devised. By applying the proposed approximation to the iterative decoding and the LDPC codes with some modulation schemes, we show the effectiveness of the proposed methods by simulation results and analysis based on the density evolution. 1. Introduction In recent years, iterative decoding techniques based on message passing algorithm such as turbo decoding [1] or belief-propagation (BP) decoding [2–4] have been attracted by their significant performance which attain close to the Shannon limit. The BP decoding algorithm, a well-known iterative decoding algorithm for LDPC codes [2, 3], has been widely studied for the binary erasure channel or the additive white Gaussian noise (AWGN) channel [2–8]. The algorithms firstly derive channel log-likelihood ratios (LLR) where the messages in the decoder are initialized to these LLR values. To exhibit good performance with BP decoding, this channel LLR should be obtained with high accuracy, but it becomes complicated for some channel models such as wireless fading channel [9]. In this study, we focus on a calculation of channel LLR over the uncorrelated flat Rayleigh fading channels where the discrete-time component transmitted signal is input to a band-limited channel; that is, [10]. Here , , , and denote a channel output, a fading gain, a channel input, and an additive white Gaussian noise (AWGN) with variance at time , respectively. Hereinafter we drop the subscript . If at each received bit position is known to the receiver, we call this case known channel state information (CSI). If at each received bit position is unknown to the receiver, we call this case unknown CSI. For a known CSI case, channel LLR can be easily calculated using the channel outputs , , and . 
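Most of the displayed and inline mathematics in this extract is missing; for orientation, the standard form of the model the introduction describes is, as a hedged reconstruction (the paper's own normalization may differ): $y = a\,x + n$ with Rayleigh gain $a$ (density $2a\,e^{-a^2}$ for $a\ge 0$, so $\mathrm{E}[a^2]=1$) and Gaussian noise $n\sim\mathcal{N}(0,\sigma^2)$; with known CSI and BPSK ($x=\pm 1$) the channel LLR reduces to the familiar $L(y\mid a)=\ln\frac{p(y\mid x=+1,a)}{p(y\mid x=-1,a)}=\frac{2ay}{\sigma^2}$.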
However, for an unknown CSI case which is more practical than known CSI and is our main interest, a calculation of the channel LLR is rather complex due to an integration of . The studies of wireless fading channels with the LDPC codes or turbo codes were presented in [11–20] with several modulation schemes [9, 10, 21] such as binary modulation (binary phase shift keying (BPSK)) or nonbinary modulations. In [14] for BPSK, Hou et al. have studied designing irregular LDPC codes [6] using density evolution [7, 8] and have shown that these codes can approach the Shannon limit. But they have used the following simple linear approximation for a calculation of the channel LLR: where denotes the expectation of the channel gain . Although the previous approximation is simple and is easy to implement, it is inaccurate that degradation in the decoding performance compared with the true LLR [19] can be seen. Yazdani and Ardakani [19, 20] have also proposed a linear LLR approximation whose performance is almost identical to the true LLR: where is obtained by maximizing Here is given by where and denotes the complementary error function [9]. However, an optimization of using (3) and (4) requires for each channel parameter , so that it needs large complexity to implement. Recently Asvadi et al. [11] have applied Taylor approximation of order to the true LLR function such that where denotes the coefficient of Taylor series of order . They have derived both linear and nonlinear approximations with small orders. For a linear approximation (Taylor series of order ), it is given by For a nonlinear approximation (Taylor series of order ), it is given by From the previous approximations, one can obtain accurate LLR without optimizing complicated functions such as in [19, 20]. To move our attentions to nonbinary modulations, which is more practical case, the LLR calculations are performed bitwise [15, 21]. The previous work by Yazdani and Ardakani [20], which is an extension of [19], has devised the LLR approximation method, but it becomes complex to evaluate LLR due to the increment of the number of parameters for the optimization. To fit the true LLR functions, the authors in [11] have modified the approximation functions of Taylor series of order 3. This modification is not easy to replicate and is required for each parameter of the channels. Moreover it is well known that the Taylor approximation quickly becomes inaccurate as the variable leaves the derivative point. To overcome these problems, we devise a new LLR approximation using Padé approximation [22] on the uncorrelated flat Rayleigh fading channels with unknown CSI for BPSK and 8-PAM. Padé approximation expresses the original function by rational form of two polynomials with the same total number of coefficients of the Taylor series, and it can accelerate the Taylor approximation. Generally Padé approximation is accurate not only at the derivative point but also at the wide range of intervals of variables. We show by simulation results and analysis based on the density evolution that our method can approximate LLR function with high accuracy and can yield almost the same decoding performance as the true LLR. The Padé approximation is a generalization of the Taylor approximation, and the proposed method exhibits slightly better performance than the method using Taylor approximation [11]. Moreover we design irregular LDPC codes based on our LLR approximation function. 
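The true unknown-CSI LLR that these approximations target is an expectation of the Gaussian likelihood over the unknown gain; the sketch below evaluates it by numerical integration (illustrative Python; the Rayleigh density 2a*exp(-a^2) with E[a^2] = 1 is an assumed normalization, since the paper's displayed equations are missing from this extract). The normalizing constants cancel in the ratio.

import numpy as np
from scipy.integrate import quad

def true_llr_bpsk(y, sigma):
    """Unknown-CSI BPSK LLR: log-ratio of p(y|x=+1) to p(y|x=-1), each obtained by
    integrating the Gaussian likelihood over the Rayleigh-distributed gain."""
    def lik(x):
        return quad(lambda a: 2 * a * np.exp(-a**2) *
                    np.exp(-(y - a * x)**2 / (2 * sigma**2)), 0.0, np.inf)[0]
    return np.log(lik(+1.0) / lik(-1.0))

# compare with the simple linear rule LLR ~ 2*E[a]*y/sigma^2 discussed above
sigma, Ea = 0.8, np.sqrt(np.pi) / 2
for y in (0.5, 1.0, 2.0, 3.0):
    print(y, round(true_llr_bpsk(y, sigma), 3), round(2 * Ea * y / sigma**2, 3))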
This paper is organized as follows: Section 2 gives the channel model, LLR calculation method, and LDPC codes. In Section 3, we briefly review Taylor and Padé approximations, and then we present the proposed LLR calculation by Padé approximation. Numerical results are shown in Section 4, and Section 5 concludes the paper. 2. Preliminaries 2.1. Channel Model and LLR Calculation We here consider the following discrete-time channel model: where and represent the channel input, output, and noise, respectively, and , denote a set of transmitted symbols and that of real numbers, respectively. (As mentioned in Section 1, we drop a time subscript for , , , and .) Moreover is the channel gain with an uncorrelated flat Rayleigh distribution by its probability density function (pdf) , and is the white Gaussian noise with mean 0 and variance . Using the bit-interleaved coded modulation (BICM) scheme [10, 15, 21] for a transmission, an information bit sequence is mapped to the codeword (bit sequence) of length by error-correcting codes, and then it is partitioned into blocks denoted by , , of length . Hereinafter we drop the subscript of both and to simplify discussions. This block is mapped to transmit a signal in a Gray-labeled -ary signal constellation of size . The signal constellation for BSPK and 8-PAM is depicted in Figure 1. For the fading channel, two cases can be considered which depends on the knowledge of at the receiver. 2.1.1. Known CSI For a known CSI case, we can use channel fading gain for each bit position . The channel LLR is given by where denotes the th bit of block which is mapped to and is a set of blocks which satisfies for . Moreover a base of logarithm takes a natural number , and is given by For BPSK, (9) is reduced to The calculation of the previous equation is not a difficult task. 2.1.2. Unknown CSI For an unknown CSI case, we cannot use channel fading gain for each received bit position. The channel LLR is given by where is given by For BPSK, (12) is given by where and . The calculation of (12) is so complicated that several works have tried to reduce the computational complexity by approximations [11, 13, 14, 19, 20]. For the -ary PAM, in (13) becomes where . Using (15), the LLR in (12) can be evaluated. The previous equations are so complicated to implement that several works have tried to reduce the computational complexity by approximations. Notice that for the fading channels with some modulations, the log-sum approximation was used for bitwise linear approximation. This approximation is only efficient for a known CSI case, since an integration of fading factor for a calculation of LLR is not needed. But for an unknown CSI case, an integration of fading factor is needed. Moreover it is effective only for a high signal-to-noise (SNR) region, where the sum in (9) is dominated by a single large term. 2.2. LDPC Codes We here consider binary LDPC codes. An LDPC code is represented by the Tanner graph which consists of the variable nodes and the check nodes. These nodes are incident with the edges. Let and denote the maximum number of edges incident to the variable nodes and check nodes, respectively. Let and be variable node degree distribution and check node degree distribution where and denote fractions of the number of edges incident to the variable node and check node of degrees in the Tanner graph of the code, respectively. An LDPC code is specified by , , and . The rate of the codes is given by where denotes the number of check nodes and is given by . 
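The displayed rate formula was lost in extraction; the usual design-rate expression for an ensemble with edge-perspective degree distributions is R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i), sketched below (illustrative Python stating the standard definition, not a quotation of the paper).

def design_rate(lam, rho):
    """Design rate of an LDPC ensemble.
    lam[i], rho[j]: fraction of edges attached to variable/check nodes of degree i, j."""
    num = sum(r / d for d, r in rho.items())
    den = sum(l / d for d, l in lam.items())
    return 1.0 - num / den

# a (3,6)-regular ensemble: every edge meets a degree-3 variable node and a degree-6 check node
print(design_rate({3: 1.0}, {6: 1.0}))   # 0.5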
An ensemble of LDPC codes [2] is denoted by . Combined with the BP decoding algorithm, LDPC codes with degree distributions optimized by density evolution [5, 7, 8] can attain performance close to the theoretical limit (the Shannon limit). The iterative threshold of an ensemble of LDPC codes is defined as the maximum noise standard deviation of the channel in (8) such that where denotes the message error probability at iteration of the BP decoding algorithm. is calculated recursively by density evolution [5, 7, 8], which keeps track of the message error probability of the BP decoding algorithm starting from the pdf of the channel LLR. The iterative threshold is sometimes measured by or by the signal-to-noise ratio (SNR) such that (for BPSK) or (for 8-PAM), respectively, where and denote the average energy per information bit and the one-sided power spectral density of the additive white Gaussian noise (AWGN).

3. LLR Approximation Based on Padé Approximation

Before approximating the true LLR function in (12), we briefly explain the Taylor approximation and then describe the Padé approximation.

3.1. Brief Review of Taylor and Padé Approximations

Let be the original function, and let be the th derivative of . Let and be the closed and open intervals between and , respectively.

Definition 1. Suppose that has derivatives at the point and has a derivative at , where , . Assume that , are continuous on and that exists on . The Taylor polynomial of order for at the point is then defined by The remainder term, which is the difference between the true value of the function and its Taylor polynomial, is given by For some and , one can approximate the original function by in (16). However, this function quickly becomes inaccurate as moves away from , even if is large. The approximation in (16) can often be accelerated by rearranging it into a ratio of two series using the Padé approximation, which generalizes the Taylor approximation while keeping the same total number of coefficients in the two series. Before describing the Padé approximation, we rewrite (16) as follows:

Definition 2. Suppose that is approximated by the Taylor series in (18). The Padé approximation of order , , , , is given by where and are determined so that the coefficients of the terms of in (19) are equal to those of in (18).

From Definition 2, the polynomials , , and satisfy the equation Equation (20) states that all the coefficients of of and on the left-hand side of (20) are equal. We can express these relations by the following simultaneous equations for : Notice that the terms on the left-hand side of (20) are absorbed into on the right-hand side of the equation; these terms are not needed for the evaluation of and . We have already evaluated the coefficients in (18) by the Taylor series. Moreover, we assume that and are normalized, so we set . This normalization is valid since the Padé approximation in (19) is of rational form. Substituting and into (20), we can obtain the coefficients and . Note that the Taylor approximation and the Padé approximation are equivalent to each other if . Therefore the Padé approximation is a generalization of the Taylor series approximation.

3.2. Applying Padé Approximation to LLR Function

3.2.1. LLR Calculation for BPSK

By applying the Padé approximation to the true LLR function for BPSK in (14), we can obtain the approximated function.
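As a small numerical sketch of Definition 2 (the function exp(x) is used only because its Taylor coefficients are known exactly; the variable names are illustrative), the [L/M] Padé coefficients can be obtained from the Taylor coefficients by solving the linear system implied by (20), with the normalization b_0 = 1 described above:

```python
import numpy as np
from math import factorial

def pade_from_taylor(c, L, M):
    """[L/M] Pade coefficients from Taylor coefficients c[0..L+M].

    Returns (a, b): numerator a[0..L] and denominator b[0..M] with b[0] = 1,
    chosen so that (sum a_k x^k) / (sum b_k x^k) matches the Taylor series
    through order L + M."""
    c = np.asarray(c, dtype=float)
    assert len(c) >= L + M + 1
    # Denominator: solve sum_{j=1..M} b_j * c_{k-j} = -c_k for k = L+1 .. L+M
    A = np.array([[c[k - j] if k - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator: truncated convolution of denominator and Taylor coefficients
    a = np.array([sum(b[j] * c[k - j] for j in range(0, min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

if __name__ == "__main__":
    L, M = 2, 3
    c = [1.0 / factorial(k) for k in range(L + M + 1)]   # Taylor coefficients of exp(x)
    a, b = pade_from_taylor(c, L, M)
    x = 1.5
    pade = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
    taylor = np.polyval(np.array(c)[::-1], x)
    print(f"exp({x}) = {np.exp(x):.6f}, Pade[2/3] = {pade:.6f}, Taylor(5) = {taylor:.6f}")
```

The denominator coefficients come from an M-by-M linear solve and the numerator follows by a truncated convolution; the coefficients used in (22) and (26) below are obtained from the Taylor coefficients in the same spirit.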
To fit the true LLR function, we searched over approximating functions for several order pairs and found that the Padé approximation of order (2, 3) is more accurate around 0 than the Taylor series of order 3 in (7), which was previously known as the best approximation [11]. The proposed LLR approximation is given as follows: where , , , and , The above approximation is obtained from the Taylor series of order 5: where Then (21) becomes Substituting the values in (24) into (26), we obtain the coefficients , and .

We then compare the accuracy of the approximated LLR functions. Figures 2(a) and 3(a) show LLR values for the uncorrelated flat Rayleigh fading channel with unknown CSI for and , respectively. The LLR values in these figures are evaluated by the true LLR in (14), the Taylor series of orders 3 and 5 ("Taylor3" and "Taylor5") in (7) and (24), respectively, the Padé approximation of order (2, 3) ("Pade23") in (22), and the linear approximation ("Ex") in (1). From these figures, the Padé approximation is almost identical to the true LLR over the range of channel outputs . This may be attributed to the fact that the total order of Pade23 is larger than that of Taylor3 (). However, Taylor5 () becomes inaccurate as grows large. Therefore a large order alone does not guarantee an accurate LLR approximation, especially when the Taylor approximation is used. To compare the LLR approximations in detail, Figures 2(b) and 3(b) show the absolute differences between the true LLR and the approximated LLRs (Taylor3, Taylor5, and Pade23). We show only the case of , since the true LLR function is odd symmetric; that is, . From Figures 2 and 3, the accuracies of Pade23 and Taylor5 are good, especially for small . However, their accuracies differ markedly for large : Pade23 shows the best accuracy, whereas Taylor5 shows the worst.

Next we consider an analysis based on density evolution for the different LLR calculation methods. We derive the pdf of the LLR assuming that is transmitted. From (10), we have Averaging (27) over by integration, we obtain Then the density of the LLR function can be expressed in parametric form [4]. For in (22), this is given by where denotes the derivative of the function with respect to . Equation (29) is the case of the proposed approximation function, but one can obtain the pdf of the other LLR functions in the same way. For example, replacing with in (14), we can obtain the pdf of the true LLR function .

3.2.2. LLR Calculation for 8-PAM

We demonstrate the Padé approximation for the bitwise LLR of the 8-PAM constellation with Gray labeling on the Rayleigh fading channel without CSI (SNR = 7.91 (dB) ()) in Figure 4. The coefficients of the LLR approximation functions of Taylor3 and the Padé approximation are listed in Table 1. The derivative points for each bit LLR are chosen where these functions take for . Notice that we can omit the coefficients for the case (bits 2 and 3), since these LLR functions are even functions; that is, we can derive from for . The orders of the Padé approximation differ from bit to bit since each bit LLR function is distinct. For bit 1, this point is . For bit 2, these points are (2 points), so we have two LLR approximation functions for bit 2. We chose the order pairs (4, 7) and (2, 4) of the Padé approximation (denoted by "Pade47" and "Pade24") for bits 1 and 2, respectively. The true LLR function for bit 2 is an even function, so we switch between the two LLR approximation functions at . Since these approximation functions intersect at , the resulting function is continuous. For bit 3, these points are , (4 points).
For and , we use Padé approximations of orders (4, 1) and (3, 4) (denoted by "Pade41" and "Pade34"), respectively. The true LLR function for bit 3 is also an even function, so we switch between two LLR approximation functions, whose derivative points are (bit 3 (a) in Table 1), at the intersection point . However, the two LLR approximation functions bit 3 (a) and (b) in Table 1 do not intersect at any . Therefore we searched for two switch points in the interval to minimize the loss of accuracy, and we set these points for the Taylor approximation at , . We take the function (bit 3 (a)) for and the function (bit 3 (b)) for . Between , we take the weighted mean of the two approximation functions (bit 3 (a) and (b)), as shown in Figure 4(d). Likewise, we switch the LLR functions for the Padé approximation at , , as shown in Figure 4(e). From Figure 4(c), the switch points for Taylor3 are easily visible, but for the Padé approximation they are not visible explicitly. This is because the accuracy of the Padé approximation is higher than that of Taylor3 over a wide range of the variable. Notice that, for bit 3, it is not necessary to consider the weighted mean of two approximation functions if a Padé approximation with a higher order pair is used. In this case, we found that the Padé approximation of orders (4, 5) at the point provides almost the same accuracy as the combination of (4, 1) and (3, 4), as shown in Figure 4(f) (denoted by "Pade45"). However, the accuracy of the Padé approximation of orders (4, 5) for large is not as good as that of the combination of the Padé approximations of orders (4, 1) and (3, 4). Also note that the previous work in [11] modified the Taylor3 functions to fit the true LLR for bits 2 and 3. Since this is not easy to replicate, we show only the original form of Taylor3.

4. Numerical Results and Discussion

In order to compare the proposed LLR approximations, we use LDPC codes with BP decoding and show results from simulations and from analysis based on density evolution. We show only the case where CSI is unknown to the receiver.

4.1. Results for BPSK

Using density evolution [5, 7, 8], Table 2 shows the iterative thresholds of the , , and LDPC code ensembles, whose rates are, respectively, , , and , for the different LLR calculation methods. Evaluated precisely, the thresholds of the three methods are almost identical, especially for the true LLR and Pade23, but there is a small gap between Taylor3 and the other methods. Table 3 shows degree distribution profiles and their iterative thresholds for (a) threshold-optimized and (b) rate-optimized irregular LDPC codes. These profiles and their corresponding thresholds are evaluated based on the true LLR, Taylor3, and Pade23, respectively. From Table 3(a), for a fixed rate, the threshold of Pade23 is almost the same as that of the true LLR and is slightly better than that of Taylor3. From Table 3(b), for a fixed threshold, the rate of the code obtained with Pade23 is almost the same as that obtained with the true LLR and is slightly higher than that obtained with Taylor3. These thresholds are close to the Shannon limit; for the rate-1/2 code, the limit is ( [dB]).

4.2. Results for 8-PAM

Figure 5 shows the bit error rate (BER) of LDPC codes with for the uncorrelated flat Rayleigh fading channel with 8-PAM. The number of transmitted codewords is . The BERs of the true LLR and the two Padé approximations (the combination of orders (4, 1) and (3, 4), and orders (4, 5)) are almost the same, and that of Taylor3 is slightly higher than those of the other three methods. Table 4 shows the iterative thresholds of the LDPC code ensemble for the different LLR calculation methods with 8-PAM.
The threshold of the Padé approximation is almost identical to that of the true LLR (within 0.02 dB), and it is better than that of Taylor3. Notice that the threshold of Taylor3 in [11] was SNR = 7.86 (dB), which is better than that of the Padé approximation. However, the method in [11] performed several modifications to fit the true LLR functions for bits 2 and 3, and these approximations are not easy to replicate. The Padé approximation is better than the linear approximation in [20] (SNR = 7.88 (dB)). The threshold of the Padé approximation of orders (4, 5) for the bit 3 LLR function is SNR = 7.88 (dB), which is the same as that of the linear approximation.

5. Conclusion

In this paper, we applied the Padé approximation, which is a generalization of the Taylor approximation, to the LLR function on the uncorrelated flat Rayleigh fading channel with unknown CSI. Using the Padé approximation, we can improve the accuracy of the LLR approximation so that it is accurate not only around the derivative point but also over other intervals of the variable. From the simulation results and the analysis based on density evolution, our method yields almost the same decoding performance as the true LLR function and is slightly better than the conventional approximation method (the Taylor approximation of order 3). Moreover, we derived some irregular LDPC code profiles whose iterative thresholds are close to the Shannon limit. Applying the Padé approximation to other modulations (e.g., 16-QAM) or other channels (e.g., Rician fading) remains for future work.

The authors are grateful to the anonymous reviewers for their thorough and conscientious reviews, which improved the quality of this paper. One of the authors, Gou Hosoya, would like to thank Dr. M. Kobayashi at the Shonan Institute of Technology and Dr. H. Yagi at the University of Electro-Communications for their valuable comments and discussions. This research was partly supported by JSPS KAKENHI Grant nos. 25820166 and 25420386.

References
1. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo-codes," in Proceedings of the IEEE International Conference on Communications (ICC '93), pp. 1064–1070, Geneva, Switzerland, May 1993.
2. R. G. Gallager, Low-Density Parity-Check Codes, MIT Press, 1963.
3. D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, no. 2, pp. 399–431, 1999.
4. T. J. Richardson and R. L. Urbanke, Modern Coding Theory, Cambridge University Press, Cambridge, UK, 2008.
5. S. Y. Chung, On the construction of some capacity-approaching coding schemes [Ph.D. thesis], M.I.T., Cambridge, Mass, USA, 2000.
6. M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved low-density parity-check codes using irregular graphs," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 585–598, 2001.
7. T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 599–618, 2001.
8. T. J. Richardson, M. A. Shokrollahi, and R. L.
Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 619–637, 2001.
9. J. G. Proakis and M. Salehi, Digital Communications, McGraw-Hill, 5th edition, 2005.
10. A. Guillén i Fàbregas, A. Martinez, and G. Caire, "Bit-interleaved coded modulation," Foundations and Trends in Communications and Information Theory, vol. 5, no. 1-2, pp. 1–153, 2008.
11. R. Asvadi, A. H. Banihashemi, M. Ahmadian-Attari, and H. Saeedi, "LLR approximation for wireless channels based on Taylor series and its application to BICM with LDPC codes," IEEE Transactions on Communications, vol. 60, no. 5, pp. 1226–1236, 2012.
12. E. K. Hall and S. G. Wilson, "Design and analysis of turbo codes on Rayleigh fading channels," IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, pp. 160–174, 1998.
13. G. Hosoya, M. Hasegawa, and H. Yashima, "LLR calculation for iterative decoding on fading channels using Padé approximation," in Proceedings of the International Conference on Wireless Communications and Signal Processing (WCSP '12), CTS 1569651991, pp. 1–6, Huanshan, China, October 2012.
14. J. Hou, P. H. Siegel, and L. B. Milstein, "Performance analysis and code optimization of low density parity-check codes on Rayleigh fading channels," IEEE Journal on Selected Areas in Communications, vol. 19, no. 5, pp. 924–934, 2001.
15. J. Hou, P. H. Siegel, L. B. Milstein, and H. D. Pfister, "Capacity-approaching bandwidth-efficient coded modulation schemes based on low-density parity-check codes," IEEE Transactions on Information Theory, vol. 49, no. 9, pp. 2141–2155, 2003.
16. N. Inaba, H. Yashima, T. Kuroda, and T. Tsubouchi, "Error performances of convolutional coded modulation using the maximum likelihood metric and an approximated metric over Rayleigh fading channel," IEICE Transactions on Fundamentals of Electronics, vol. J78-A, no. 10, pp. 1397–1399, 1995 (Japanese).
17. N. Iviyani and J. Weber, "Analysis of random regular LDPC codes on Rayleigh fading channels," in Proceedings of the 27th Symposium on Information Theory in the Benelux, Noordwijk, the Netherlands, June 2006.
18. J. Lin and W. Wu, Performance Analysis of LDPC Codes on Rician Fading Channels, Higher Education Press and Springer, 2006.
19. R. Yazdani and M. Ardakani, "Linear LLR approximation for iterative decoding on wireless channels," IEEE Transactions on Communications, vol. 57, no. 11, pp. 3278–3287, 2009.
20. R. Yazdani and M. Ardakani, "Efficient LLR calculation for non-binary modulations over fading channels," IEEE Transactions on Communications, vol. 59, no. 5, pp. 1236–1241, 2011.
21. G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 927–946, 1998.
22. C. Brezinski, History of Continued Fractions and Padé Approximants, vol.
12, Springer, Berlin, Germany, 1991.
{"url":"http://www.hindawi.com/journals/jam/2013/970126/","timestamp":"2014-04-20T19:35:27Z","content_type":null,"content_length":"438740","record_id":"<urn:uuid:0a431ccb-0370-412d-8ff1-9bcbcb6a52f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Tetrahedron Formation. Replies: 0
Posted: Dec 20, 2012 10:20 PM
If I break a stick into 6 pieces, what is the probability that they can form a tetrahedron? In general, if I break a stick into (n*n+n)/2 pieces, what is the probability that they can form an n-simplex? I know that the longest piece must be no more than 0.5 of the length of the original unbroken stick. Thanks in advance.
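Not from the original post, but one way to attack the 6-piece case numerically, assuming the stick is cut at 5 independent uniform points (one of several possible breaking models), is a Monte Carlo check: for each sample, test whether some assignment of the pieces to the 6 edges of a tetrahedron is realizable, using the Gram-matrix (positive-definiteness) criterion.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def can_form_tetrahedron(pieces, tol=1e-12):
    """True if the 6 lengths can be assigned to the edges of a non-degenerate tetrahedron."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    for perm in itertools.permutations(pieces):      # brute force over edge assignments
        d = np.zeros((4, 4))
        for (i, j), length in zip(edges, perm):
            d[i, j] = d[j, i] = length
        # Gram matrix of vertices 1..3 relative to vertex 0:
        # G[a][b] = (d0a^2 + d0b^2 - dab^2) / 2.  The distances are realizable in 3-D
        # iff G is positive semidefinite, and give a true tetrahedron iff G is positive definite.
        G = np.empty((3, 3))
        for a in range(1, 4):
            for b in range(1, 4):
                G[a - 1, b - 1] = (d[0, a] ** 2 + d[0, b] ** 2 - d[a, b] ** 2) / 2.0
        if np.all(np.linalg.eigvalsh(G) > tol):
            return True
    return False

def estimate(n_trials=2000):
    hits = 0
    for _ in range(n_trials):
        cuts = np.sort(rng.random(5))                          # 5 uniform cut points
        pieces = np.diff(np.concatenate(([0.0], cuts, [1.0]))) # 6 pieces summing to 1
        hits += can_form_tetrahedron(pieces)
    return hits / n_trials

print(estimate())
```

The estimate is a sketch only: it brute-forces all 720 edge assignments per sample, so it is slow, and the answer depends on the chosen breaking model.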
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2421920&messageID=7942136","timestamp":"2014-04-18T16:16:20Z","content_type":null,"content_length":"13781","record_id":"<urn:uuid:79b6b896-13fa-4072-ac11-cc1a8a4b4830>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
.25 liters equals how many ounces
You asked: .25 liters equals how many ounces
8.45350567525 US fluid ounces (the volume)
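The arithmetic behind the quoted figure is just the US customary definition of the fluid ounce (1 US fl oz = 29.5735295625 mL); a quick check:

```python
ML_PER_US_FL_OZ = 29.5735295625  # exact US definition of the fluid ounce
liters = 0.25
print(liters * 1000 / ML_PER_US_FL_OZ)  # ~8.4535 US fluid ounces, in agreement with the answer above
```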
{"url":"http://www.evi.com/q/.25_liters_equals_how_many_ounces","timestamp":"2014-04-19T22:43:49Z","content_type":null,"content_length":"53388","record_id":"<urn:uuid:862522c5-86fa-4741-8a23-081ca495c533>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimation of class transitions Currently, I am studying three mixed variables in which I expect dissimilar classes for each of the three variables. It is not unlike example 8.2 in the Mplus manual (only with continuous variables). I can do this in Mplus, but I am wondering if it is also possible in OpenMx. So I need to define separate classes for each variable, but the definition of the model is a puzzle to me. Is it possible (yet)? If so, how do I define the matrix of class probabilities and how do I bind the six classes and their objectives together?
{"url":"http://openmx.psyc.virginia.edu/print/803","timestamp":"2014-04-20T04:00:47Z","content_type":null,"content_length":"8188","record_id":"<urn:uuid:cfbbcf70-d31c-453d-a56e-d78303215b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
12/30/2013 -- Happy Derangement Day!
30 December 2013, 9:00 am
Today we celebrate a Derangement Day! Usually I call days like today a permutation day because the digits of the day and month can be rearranged to form the year, but there's something extra special about today's date: The numbers of the month and day are a derangement of the year: that is, they are a permutation of the digits of the year in which no digit remains in its original place! Derangements pop up in some interesting places, and are connected to many rich mathematical ideas. The question "How many derangements of n objects are there?" is a fun and classic application of the principle of inclusion-exclusion. Derangements also figure into some calculations of e and rook polynomials. So enjoy Derangement Day! Today, it's ok to be totally out of order.
One Comment
1. Alan Hochbaum says: Don't think this is right, but felt like I was getting warm? Tepid?
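Not part of the post, but a quick sketch of the two standard ways to count derangements mentioned above (the recurrence and inclusion-exclusion), plus a check that 12/30 really is a derangement of 2013:

```python
from math import factorial

def derangements(n):
    """D(n) via the recurrence D(n) = (n - 1) * (D(n - 1) + D(n - 2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    d_prev2, d_prev1 = 1, 0
    for k in range(2, n + 1):
        d_prev2, d_prev1 = d_prev1, (k - 1) * (d_prev1 + d_prev2)
    return d_prev1

def derangements_incl_excl(n):
    """D(n) = n! * sum_{k=0..n} (-1)^k / k!  (inclusion-exclusion)."""
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

assert all(derangements(n) == derangements_incl_excl(n) for n in range(10))
print([derangements(n) for n in range(7)])   # [1, 0, 1, 2, 9, 44, 265]

month_day, year = "1230", "2013"
is_derangement = (sorted(month_day) == sorted(year) and
                  all(a != b for a, b in zip(month_day, year)))
print(is_derangement)   # True: same digits, none left in its original position
```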
{"url":"http://mrhonner.com/archives/11680","timestamp":"2014-04-18T05:29:52Z","content_type":null,"content_length":"45421","record_id":"<urn:uuid:3a8e0b40-ed25-48ec-80fc-f868ad7c9c27>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Practical Poisson distribution question
Posted October 2nd 2013, 01:01 AM (#1)
The problem: The number of ships X that arrive at a certain harbour in one day is Poisson distributed with E(X) = 2. The harbour can handle no more than 3 ships per day. The first 3 ships to arrive each day are served. Other ships are redirected to other harbours.
a) What number of ships to arrive on a single day has the highest probability? What is the probability that on a given day, one or more ships will have to be redirected?
b) What is the expected number of ships to be served on a given day?
c) How big must the harbour be to make the probability that all ships that arrive will be served at least 90%?
I found a) to be pretty straightforward, and got the right answer. However, I am struggling with b) and c).
Re: Practical Poisson distribution question (posted October 2nd 2013, 01:40 AM, #2)
Hey Nora314. Hint: You are given a distribution in terms of a rate per day. What event corresponds to a non-redirection? What about a redirection? (In terms of P(X < x) or P(X > x)).
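Not part of the thread, but the three parts can be sanity-checked numerically with scipy (taking X ~ Poisson(2) and a capacity of 3 ships, as in the problem):

```python
from scipy.stats import poisson

mu, capacity = 2.0, 3

# (a) most likely number of arrivals, and P(at least one redirect) = P(X > 3)
pmf = [poisson.pmf(k, mu) for k in range(10)]
print("mode:", max(range(10), key=lambda k: pmf[k]))   # 1 and 2 are exactly tied for Poisson(2)
print("P(redirect):", poisson.sf(capacity, mu))        # P(X >= 4) ~ 0.1429

# (b) expected number served: E[min(X, capacity)]
served = sum(k * poisson.pmf(k, mu) for k in range(capacity)) \
         + capacity * poisson.sf(capacity - 1, mu)
print("E[served]:", served)

# (c) smallest capacity c with P(X <= c) >= 0.9
c = 0
while poisson.cdf(c, mu) < 0.9:
    c += 1
print("capacity needed:", c)                           # 4
```

In particular, part (b) is E[min(X, 3)], which is where the truncation at the capacity enters.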
{"url":"http://mathhelpforum.com/advanced-statistics/222503-practical-poisson-distribution-question.html","timestamp":"2014-04-20T12:50:03Z","content_type":null,"content_length":"34264","record_id":"<urn:uuid:5e67e0a9-1c4f-4f8d-acf2-449572b894d5>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
IMA Postdoc Seminar (May 18, 2010)
Speaker: Ning Jiang (Courant Institute of Mathematical Sciences at New York University)
Title: Kinetic-Fluid Boundary Layers and Applications to Hydrodynamic Limits of the Boltzmann Equation
Abstract: In a bounded domain with smooth boundary (which can be considered as a smooth sub-manifold of R3), we consider the Boltzmann equation with the general Maxwell boundary condition, a linear combination of specular reflection and diffusive absorption. We analyze the coupled kinetic (Knudsen layer) and fluid (viscous layer) boundary layers in both the acoustic and incompressible regimes, in which the boundary layers behave significantly differently. The existence and damping properties of these kinetic-fluid layers depend on the relative size of the accommodation number and the Knudsen number, and on the differential geometric properties of the boundary (the second fundamental form). As applications, we first justify the incompressible Navier-Stokes-Fourier limit of the Boltzmann equation with Dirichlet, Navier, and diffusive boundary conditions, respectively, depending on the relative size of the accommodation number and the Knudsen number. Using the damping property of the boundary layer in the acoustic regime, we prove that the convergence is strong. The second application is the derivation and justification of the higher-order acoustic approximation of the Boltzmann equation. This is joint work with Nader Masmoudi.
{"url":"http://ima.umn.edu/seminar/pdoc/2009-2010/may1810.html","timestamp":"2014-04-16T20:09:52Z","content_type":null,"content_length":"1914","record_id":"<urn:uuid:4b80d188-055d-4551-879d-52f1d3de16e5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: COMMUTATION OF AN ELECTROMAGNETIC PROPULSION AND GUIDANCE SYSTEM
A method of commutating a motor includes calculating an adjustment electrical angle, and utilizing the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the motor.
1. A method of commutating a motor comprising: calculating an adjustment electrical angle; and utilizing the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the motor.
2. The method of claim 1, wherein the adjustment electrical angle is determined from one or more measured position coordinates of a platen of the motor and desired motor forces in one or more dimensions.
3. The method of claim 1, further comprising utilizing the adjustment electrical angle in the common set of commutation equations so that the two dimensional forces in the motor include Maxwell forces.
4. The method of claim 1, further comprising utilizing the adjustment electrical angle in the common set of commutation equations so that the common set of commutation equations is capable of producing three dimensional forces in the motor.
5. The method of claim 4, further comprising utilizing the adjustment electrical angle in the common set of commutation equations so that the three dimensional forces in the motor include Maxwell forces.
6. The method of claim 1, further comprising utilizing a winding phase current in combination with the adjustment electrical angle in the common set of commutation equations.
BACKGROUND
[0001] The embodiments described herein relate to a method and system for commutation of an electromagnetic propulsion and guidance drive, in particular for a magnetically levitated material transport platform.
BRIEF DESCRIPTION OF RELATED DEVELOPMENTS
[0002] A schematic plan view of a conventional substrate processing apparatus is shown in FIG. 1. As can be seen, the processing modules of the apparatus in FIG. 1 are placed radially around the transport chamber of the processing apparatus. A transport apparatus, which may be a conventional two or three axis of movement apparatus, for example, a robot, is centrally located in the transport chamber to transport substrates between processing modules. As can be realized, throughput of the processing apparatus is limited by the handling rate of the transport apparatus. In addition, a conventional robot requires a considerable number of active components, including joints, arms, motors, encoders, etc. A conventional robot generally has a limited number of degrees of freedom, and providing power and control for the robot generally requires breaching the envelope of the transport chamber. The disclosed embodiments overcome these and other problems of the prior art.
SUMMARY OF THE EXEMPLARY EMBODIMENTS
[0003] The disclosed embodiments are directed to a method of commutating a motor including calculating an adjustment electrical angle, and utilizing the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the motor.
In another embodiment, a method of commutating a motor includes calculating an adjustment electrical angle, and entering the adjustment electrical angle into commutation equations for commutating motor windings to produce forces in the motor in at least one dimension, wherein the adjustment electrical angle is determined so that commutation equations for producing forces in the motor in but one of the at least one dimension are common with commutation equations for simultaneously producing forces in the motor in two of the at least one dimension. In another embodiment, an apparatus for commutating a motor includes circuitry for calculating an adjustment electrical angle, and an amplifier operable to utilize the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the motor. In still another embodiment, a motor has windings commutated by a controller, where the controller includes circuitry for calculating an adjustment electrical angle, and an amplifier operable to utilize the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the In yet a further embodiment, a substrate processing apparatus has a controller for commutating a motor including circuitry for calculating an adjustment electrical angle, and an amplifier operable to utilize the adjustment electrical angle in a common set of commutation equations so that the common set of commutation equations is capable of producing both one and two dimensional forces in the BRIEF DESCRIPTION OF THE DRAWINGS [0008] The foregoing aspects and other features of the present invention are explained in the following description, taken in connection with the accompanying drawings, wherein: [0009]FIG. 1 is a schematic plan view of a prior art substrate processing apparatus; [0010]FIG. 2A is a schematic plan view of a substrate processing apparatus incorporating features of the disclosed embodiments; [0011]FIG. 2B is a schematic diagram of a controller of the substrate processing apparatus; [0012]FIG. 3 is a schematic cross-sectional perspective view of a representative motor configuration suitable for practicing the disclosed embodiments; [0013]FIG. 4 shows another motor configuration suitable for practicing the disclosed embodiments; [0014]FIG. 5 shows a schematic of an exemplary wye configuration of a winding set; [0015]FIG. 6 shows a schematic of an exemplary delta configuration of a winding set; FIGS. 7A-7D show force vectors acting between a forcer and platen that provide propulsion in the x direction resulting from Lorentz forces, and guidance in the y direction resulting from Lorentz and Maxwell forces; [0017]FIG. 8 shows a solution process for the embodiments of FIGS. 7A-7D; FIGS. 9A-9D show force vectors acting between a forcer and platen that provide propulsion in the x direction resulting from Lorentz forces, and guidance in the y direction resulting from Maxwell [0019]FIG. 10 shows a solution process for the embodiments of FIGS. 9A-9D; FIGS. 11A-11D show force vectors acting on a platen that provide propulsion in the x direction and guidance in the y direction resulting from Lorentz forces; FIG. 12 shows a solution process for the embodiments of FIGS. 11A-11D; [0022]FIG. 
13A is a schematic perspective view of a linear propulsion system with motor(s) having a configuration in accordance with another exemplary embodiment FIGS. 13B1-13B2 are schematic views of respective magnet array(s) in accordance with different exemplary embodiments; FIG. 13C shows an alternate arrangement of individual windings for use with the disclosed embodiments; FIG. 13D1-13D2 are schematic views of respective winding arrangements in accordance with different exemplary embodiments; FIG. 14 shows the orientation of the winding sets of the three dimensional motor configuration embodiments where a represents the direction of a force F and b represents the direction of a force F [0027]FIG. 15 shows a diagram of a solution process for performing commutation to produce propulsion in the x-direction and lift in the z-direction using Lorentz forces, and a guidance component in the y-direction with Lorentz and Maxwell forces; [0028]FIG. 16 shows a diagram of a solution process for performing commutation to produce propulsion components in the x and z-directions with Lorentz forces and a guidance component in the y-direction with Maxwell forces; [0029]FIG. 17 shows a diagram of a solution process for performing commutation to produce propulsion components in the x and z directions and a guidance component in the y-direction, all with Lorentz forces; FIGS. 18A-18D show various force diagrams for an open loop stabilization method applied to different degrees of freedom; [0031]FIG. 19 shows a general block diagram of motor commutation applicable to the disclosed embodiments And FIGS. 20A-20D are yet other schematic cross-sectional views of a motor, showing respective force vectors acting between forcer and platen generating propulsion and guidance forces in accordance with yet another exemplary embodiment. DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS [0033]FIG. 2A shows a schematic plan view of an exemplary substrate processing apparatus 10 suitable for practicing the embodiments disclosed herein. Although the presently disclosed embodiments are described with reference to substrate processing, it should be understood that the disclosed embodiments may have many alternate forms, including any system for magnetically transporting objects. In addition, any suitable size, shape or type of elements or materials could be used. The disclosed embodiments relate to a propulsion and guidance system for a magnetically levitated material transport platform. Motor force equations and motor commutation equations, including expressions for calculation of motor control parameters based on specified propulsion and guidance forces, are provided for both two dimensional and three dimensional motor configurations. The disclosed embodiments include adjusting an electrical angle used to drive a common set of commutation functions with an electrical angle offset so that the same motor commutation functions may be used for producing at least a one dimensional propulsion force in the x-direction, two dimensional forces including a propulsion force in the x-direction and a guidance force in the y-direction, and three dimensional forces including propulsion forces in both the x-direction and a z-direction and a guidance force in the y-direction. 
In other words, by adjusting the electrical angle with the electrical angle offset, at least one, two, and three dimensional forces may be produced in the motor using a common set of commutation In particular, motor force equations, motor commutation equations, and motor control parameter calculations are provided for two dimensional motor configurations to produce propulsion in the x-direction by Lorentz forces, with guidance in the y-direction by Lorentz and Maxwell forces. Another embodiment includes motor force equations, motor commutation equations, and motor control parameter calculations to produce propulsion in the x-direction by Lorentz forces, with guidance in the y-direction by Maxwell forces for two dimensional motor configurations. Still another embodiment includes motor force equations, motor commutation equations, and motor control parameter calculations to produce propulsion in the x-direction with guidance in the y-direction primarily utilizing Lorentz forces for two dimensional motor configurations. Similarly, for three dimensional motor configurations, motor force equations, motor commutation equations, and motor control parameter calculations are provided to produce propulsion in the x-direction, lift in the z-direction by Lorentz forces, with guidance in the y-direction by Lorentz and Maxwell forces. Additional embodiments include motor force equations, motor commutation equations, and motor control parameter calculations for three dimensional motor configurations to provide propulsion in the x-direction and lift in the z-direction by Lorentz forces, with guidance in y-direction by Maxwell forces. Yet another embodiment includes motor force equations, motor commutation equations, and motor control parameter calculations for three dimensional motor configurations that provide propulsion in the x-direction, lift in the z-direction, and guidance in the y-direction all utilizing Lorentz forces. Further embodiments include motor force equations, motor commutation equations, and motor control parameter calculations for phase commutation with open loop stabilization, including open loop roll stabilization, open loop pitch stabilization with discrete forces, and open loop pitch stabilization with distributed forces. Turning again to FIG. 2A , the substrate processing apparatus 10 may include a number of load ports 12, an environmental front end module (EFEM) 14, load locks 16, a transport chamber 18, one or more processing modules 20, and a controller 200. The EFEM 14 may include a substrate transport apparatus (not shown) capable of transporting substrates from load ports 12 to load locks 16. The load ports 12 are capable of supporting a number of substrate storage canisters, for example conventional FOUP canisters or any other suitable substrate storage device. The EFEM 14 interfaces the transport chamber 18 through load locks 16. The EFEM 14 may further include substrate alignment capability, batch handling capability, substrate and carrier identification capability or otherwise. In alternate embodiments, the load locks 16 may interface directly with the load ports 12 as in the case where the load locks have batch handling capability or in the case where the load locks have the ability to transfer wafers directly from a FOUP to the lock. In alternate embodiments, other load port and load lock configurations may be provided. Still referring to FIG. 
2A , the processing apparatus 10 may be used for processing semiconductor substrates, for example, 200, 300, 450 mm wafers, flat panel displays, or any other suitable substrate. Each of the processing modules may be capable of processing one or more substrates, for example, by etching, plating, deposition, lithography, or any other substrate processing technique. At least one substrate transport apparatus 22 is integrated with the transport chamber 18. In this embodiment, processing modules 20 are mounted on both sides of the transport chamber 18, however, in other embodiments processing modules 20 may be mounted on one side of the chamber, may be mounted opposite each other in rows or vertical planes, may be staggered from each other on the opposite sides of the transport chamber 18, or stacked in a vertical direction relative to each other. The transport apparatus 22 generally includes a single carriage 24 positioned in the transport chamber 18 for transporting substrates between load locks 16 and processing modules 20 or among the processing chambers 20. In alternate embodiments, multiple carriages may be utilized in a transport apparatus. Moreover, the transport chamber 18 may be capable of being provided with any desired length and may couple to any desired number of processing modules 20. The transport chamber 18 may also be capable of supporting any desired number of transport apparatus 22 therein and allowing the transport apparatus 22 to reach any desired processing module 20 on the transport chamber 18 without interfering with each other. The transport chamber 18 in this embodiment has a generally hexahedron shape though in alternate embodiments the chamber may have any other suitable shape. The transport chamber 18 has longitudinal side walls 18S with ports formed therethrough to allow substrates to pass into and out of the processing modules 20. The transport chamber 18 may contain different environmental conditions such as atmospheric, vacuum, ultra high vacuum, inert gas, or any other, throughout its length corresponding to the environments of the various processing modules connected to the transport chamber. While a single transport chamber 18 is shown, it should be understood that any number of transport chambers may be coupled together in any configuration to accommodate substrate processing. It should also be understood that the transport chamber may extend inside one or more of the processing modules 20, load locks 16, or even load ports 12, or one or more of the processing modules 20, load locks 16, or load ports 12 may have its own transport chamber coupled to transport chamber 18, allowing the transport mechanism to enter or otherwise deliver substrates to the processing modules. The transport apparatus 22 may be integrated with the transport chamber 18 to translate the carriage 24 along an x-axis extending between the front of the chamber 18F and the back of the chamber 18B. The transport apparatus may also provide guidance along a y-axis perpendicular to the x-axis. In other embodiment, the transport apparatus 22 may translate the carriage along the x-axis and along a z-axis extending out from the surface of the page, orthogonal to the x and y axes, and provide guidance along the y-axis. The carriage 24 may transport substrates by themselves, or may include other suitable mechanisms for substrate transport. 
For example, carriage 24 may include one or more end effectors for holding one or more substrates, an articulated arm, or a movable transfer mechanism for extending and retracting the end effectors for picking or releasing substrates in the processing modules or load locks. In some embodiments the carriage 24 may be supported by linear support or drive rails which may be mounted to the side walls 18S, which may include the floor or top of the transport chamber and may extend the length of the chamber, allowing the carriage 24 to traverse the length of the chamber. The transport chamber 18 may have a number of transport zones 18', 18'' which allow a number of transport apparatus to pass over each other, for example, a side rail, bypass rail or magnetically suspended zone. The transport zones may be located in areas defined by horizontal planes relative to the processing modules. Alternately, the transport zones may be located in areas defined by vertical planes relative to the processing modules. Turning to FIG. 2B , controller 200 may include a CPU 210 with at least one computer readable medium 215 having programs for controlling operations of controller 200. A multiplexer 220 and other drive electronics 225 including amplifiers 230 for driving windings as described below may also be utilized. The drive electronics 225 and the computer readable medium 215 may provide hardware and programs in any combination for implementing the functions and equations described below and for implementing the commutation functions according to the disclosed embodiments. It should be understood that circuitry in the context of the disclosed embodiments includes hardware, software or programs, or any combination of the two. An interface 235 may be included for receiving commands related to transport apparatus position or a force to be applied. Commands may be received from a user, from sensors associated with the transport apparatus, other controllers within the substrate processing apparatus, or from a control system controlling a number of substrate processing apparatus. Controller 200 drives the windings as described below resulting in the application of various forces. Thus, controller 200 drives the windings to actively produce desirable combinations of propulsion, lift and guidance forces for open and closed-loop control. The disclosed embodiments may employ one or more linear drive systems which may simultaneously drive and suspend the transport apparatus such that the transport apparatus may be horizontally and vertically independently movable. Thus, multiple transport apparatus may be capable of passing each other and capable of transporting substrates independent of each other. In some embodiments each transport apparatus may be driven by a dedicated linear drive motor. The disclosed embodiments may, in addition or alternately, employ one or more rotary drive systems which may simultaneously drive and suspend the transport apparatus such that the transport apparatus may be horizontally and vertically independently movable. [0052]FIG. 3 is a schematic cross-sectional perspective view of a motor configuration suitable for practicing the disclosed embodiments. The motor may be oriented as desired (the configuration shown may be as seen from a top view for example. In the exemplary embodiment shown in FIG. 3 , there is schematically illustrated an exemplary linear propulsion and guidance system 320 in accordance with the disclosed embodiments, such as may be suitable for driving transport apparatus 22. 
Generally the linear propulsion and guidance system 320 may include a forcer with a winding set 322 which drives platen 324 (for example in the X-direction indicated by arrow X). In some embodiments, platen 324 may be supported in the z-direction by a suitable mechanism or structure (not shown). In this embodiment, winding set 322 may be mounted on the outside of or within a side wall 330 (which may include a top, side, or floor of the transport chamber 18) and is isolated from the chamber and from the platen 324 by the side wall 330 (a portion of which 332 may be interposed between forcer 322 and platen 324. In other embodiments, the windings of the motor may be located inside the transport chamber 18. The platen 324 may include for example one or more magnets 334 for interfacing the platen 324 with winding set 322. As may be realized, in alternate embodiments, the permanent magnets may be located on the stator and the windings may be located on the driven platen. A sensor 336, for example, a magneto-resistive or hall effect sensor, may be provided for sensing the presence of the magnets in platen 324 and determining proper commutation. Additionally, sensors 336 may be employed for fine position determination of platen 324. A position feedback device 340 may be provided for accurate position feedback. Device 340 may be inductive or optical for example. In the instance where it is inductive, an excitation source 342 may be provided which excites a winding or pattern 346 and inductively couples back to receiver 344 via coupling between the pattern 346. The relative phase and amplitude relationship may be used to determine the location of platen 324. An identification tag 347, such as an IR tag may be provided that may be read by reader 348, provided at an appropriate station to determine platen identification by station. In other embodiments the winding set 322 may be mounted to the platen 324 while the one or more magnets 334 may be mounted on the outside of or within the side wall 330 (which may include a top, side, or floor of the transport chamber 18). The one or more magnets 334 may be isolated from the chamber and from the platen 324 by the side wall 330. In other embodiments, the one or more magnets 334 may be located inside the transport chamber 18. [0056]FIG. 4 is a schematic view that shows another motor configuration in accordance with another exemplary embodiment. A top view of an exemplary rotary propulsion and guidance system 410 is shown in accordance with the disclosed embodiments, also suitable for driving transport apparatus 22. Rotary propulsion and guidance system 410 includes a stator 415 with a winding set 422 which drives a platen 425, in the form of a rotor, in the tangential direction (indicated by arrow T in FIG. 4 ). The stator and rotor illustrated in the exemplary embodiment may be considered to define for example a three-dimensional motor (T forces as well as X, Y forces) that comprises a number (three are shown for example purposes) of two-dimensional motor segments as will be described in greater detail below. The platen or rotor 425 may be supported in the z-direction (perpendicular to the plane of the page) by a suitable mechanism or structure. In this embodiment, the stator 415 may be mounted on the outside of or within a side wall 330 ( FIG. 3 ) which may include a top, side, or floor of the transport chamber 18 and may be isolated from the chamber and from the platen or rotor 425 by the side wall 330. 
In other embodiments, the stator 415 may be located inside the transport chamber 18. Magnets may be distributed on the platen 425 in any suitable configuration. As may be realized, in alternate embodiments the windings may be on the rotor and permanent magnets on the stator. The disclosed embodiments include at least two magnets 430 for interfacing the platen 425 with winding set 422. One or more sensors 435, for example, a magneto-resistive or hall effect sensor, may be provided for sensing the presence of the magnets 430 in platen 425 and determining proper commutation. Additionally, sensors 435 may be employed for fine position determination of platen 425. In other embodiments the winding set 422 may be mounted to the platen 425 while the magnets 430 may be mounted to the stator 415. The one or more stator mounted magnets 430 may be isolated from the chamber and from the platen 425 by the side wall 330. In other embodiments, the magnets 430 may be located inside the transport chamber 18. [0059]FIG. 5 shows a schematic of an exemplary wye configuration of winding set 322, 422 (see FIGS. 3 and 4). Winding set 322, 422 may include for example three phases, phase 0 (350), phase 1 (355), and phase 2 (360) driven by amplifier circuitry 365. [0060]FIG. 6 shows a schematic of an exemplary delta configuration of winding set 322, 422 that may include for example the three phases driven by amplifier circuitry 370. FIGS. 7A-7D show force vectors acting between forcer 321, 422 (see also FIGS. 3 and 4) and platen 324, 425 designated by arrows that provide propulsion in the x direction resulting in the exemplary embodiment from Lorentz forces, and guidance in the y direction resulting from Lorentz and Maxwell forces. While FIGS. 7A-7D are shown in the context of the linear propulsion device 320, the force vectors are also applicable to the rotary propulsion device 410. FIGS. 7A and 7B show force vectors acting between forcer 321, 422 and platen 324, 425 that essentially result in a propulsion force being applied in the x-direction with associated Maxwell component in the y-direction, while FIGS. 7C and 7D show force vectors acting on platen 324, 425 that specifically result in guidance forces being applied in the y direction for different operating characteristics. For the linear propulsion device 320, the applied forces provide control of the relative position of the platen 324 with respect to the forcer 321 in the x-direction as well as the gap between the platen 324 and the forcer 321 in the y-direction. For the rotary device 410, the applied forces provide control of the relative rotational position of the platen 425 in the tangential T direction (see FIG. 4 , generally corresponding to the x-direction shown in FIG. 3 ), defined in this embodiment as a rotational direction in the plane of the x and y axes, and control of the gap between the platen 425 and the stator 415. The platen 324, 425 of the embodiments of FIGS. 7A-7D may be composed of ferromagnetic materials. The one or more permanent magnets 334, 430 may include permanent magnets 710 of alternating polarity. 
For this exemplary embodiment, the motor force equations, motor commutation equations, and expressions for calculation of motor control parameters based on specified propulsion and guidance forces are, for example, as follows.
Motor Force Equations:
##EQU00001##
where
F_x = Total force produced in x-direction (N)
F_y = Total force produced in y-direction (N)
F_xj = Force produced by phase j in x-direction, j = 0, 1, 2 (N)
F_yj = Force produced by phase j in y-direction, j = 0, 1, 2 (N)
i_j = Current through phase j, j = 0, 1, 2 (A)
K_x = Phase force constant in x-direction (N/A)
K_yL = Lorentz phase force constant in y-direction (N/A)
K_yM = Maxwell phase force constant in y-direction (N/A²)
x = Position in x-direction (m)
y = Position in y-direction (m)
θ = Electrical angle (rad)
The motor commutation equation may be expressed for example as:
i_j = I sin [θ(x) - Δ + (2π/3)j], j = 0, 1, 2 (1.3)
where I and Δ control the magnitude and orientation of the motor force vector, and
I = Amplitude of phase current (A)
Δ = Electrical angle offset (rad)
As may be realized from examination of (1.3), in the example the disclosed embodiments include adjusting the electrical angle θ using the electrical angle offset Δ so that a guidance force along the y-axis may be generated concurrently with, but controllable independently from, the propulsion force along the x-axis. Thus, by adjusting the electrical angle θ with the electrical angle offset Δ, the same motor commutation equation for producing a pure propulsion force may be used to produce both a propulsion force and a guidance force that are substantially independently controllable from each other. Sinusoidal phase currents in accordance with Equation (1.3) can be generated using space vector modulation (SVM), such as for the wye winding configuration, to optimize utilization of bus voltage. The resulting motor forces in the x and y directions may be expressed for example as: (y)cos(Δ) (1.4) M(y)] (1.5) The substantially independent motor control parameters I and Δ in (1.4) and (1.5) may be derived for example as: ##EQU00002## For purposes of the disclosed embodiments, all arc tangent functions (atan) described herein may also be interpreted as four-quadrant inverse tangent functions (atan2). Inequality (1.18) imposes a constraint on the desired forces F_x and F_y; that is, in order for a solution I and Δ to exist, this constraint must be satisfied. Considering (1.13), (1.14) and (1.15), inequality (1.18) may be rewritten as: ##EQU00003## The constraint (1.19) means that, in the exemplary embodiment, given a desired force along the x-direction, there may be a minimum physical limit on the force along the y-direction. The platen 324 and forcer 321 in the embodiments of FIGS. 7A-7D are generally held substantially parallel to each other, for example, utilizing any suitable structures, systems or techniques. FIGS. 7A-7D show the force vectors producing the propulsion component in the x-direction and the guidance component in the y-direction for different position dependent currents applied to the windings Phase 0, Phase 1, Phase 2. As may be realized, the Maxwell force between the winding set 322, 422 and the ferromagnetic platen 324, 425 is attractive, hence an additional mechanism may be employed to produce a force in the opposite direction. This can be achieved, for example, by utilizing another winding set of the same type in a mirror configuration (not shown).
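A minimal sketch of the commutation rule in Equation (1.3) is given below (the pitch value, example numbers, and function name are illustrative assumptions; the mapping from desired forces to I and Δ given by (1.22) and (1.23) is not reproduced):

```python
import math

def phase_currents(x, I, delta, pitch):
    """Three-phase commutation per Eq. (1.3): i_j = I*sin(theta(x) - delta + 2*pi*j/3).

    theta(x) = 2*pi*x / pitch, following the solution process of FIG. 8 described below;
    'pitch' (the winding pitch) is an assumed parameter, not a value from the text."""
    theta = 2.0 * math.pi * x / pitch
    return [I * math.sin(theta - delta + 2.0 * math.pi * j / 3.0) for j in range(3)]

# Example: a command with zero offset versus one with an offset of pi/2
print(phase_currents(x=0.004, I=1.0, delta=0.0, pitch=0.060))
print(phase_currents(x=0.004, I=1.0, delta=math.pi / 2, pitch=0.060))
```

With Δ = 0 the propulsion component is maximized for a given current amplitude, consistent with the cos(Δ) dependence in (1.4), while a nonzero Δ shifts part of the force toward guidance as described for FIGS. 7A-7D.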
In the exemplary embodiment there may be some coupling between the propulsion and guidance forces due to the constraint (1.19). For example, the current employed to produce some specified propulsion force may generate some guidance force. The additional winding set of similar type, disposed in a mirror configuration may also be used to balance the additional guidance force if desired, and thus resulting in substantially decoupled forces in the X-direction and Y-direction respectively for substantially any desired magnitude. [0073]FIG. 8 shows the solution process 800 for this two dimensional embodiment. The solution process 800 may be implemented in any combination of hardware or software. A measured x position coordinate 805 of the platen 324, 425 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to electrical angle determination circuitry 810. Electrical angle determination circuitry or program 810 factors the measured x position coordinate 805 by 2π and a pitch of the winding set 322 (FIG. 3B) to determine electrical angle θ A measured y position coordinate 820 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to a phase force constant determination block 825 where predetermined phase force constants for the x direction and for Lorentz and Maxwell forces in the y-direction are determined. The results and the desired forces in the x and y direction 827 are applied to control parameter circuitry or programs 830 that implement equations (1.22) and (1.23) to yield control parameters I and Δ. Electrical angle θ 815 and control parameters I and Δ are applied to commutation function 835 which implements equation (1.3) to provide commutation current i for each winding phase. FIGS. 9A-9D show force vectors acting between forcer 321 (see also FIG. 3 ) and platen 324, designated by arrows that provide propulsion in the x direction utilizing Lorentz forces and guidance in the y direction utilizing Maxwell forces for different position dependent currents applied to the windings Phase 0, Phase 1, Phase 2. More specifically, FIGS. 9A and 9B show force vectors acting on platen 324, 425 that essentially result in a maximum propulsion force being applied, while FIGS. 9C and 9D show force vectors acting on platen 324, 425 that specifically result in guidance in the y direction. Similar to embodiments of FIGS. 7A-7D, the platen 324, 435 of the embodiments of FIGS. 9A-9D may be composed of ferromagnetic materials, the permanent magnets 334, 430 may include permanent magnets 910 of alternating polarity and the winding set 322, 422 may have three phases. In this embodiment a force vector is produced in the x-y plane, including a propulsion component in the x-direction and a guidance component in the y-direction. In the linear propulsion embodiment 320 this enables control of the relative position of the ferromagnetic platen 324 with respect to the forcer 321 in the x-direction as well as the gap between the platen 324 and the forcer 321 in the y-direction. In the rotary propulsion embodiment 410, the applied forces provide control of the relative rotational position of the platen 425 in the x-direction, defined in this embodiment as a rotational direction in the plane of the x and y axes, and control of the gap between the platen 425 and the stator 415. As mentioned above, in the linear propulsion embodiment 320 platen 324 may be supported in the z-direction by a suitable mechanism or structure. 
In the rotary propulsion embodiment 410, the platen or rotor 425 may be supported in the z-direction (perpendicular to the plane of the page) by a suitable mechanism or structure. The embodiments of FIGS. 9A-9D utilize Lorentz forces for propulsion and Maxwell forces for guidance. It may be assumed that the Lorentz component along the y-direction may be negligible compared to the Maxwell component. As mentioned above, because the Maxwell force between the windings and the ferromagnetic platen is attractive, an additional mechanism (not shown) may be used to produce a force in the opposite direction. This can be achieved, for example, by utilizing another winding set of similar type in a mirror configuration. As noted before, decoupling between the propulsion and guidance forces to produce any desired propulsion force, may be effected as desired with the additional winding set of similar type disposed in a mirror configuration. The motor force equations for the embodiments of FIGS. 9A-9D may be expressed for example as: θ π ##EQU00004## =Total force produced in x-direction (N)F =Total force produced in y-direction (N)F =Force produced by phase j in x-direction, j=0, 1, 2 (N)F =Force produced by phase j in y-direction, j=0, 1, 2 (N)i =Current through phase j, j=0, 1, 2 (N)K =Phase force constant in x-direction (N/A)K =Phase force constant in y-direction (N/A )x=Position in x-direction (m)y=Position in y-direction (m)θ=Electrical angle (rad) The motor commutation equation may be for example: =I sin [θ(x)-Δ+(2π/3)j], j=0, 1, 2 (2.3) where I and Δ are control parameters and:I=Amplitude of phase current (A)Δ=Electrical angle offset (rad) It should be noted that equation (2.3) is the same as (1.3) and that adjusting the electrical angle θ using the electrical angle offset Δ produces a guidance force along the y-axis and a propulsion force along the x-axis. Thus, by adjusting the electrical angle θ with the electrical angle offset Δ, the same motor commutation equation for producing a pure propulsion force may be used to produce both a propulsion force and a guidance force that may be substantially decoupled from each other as previously described. With the winding set 322 in a wye-configuration, sinusoidal phase currents in accordance with Equation (2.3) may be generated using space vector modulation. The resulting motor forces are: (y)cos(Δ) (2.4) (y) (2.5) and motor force coupling of the propulsion and guidance forces may be represented as: Δ≧ ##EQU00005## The independent control parameters I and Δ for particular forces in the x and y directions may be derived as: = {square root over (F (y)])} (2.8) Δ=a cos {F (y)]} (2.9) [0083]FIG. 9A shows exemplary vectors 915, 920, 925, 930 that result in a force along the x-axis from driving the phases with an electrical angle (θ) of 0, an electrical angle offset (Δ) of 0, a current through phase 0 of 0, a current through phase 1 of I sin(2π/3), and a current through phase 2 of I sin(4π/3), where the total force produced in the x-direction (F ) is 1.5IK , and the total amount of force in the y-direction (F ) is 1.5I [0084]FIG. 
9B shows exemplary vectors 932, 934, 936, 938, 940, 942 that result in a force along the x-axis from driving the phases with an electrical angle (θ) of π/2, an electrical angle offset (Δ) of 0, a current through phase 0 of I, a current through phase 1 of I sin(7π/6), and a current through phase 2 of I sin(11π/6), where the total force produced in the x-direction (F ) is 1.5IK , and the total amount of force in the y-direction (F ) is 1.5I [0085]FIG. 9c shows exemplary vectors 944, 946, 948, 950, 952, 954 that result in a force along the y-axis from driving the phases with an electrical angle (θ) of 0, an electrical angle offset (Δ) of π/2, a current through phase 0 of -I, a current through phase 1 of I sin(π/6) and a current through phase 2 of I sin(5π/6) I sin(π/6), where the total force produced in the x-direction (F ) is 0, and the total amount of force in the y-direction (F ) is 1.5I [0086]FIG. 9D shows exemplary vectors 956, 958, 960, 962 that result in a force along the y-axis from driving the phases with an electrical angle (θ) of π/2, an electrical angle offset (Δ) of π/2, a current through phase 0 of 0, a current through phase 1 of I sin(2π/3) and a current through phase 2 of I sin(4π/3), where the total force produced in the x-direction (F ) is 0, and the total amount of force in the y-direction (F ) is 1.5I [0087]FIG. 10 shows a diagram of an exemplary solution process 1000 for performing commutation to produce the propulsion component in the x-direction with Lorentz forces and the guidance component in the y-direction with Maxwell forces as described above. The solution process 1000 may be implemented in any combination of hardware or software. A measured x position coordinate 1005 of the platen 324, 425 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to electrical angle determination circuitry or program 1010. Electrical angle determination circuitry or program 1010 factors the measured x position coordinate 1005 by 2π and a pitch of the winding set 322 (FIG. 3B) to determine electrical angle θ 1015. A measured y position coordinate 1020 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to a phase force constant determination block 1025 where predetermined phase force constants for the x and y directions are obtained. The results and the desired forces in the x and y direction 1027 are applied to control parameter circuitry or programs 1030 that implement equations (2.8) and (2.9) to yield control parameters I and Δ. Electrical angle θ 1215 and control parameters I and Δ are applied to commutation function 1035 which implements equation (2.3) to provide commutation current i for each winding phase. FIGS. 11A-11D are other schematic cross sectional views of a representative motor showing force vectors acting between forcer 321 (see FIG. 3 ) and platen 324, designated by arrows, that result in propulsion in the x-direction with guidance in the y-direction utilizing Lorentz forces. More specifically, in accordance with another exemplary embodiment FIGS. 11A and 11B show force vectors acting between forcer 321, 422 and platen 324, 425 that primarily result in propulsion in the x-direction, while FIGS. 11C and 11D show force vectors acting between forcer 321, 422 and platen 324, 425 that specifically result in guidance in the y-direction. In the embodiments of FIGS. 11A-11D, platen 324, 425 (see FIGS. 
3-4) may be composed of a non-ferromagnetic material and the permanent magnets 334, 430 may include permanent magnets 1110 of alternating polarity. In the linear propulsion embodiment 320 the forcer 321 may also be composed of a non-ferromagnetic material. In the rotary propulsion embodiment 410, the stator 415 may be composed of a non-ferromagnetic material. FIGS. 11A-11D show different force diagrams for different driving characteristics used to energize phases 0, 1, and 2. The embodiments produce a force vector in the x-y plane, including a propulsion component in the x-direction and a guidance component in the y-direction. In the linear propulsion embodiment 320 the relative position of the platen 324 with respect to the forcer 321 in the x-direction as well as the gap between the platen 324 and the forcer 321 in the y-direction are controlled in an independent manner. As mentioned above, the platen 324 and forcer 321 may be controlled to remain substantially parallel to each other, for example, utilizing any suitable mechanism. Similarly, in the rotary propulsion embodiment 410, the applied forces provide control of the relative rotational position of the platen 425 in the x-direction, defined in this embodiment as a rotational (e.g. tangential) direction in the plane of the x and y axes, and control of the gap between the platen 425 and the forcer 422. As may be realized, the gap with respect to the whole stator is a two-dimensional quantity (vector), whereas the gap with respect to an individual forcer segment is a scalar that can be controlled by the given forcer segment (although the other forcers are also contributing. The platen or rotor 425 may be supported in the z-direction (perpendicular to the plane of the page) by a suitable mechanism or structure. The embodiments utilize Lorentz forces produced by applying position-dependent currents to the windings subject to the magnetic field of the permanent magnets. The following motor force equations may be utilized: θ π θ π ##EQU00006## =Total force produced in x-direction (N)F =Total force produced in y-direction (N)F =Force produced by phase j in x-direction, j=0, 1, 2 (N)F =Force produced by phase j in y-direction, j=0, 1, 2 (N)i =Current through phase j, j=0, 1, 2 (N)K =Phase force constant in x-direction (N/A)K =Phase force constant in y-direction (N/A)x=Position in x-direction (m)y=Position in y-direction (m)θ=Electrical angle (rad) In the exemplary embodiment, motor commutation equation may before example: =I sin [θ(x)-Δ+(2π/3)j], j=0, 1, 2 (3.3) where I and Δ control the magnitude and orientation of the motor force vector, respectively. More specifically:I=Amplitude of phase current (A); andΔ=Electrical angle offset (rad) It should be noted that equations (3.3), (2.3), and (1.3) are the same. Thus, similar to the embodiments above, adjusting the electrical angle θ using the electrical angle offset Δ produces a guidance force along the y-axis and a propulsion force along the x-axis. Thus, by adjusting the electrical angle θ with the electrical angle offset Δ, the same motor commutation equation for producing a pure propulsion force may be used to produce both a propulsion force and a guidance force that are substantially decoupled from each other. With the winding set 322, 422 in a wye configuration, sinusoidal phase currents in accordance with Equation (3.3) can be generated using space vector modulation. 
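Before stating the resulting forces, a short sketch may help show how the FIG. 12 solution process described below inverts them. This is a reconstruction rather than the patent's own code: it assumes the force expressions of the next paragraph take the form F_x = 1.5 K_x(y) I cos(Δ) and F_y = 1.5 K_y(y) I sin(Δ), and it reads the arc tangent of (3.7) as the four-quadrant atan2, as the text permits. The function names and sample force constants are assumptions.

import math

def lorentz_control_parameters(Fx, Fy, Kx, Ky):
    # Assumed force model (cf. (3.4)-(3.5)):
    #   Fx = 1.5 * Kx(y) * I * cos(delta),  Fy = 1.5 * Ky(y) * I * sin(delta)
    # Inverting for the control parameters (cf. (3.6)-(3.7)):
    a, b = Fx / Kx, Fy / Ky
    I = math.hypot(a, b) / 1.5
    delta = math.atan2(b, a)
    return I, delta

def commutate_2d(x, y, Fx, Fy, winding_pitch, Kx_of_y, Ky_of_y):
    # FIG. 12 pipeline: position -> electrical angle and force constants
    # -> control parameters -> phase currents via commutation equation (3.3).
    theta = 2.0 * math.pi * x / winding_pitch
    I, delta = lorentz_control_parameters(Fx, Fy, Kx_of_y(y), Ky_of_y(y))
    return [I * math.sin(theta - delta + (2.0 * math.pi / 3.0) * j) for j in range(3)]

# A pure propulsion request (Fy = 0) gives delta = 0 and a pure guidance request
# (Fx = 0) gives delta = pi/2, matching the drive conditions of FIGS. 11A and 11C.
currents = commutate_2d(x=0.0, y=0.001, Fx=10.0, Fy=0.0, winding_pitch=0.060,
                        Kx_of_y=lambda y: 5.0, Ky_of_y=lambda y: 5.0)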
The following motor forces are the result: (y)cos(Δ) (3.4) (y)sin(Δ) (3.5) and the values of the independent control parameters I and Δ may be derived from: = {square root over ([F (y)]2)}{square root over ([F (y)]2)}/1.5 (3.6) Δ=a tan [F (y)] (3.7) FIG. 11A shows exemplary vectors 1115, 1120, 1125, 1130 that result in a force along the x-axis from driving the phases with an electrical angle (θ) of 0, an electrical angle offset (Δ) of 0, a current through phase 0 of 0, a current through phase 1 of I sin(2π/3), and a current through phase 2 of I sin(4π/3), where the total force produced in the x-direction (F ) is 1.5IK , and the total amount of force in the y-direction (F ) is 0. FIG. 11B shows exemplary vectors 1132, 1134, 1136, 1138, 1140, 1142 that result in a force along the x-axis from driving the phases with an electrical angle (θ) of π/2, an electrical angle offset (Δ) of 0, a current through phase 0 of I, a current through phase 1 of I sin(7π/6), and a current through phase 2 of I sin(11π/6), where the total force produced in the x-direction (F ) is 1.5IK , and the total amount of force in the y-direction (F ) is 0. FIG. 11C shows exemplary vectors 1144, 1146, 1148, 1150, 1152, 1154 that result in a force along the y-axis from driving the phases with an electrical angle (θ) of 0, an electrical angle offset (π) of π/2, a current through phase 0 of -I, a current through phase 1 of I sin(7π/6), and a current through phase 2 of I sin(5π/6), where the total force produced in the x-direction (F ) is 0, and the total amount of force in the y-direction (F ) is 1.5IK FIG. 11D shows exemplary vectors 1156, 1158, 1160, 1162 that result in a force along the y-axis from driving the phases with an electrical angle (θ) of π/2, an electrical angle offset (Δ) of π/2, a current through phase 0 of 0, a current through phase 1 of I sin(2π/3), and a current through phase 2 of I sin(4π/3), where the total force produced in the x-direction (F ) is 0, and the total amount of force in the y-direction (F ) is 1.5IK . In alternate embodiments, generally to that shown in FIGS. 11A-11D, ferromagnetic materials may be avoided to eliminate Maxwell-type force affects. FIG. 12 shows a diagram of a solution process 1200 for performing commutation to produce propulsion and guidance components as described above. The solution process 1200 may be implemented in any combination of hardware or software. A measured x position coordinate 1205 of the platen 324, 425 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to electrical angle determination circuitry 1210. Electrical angle determination circuitry 1210 factors the measured x position coordinate 1205 by 2π and a pitch of the winding set 322 (FIG. 3B) to determine electrical angle θ 1215. A measured y position coordinate 1220 may be retrieved from position feedback device 340 ( FIG. 3 ) and provided to a phase force constant determination block 1225 where predetermined phase force constants for the x and y directions are obtained. The results and desired forces in the x and y direction 1227 are applied to control parameter circuitry or programs 1230 that implement equations (3.6) and (3.7) to yield control parameters I and Δ. Electrical angle θ 1215 and control parameters I and Δ are applied to commutation function 1235 which implements equation (3.3) to provide commutation current i for each winding phase. Referring now to FIGS. 
20A-20D, there are shown schematic cross-sectional views of a motor in accordance with another exemplary embodiment, illustrating force vectors acting between forcer 321' and platen 324' for different reactant force conditions (e.g. maximum propulsion FIGS. 20A-20B and maximum guidance FIGS. 20C-20D) and different electrical positions between forcer and platen (e.g. θ=0, θ=π/2). The motor configuration in the exemplary embodiment shown in FIGS. 20A-20D may be generally similar to that shown in FIGS. 3, 4, 7, 9 and 11A-11D and described before (and similar features are similarly numbered). In the exemplary embodiment, the magnet array 2010 on the platen may be, for example, mounted on ferromagnetic backing material, and hence the motor may employ both Lorentz and Maxwell forces. In alternate embodiments, the magnet array may be disposed without magnetic material backing. In the exemplary embodiment shown, the winding arrangement of forcer 321' may have the phases (e.g. phase 1, phase 2, phase 3) spaced for example at about π/3 electrical intervals. As may be realized, the commutation equation (in the exemplary embodiment having the general form i_j = I sin[θ(x)-Δ+(π/3)j], j=0, 1, 2) may be utilized in a manner similar to that described previously (see for example (1.1)-(1.23)) in order to generate force vectors similar to those in the exemplary embodiments shown in FIGS. 7A-7D, 9A-9D, and 11A-11D and previously described. In alternate embodiments, the winding phases may be arranged at any other suitable electrical intervals. [0103]FIG. 13A is a schematic perspective view of a linear propulsion system having a number of three dimensional motors (two three-dimensional motors are shown for example purposes). An exemplary linear propulsion system is shown that provides propulsion along the x-axis using Lorentz forces, lift along the z-axis using Lorentz forces, and guidance along the y-axis using Lorentz and Maxwell forces. It should be understood that the rotary motor embodiment of FIG. 4 may also be adapted for three dimensional applications. The embodiment in FIG. 13A includes winding sets 1310, 1320 positioned on one side of a transport apparatus 1305, and winding sets 1315, 1325 positioned on an opposing side of transport apparatus 1305. The winding sets 1310, 1315, 1320, 1325 are driven by amplifier 1330. Amplifier 1330 may be a multi-channel amplifier capable of driving each of the individual windings 1365 of winding sets 1310, 1315, 1320, 1325 separately or in groups. Winding sets 1310 and 1325 may have the same orientation and may be oriented 90 degrees from winding sets 1315 and 1320. The transport apparatus 1305 includes magnet platens 1335, 1340. Magnet platens 1335, 1340 may be arranged as an array of magnets and may extend along a length of opposing sides 1345, 1350, respectively, of transport apparatus 1305. In one embodiment, the array of magnets may be arranged with alternating north poles 1355 and south poles 1360 facing the winding sets. A position feedback system, for example a suitable number of position sensors (e.g. Hall effect sensors 1390, 1395), may be provided for sensing the location, for example the x, y, and z coordinates, of the transport apparatus 1305. Other suitable sensor systems may be utilized. FIGS. 13B1-13B2 show respectively different exemplary arrangements 1370, 1370' of the array of magnets that may be used with the disclosed embodiments. In the exemplary embodiment shown in FIG.
13B1, the rows of magnets may be staggered or offset with alternating rows having the N and S polarities facing outward. In the exemplary embodiment shown in FIG. 13B2, the magnets may be arrayed in alternating polarities along rows that may be angled as desired relative to the X-direction. Other magnet arrangements may also be used. FIG. 13C shows an exemplary arrangement of the individual windings 1365 such as may be arranged in winding sets 1310, 1320, 1315, 1325 (see FIG. 13A ). In this arrangement, alternating winding sets 1365A, 1365B may have a 90 degree offset orientation. In the exemplary embodiment shown, the winding orientations may be aligned respectively with the X and Z axes. Referring now to FIG. 13D1, there is shown a schematic view of a winding arrangement in accordance with another exemplary embodiment. In the exemplary embodiment shown in FIG. 13D1, two winding segments 1365A', 1365B' are illustrated, for example purposes, such as may be used for winding sets 1310, 1320, 1315, 1325 in FIG. 13A . In alternate embodiments there may be more or fewer winding segments. In the exemplary embodiments, the winding segments may have what may be referred to as a generally trapezoidal configuration with the windings pitched at a desired angle to the X,Z axes. The windings of segments 1365A', 1365B' may for example have symmetrically opposing pitch, respectively generating forces Fa, Fb as shown in FIG. 13D1. In the exemplary embodiment, the windings may be overlapped. In alternate embodiments, the windings may have any desired configuration. FIG. 13D2 shows another exemplary arrangement of individual winding for use with the disclosed embodiments. In FIG. 13D2 individual windings 1380 and 1385 may be oriented 90 degrees from each other and are positioned in an overlapping arrangement. Other suitable arrangements of windings are also contemplated. Referring again to FIG. 
13A the motor force equations may be expressed for example as: quadrature θ π ##EQU00007## θ π ##EQU00008## cos [θ (x,z)+(2π/3)j], j=0, 1, 2 (4.4) cos [θ (x,z)+(2π/3)j], j=0, 1, 2 (4.5) utilizing the following nomenclature =Total force produced in a-direction (N)F =Total force produced in b-direction (N)F =Total force produced in x-direction (N)F =Total force produced in y-direction (N)F =Total force produced in z-direction (N)F =Force produced by phase j of winding set A in a-direction, j=0, 1, 2 (N)F =Total force produced by winding set A in y-direction (N)F =Force produced by phase j of winding set A in y-direction, j=0, 1, 2 (N)F =Force produced by phase j of winding set B in b-direction, j=0, 1, 2 (N)F =Total force produced by winding set B in y-direction (N)F =Force produced by phase j of winding set B in y-direction, j=0, 1, 2 (N)I =Amplitude of phase current for winding A (A)I =Amplitude of phase current for winding B (A)i =Current through phase j of winding set A, j=0, 1, 2 (N)i =Current through phase j of winding set B, j=0, 1, 2 (N)K =Phase force constant of winding set A in a-direction (N/A)K =Phase force constant of winding set B in b-direction (N/A)K =Lorentz phase force constant of winding set A in y-direction (N/A)K =Maxwell phase force constant of winding set A in y-direction (N/A =Lorentz phase force constant of winding set B in y-direction (N/A)K =Maxwell phase force constant of winding set B in y-direction (N/A )x=Position in x-direction (m)y=Position in y-direction (m)z=Position in z-direction (m)α=Angular orientation of winding set A (rad)γ=Angular orientation of winding set B (rad)Δ =Electrical angle offset for winding set A (rad)Δ =Electrical angle offset for winding set B (rad)θ =Electrical angle for winding set A (rad)θ =Electrical angle for winding set B (rad)R A=Phase resistance of winding set A (Ohms)R B=Phase resistance of winding set B (Ohms)β=Y-direction force balance factor between winding sets A and B (no units)FIG. 14 shows the orientation of the winding sets of the three dimensional motor configuration embodiments where a represents the direction of force F and b is the direction of force F The following motor commutation equations for example may be utilized: sin [θ +(2π/3)j], j=0, 1, 2 (4.6) sin [θ +(2π/3)j], j=0, 1, 2 (4.7) where I[A] , Δ , I , Δ control magnitudes and orientations of force vectors produced by winding sets A and B. It should be noted that equations (4.6) and (4.7) are similar to (3.3), (2.3), and (1.3) above. Thus, by adjusting the electrical angle θ , θ with the electrical angle offset Δ , Δ , the same motor commutation equations may be used for producing a one dimensional propulsion force in the x-direction, two dimensional forces including a propulsion force in the x-direction and a guidance force in the y-direction that may be substantially decoupled, and in this embodiment, three dimensional forces including propulsion forces in both the x-direction and a z-direction and a guidance force in the y-direction that may be substantially decoupled from each other. In other words, by adjusting the electrical angle with the electrical angle offset, at least one, two, and three dimensional substantially independently controllable forces may be produced in the motor using a common set of commutation equations. Sinusoidal phase currents in accordance with equations (4.4) and (4.5) can be generated, for example, for wye winding configurations using space vector modulation. 
The resulting motor forces may be expressed for example as: ) (4.8) ) (4.9) cos(γ) (4.10) sin(γ) (4.11) (y)+- I (y)] (4.12) In embodiments using displaced trapezoidal windings (see FIG. 13D1): (y), K (y), K (y) (4.13) )cos(α), F )sin(α) (4.14) while in embodiments using orthogonal linear windings (see FIG. 13D2): α=0, γ=π/2F , F The independent control parameters I , I and Δ , Δ for the winding sets may be for example: ) (4.16) ) (4.17) ) (4.18) ) (4.19) where F[a] sin γ-F cos γ)/(cos α sin γ-sin α cos γ) (4.20) sin α-F cos α)/(sin α cos γ-cos α sin γ) (4.21) The solution for (4.16) to (4.19) includes finding I , Δ , I , and Δ , given the desired forces F , F and F . This can be achieved for example by imposing the following "force balancing condition:" β Δ Δ ##EQU00009## are the y -direction force contributions of winding sets A and B, respectively. The parameter β represents the relative force contribution between the two winding sets along the y-direction. If for example β= 1, then both winding sets have equal contributions for the y-force component. It is assumed that β is known at any point in time and it does not have to be constant. In the exemplary embodiment, the motor control parameters may thus be expressed for example as: β δΔ β δ ≧ δ ≧ β β β β β β ##EQU00010## The motor force coupling of the propulsion and guidance forces are represented as: ≧β ≧ββ ##EQU00011## Referring still to FIG . 13A, in a magnetically levitated material transport system having a propulsion system according to the exemplary embodiment, there may be another winding set (1310, 1325, 1320, 1315) on opposite sides of the guidance system that may generate y-forces with opposing signs and the controller may effect control of opposing windings as desired (and as previously described) to substantially decouple Y-forces from X and Z forces. [0119]FIG. 15 shows a diagram of a solution process 1500 for performing commutation to produce the propulsion in the x-direction and lift in the z-direction using Lorentz forces, and the guidance component in the y-direction with Lorentz and Maxwell forces as described above. The solution process 1500 may be implemented in any combination of hardware or software. Measured x and z position coordinates 1505 may be retrieved from receiver 1395 ( FIG. 13A ) and provided to position transform circuitry 1510 which translates the x and z position coordinates into a and b positions (FIG. 14). The results are provided to electrical angle determination circuitry 1515. Electrical angle determination circuitry 1515 factors the a and b positions by 2π and the pitch of the windings to determine electrical angles θ and θ . A measured y position coordinate 1520 may be retrieved from sensors (similar to sensors 1390 1395 FIG. 13A), and provided to a phase force constant determination block 1525 where predetermined phase force constants for the A and B winding sets in the a and b directions are obtained. In addition, Lorentz and Maxwell phase force constants for winding sets A and B in the y-direction are obtained. Desired forces in the x and z direction 1530 are applied to circuitry or program 1535 implementing equations (4.20) and (4.21) that translates the x and z direction forces into forces in the a and b directions. 
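The frame transform performed by block 1535 can be sketched as follows. The expressions are an assumed reading of equations (4.20)-(4.21): F_a and F_b are obtained by inverting F_x = F_a cos(α) + F_b cos(γ) and F_z = F_a sin(α) + F_b sin(γ), where α and γ are the angular orientations of winding sets A and B; for the orthogonal linear windings of FIG. 13D2 (α = 0, γ = π/2) this reduces to F_a = F_x and F_b = F_z, consistent with the special case noted above. The simple β split shown at the end follows only the remark that β = 1 corresponds to equal y-force contributions; the full force-balancing expressions are not restated here, and all identifiers are assumptions.

import math

def ab_forces(Fx, Fz, alpha, gamma):
    # Assumed form of (4.20)-(4.21): resolve the desired x/z forces into the
    # a/b directions of winding sets A and B (orientations alpha and gamma).
    det = math.cos(alpha) * math.sin(gamma) - math.sin(alpha) * math.cos(gamma)
    Fa = (Fx * math.sin(gamma) - Fz * math.cos(gamma)) / det
    Fb = (Fz * math.cos(alpha) - Fx * math.sin(alpha)) / det
    return Fa, Fb

def split_y_force(Fy, beta=1.0):
    # beta is the relative y-force contribution of winding set A versus B;
    # beta = 1 gives equal shares, per the force-balancing condition.
    return Fy * beta / (1.0 + beta), Fy / (1.0 + beta)

# Orthogonal linear windings (FIG. 13D2): alpha = 0, gamma = pi/2 -> Fa = Fx, Fb = Fz.
print(ab_forces(Fx=3.0, Fz=4.0, alpha=0.0, gamma=math.pi / 2))   # approx (3.0, 4.0)
print(split_y_force(Fy=8.0))                                     # (4.0, 4.0)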
The forces in the a and b directions, the results of the phase force constant determination block 1525, and a desired force in the y direction 1537 are applied to control parameter circuitry or program 1540 implementing equations (4.56) through (4.59) to yield control parameters I , I and Δ , Δ for winding sets A and B. Electrical angles θ and θ and control parameters I , I and Δ , Δ for winding sets A and B are applied to commutation function 1545 which implements equations (4.6) and (4.7) to provide commutation currents i and i for each winding phase j of winding sets A and B. The embodiment of FIG. 13A may also be designed in a manner that provides propulsion in the x-direction and lift in the z-direction by Lorentz forces, and guidance in the y-direction by Maxwell forces. As noted before, the following motor force equations may be defined for example as: θ π θ π ##EQU00012## =Phase force constant of winding set A in y-direction (N/A =Phase force constant of winding set B in y-direction (N/A Also, the following motor commutation equations may be used: sin [θ +(2π/3)j], j=0, 1, 2 (5.4) sin [θ +(2π/3)j], j=0, 1, 2 (5.5) As noted above, (5.4) and (5.5) are the same as (4.6) and (4.7), respectively, and are similar to (3.3), (2.3), and (1.3).) By adjusting the electrical angle(s) θ , θ of the winding sets with the electrical angle offset(s) Δ , Δ , the same motor commutation equations may be used for producing at least one, two, and three dimensional forces that are substantially decoupled from each other. As with other embodiments described herein, sinusoidal phase currents in accordance with Equations (5.4) and (5.5) may be generated using space vector modulation for example for wye winding configuration the winding sets. The motor forces for example may be as follows: ) (5.6) ) (5.7) cos(γ) (5.8) sin(γ) (5.9) (y)] (5.10) The motor force coupling of the propulsion and guidance forces are represented as: Δ Δ ≧ ##EQU00013## For embodiments utilizing displaced trapezoidal windings: (y), K (y) (5.13) )cos(α), F )sin(α) (5.14) while for embodiments using orthogonal linear windings α=0, γ=π/2F , F The independent control parameters I , I and Δ , I for the winding sets A and B may be derived as: = {square root over (F (y)])} (5.16) = {square root over (F (y)])} (5.17) =a cos {F (y)]} (5.18) =a cos {F (y)]} (5.19) where F[a] sin γ-F cos γ)/(cos α sin γ-sin α cos γ) (5.20) sin α-F cos α)/(sin α cos γ-cos α sin γ) (5.21) [0129]FIG. 16 shows a diagram of a solution process 1600 for performing commutation to produce propulsion components in the x and z-directions with Lorentz forces and a guidance component in the y-direction with Maxwell forces as described above. The solution process 1600 may be implemented in any combination of hardware or software. Measured x and z position coordinates 1605 may be retrieved from receiver 1395 ( FIG. 13A ) and provided to position transform circuitry or program 1610 which translates the x and z position coordinates into a and b positions (FIG. 14). The results are provided to electrical angle determination circuitry 1615. Electrical angle determination circuitry 1615 factors the a and b positions by 2π and the pitch of the windings to determine electrical angles θ and θ . A measured y position coordinate 1620 may be retrieved from sensors (similar to sensor 1390, 1395 FIG. 13A) and provided to a phase force constant determination block 1625 where predetermined phase force constants for the A and B winding sets in the a, b, and y directions are obtained. 
Desired forces in the x and z direction 1630 are applied to circuitry or program 1635 implementing equations (5.20) and (5.21) that translates the x and z direction forces into forces in the a and b directions. The forces in the a and b directions, the results of the phase force constant determination block 1625, and a desired force in the y-direction 1637 are applied to control parameter circuitry or program 1640 implementing equations (5.16) through (5.19) to yield control parameters I , I and Δ , Δ for winding sets A and B. Electrical angles θ and θ and control parameters I , I and Δ, Δ for winding sets A and B are applied to commutation function 1645 which implements equations (5.4) and (5.5) to provide commutation currents i and i for each winding phase j of winding sets A and B. The embodiment of FIG. 13A may also be designed in a manner that provides three dimensional forces supplying propulsion in the x-direction, lift in the z-direction, and guidance in the y-direction utilizing Lorentz forces. In this case the motor force equations may before example expressed as: θ π θ π θ π θ π ##EQU00014## The motor commutation equations may be for example as follows sin [θ +(2π/3)j] (6.5) sin [θ +(2π/3)j] (6.6) where again j =0, 1 and 2 represent phases 0, 1 and 2, respectively, and I , Δ , I , Δ are independent parameters to control magnitudes and orientations of force vectors produced by winding sets A and B. As with the other embodiments, (6.5) and (6.6) are the same as (5.4) and (5.5), and (4.6) and (4.7), respectively, and are similar to (3.3), (2.3), and (1.3). By adjusting the electrical angle(s) θ , θ with the electrical angle offset(S) Δ , Δ , the same motor commutation equations may be used for producing at least one, two, and three dimensional forces decoupled from each other. Sinusoidal phase currents in accordance with Equations (6.6) and (6.7) can be generated using space vector modulation for example for wye winding configuration. The following motor force equations result: ) (6.7) cos(γ) (6.8) sin(γ) (6.9) ) (6.10) ) (6.11) (y)sin(.DE- LTA. )] (6.12) In an exemplary embodiment using displaced trapezoidal windings (see FIG. 13D1: (y) (6.13) - +F )sin(α) (6.14) while in an exemplary embodiment using orthogonal linear windings (see FIG. 13D2): α=0, γ=π/2F To solve I , Δ , I and Δ in terms of F , F and F the force balance condition may be employed for example: The parameter β may be known using certain criteria, for example as described below. The control parameters may thus be defined for example as: ββ Δ ββ β Δ β ##EQU00015## Where for example sin γ-F cos γ)/(cos α sin γ-sin α cos γ) (6.19) sin α-F cos α)/(sin α cos γ-cos α sin γ) (6.20) [0139]FIG. 17 shows a diagram of a solution process 1700 for performing commutation to produce substantially decoupled propulsion components in the x and z directions and a guidance component in the y-direction, all with Lorentz forces for example as described above. The solution process 1700 is similar to the solution process of FIG. 16 and may be implemented in any combination of hardware or software. Measured x and z position coordinates 1705 may be retrieved from sensors (similar to sensors 1390, 1395 FIG. 13A ) and provided to position transform circuitry or program 1710 which translates the x and z position coordinates into a and b positions (FIG. 14). The results are provided to electrical angle determination circuitry 1715 which factors the a and b positions by 2π and the pitch of the windings to determine electrical angles θ and θ . 
A measured y position coordinate 1720 may be retrieved from receiver 1395 and provided to a phase force constant determination block 1725 where predetermined phase force constants for the A and B winding sets in the a, b, and y directions are obtained. Desired forces in the x and z direction 1730 are applied to circuitry 1735 implementing equations (6.22) and (6.23) that translates the x and z direction forces into forces in the a and b directions. The forces in the a and b directions, the results of the phase force constant determination block 1725, and a desired force in the y-direction 1737 are applied to control parameter circuitry or program 1740 implementing equations (6.19) through (6.22) to yield control parameters I , I and Δ , Δ for winding sets A and B. Electrical angles θ and θ and control parameters I , I and Δ , Δ for winding sets A and B are applied to commutation function 1745 which implements equations (5.4) and (5.5) to provide commutation currents i and i for each winding phase j of winding sets A and B. The selection of the parameter β described in the embodiments above may be obtained by different optimization criteria. Depending on the types of forces involved, different criteria can be used. For example if only Lorentz forces are present, the force ratio criterion may be more appropriate. If the effect of back-electromotive forces (BEMF) is relevant, then the force ratio can be modified to account for that. If Maxwell forces are relevant as well, then the selection of β can be based on the ratio of phase amplitude currents. Additional criterion can be based on the powers consumed by the windings. The various criteria are explained below. Assuming that only Lorentz forces are present, for example as in the embodiments described above, one possible criterion is to select β such that the contributions of winding sets A and B are chosen taking into account their maximum rated current and consequently their maximum rated forces along y-direction. This criterion may be expressed for example as: Using the (6.16) condition, β ##EQU00017## A generalization of the criterion (7.1) may be obtained by taking into account the effect of BEMF, which limits the maximum possible phase current amplitudes to account for the bus or supply voltage ) being finite. The maximum phase current amplitudes for winding sets A and B may be expressed for example in terms of the bus voltage, phase resistance, BEMF and motor speed as: ρ ωρ ω ##EQU00018## .sup.Max=Maximum rated amplitude of phase current for winding A (A)I .sup.Max=Maximum rated amplitude of phase current for winding B (A)ω =Mechanical angular speed for winding set A (rad/sec)ω =Mechanical angular speed for winding set B (rad/sec)ρ=0.5 for a wye-wound winding set and ρ ##EQU00019## for a delta -wound winding set. Using (7.5) and (7.6) in (7.2) and (7.3), β may be computed as: βρ ωρ ω ##EQU00020## which provides a criterion for a speed dependent In the embodiments where Maxwell and Lorentz forces are present the relations between forces and currents are non-linear. In this situation it may be desired to establish a criterion based on the phase current amplitude ratios (rather than force ratios, see (7.1)) as described below. The effect of BEMF can be included in the calculation of I[A] .sup.Max and I .sup.Max as described above. The currents I and I are the solutions (6.18) and (6.20) or (4.57) and (4.59). 
The term β can be obtained from (7.8) after substitution of the appropriate solution.In alternate embodiments, the phase current amplitude ratio may be convenient when the current-force relationship is linear, (e.g., when Lorentz forces are dominant) because it distributes the gap control force to the winding that is currently less utilized to provide propulsion. By way of example, considering that winding A provides force in the x-direction and winding B in the z-direction, when the system does not move in the x-direction and accelerates in the z-direction, winding A would provide larger portion of y-direction force. Conversely, if the system accelerates in the x-direction and does not apply much force in the z-direction, winding B would be providing larger portion of the y-direction force. By way of example, an additional criterion that may be used: , P is the total power at the winding set d (d=A or B) and P .sup.Max is the maximum rated power for winding set d (d=A or B). Referring now also to FIGS. 18A-18D, phase commutation may be employed for example to achieve closed loop position control (see FIG. 18A) with open loop stabilization effects. Commutation may be performed that achieves open loop roll stabilization (see FIG. 18B), open loop pitch stabilization (see FIG. 18C) with discrete forces, and open loop pitch stabilization with distributed forces (see FIG. 18D). FIGS. 18A-18D are schematic end and side elevation views of propulsion system in accordance with another exemplary embodiment having a three dimensional motor formed from a number of two dimensional windings similar to that shown in FIG. 13D. In the exemplary embodiment 1810, 1815 there are two motors (shown for example purposes), one on the left hand side and the other on the right hand side of platform 1805. The motors may be wired together, (e.g., they are not controlled independently). As may be realized as associated benefit of phase commutation with open-loop stabilization may be that because the motors are wired together, the complexity of the controller hardware may be reduced. Referring to FIG. 18A, and the equations below, for phase commutation for open loop stabilization the following nomenclature may be used for example: =Total force in z-direction (N)F L=Force in z-direction produced by left motor (N)F R=Force in z-direction produced by right motor (N)I=Amplitude of phase current (A)i =Current through phase j, j=0, 1, 2 (A)K=Force constant (N/A)M =Moment about x-axis (Nm)p=Motor pitch (corresponding to electrical angle change of 2π) (m)R =Rotation about x-axis (rad)Δ=Electrical angle offset used for control purposes (rad)θ=Electrical angle used for commutation purposes (rad)The motor force equations may be expressed for example as: θ π θ π ##EQU00023## and the motor commutation equation may for example be i[j] =I sin [θ(z)+Δ+(2π/3)j-π/2], j=0, 1, 2 (8.3) where I is a constant and Δ is a control parameter.The resulting motor forces may for example be: L=1.5IK sin(Δ) (8.4) R=1.5IK sin(Δ) (8.5) R=3IK sin(Δ) (8.6) and hence the control parameter Δ may be established as Δ=a sin [F /(3KI)] (8.7) Referring to FIG. 
18B, in the exemplary embodiment the equations for open loop roll stabilization, (in the case of roll about the X axis for example) may be used: θ π θ Δ π θ π θ Δ π ##EQU00024## =Electrical angle for left motor (rad)θ =Electrical angle for right motor (rad)Δ =Electrical angle offset corresponding to displacement due to roll for left motor (rad)Δ =Electrical angle offset corresponding to displacement due to roll for right motor (rad)and the following motor commutation equation may be used: =I sin [θ(z)+Δ+(2π/3)j-π/2], j=0, 1, 2 (8.10) where I is a constant and Δ is a control parameter.The resulting motor forces and moment may for example be: L=1.5IK sin(Δ-Δ ) (8.11) R=1.5IK sin(Δ-Δ ) (8.12) R=1.5IK[sin(Δ-ΔL)+sin(Δ-Δ.- sub.R)] (8.13a) =1.5IK[sin Δ(cos Δ +cos Δ )-cos Δ(sin Δ +sin Δ )] (8.13b) /2[sin(Δ-.DE- LTA. )] (8.14a) /2[sin Δ(cos Δ -cos Δ )-cos Δ(sin Δ -sin Δ )] (8.14b) Considering in the example shown pure roll of the platform , i.e., rotation with respect to the x-axis, Δ =3IK sin(Δ)cos(Δ ) (8.15) ) (8.16) As may be realized , in the exemplary embodiment (e.g. of open-loop stabilization) the roll is expected to be small. Hence, for example |Δ | small the equation may be expressed as: =3IK sin(Δ) (8.17) where M[x] is a stabilization moment providing roll stiffness that depends on K, d , p, I and Δ. (It should be noted that, in the exemplary embodiment, amplitude commutation may not provide a stabilization moment (Δ=π/2M =0).)Hence, the control parameter Δ may be established as: Δ=a sin [F /(3KI)] (8.19) Similar as in Equation (8.7). Alternatively, amplitude I and phase Δ may be calculated together, in the exemplary embodiment, to keep roll stiffness constant. As another alternative, amplitude I may be used to produce Maxwell forces for guidance control in the y-direction. Referring now to FIG. 18C, there is shown a schematic side view of the motor illustrating forces and moments operating on the platform for open loop pitch stabilization with discrete forces. In the exemplary embodiment, motor 1815 may include a number of discrete motors 1815A, 1815B (e.g. there are two motors (winding sets) or motor (winding) segments shown in the figure for example purposes) along the side of the platform. The locations of the motors 1815A, 1815B (e.g. one at the front portion and another one at the rear portion, is merely exemplary. The motor windings may be wired together (connected) such as for common commutation control. In other words, they are not controlled independently. In the exemplary embodiment, the motor force equations may be expressed for example θ π θ Δ π θ π θ Δ π ##EQU00025## F=Force in z-direction produced by front motor (N)F R=Force in z-direction produced by rear motor (N)θ =Electrical angle for front motor (rad)θ =Electrical angle for rear motor (rad)Δ =Electrical angle offset corresponding to displacement due to pitch for front motor (rad)Δ =Electrical angle offset corresponding to displacement due to pitch for rear motor (rad)The following exemplary motor commutation equation may be used: =I sin [θ(z)+Δ+(2π/3)j-π/2], j=0, 1, 2 (8.22) where I is a constant and Δ is a control parameter Accordingly, he resulting motor forces and moment may be expressed for example as: F=1.5IK sin(Δ-Δ ) (8.23) R=1.5IK sin(Δ-Δ ) (8.24) )+sin(Δ-.DE- LTA. )] (8.25a) =1.5IK[sin Δ(cos Δ +cos Δ )-cos Δ(sin Δ +sin Δ )] (8.25b) /2[sin(Δ-.DE- LTA. 
)] (8.26a) /2[sin Δ(cos Δ -cos Δ )-cos Δ(sin Δ -sin Δ )] (8.26b) =Moment about y-axis (Nm)Considering in the example shown pure pitch of the platform, e.g., rotation with respect to the y-axis only, as illustrated in FIG. 18C, Δ =3IK sin(Δ)cos(Δ ) (8.27) ) (8.28) Similar to the approach of open loop roll stabilization discussed before , in the exemplary embodiment illustrated of open loop pitch stabilization, the pitch of the platform is expected to be small.Thus, |Δ |=smalland accordingly =3IK sin(Δ) (8.29) =Rotation about y-axis (rad)and M is a stabilization moment providing pitch stiffness that depends on K, d , p, I and Δ. (As mentioned above, it should be noted that in the exemplary embodiment amplitude commutation may not provide stabilization moment (Δ=π/2M The control parameter Δ may thus be established as: Δ=a sin [F /(3KI)] (8.31) Similar as in Equation (8.7). Similar to what was previously described, in the alternative, amplitude I and phase Δ may be calculated together in the exemplary embodiment to keep pitch stiffness constant. As another alternative, amplitude I can be used to produce Maxwell forces for guidance control in y-direction.FIG. 18D shows another schematic side view of the motor 1815' in accordance with another exemplary embodiment, illustrating open loop pitch stabilization with distributed forces. In the embodiment illustrated, motor 1815' may be a single motor or winding set distributed substantially continuously along the side of the platform. In the exemplary embodiment the force distribution may be expressed for example as: (x)=1.5KI sin [Δ-Δ (x)]=1.5KI[sin Δ cos Δ (x)-cos Δ sin Δ (x)] (8.32) =Distribution of z-force (N/m)Δ =Electrical angle offset corresponding to displacement due to pitch (rad) Considering small pitch angle and, therefore, |Δ (x)]=1.5KI{- sin(Δ)-[2πR cos(Δ)/p]x} (8.33) The total force and moment may be expressed for example as: ∫ ≈ ∫ Δ π Δ Δ ∫ ≈ ∫ Δ π Δ π Δ ##EQU00026## where M[y] is a stabilization moment providing pitch stiffness that depends on K, d , p, I and Δ. (As described before, the amplitude commutation may not provide stabilization moment (Δ=π/2M The control parameter may be established as: Δ=a sin [F )] (8.36) Similar to embodiments above, in the alternative, amplitude I and phase Δ may be calculated together in the exemplary embodiment to keep pitch stiffness constant. As another alternative, amplitude I can be used to produce Maxwell forces for guidance control in the y-direction. In alternate embodiments, applicable equally to all of the roll and pitch stabilization cases similar to those previously described, this mechanism can be used to control stiffness in a closed loop manner, provided that roll or pitch measurements are available for feedback use. This nonetheless would keep the controls hardware simple on the motor amplifier side. [0159]FIG. 19 shows a general block diagram of an integrated motor commutation system 1900 for combined three dimensional control (e.g. propulsion in X, Z directions and guidance in Y direction) applicable to the disclosed embodiments (see also FIG. 13A ). The system arrangement illustrated in FIG. 19 is generally similar to the control system arrangements illustrated in a more specific manner in FIGS. 15-17, and similar features are similarly numbered. In the exemplary embodiment illustrated, the control system 1900 may perform commutation of the windings of forcer 2115 to effect control of the platen or transport apparatus 2135 (similar to apparatus 1305 in FIG. 
13A) in the X and Z directions (e.g. propulsion and lift respectively, see FIG. 13A ) using Lorentz forces, and in the Y-direction (e.g. guidance) using Lorentz and Maxwell forces in a manner similar to that described before (see also FIG. 15 ). In alternate embodiments, and as also described previously, the control system may be arranged to effect commutation of forcer windings for three dimensional control of the transport using for example Lorentz forces for propulsion, lift and guidance, or using Lorentz forces for propulsion and lift and Maxwell forces for guidance. In the exemplary embodiment, in a manner generally similar to that described before, position feedback information from sensors 2190 (similar to sensors 1390, 1395 in FIG. 13A ), such as X and Z position, may be communicated to the position transform 1915. In the exemplary embodiment, position transform 1915 may include suitable electrical angle determination circuitry capable of determining the corresponding electrical angles θ (e.g. for A, B winding segments of the forcer 2115; (see also FIGS. 13C, 13D1-13D2)). Position feedback information, such as Y position, may be communicated for example to the force constant determination block 1925 suitably arranged to determine the force constant parameters for the forcer winding sets (e.g. forcer winding sets A, B). As may be realized, control system 1900 may be communicably connected to, or may include suitable command processor(s) (not shown) arranged to identify desired X, Y, Z forces (e.g. F , F , F ) desired for effecting transport commands. As seen in FIG. 19 , in the exemplary embodiment, the desired force parameter 1930 may be communicated to force transform 1935, that may be suitably programmed to translate x, y, z, direction forces to forces corresponding to the winding reference frame (e.g. F , F , F , in the example where the forcer 2115 has A, B windings). The system may include commutation parameter determination circuitry or program 1940 arranged to determine the commutation parameters such as I , I , Δ , Δ in a manner similar to that previously described and commutation equation determination program or circuitry 1945 defines the resultant commutation equations communicated to the current loop 2130 that implements the commutation equations to provide currents (e.g. i , i , j=0, 1, 2) for the station/forcer windings thereby effecting desired three dimensional control of the transport. In alternate embodiments, the control system may have any other desired arrangement. The embodiments disclosed above provide sets of motor force equations, motor commutation equations, and expressions for calculation of motor control parameters based on specified propulsion and guidance forces, for both two dimensional and three dimensional motor configurations. The disclosed embodiments include adjusting an electrical angle used to drive a common set of commutation functions with an electrical angle offset so that the same motor commutation functions may be used for producing at least a one dimensional propulsion force in the x-direction, two dimensional forces including a propulsion force in the x-direction and a guidance force in the y-direction, and three dimensional forces including propulsion forces in both the x-direction and a z-direction and a guidance force in the y-direction. 
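The phase commutation with open-loop stabilization described above with reference to FIGS. 18A-18D, and recapped below, can also be illustrated with a short sketch built directly from the stated relations: commutation equation (8.3), the force sums F_zL = F_zR = 1.5 I K sin(Δ) and F_z = 3 I K sin(Δ), and the control law Δ = asin[F_z/(3KI)] of (8.7). This is an illustrative sketch with assumed identifiers and sample values, not the patent's implementation.

import math

def stabilization_delta(Fz, K, I):
    # Equation (8.7): delta = asin(Fz / (3*K*I)); the requested z-force must not
    # exceed the 3*K*I capability of the two wired-together motors.
    return math.asin(Fz / (3.0 * K * I))

def stabilization_currents(z, winding_pitch, I, delta):
    # Commutation equation (8.3): i_j = I*sin(theta(z) + delta + (2*pi/3)*j - pi/2),
    # j = 0, 1, 2, with I held constant and delta used as the control parameter.
    theta = 2.0 * math.pi * z / winding_pitch
    return [I * math.sin(theta + delta + (2.0 * math.pi / 3.0) * j - math.pi / 2.0)
            for j in range(3)]

# Assumed values: K = 20 N/A, I = 2 A, requested Fz = 60 N -> delta = asin(0.5) = pi/6.
delta = stabilization_delta(Fz=60.0, K=20.0, I=2.0)
currents = stabilization_currents(z=0.0, winding_pitch=0.060, I=2.0, delta=delta)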
In addition, motor force equations, motor commutation equations, and motor control parameter calculations are provided for phase commutation with open loop stabilization, including open loop roll stabilization, open loop pitch stabilization with discrete forces, and open loop pitch stabilization with distributed forces. It should be understood that the foregoing description is only illustrative of the invention. Various alternatives and modifications can be devised by those skilled in the art without departing from the invention. Accordingly, the present invention is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.
FOM: Hersh's unfruitful attack on logicism and formalism Edwin Mares Edwin.Mares at vuw.ac.nz Tue Sep 29 23:20:02 EDT 1998 At 22:11 29/09/98 -0400, Simpson wrote: >Reuben Hersh writes: > > I see that you can restate the goals of formalism and logicism > > .... why not say that you are updating the goals? >I'm *not* updating the goals, at least not intentionally. My >statement of the goals was intended to be reasonably accurate. >Let's take Frege. You say that Frege's goal was to reduce mathematics >to logic. I say that Frege's goal was to investigate *the extent to >which* mathematics is reducible to logic. From the scientific point >of view, these are two ways of saying the same thing. But my way is >better, because it's more fruitful. You must dismiss Frege's work as >a failure, while I can build on Frege's genuine and remarkable I'm not sure I agree with either of these interpretations of Frege. Frege wanted to reduce arithmetic to logic to prove that arithmetical statements are analytically true. But he didn't think that this was true of geometry, say, on which he seems to have taken a rather Kantian line. Thus, it would seem that he neither wanted to reduce all of maths to logic nor even tried to find out which parts of maths outside arithmetic were so reducible. Ed Mares Department of Philosophy Victoria University of Wellington P.O. Box 600 Wellington, New Zealand Ph: 64-4-471-5368 Theorem 1. Every horse has an infinite number of legs. (Proof by intimidation.) Proof. Horses have an even number of legs. Behind they have two legs and in front they have fore legs. This makes six legs, which is certainly an odd number for a horse. But the only number that is both odd and even is infinity. Therefore horses have an infinite number of legs. Joel E. Cohen, "On the Nature of Mathematical Proofs"
ANOTHER MATH QUESTION An integer t is to be chosen at random from a set of 20 different integers. Which of the following must be equal to 1/2? I. The probability that t is greater than the median of the integers II. The probability that t is greater than the average of the integers III. The probability that t is odd plz sum1 explain HERE'S ANOTHER, IT'S A QC In 1995 there were 1.2 million people who became naturalized citizens of the US. This was an increase of more than 100 percent over p, the number of people who became naturalized citizens in 1994. Col A: p Col B: 0.7 million What is the median in an even series of numbers? When there is an even number of values in the data, the median is the average of the middle two values. Half the numbers in the data set are supposed to be less than or equal to the median, and half the numbers in the data set are supposed to be greater than or equal to the median. Imagine the following set - the numbers are NOT consecutive, simply different: What could you say about average and odd numbers? For the other one: p + 100%p = 2p. If p was 700,000 in 1994, the population would be at least 2p+1 or 1,400,001. Since the 1995 population is 1,200,000, the p of 1994 had to be smaller than 600,000. B is bigger. Consider the median: Say I have 1,2,3,4 - 2.5 would be my median. The probability of t being greater than 2.5 is 1/2. The Average: Say I have 5,6,7,8,9,10. Average 7.5. Again 1/2. NOTE: Even though I'm only considering 4/6 integers each time, it would hold true for 20 integers also. As for the probability of being odd, I'm assuming 0 is neither odd nor even. Hence it's <1/2. And as for the second one: B. I'm sorry about the first post. Didn't consider an even number of nos. Edited for even number of nos. :| What are the odds that NONE is an acceptable answer on the SAT test? The options were probably 1, 2, 3, 1 and 2, 2 and 3, or something like that. Anyhow, please read my answer again. I am afraid that your example of 5 integers was poorly chosen. 1-5 is NOT representative of the problem at hand. You need to use ALL the elements of the stated problem and need an even pool of numbers. The answer is - without a doubt - A. Simultaneous posts Ahh.. pool is the word I was looking for. Even number of numbers.. sheesh
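A quick script can back up the reasoning in this thread; it is added here only as a check and uses illustrative example sets rather than anything from the original question. For any 20 distinct integers, exactly ten lie above the median of the set, so statement I is always 1/2, while the average and the odd/even split need not give 1/2.

from statistics import mean, median

def probabilities(values):
    n = len(values)
    return (sum(v > median(values) for v in values) / n,   # I: P(t > median)
            sum(v > mean(values) for v in values) / n,     # II: P(t > average)
            sum(v % 2 != 0 for v in values) / n)            # III: P(t is odd)

print(probabilities(list(range(1, 21))))             # (0.5, 0.5, 0.5)
print(probabilities(list(range(1, 20)) + [1000]))    # I stays 0.5; II drops to 0.05
print(probabilities([2 * k for k in range(1, 21)]))  # I stays 0.5; III is 0.0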
MathGroup Archive: July 2010 Re: Why Evaluate[Defer[1+1]] does not return 2 ? • To: mathgroup at smc.vnet.net • Subject: [mg110942] Re: Why Evaluate[Defer[1+1]] does not return 2 ? • From: "David Park" <djmpark at comcast.net> • Date: Tue, 13 Jul 2010 05:26:15 -0400 (EDT) I think that Defer is intended for something like the following: {PasteButton[1 + 1], PasteButton[Defer[1 + 1]]} Evaluate this, use the second button, and then evaluate the pasted result. David Park djmpark at comcast.net From: Nasser M. Abbasi [mailto:nma at 12000.org] I am a little confused about Defer: In[106]:= r = Defer[1 + 1] Out[106]= 1 + 1 In[109]:= Evaluate[r] Out[109]= 1 + 1 But help says: "Defer[expr] returns an object which remains unchanged until it is explicitly supplied as Mathematica input, and evaluated using Shift+Enter, Evaluate in Place, etc." Isn't typing Evaluate[r] the same as Evaluate in Place mentioned above? Help for Evaluate says it "causes expr to be evaluated even if it appears as the argument of a function whose attributes specify that it should be held unevaluated." Isn't "r" above the object in question, whose value is 1+1? Actually, typing "r" itself (same as typing Evaluate[r]) does return: In[117]:= r Out[117]= 1 + 1 I think I know what the problem is now: I was going by the idea that "r" is the expression whose value is "1+1". I think this is the problem. "r" is not the expression; it is "1+1" which is the expression. "r" is just another name for Defer[1+1]. Am I getting close?
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jul/msg00296.html","timestamp":"2014-04-19T14:33:53Z","content_type":null,"content_length":"26472","record_id":"<urn:uuid:1e562ab6-cc79-496e-bbd2-c994c7d9f53e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
High Rise Development Across Australia With the recent spate of high-rise proposals announced for Adelaide, I did a bit of research to find how we compare with the rest of Australia. In the past such comparisons would ahve been embarrassing, but now I think we are very competitive. Specially when our smaller population and economic importance is taken into account. I am specially amazed at the fact that Adelaide has more proposed and approved high rise buildings than either Brisbane or Perth. U/C: 14 Approved: 35 Proposed: 69 U/C: 18 Approved: 72 Proposed: 22 U/C: 16 Approved: 14 Proposed: 12 U/C: 15 Approved: 21 Propsoed: 7 U/C: 3 Approved: 13 Proposed: 18 Gold Coast: U/C: 15 Approved: 18 Proposed: 22 U/C: 2 Approved: 2 Propsoed: 4 U/C: 4 Approved: 1 Propsoed: 1 U/C: 0 Approved: 1 Propsoed: 0 U/C: 4 Approved: 3 Proposed: 5 Plus a stack load of office refurbs. Current U/C projects I can think off. SA Water Octagon Apartments Tivoli Apartments Infinity Waters Newport Quays Apartments Place On Brougham crawf wrote:Plus a stack load of office refurbs. Current U/C projects I can think off. SA Water Octagon Apartments Tivoli Apartments Infinity Waters Newport Quays Apartments Place On Brougham Yes 3 U/C Tivoli is not high-rise, and neither is infinity waters. Furthermore the Newport Quay 12 level towers have not begun construction. And Place on Brougham is not classified as U/C, as it's a refurbishment of an existing building. And did i say anything about hi-rises?, I said all the current projects for Adelaide. How tall does a building have to be, to be classified as a High-Rise. Because really by Australian standards Santos is a low-mid rise building. And i was talking about the 5st+ buildings being built at Newport Quays. crawf wrote:And did i say anything about hi-rises?, I said all the current projects for Adelaide. And i was talking about the 5st+ buildings being built at Newport Quays. why did you post it in this thread then? it clearly says hi-rise. if you were to include projects of 5lvls or whatever, im sure Perth and other cities would have a hell of a lot more up there. To make Adelaide not look so dead and to list the current projects for Adelaide (mostly for people not from SA) Crawf just openly revealed his shame of Adelaide by trying to 'compensate' projects in a pathetic attempt of appeasement. Sorry to be harsh, but truth be told, I saw right through that one. I knew your 'optimism' would rub off. dont you even realise that by trying to add insignificant projects to SA's list, that would add dozens upon dozens to those of the other states? of course Adelaide is going to have less projects u/c. we havnt even begun our boom, whereas Perth and Brisbane are well and truly into theirs. crawf wrote:To make Adelaide not look so dead and to list the current projects for Adelaide (mostly for people not from SA) An intelligent analysis of the results would indicate to you, that cities such as Perth and Brisbane are undergoing their boom at the moment. How could we have as many high-rise buildings under construction, when even you have mentioned in the past that our mining boom has not even begun yet?
{"url":"http://www.sensational-adelaide.com.au/forum/viewtopic.php?f=1&t=818","timestamp":"2014-04-16T17:13:56Z","content_type":null,"content_length":"36118","record_id":"<urn:uuid:01a7d4c4-a72c-4e85-bc73-6b2e353315cf>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Readiness Test Score Interpretation Guidelines
The MRT is scored on a 20-point scale. If WebAssign reports your score as a percentage, divide by 5 to obtain your score. For example, 60% translates to a score of 12 points.

Score 14 - 20: Ready to take MATH 021 or MATH 019. Consult an advisor if you took the AP Calculus test. Consult an advisor if uncertain whether to take MATH 021 or 019.
Score 6 - 13: Ready to take MATH 019. If going on to MATH 021, take MATH 010 for preparation, NOT MATH 019.
Score 0 - 5: Take MATH 010 if going on to MATH 021. Take MATH 009 if going on to MATH 019. If not going on to MATH 019 or 021, consult an advisor.
Any score (MATH 017 or STAT 051): Consult an advisor to ensure that you meet the requirements of your degree.

Questions? Talk to your academic advisor or contact math@uvm.edu.
{"url":"http://www.uvm.edu/~cems/mathstat/MRT/guidelines.php","timestamp":"2014-04-17T09:44:25Z","content_type":null,"content_length":"1603","record_id":"<urn:uuid:ed796711-6792-440d-ac9b-1e5c311d1c7c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
search results
Results 1 - 2 of 2
1. CJM 2006 (vol 58 pp. 476)
Apolar Schemes of Algebraic Forms
This is a note on the classical Waring's problem for algebraic forms. Fix integers $(n,d,r,s)$, and let $\Lambda$ be a general $r$-dimensional subspace of degree $d$ homogeneous polynomials in $n+1$ variables. Let $\mathcal{A}$ denote the variety of $s$-sided polar polyhedra of $\Lambda$. We carry out a case-by-case study of the structure of $\mathcal{A}$ for several specific values of $(n,d,r,s)$. In the first batch of examples, $\mathcal{A}$ is shown to be a rational variety. In the second batch, $\mathcal{A}$ is a finite set of which we calculate the cardinality.
Keywords: Waring's problem, apolarity, polar polyhedron
Categories: 14N05, 14N15
2. CJM 2002 (vol 54 pp. 417)
Slim Exceptional Sets for Sums of Cubes
We investigate exceptional sets associated with various additive problems involving sums of cubes. By developing a method wherein an exponential sum over the set of exceptions is employed explicitly within the Hardy-Littlewood method, we are better able to exploit excess variables. By way of illustration, we show that the number of odd integers not divisible by $9$, and not exceeding $X$, that fail to have a representation as the sum of $7$ cubes of prime numbers, is $O(X^{23/36+\epsilon})$. For sums of eight cubes of prime numbers, the corresponding number of exceptional integers is $O(X^{11/36+\epsilon})$.
Keywords: Waring's problem, exceptional sets
Categories: 11P32, 11P05, 11P55
{"url":"http://cms.math.ca/cjm/kw/Waring's%20problem","timestamp":"2014-04-16T10:18:59Z","content_type":null,"content_length":"27811","record_id":"<urn:uuid:145507f3-4c00-43ac-9ede-7fbe40b085da>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Queensgate, WA Science Tutor Find a Queensgate, WA Science Tutor ...Questions missed will be explored and the reasoning behind correct answers explained. The student needs to have a previously unmarked copy of "The Real ACT Prep Guide 3rd Edition” containing 5 practice tests, bringing it to each session s/he has with me. ACT math is quite straightforward (compared to SAT), much like typical school math. 22 Subjects: including physical science, algebra 1, algebra 2, geometry ...Additionally, I have taken undergraduate and graduate level Biostatistics courses with success. I have a Ph.D. in Immunology. Genetics was part of my required coursework for both my undergraduate and graduate degrees. 17 Subjects: including physics, statistics, physical science, algebra 1 ...In addition to these subjects, I am familiar with the Social Sciences and I also enjoy History and Geography. I recently have taught an ESL/Naturalization class at the Chinese Information Services Center in Seattle as part of an internship for my Master's in Education Program in TESOL that I am currently completing at Seattle University. I enjoy tutoring because of its one-on-one 12 Subjects: including psychology, reading, English, writing ...I played the Viola in my high school chamber orchestra and volunteered as a student counselor for a summer music camp to help develop fundamental musical abilities for K-8 students. I offer a wide range of academic services and I can work with the the pupil to form a way to learn that best empha... 34 Subjects: including biology, reading, physical science, physics ...We'll talk first about what you want to achieve and anything that might get in the way. I will probably ask to see examples of your writing and tests you have taken, just to confirm your self-assessment. If factors outside school affect you, we'll explore those, too. 30 Subjects: including geology, GRE, zoology, botany Related Queensgate, WA Tutors Queensgate, WA Accounting Tutors Queensgate, WA ACT Tutors Queensgate, WA Algebra Tutors Queensgate, WA Algebra 2 Tutors Queensgate, WA Calculus Tutors Queensgate, WA Geometry Tutors Queensgate, WA Math Tutors Queensgate, WA Prealgebra Tutors Queensgate, WA Precalculus Tutors Queensgate, WA SAT Tutors Queensgate, WA SAT Math Tutors Queensgate, WA Science Tutors Queensgate, WA Statistics Tutors Queensgate, WA Trigonometry Tutors Nearby Cities With Science Tutor Adelaide, WA Science Tutors Avondale, WA Science Tutors Clearview, WA Science Tutors Houghton, WA Science Tutors Inglewood, WA Science Tutors Juanita, WA Science Tutors Kennard Corner, WA Science Tutors Kingsgate, WA Science Tutors Maltby, WA Science Tutors North City, WA Science Tutors Queensborough, WA Science Tutors Thrashers Corner, WA Science Tutors Totem Lake, WA Science Tutors Wedgwood, WA Science Tutors Woodinville Science Tutors
{"url":"http://www.purplemath.com/queensgate_wa_science_tutors.php","timestamp":"2014-04-18T04:07:40Z","content_type":null,"content_length":"24167","record_id":"<urn:uuid:6004c5a5-e04c-4b17-8013-7af7857edbe7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
prove the identity: \[\frac{\cos 2x}{\sin x} = \frac{\cot^2 x - 1}{\csc x}\]
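One way to verify it, using $\cot x = \cos x/\sin x$, $\csc x = 1/\sin x$, and the double-angle identity $\cos 2x = \cos^2 x - \sin^2 x$ (a sketch of one route, not necessarily the intended one):

\[
\frac{\cot^2 x - 1}{\csc x}
= \left(\frac{\cos^2 x}{\sin^2 x} - 1\right)\sin x
= \frac{\cos^2 x - \sin^2 x}{\sin^2 x}\,\sin x
= \frac{\cos 2x}{\sin x}.
\]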
{"url":"http://openstudy.com/updates/4f050956e4b075b56652108b","timestamp":"2014-04-20T21:00:03Z","content_type":null,"content_length":"326670","record_id":"<urn:uuid:f79b99b4-db3d-451f-9c1d-8b7228d5b318>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
In computer science, tree traversal refers to the process of visiting each node in a tree data structure exactly once, in a systematic way. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well.

Traversal methods
Compared to linear data structures (linked lists and one-dimensional arrays), which have only one logical means of traversal, tree structures can be traversed in many different ways. Starting at the root of a binary tree, there are three main steps that can be performed, and the order in which they are performed defines the traversal type. These steps (in no particular order) are: performing an action on the current node (referred to as "visiting" the node), traversing to the left child node, and traversing to the right child node. Thus the process is most easily described through recursion.

To traverse a non-empty binary tree in preorder, perform the following operations recursively at each node, starting with the root node:
1. Visit the root.
2. Traverse the left subtree.
3. Traverse the right subtree.
(This is also called Depth-first traversal.)

To traverse a non-empty binary tree in inorder, perform the following operations recursively at each node, starting with the root node:
1. Traverse the left subtree.
2. Visit the root.
3. Traverse the right subtree.

To traverse a non-empty binary tree in postorder, perform the following operations recursively at each node, starting with the root node:
1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the root.

Finally, trees can also be traversed in level-order, where we visit every node on a level before going to a lower level. This is also called Breadth-first traversal.

For the example binary search tree (figure not reproduced here):
• Preorder traversal sequence: F, B, A, D, C, E, G, I, H
• Inorder traversal sequence: A, B, C, D, E, F, G, H, I
  Note that the inorder traversal of this binary search tree yields an ordered list
• Postorder traversal sequence: A, C, E, D, B, H, I, G, F
• Level-order traversal sequence: F, B, G, A, D, I, C, E, H

Sample implementations

preorder(node)
  print node.value
  if node.left ≠ null then preorder(node.left)
  if node.right ≠ null then preorder(node.right)

inorder(node)
  if node.left ≠ null then inorder(node.left)
  print node.value
  if node.right ≠ null then inorder(node.right)

postorder(node)
  if node.left ≠ null then postorder(node.left)
  if node.right ≠ null then postorder(node.right)
  print node.value

All sample implementations will require stack space proportional to the height of the tree. In a poorly balanced tree, this can be quite considerable. We can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree. In the case of using threads, this will allow for greatly improved inorder traversal, although retrieving the parent node required for preorder and postorder traversal will be slower than a simple stack based algorithm. To traverse a threaded tree inorder, we could do something like this:

while hasleftchild(node) do node = node.left
if (hasrightchild(node)) then node = node.right
while hasleftchild(node) do node = node.left
node = node.right
while node ≠ null

Note that a threaded binary tree will provide a means of determining whether a pointer is a child, or a thread. See threaded binary trees for more information.
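Here is a small, runnable Python sketch of the three recursive traversals above. The example tree is reconstructed to match the traversal sequences listed earlier, so its exact shape is an inference rather than something given in the text:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, out):
    if node:
        out.append(node.value)          # visit root first
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.value)          # visit root between the subtrees
        inorder(node.right, out)

def postorder(node, out):
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.value)          # visit root last

# Tree chosen so the traversals reproduce the sequences in the text.
root = Node('F',
            Node('B', Node('A'), Node('D', Node('C'), Node('E'))),
            Node('G', None, Node('I', Node('H'))))

for fn in (preorder, inorder, postorder):
    out = []
    fn(root, out)
    print(fn.__name__, out)
# preorder  ['F', 'B', 'A', 'D', 'C', 'E', 'G', 'I', 'H']
# inorder   ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
# postorder ['A', 'C', 'E', 'D', 'B', 'H', 'I', 'G', 'F']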
Level order traversal Level order traversal is a traversal method by which levels are visited successively starting with level 0 (the root node), and nodes are visited from left to right on each level. This is commonly implemented using a queue data structure with the following steps (and using the tree below as an example): Step 1: Push the root node onto the queue (node 2): New queue: 2- - - - - - - - - - Step 2: Pop the node off the front of the queue (node 2). Push that node's left child onto the queue (node 7). Push that node's right child onto the queue (node 5). Output that node's value (2). New queue: 7-5- - - - - - - - - Output: 2 Step 3: Pop the node off the front of the queue (node 7). Push that node's left child onto the queue (node 2). Push that node's right child onto the queue (node 6). Output that node's value (7). New queue: 5-2-6- - - - - - - - Output: 2 7 Step 4: Pop the node off the front of the queue (node 5). Push that node's left child onto the queue (NULL, so take no action). Push that node's right child onto the queue (node 9). Output that node's value New queue: 2-6-9- - - - - - - - Output: 2 7 5 Step 5: Pop the node off the front of the queue (node 2). Push that node's left child onto the queue (NULL, so take no action). Push that node's right child onto the queue (NULL, so take no action). Output that node's value (2). New queue: 6-9- - - - - - - - - Output: 2 7 5 2 Step 6: Pop the node off the front of the queue (node 6). Push that node's left child onto the queue (node 5). Push that node's right child onto the queue (node 11). Output that node's value (6). New queue: 9-5-11- - - - - - - - Output: 2 7 5 2 6 Step 7: Pop the node off the front of the queue (node 9). Push that node's left child onto the queue (node 4). Push that node's right child onto the queue (NULL, so take no action). Output that node's value (9). New queue: 5-11-4- - - - - - - - Output: 2 7 5 2 6 9 Step 8: You will notice that because the remaining nodes in the queue have no children, nothing else will be added to the queue, so the nodes will just be popped off and output consecutively (5, 11, 4). This gives the following: Final output: 2 7 5 2 6 9 5 11 4 which is a level-order traversal of the tree. Queue-based level order traversal Also, listed below is pseudocode for a simple queue based level order traversal, and will require space proportional to the maximum number of nodes at a given depth. This can be as much as the total number of nodes / 2. A more space-efficient approach for this type of traversal can be implemented using an iterative deepening depth-first search. q = empty queue while not q.empty do node := q.dequeue() if node.left ≠ null if node.right ≠ null Inorder traversal It is particularly common to use an inorder traversal on a binary search tree because this will return values from the underlying set in order, according to the comparator that set up the binary search tree (hence the name). To see why this is the case, note that if n is a node in a binary search tree, then everything in n 's left subtree is less than n, and everything in n 's right subtree is greater than or equal to n. Thus, if we visit the left subtree in order, using a recursive call, and then visit n, and then visit the right subtree in order, we have visited the entire subtree rooted at n in order. We can assume the recursive calls correctly visit the subtrees in order using the mathematical principle of structural induction. 
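To see the sorted-output property concretely, here is a minimal Python sketch; the insertion routine is the usual unbalanced binary-search-tree insert, assumed here rather than taken from the text:

class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    # Standard (unbalanced) binary-search-tree insertion.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)

root = None
for v in [7, 2, 9, 1, 5, 8]:
    root = insert(root, v)

out = []
inorder(root, out)
print(out)   # [1, 2, 5, 7, 8, 9] -- sorted, as the argument above predicts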
Traversing in reverse inorder similarly gives the values in decreasing order.

Preorder traversal
Traversing a tree in preorder while inserting the values into a new tree is a common way of making a complete copy of a binary search tree. One can also use preorder traversals to get a prefix expression (Polish notation) from expression trees: traverse the expression tree in preorder. To calculate the value of such an expression: scan from right to left, placing the elements in a stack. Each time we find an operator, we replace the two top symbols of the stack with the result of applying the operator to those elements. For instance, the expression ∗ + 2 3 4, which in infix notation is (2 + 3) ∗ 4, would be evaluated like this:

Using prefix traversal to evaluate an expression tree
│ Expression (remaining) │ Stack │
│ ∗ + 2 3 4              │       │
│ ∗ + 2 3                │ 4     │
│ ∗ + 2                  │ 3 4   │
│ ∗ +                    │ 2 3 4 │
│ ∗                      │ 5 4   │
│ Answer                 │ 20    │

Functional traversal
We could perform the same traversals in a functional language like Haskell using code similar to this:

data Tree a = Nil | Node (Tree a) a (Tree a)

preorder Nil = []
preorder (Node left x right) = [x] ++ (preorder left) ++ (preorder right)

postorder Nil = []
postorder (Node left x right) = (postorder left) ++ (postorder right) ++ [x]

inorder Nil = []
inorder (Node left x right) = (inorder left) ++ [x] ++ (inorder right)

Iterative traversing
All the above recursive algorithms require stack space proportional to the depth of the tree. Recursive traversal may be converted into an iterative one using various well-known methods. A sample is shown here for postorder traversal:

nonrecursivepostorder(rootNode)
  nodeStack.push(rootNode)
  while (!nodeStack.empty())
    currNode = nodeStack.last()
    if ((currNode.left != null) and (currNode.left.visited == false))
      nodeStack.push(currNode.left)
    else if ((currNode.right != null) and (currNode.right.visited == false))
      nodeStack.push(currNode.right)
    else
      print currNode.value
      currNode.visited := true
      nodeStack.pop()

In this case, each node is required to keep an additional "visited" flag, in addition to the usual information (value, left-child reference, right-child reference).
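For comparison, here is a stack-based traversal in Python that needs neither recursion nor a per-node visited flag; it does inorder rather than postorder, and is a sketch of the general idea rather than a translation of the pseudocode above:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder_iterative(root):
    # Inorder traversal using an explicit stack instead of recursion.
    out, stack, node = [], [], root
    while stack or node is not None:
        while node is not None:      # walk down to the leftmost unvisited node
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.value)
        node = node.right            # then move into the right subtree
    return out

print(inorder_iterative(Node(2, Node(1), Node(3))))   # [1, 2, 3]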
{"url":"http://www.reference.com/browse/Expression+tree","timestamp":"2014-04-19T11:25:24Z","content_type":null,"content_length":"94238","record_id":"<urn:uuid:d7dd413e-90d2-48cd-be83-0ba6e9f459d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple but long proof... June 1st 2012, 07:24 PM #1 Jun 2012 Simple but long proof... First off I hope this is the right forum, if not let me know and I will gladly move it. I am a developer and have recently been given the equation and proof below to turn into code. I know it is a lot to ask but if someone could help me solve this step by step I can use this to program from. I have honestly spent hours and have yet been able to come up with the same answer. Any help is greatly appreciated!! SI=60, T=1, C1=1.1557, C2=1.0031, C3=-0.0408, AGET=76.4073801204541, C4=0.9807, C5=0.0314, CR=0.475, RHK=1, RHYXS=0.05, RHM=1.1, RHR=13, RHB=-1.6, RELHT=0.806719650456935, RHXS=0 Expected Result: HG= 0.112113426675006 Re: Simple but long proof... No idea what you mean by "turning into code"; or by "proof". Noticed that the .75*RHK*? has a right bracket ")" missing. What are you looking for? Ways to shorten? Ways to "keypunch it in"? Re: Simple but long proof... I am actually just trying to verify that this equation does indeed solve to the expected result. My math is not strong enough for me to tell if there is a something wrong with how I am coding it, or if there is something wrong with the proof. I do see the missing closing bracket. I will follow up on that and verify its location. I really appreciate your help. Re: Simple but long proof... Also, the |_ _| brackets shown (beginning and end of equation) mean the "floor function" (as example, if 3.746... then 3 is result), which makes no sense given your expected results of HG= 0.112113426675006 I assume e = Euler number. The "powers" are quite difficult to discern; can you show them in a clear way? Other minor stuff is confusing, like why mutiply by T when T=1 ?! Last edited by Wilmer; June 2nd 2012 at 11:04 AM. Re: Simple but long proof... Very helpful heads up on the floor function. I had only considered it as a grouping function. e is Euler number. T is for a duration variable. The objective of this equation is to calculate the amount a tree has grown for a single cycle and the duration of the cycle is specified by T. Re: Simple but long proof... OK. You have RHSX = 0. Is that ok? Means RHSX^(1 - RHB) = 0. Finally, on powers: the right portion of 2nd line of equation shows e^[-RHR / (1 - RHB)] * (another expression) : is the "other expression" part of the "power", or is it just a subsequent multiplication? AND the missing half-bracket?! Re: Simple but long proof... OK the flooring brackets are meant to be parenthesis. The missing half bracket is supposed to include the rest of the line. .75*RHK*(...(1/1-RHM)) e^[-RHR / (1 - RHB)] * (another expression) - this is part of the power You have been a tremendous help on this. I really appreciate it, I had no idea there were this many anomalies in this equation. Re: Simple but long proof... HOKAY!! Now I see why parts of your equation are a bit like Future Value of money formulas... The using of e^(.....) is obviously for continuous compounding. Well, I entered your formula along with the values of the variables (I use UBasic programming) but cannot come anywhere near .112113.... as solution: I get 0.006953.... You got that formula from "WHO"? Again, I'll tell you that I don't understand what you mean by needing "help in coding". Let's take something simple: A = u + v + w^x u=4, v=5, w=2, x=3 Much smaller/simpler than your problem, but works same way. 
To "code" that in UBasic:
u=4, v=5, w=2, x=3
A = u + v + w^x
Print A
output: 17

If it helps, I renamed your variables this way:
u=C1=1.1557, v=C2=1.0031, w=C3=-0.0408, x=C4=0.9807, y=C5=0.0314, z=CR=0.475
p=SI=60, q=RELHT=0.806719650456935, r=AGET=76.4073801204541
f=RHB=-1.6, g=RHK=1, h=RHM=1.1, i=RHR=13, j=RHXS=0, k=RHYXS=0.05
So, as example, the left side of the 1st line of your formula becomes:
u * p^v * (1 - e^(w * (r - 1)))^(x * p^y)

Re: Simple but long proof...
Man after many rounds with the provider of that #$@! equation I finally got it worked out. I started off assuming the equation was right and my math was wrong. Turns out there were several errors in there that were a result of him not knowing how to specify his needs in an equation. Below is the working C# code that produces the desired result. You will notice a couple of bounding statements that weren't in the original equation!! You have no idea how much you helped me man, truly appreciate it. I was losing my mind!

private double getHeightGrowth(){
    double condition1 = (100*Math.Pow(CR, 3.0)*Math.Exp(-5.0*CR)) > 1.0
        ? 0.99
        : (100*Math.Pow(CR, 3.0)*Math.Exp(-5.0*CR));
    double condition2 = (RHK)*Math.Pow((1.0+((Math.Pow((RHK/RHYXS), (RHM-1.0)))-1.0)*Math.Exp(((-1.0*RHR)/(1.0-RHB))*((Math.Pow(RelativeHeight, (1.0-RHB)))-(Math.Pow(RHXS, (1.0-RHB)))))),(1.0/(1.0-RHM))) > 1.0
        ? 0.99
        : (RHK)*Math.Pow((1.0+((Math.Pow((RHK/RHYXS), (RHM-1.0)))-1.0)*Math.Exp(((-1.0*RHR)/(1.0-RHB))*((Math.Pow(RelativeHeight, (1.0-RHB)))-(Math.Pow(RHXS, (1.0-RHB)))))),(1.0/(1.0-RHM)));
    double result = DurationOfCycle*
        (((C1*(Math.Pow(SiteIndex, C2)))*Math.Pow((1.0-Math.Exp(C3*((AGET)+5.0))), (Math.Pow((C4*SiteIndex), C5))))
        -((C1*(Math.Pow(SiteIndex, C2)))*Math.Pow((1.0-Math.Exp(C3*(AGET))), (Math.Pow((C4*SiteIndex), C5)))))
        *(0.25*condition1+0.75*condition2);
    return result/5.0;
}

Last edited by cwigley; June 3rd 2012 at 09:00 PM.

Re: Simple but long proof...
In that messy C# stuff ^ is an XOr operator.

Re: Simple but long proof...
I am happy to code in about anything, do it long enough it's all semantics. C# was just part of the requirement.
{"url":"http://mathhelpforum.com/algebra/199555-simple-but-long-proof.html","timestamp":"2014-04-17T11:57:54Z","content_type":null,"content_length":"60433","record_id":"<urn:uuid:c1559a44-bd8f-4e06-8cc4-058ed3ea5e70>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Sharp Park, CA Calculus Tutor Find a Sharp Park, CA Calculus Tutor ...I teach my students both the mathematical concepts of statistics/probability and how to deal with word problems: recognize the appropriate statistical setting described in the scenario and apply the correct method to solve the problem. The positive reviews left by many of my statistics students ... 14 Subjects: including calculus, statistics, geometry, algebra 2 ...In addition, I have taken multiple courses in logic both mathematical and philosophical. I am an adjunct professor and am well-versed in the pedagogical methods needed to tutor this discipline. I am a 25-year veteran of Silicon Valley, where all my experience has been in Marketing. 39 Subjects: including calculus, English, chemistry, reading ...I felt I was almost born with some intuitive knowledge of this. But then I had a roommate who didn't cook. She was very interested in it, but didn't know what to do. 32 Subjects: including calculus, chemistry, physics, Spanish ...I'm an Australian high school mathematics and science teacher, with seven years experience, who has recently moved to the bay area because my husband found employment here. I'm an enthusiastic teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and physics. I will teach all levels from middle school mathematics up to calculus 11 Subjects: including calculus, chemistry, physics, statistics ...I took honors precalculus in high school which eventually lead me to earning a 5 on the AP Calculus BC exam. I earned a 740 on the SAT math section while in high school. I excelled at math in high school and earned a hard science degree at a selective liberal arts college. 27 Subjects: including calculus, chemistry, English, reading Related Sharp Park, CA Tutors Sharp Park, CA Accounting Tutors Sharp Park, CA ACT Tutors Sharp Park, CA Algebra Tutors Sharp Park, CA Algebra 2 Tutors Sharp Park, CA Calculus Tutors Sharp Park, CA Geometry Tutors Sharp Park, CA Math Tutors Sharp Park, CA Prealgebra Tutors Sharp Park, CA Precalculus Tutors Sharp Park, CA SAT Tutors Sharp Park, CA SAT Math Tutors Sharp Park, CA Science Tutors Sharp Park, CA Statistics Tutors Sharp Park, CA Trigonometry Tutors Nearby Cities With calculus Tutor Alameda Pt, CA calculus Tutors Brisbane calculus Tutors Desert Edge, CA calculus Tutors Marin City, CA calculus Tutors Mount Eden, CA calculus Tutors Muir Beach, CA calculus Tutors Nas Miramar, CA calculus Tutors Pacifica calculus Tutors Palomar Park, CA calculus Tutors Presidio, CA calculus Tutors Rancho California, CA calculus Tutors Rancho Suey, CA calculus Tutors Tamalpais Valley, CA calculus Tutors Terra Linda, CA calculus Tutors West Menlo Park, CA calculus Tutors
{"url":"http://www.purplemath.com/Sharp_Park_CA_Calculus_tutors.php","timestamp":"2014-04-18T11:10:33Z","content_type":null,"content_length":"24341","record_id":"<urn:uuid:583f487a-c486-4fbd-bd85-3c45f23320b7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Devon SAT Math Tutor Find a Devon SAT Math Tutor ...On the other hand, if you have chosen one of the other talented tutors, keep on learning! "An expert problem solver must be endowed with two incomparable qualities: a restless imagination and a patient pertinacity." - Howard W. Eves, American MathematicianMy expertise is in the field of analyti... 9 Subjects: including SAT math, chemistry, algebra 2, geometry ...I have learned a great deal from my students in this process! My tutoring focuses on a solid understanding of the material and a consistent and methodical approach to problem-solving, with special attention paid to a good foundation in mathematical methods. I am a native German-speaker, and have been working for several years as a German-to-English translator. 21 Subjects: including SAT math, reading, physics, writing ...I have worked as a Teaching Assistant in undergraduate clinical nursing classes and have taught research classes at the Graduate level. As a Nurse Educator, I have received multiple teaching awards including Teacher of the Year. I look forward to working with you to develop a plan to achieve academic success! 39 Subjects: including SAT math, chemistry, English, biology ...I have a large range of teaching experience from very young to adult. I enjoy teaching and am considered to have a caring approach, and I try to make sure that learning takes place in a relaxed but studious situation. I have a combined economics and international affairs degree from the George Washington University. 38 Subjects: including SAT math, reading, chemistry, physics ...They both did very well in school. I had my own special requirements for them regarding homework from Sept. through June. Homework was the highest priority. 13 Subjects: including SAT math, geometry, ASVAB, algebra 1
{"url":"http://www.purplemath.com/devon_pa_sat_math_tutors.php","timestamp":"2014-04-19T23:12:05Z","content_type":null,"content_length":"23768","record_id":"<urn:uuid:3afe08ff-0b78-4669-ab9d-0e2c3937fa94>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
[2/3] gdiplus: fix GdipPathIterNextMarker behaviour on path without markers. fix tests.
Michael Karcher wine at mkarcher.dialup.fu-berlin.de
Sun Jul 13 05:15:21 CDT 2008
On Sunday, 13.07.2008, at 13:51 +0400, Nikolay Sivov wrote:
Reece Dunn wrote:
Is this the way it works on Windows? You have 4 basic cases:
a. exactly equal;
b. equal + epsilon for rounding/representation errors;
c. equal - epsilon for rounding/representation errors;
d. not equal.
I don't see a difference between b and c. A matrix has four elements. Some might be slightly too big and others slightly too small. Would this make b or c? It's just one case: equal +/- epsilon.
Exactly, so I plan to test this deeper on native. As I see now after some basic tests, most probably native call does a bitwise comparison cause the result is affected by a smallest matrix element change (e.g. 1.0000000f -> 1.0000001f). But I'll test it more.
1+FLT_EPSILON is the value to use for the "smallest matrix element change". But your test already is a very strong indication that it really is a strict equality test. FLT_EPSILON is in <float.h>
Reece, do you have any suggestions to do some bounds test (I mean test is call affected by representation or round errors or not)?
Another thing you could test is: Is the Matrix (1,FLT_EPSILON/2,0,1) equal to (1,0,0,1)? A matrix like the former can easily be the result of multiplying a matrix by its inverse matrix (if represented as floating point numbers). One might expect that the product of a matrix and its inverse always is the identity matrix.
2. GdipIsMatrixInvertible which contains a check for 'not above zero' determinant.
a. det(A) > 0
b. det(A) == 0
c. det(A) < 0
So does Windows match your assumption for the answer to these (a ==> yes; b, c ==> no).
What do you mean here by (c => no)? Native allows negative det and I do so.
He implied it from your "not above zero" quote.
A hint how I would implement that test (if it turns out that a strict comparison to 0 is not the right result): Do not calculate the sum of a*d and -b*c, but compare these numbers in a sensible way.
Michael Karcher
{"url":"http://www.winehq.org/pipermail/wine-devel/2008-July/067362.html","timestamp":"2014-04-20T16:31:10Z","content_type":null,"content_length":"5452","record_id":"<urn:uuid:f10a88de-0bf5-444b-adc1-d0fe8cc3af09>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Applets of Miklós Maróti 1. The model printer applet can calculate all nonisomorphic models of finite sentences. For example, you can generate all nonisomorphic graphs or groupoids of small sizes. 2. The graph polymorphisms applet can help to find maps that are polymorphisms of a set of graphs using a bounded width one algorithm. 3. The collapsing monoid applet can help to find which monoids are collapsing, that is, for which there is a unique clone whose unary part is the given monoid. Last updated on September 28, 2010.
{"url":"http://www.math.u-szeged.hu/~mmaroti/applets/index.html","timestamp":"2014-04-17T01:46:03Z","content_type":null,"content_length":"2588","record_id":"<urn:uuid:52b0d088-25e0-4e59-8e7c-1e27b56a12fb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Oscillator based on Complex Number Multiplication
A well known difference equation exists which can compute samples of parametrically well behaved sinusoids [1,2]. The frequency of the sinusoid can easily be changed without changing the amplitude. The equation is a property of the multiplication of two complex numbers.^1 Complex numbers can be represented as two dimensional vectors in a plane in which the x axis is the real part of the number and the y axis is the imaginary part of the number. The polar coordinate form of the vector is a magnitude and an angle (measured from the x axis). The magnitude of the product of two numbers is the product of the magnitudes of the two numbers and the angle of the product is the sum of the angles of the two numbers. We will use the following definitions and relations: In terms of real and imaginary components: Let a sequence of complex numbers be formed from the product of two complex numbers The imaginary part of sine wave: The magnitude of the sine wave is always 1. The period of the sine wave is By changing The actual difference equations to compute the sine wave are obtained by writing Eq.1) as real and imaginary parts. This yields two difference equations which can be extrapolated to compute
Download smac03maxjos.pdf
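The displayed equations did not survive extraction here, but the recurrence being described (repeatedly multiplying a unit-magnitude complex state by a fixed unit-magnitude complex number and taking the imaginary part as the output sample) can be sketched as follows. The function and variable names, and the demo parameters, are assumptions rather than anything taken from the paper:

import math

def complex_multiply_oscillator(freq_hz, sample_rate_hz, num_samples):
    # Generate a sinusoid by repeatedly multiplying a unit-magnitude
    # complex state by a fixed unit-magnitude complex number.
    w = 2.0 * math.pi * freq_hz / sample_rate_hz   # phase increment per sample
    c, s = math.cos(w), math.sin(w)                # the fixed multiplier e^{jw}
    x, y = 1.0, 0.0                                # state starts at 1 + 0j
    out = []
    for _ in range(num_samples):
        out.append(y)                              # imaginary part is the sine wave
        x, y = x * c - y * s, x * s + y * c        # complex multiply: (x + jy) * e^{jw}
    return out

samples = complex_multiply_oscillator(440.0, 44100.0, 8)
print(samples)

Changing the frequency only changes the multiplier (c, s), leaving the unit magnitude of the state untouched, which is the "frequency without amplitude change" property the text describes.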
{"url":"https://ccrma.stanford.edu/~jos/smac03maxjos/Oscillator_based_Complex_Number.html","timestamp":"2014-04-17T21:30:02Z","content_type":null,"content_length":"13535","record_id":"<urn:uuid:35d2824a-bec8-4756-bfd6-6ed9b2875803>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Defining Repeatability
4.9.2.2 – Repeatability
The performance statistic most familiar to feeder users, repeatability quantifies the degree to which a feeder's discharge stream varies over brief time intervals, producing a snapshot of one dimension of feeder performance. Note that the repeatability measurement says nothing at all about whether the feeder is delivering, on the average, the targeted rate (that is the purpose of the linearity measure performed on a properly calibrated feeder). Repeatability does, however, reveal a great deal about the expected extent of short-term flow-rate inconsistencies, an important contributor to the integrity of the formulation and end-product quality. The repeatability measurement is performed by taking a series (usually at least 30) of carefully timed consecutive catch samples from the discharge stream, weighing each, and then calculating the +/- standard deviation of sample weights expressed as a percentage of the mean value of the samples taken. The measurement is typically performed at the nominal intended operating rate(s) of the feeder. See measurement procedures section for more information. For example, owing to the random nature of repeatability errors, if sampling shows a standard deviation of +/- 0.3% it can be said that 68.3% of sample weights will fall within the +/- 0.3% error band (1 Sigma), 95.5% will occur within +/- 0.6% (2 Sigma), and 99.7% will lie within +/- 0.9% (3 Sigma). These expressions are equivalent. Traditionally, repeatability has been expressed at two standard deviations (2 Sigma) over minute-to-minute sample periods. However, due to higher throughput rates and more stringent quality standards, many processors are now requiring sampling periods as short as several seconds. Where such short sampling periods are required, a corresponding lowering of repeatability precision is to be expected. For more information, see the following section on repeatability timescales. A complete expression of a repeatability statistic must contain the following elements: a +/- percentage error value, the Sigma level, and the sampling criteria. For example, a repeatability performance statement might take the following form: +/- 0.5% of sample average (@ 2 Sigma) based on 30 consecutive samples of one minute, one kilogram, one belt revolution, or thirty screw revolutions, whichever is greater.
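As a rough illustration of the calculation described above (a sketch only: the sample weights are made-up numbers, and a given vendor's procedure may use a different standard-deviation convention):

import statistics

def repeatability_percent(sample_weights, sigma_level=2):
    # Return +/- repeatability as a percentage of the mean catch-sample weight.
    mean = statistics.mean(sample_weights)
    stdev = statistics.stdev(sample_weights)        # sample standard deviation
    return sigma_level * stdev / mean * 100.0

# 30 hypothetical one-minute catch samples, in kilograms
weights = [1.000, 0.997, 1.003, 1.001, 0.999, 1.002] * 5
print(f"+/- {repeatability_percent(weights):.2f}% of sample average (2 Sigma)")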
{"url":"http://www.ptonline.com/articles/repeatability","timestamp":"2014-04-16T07:15:47Z","content_type":null,"content_length":"22605","record_id":"<urn:uuid:69c857e6-a966-40fc-980a-34e2822a0e68>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Structural Operations on Rational Expressions For ordinary polynomials, Factor and Expand give the most important forms. For rational expressions, there are many different forms that can be useful. Different kinds of expansion for rational expressions. Structural operations on rational expressions. In mathematical terms, Apart decomposes a rational expression into "partial fractions". In expressions with several variables, you can use Apart[expr, var] to do partial fraction decompositions with respect to different variables.
{"url":"http://reference.wolfram.com/mathematica/tutorial/StructuralOperationsOnRationalExpressions.html","timestamp":"2014-04-20T19:20:20Z","content_type":null,"content_length":"42189","record_id":"<urn:uuid:31a3291b-3c74-41cc-8a45-dabec290e712>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Standard Form
A linear equation in standard form is an equation that looks like ax + by = c where a, b, and c are real numbers and a and b aren't both zero. c can be zero if it wants. It's the favorite child, so it gets special privileges. If only a is 0, the equation can be rewritten to look like y = (some number). If only b is 0, the equation can be rewritten to look like x = (some number). For example, the equation 3y = 8 is equivalent to the equation y = 8/3, which is also in standard form (with b = 1). Meanwhile, the equation 2x = 4 is equivalent to the equation x = 2, which is also in standard form (with a = 1). If one of a or b is zero, we know how to graph the equation and how to read off an equation from a graph. You probably suspect there will be some cases where it won't be so easy, and neither a nor b will be zero. You suspect right. Okay, now what if an equation throws us a curveball? Should we sacrifice our bodies and take our base? If neither a nor b is zero, we can most easily graph the linear equation by finding its intercepts.
Sample Problem
Graph the linear equation x + 4y = 8.
Let's find the intercepts. To find the x-intercept, let y = 0, since the x-intercept will be at a point of the form (something, 0). Then x + 4(0) = 8 and so x = 8 is the x-intercept. For the y-intercept, let x = 0. Then 0 + 4y = 8 and y = 2 is the y-intercept. We have now determined both intercepts. Who needs a or b to be zero? Not us. Now we can plot the intercepts: and connect the dots to get the line:
Sample Problem
Write, in standard form, the linear equation graphed below:
The x-intercept is -1, which means whatever a, b, and c are, a(-1) + b(0) = c. Let's make life easy on ourselves and let a = 1. That's right...we're going to dip this equation in a bucket of A-1 sauce. Then (1)(-1) = c, so c = -1. To find b, the remaining coefficient, we look at the y-intercept: y = -2. x will be 0, and we have already decided that c = -1, so we find 0 + b(-2) = -1. This means b = 1/2, and the equation is x + (1/2)y = -1. If we want to make things pretty, we can multiply both sides of the equation by 2 and write the resulting equation, which has integer coefficients. If we want to make things really pretty, we can dress the equation up in a sequined ball gown and give it a makeover. Let's start small, though: 2x + y = -2.
Sample Problem
Write, in standard form, the linear equation graphed below:
The x-intercept is -2, which means whatever a, b, and c are, a(-2) + b(0) = c. We can let a = 1, so (1)(-2) = c, or c = -2. To find b, the remaining coefficient, we look at the y-intercept, which occurs at y = 4. At the y-intercept x = 0, and since we've decided c = -2, we find 0 + b(4) = -2. This means b = -1/2, and the equation is x - (1/2)y = -2. To make things pretty, we can multiply both sides of the equation by 2 to get an equivalent equation with integer coefficients: 2x - y = -4.
Now for that makeover.
Standard Form Practice:
Graph the linear equation 3x + 4y = 5.
Graph the following linear equation: 3x – y = 7.
Graph the following linear equation: x + 2y = -5.
Graph the following linear equation: -3x + 2y = 8.
Determine the linear equation on the following graph:
Determine the linear equation on the following graph:
Determine the linear equation on the following graph:
{"url":"http://www.shmoop.com/functions/standard-form-help.html","timestamp":"2014-04-18T06:35:13Z","content_type":null,"content_length":"45485","record_id":"<urn:uuid:878eaf31-bce7-419b-90da-739f7902ba15>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Why can I not find the top point of this quadratic equation?
April 29th 2012, 07:27 AM
Why can I not find the top point of this quadratic equation?
Given the equation: $f(x)=1+x-x^2-x^3$
I want to find the x-value of the top point of this equation, which as a graph I've found to be $x=\frac{1}{3}$
However when I try to find this value, I don't end up with $\frac{1}{3}$
First I differentiate the equation:
$f'(x)=\frac{d}{dx}\left(1+x-x^2-x^3\right)=1-2\cdot x-3\cdot x^2$
Now I put the differentiated equation equal to 0 and isolate x:
$1-2\cdot x-3\cdot x^2=0$
Why am I not getting $x=\frac{1}{3}$? According to my graph, that is the correct value?

April 29th 2012, 07:38 AM
Re: Why can I not find the top point of this quadratic equation?
Given the equation: $f(x)=1+x-x^2-x^3$
I want to find the x-value of the top point of this equation, which as a graph I've found to be $x=\frac{1}{3}$
However when I try to find this value, I don't end up with $\frac{1}{3}$
First I differentiate the equation:
$f'(x)=\frac{d}{dx}\left(1+x-x^2-x^3\right)=1-2\cdot x-3\cdot x^2$
Now I put the differentiated equation equal to 0 and isolate x:
$1-2\cdot x-3\cdot x^2=0$
How did you do that? You can find out by substituting 1/3 for x
$=1-2\cdot (\frac{1}{3})-3\cdot (\frac{1}{3})^2$
$=1- \frac{2}{3}-3\cdot \frac{1}{9}$
$=1- \frac{2}{3}- \frac{3}{9}$
$=1- \frac{2}{3}- \frac{1}{3} = 0$
So x=1/3 is a solution of the quadratic equation.
By the way are you sure x=1/3 is the one and only solution?

April 29th 2012, 07:55 AM
Re: Why can I not find the top point of this quadratic equation?

April 29th 2012, 08:08 AM
Re: Why can I not find the top point of this quadratic equation?
Your quadratic factorises (1+x)(1-3x)=0
So x=1/3 or -1

April 29th 2012, 08:22 AM
Re: Why can I not find the top point of this quadratic equation?
There are many ways to get these solutions. One of them:
You need to solve $1-2x-3x^2 = 0$
We divide by -3
$-\frac{1}{3}+\frac{2}{3}x+x^2 = 0$
We add a zero
$-\frac{1}{3}+\frac{2}{3}x+(\frac{2}{3 \cdot 2})^2-(\frac{2}{3 \cdot 2})^2+x^2 = 0$
Some rearranging
$-\frac{1}{3}+\frac{2}{3}x+(\frac{2}{6})^2+x^2 = +(\frac{2}{6})^2$
$-\frac{1}{3}+\frac{2}{3}x+(\frac{2}{6})^2+x^2 = \frac{4}{36}$
$\frac{2}{3}x+(\frac{2}{6})^2+x^2 = \frac{4}{36} +\frac{1}{3}$
$\frac{2}{3}x+(\frac{1}{3})^2+x^2 = \frac{4}{36} +\frac{1}{3}$
As you know, $(a+b)^2 = a^2+2ab+b^2$ (do you know this theorem? Respectively, do you see how we find a and b?), so $x^2+\frac{2}{3}x+(\frac{1}{3})^2$ is the same as $(x+\frac{1}{3})^2$.
Using this in $\frac{2}{3}x+(\frac{1}{3})^2+x^2 = \frac{4}{36} +\frac{1}{3}$ leads us to
$(x+\frac{1}{3})^2 = \frac{4}{36} +\frac{1}{3}$
and so on...
$(x+\frac{1}{3})^2 = \frac{1}{9} +\frac{1}{3}$
$(x+\frac{1}{3})^2 = \frac{1}{9} +\frac{3}{9}$
$(x+\frac{1}{3})^2 = \frac{4}{9}$
$x+\frac{1}{3} = \sqrt{\frac{4}{9}}$
$x+\frac{1}{3} = \pm \frac{2}{3}$
$x = -\frac{1}{3} \pm \frac{2}{3}$
so -1 and 1/3 are the solutions you are looking for
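If you just want a quick independent check of the thread's answer, a short SymPy sketch (assuming the cubic written above; this is not part of the original thread) confirms it:

import sympy as sp

x = sp.symbols('x')
f = 1 + x - x**2 - x**3
critical_points = sp.solve(sp.diff(f, x), x)        # solve f'(x) = 1 - 2x - 3x^2 = 0
print(critical_points)                               # [-1, 1/3]
print(sp.diff(f, x, 2).subs(x, sp.Rational(1, 3)))   # -4, negative, so x = 1/3 is the top point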
{"url":"http://mathhelpforum.com/algebra/198096-why-can-i-not-find-top-point-quadratic-equation-print.html","timestamp":"2014-04-19T20:06:46Z","content_type":null,"content_length":"17106","record_id":"<urn:uuid:55be036a-6fc4-46dd-80fd-e5ec10cd5407>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector spaces
I wonder could anyone get me started on this?
U is the solution set to a system of linear equations Ax = 0, where
A =
[   3    4   61 ]
[ 112   -1   34 ]
[ 109   -5  -27 ]
Is the subset U of a vector space V a subspace of V?
Thanks a lot
{"url":"http://mathhelpforum.com/advanced-algebra/67892-vector-spaces.html","timestamp":"2014-04-16T05:54:52Z","content_type":null,"content_length":"36702","record_id":"<urn:uuid:58d4a1ba-b17a-4ca7-8ed9-cb0fd0e921ca>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Some help please
May 6th 2007, 08:50 AM #1
Apr 2007
Some help please
this is just practice problems that I am trying to get a hang of:
(a^-4 b^3 (a^7 b^5)) / (x^-9 y (x^9 y^3))
now the choices I have to choose from are:
a^3 b^8 y^3
a^3 b^8 xy^4
a^3 b^8 y^4
a^0 b^10 y^4
none of these
I need help in learning how to do this type of problem.. Thanks

this is just practice problems that I am trying to get a hang of:
(a^-4 b^3 (a^7 b^5)) / (x^-9 y (x^9 y^3))
now the choices I have to choose from are:
a^3 b^8 y^3
a^3 b^8 xy^4
a^3 b^8 y^4
a^0 b^10 y^4
none of these
I need help in learning how to do this type of problem.. Thanks

everything is being multiplied in the top and bottom right?
ok, here's how we deal with this. the following are laws of exponents that you should know (you should know more than these, but these are the ones you need for this problem).
1) when we multiply two numbers of the same base, we add the powers. that is, (x^n)*(x^m) = x^(n+m)
example: (x^2)*(x^3) = x^(2+3) = x^5
2) when we divide two numbers of the same base, we subtract the power of the lower one from the power of the top one, that is: (x^n)/(x^m) = x^(n-m)
example: (x^5)/(x^2) = x^(5-2) = x^3
don't worry if the top power is smaller than the bottom one, still subtract
example 2: (x^2)/(x^7) = x^(2-7) = x^-5
3) the last example should put a question into your head...well, not really, but yeah. How do we deal with negative powers? what does it mean to have a negative power? negative powers do not change the sign of a number. when we have a negative power, it means we take the inverse of the number; it means put 1 over the number with a positive power.
example: x^-n = 1/(x^n)
example 2: x^-5 = 1/(x^5)
example 3: (x/y)^-3 = (y/x)^3 = (y^3)/(x^3)
4) Anything (other than zero) raised to the zero power is 1
example: x^0 = 1, 35^0 = 1, 67394.678924^0 = 1
I think that's all we need to tackle this problem, so now let's see how qbkr21 got to his answer.
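Applying those four rules to the practice problem gives the following worked sketch; the fraction bars in the original post did not survive formatting, so the grouping into numerator and denominator is inferred from the replies:

\[
\frac{a^{-4}b^{3}\,(a^{7}b^{5})}{x^{-9}y\,(x^{9}y^{3})}
= \frac{a^{-4+7}\,b^{3+5}}{x^{-9+9}\,y^{1+3}}
= \frac{a^{3}b^{8}}{x^{0}y^{4}}
= \frac{a^{3}b^{8}}{y^{4}}.
\]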
{"url":"http://mathhelpforum.com/algebra/14621-some-help-please.html","timestamp":"2014-04-16T11:17:17Z","content_type":null,"content_length":"34427","record_id":"<urn:uuid:231b6a6b-91ca-4c89-b932-f02508735d55>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Led Shader Tutorial
So as you can see, this pixelation effect is very important. But our pixelation from LED Shader Tutorial - Part I was definitely not looking like this. Mainly, there were no black borders separating the pixel regions, but also significant is the lack of rounded edges on our pixel regions.
New Uniforms used in the fragment program:
• float "pixelRadius" - radius of the circle defining our pixel regions. This should have a range [0.0 - 1.0] for reasons you will understand later. Beginning at 0.5 the circles will begin to hit the edge of their pixel regions and they'll look more like rounded squares. At 1.0 you will have a square.
• float "tolerance" - a value used to determine the gradient from pixel region color to the black used to separate the regions. The higher you make this value the more blurry the edges of your circles will get.
I will now discuss two new devices we will need to implement these improvements. An equation for a circle and the GLSL function smoothstep.
Equation for a circle:
A circle is defined in Cartesian coordinates as (x - h)^2 + (y - k)^2 = r^2 where (h,k) is the center of the circle of radius r. As you may recall from LED Shader Tutorial - Part I we are mostly working in texture coordinates. The problem with this is that we have an elliptical coordinate system. Look at the figure below.
You may remember me mentioning in the previous tutorial that texCoordsStep will likely be different in the x direction than the y direction. This is because the values m and n in the figure above will be different. Using texture coordinates directly as our coordinate system results in an elliptical system. This is a problem. Imagine if we tried to use a base case + offset method similar to what we are doing for our sample points. So the center of our circle would be located at (inPixelStep.x + inPixelHalfStep.x, inPixelStep.y + inPixelHalfStep.y) and our radius wouldn't be constant! Without a constant radius we can't use the equation for a circle. So if we use this method we would have to use an equation for an ellipse: (x - h)^2 / a^2 + (y - k)^2 / b^2 = 1. This is much more computationally expensive and therefore undesirable. So we do something a little different. We add a variable:
• pixelRegionCoords - stores x and y coordinates in pixel region space. This might seem confusing. Another coordinate space to think about. It turns out it is not that complicated. All we are doing is taking the fraction left over by the division we perform to get our pixel region bin.
vec2 pixelRegionCoords = fract(gl_TexCoord[0].st/texCoordsStep);
Now we can use our circle equation in conjunction with these pixelRegionCoords to make our pixel regions rounded! Note that this pixel region space will range from 0.0 to 1.0.
Smoothstep:
This is a built-in GLSL function with the following specification: genType smoothstep(genType edge0, genType edge1, genType x) returns 0.0 if x <= edge0, returns 1.0 if x >= edge1, and otherwise performs smooth Hermite interpolation between 0.0 and 1.0 as x moves from edge0 to edge1.
So instead of assigning our pixel region color to all fragments in the circle and black to all fragments out of the circle, we use this function to get a blending of color from inside to outside.
So how do these two devices combine? Look at the following code:
vec2 powers = pow(abs(pixelRegionCoords - 0.5),vec2(2.0));
float radiusSqrd = pow(pixelRadius,2.0);
float gradient = smoothstep(radiusSqrd-tolerance, radiusSqrd+tolerance, powers.x+powers.y);
gl_FragColor = mix(avgColor, vec4(0.1,0.1,0.1,1.0), gradient);
First we compute (x - h)^2 and (y - k)^2 from our circle equation. Recall that (h,k) is the center of our circle.
Since we are operating in pixel region coordinates, the center of our pixel region is (0.5,0.5). We take the abs(pixelRegionCoords - 0.5) simply due to a quirk of the GLSL language. The pow(x,y) function is undefined for values of x < 0. We then compute r^2 from our circle equation and then we are ready for smoothstep. We smoothstep around our radiusSqrd value by using the tolerance variable described earlier. The higher this tolerance is, the larger a gradient we will get along the edge of our circle. We use our gradient value to do a linear blend between our pixel region color and a very dark gray. This turns out quite nicely.
Conceptual summary of shader steps for each fragment (The current fragment being processed is referred to in the first person):
• compute base case sample locations in texture coordinates
• find out which pixel region I am in and apply that offset to the base case to get my sample locations
• use texture coordinates I have computed to get 9 color values from my pixel region
• store the average of those color values
• determine where I am in relation to the circle defining my pixel region
• use tolerance uniform and my position to get a gradient coefficient
• I get assigned a blending of my pixel region color and dark gray based on the gradient coefficient
Go to LED Shader Tutorial - Part III for a look at improving this technique!
Advanced Topic Section
This section is not needed to understand the tutorial.
Possible speed enhancements?
It should be noted that there are some speed enhancements that can be made to this shader. If the current fragment being processed will result in a gradient coefficient = 1.0 (in other words, the fragment is completely outside of the circle and will have no pixel region color in it), we need not compute the pixel region color. This will rarely be the case, however, and it would make the code much less intuitive to read.
Another interesting enhancement would be to use some kind of texture as a means of determining the shape of each pixel region. Then you wouldn't be limited to the circle shape, you could easily do things with rounded rectangular regions, star formations (which are what some LEDs look like due to an arrangement of the LEDs into star-like clusters), etc.
The code (if you are going to use the code you should refer to the Source and Demo page):

void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

/*
* Shader by: Jason Gorski
* Email: jasejc 'at' aol.com
* CS594 University of Illinois at Chicago
* LED Shader Tutorial
* For more information about this shader view the tutorial page
* at http://www.lighthouse3d.com/opengl/ledshader or email me
*/

#define KERNEL_SIZE 9

uniform int pixelSize; //size of bigger "pixel regions".
                                    //these regions are forced to be square
uniform ivec2 billboardSize;        //dimensions in pixels of billboardTexture
uniform sampler2D billboardTexture; //texture to be applied to billboard quad

//uniforms added since billboard1

//a tolerance used to determine the amount of blurring
//along the edge of the circle defining our "pixel region"
uniform float tolerance;

//the radius of the circle that will be our "pixel region", values > 0.5 hit the edge of the "pixel region"
uniform float pixelRadius;

vec2 texCoords[KERNEL_SIZE]; //stores texture lookup offsets from a base case

void main(void)
{
	//will hold our averaged color from our sample points
	vec4 avgColor;

	//width of "pixel region" in texture coords
	vec2 texCoordsStep = 1.0/(vec2(float(billboardSize.x),float(billboardSize.y))/float(pixelSize));

	//x and y coordinates within "pixel region"
	vec2 pixelRegionCoords = fract(gl_TexCoord[0].st/texCoordsStep);

	//"pixel region" number counting away from base case
	vec2 pixelBin = floor(gl_TexCoord[0].st/texCoordsStep);

	//width of "pixel region" divided by 3 (for KERNEL_SIZE = 9, a 3x3 square)
	vec2 inPixelStep = texCoordsStep/3.0;
	vec2 inPixelHalfStep = inPixelStep/2.0;

	//use offset (pixelBin * texCoordsStep) from base case
	//(the lower left corner of billboard) to compute texCoords
	vec2 offset = pixelBin * texCoordsStep;
	texCoords[0] = vec2(inPixelHalfStep.x, inPixelStep.y*2.0 + inPixelHalfStep.y) + offset;
	texCoords[1] = vec2(inPixelStep.x + inPixelHalfStep.x, inPixelStep.y*2.0 + inPixelHalfStep.y) + offset;
	texCoords[2] = vec2(inPixelStep.x*2.0 + inPixelHalfStep.x, inPixelStep.y*2.0 + inPixelHalfStep.y) + offset;
	texCoords[3] = vec2(inPixelHalfStep.x, inPixelStep.y + inPixelHalfStep.y) + offset;
	texCoords[4] = vec2(inPixelStep.x + inPixelHalfStep.x, inPixelStep.y + inPixelHalfStep.y) + offset;
	texCoords[5] = vec2(inPixelStep.x*2.0 + inPixelHalfStep.x, inPixelStep.y + inPixelHalfStep.y) + offset;
	texCoords[6] = vec2(inPixelHalfStep.x, inPixelHalfStep.y) + offset;
	texCoords[7] = vec2(inPixelStep.x + inPixelHalfStep.x, inPixelHalfStep.y) + offset;
	texCoords[8] = vec2(inPixelStep.x*2.0 + inPixelHalfStep.x, inPixelHalfStep.y) + offset;

	//take average of 9 pixel samples
	avgColor = texture2D(billboardTexture, texCoords[0]) +
		texture2D(billboardTexture, texCoords[1]) +
		texture2D(billboardTexture, texCoords[2]) +
		texture2D(billboardTexture, texCoords[3]) +
		texture2D(billboardTexture, texCoords[4]) +
		texture2D(billboardTexture, texCoords[5]) +
		texture2D(billboardTexture, texCoords[6]) +
		texture2D(billboardTexture, texCoords[7]) +
		texture2D(billboardTexture, texCoords[8]);
	avgColor /= float(KERNEL_SIZE);

	//blend between fragments in the circle and out of the circle defining our "pixel region"
	//Equation of a circle: (x - h)^2 + (y - k)^2 = r^2
	vec2 powers = pow(abs(pixelRegionCoords - 0.5),vec2(2.0));
	float radiusSqrd = pow(pixelRadius,2.0);
	float gradient = smoothstep(radiusSqrd-tolerance, radiusSqrd+tolerance, powers.x+powers.y);
	gl_FragColor = mix(avgColor, vec4(0.1,0.1,0.1,1.0), gradient);
}
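If you want to sanity-check the per-fragment math without setting up an OpenGL context, the same fract/smoothstep/mix logic is easy to reproduce on the CPU. The following small Python sketch is not part of the original shader; the billboard size, pixelSize, pixelRadius, and tolerance values are arbitrary assumptions chosen just for illustration:

```python
# Reproduce the fragment shader's pixel-region math for a single texture coordinate.

def smoothstep(edge0, edge1, x):
    # GLSL smoothstep: clamp, then Hermite interpolation
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def led_gradient(s, t, billboard_w=512, billboard_h=256, pixel_size=16,
                 pixel_radius=0.4, tolerance=0.05):
    # width of a "pixel region" in texture coordinates (may differ in x and y)
    step_x = 1.0 / (billboard_w / pixel_size)
    step_y = 1.0 / (billboard_h / pixel_size)
    # fractional position inside the current pixel region, range [0, 1)
    region_x = (s / step_x) % 1.0
    region_y = (t / step_y) % 1.0
    # (x - 0.5)^2 + (y - 0.5)^2 compared against radius^2, blended by smoothstep
    dist_sq = (region_x - 0.5) ** 2 + (region_y - 0.5) ** 2
    radius_sq = pixel_radius ** 2
    return smoothstep(radius_sq - tolerance, radius_sq + tolerance, dist_sq)

step_x = 1.0 / (512 / 16)   # 0.03125
step_y = 1.0 / (256 / 16)   # 0.0625
mid  = led_gradient(10.5 * step_x, 4.5 * step_y)   # middle of a pixel region
edge = led_gradient(10.0 * step_x, 4.0 * step_y)   # corner shared by four regions
print(mid, edge)  # expect mid close to 0.0, edge close to 1.0
```

A fragment at the center of its region returns a gradient near 0.0 (pure pixel-region color), while a fragment at a region corner returns a value near 1.0 (the dark border), matching the behavior described above.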
{"url":"http://www.lighthouse3d.com/opengl/ledshader/index.php?page2","timestamp":"2014-04-17T12:29:46Z","content_type":null,"content_length":"22313","record_id":"<urn:uuid:9e206f00-2725-43bf-b46b-f5be8276a5ce>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Natasha Dobrinen Natasha Dobrinen's Research Research Interests My research interests mainly fall under the broad category of Logic and Foundations of Mathematics. I do research in Set Theory, Ramsey Theory, Boolean Algebras, and Measure Theory. I have investigated relationships between random reals, eventually dominating functions, measure, generalized weak distributive laws, infinitary two-player games, and complete embeddings of the Cohen algebra into complete Boolean algebras. Currently, I am working on problems in Ramsey theory, problems regarding the structure of the Tukey types of ultrafilters, and problems involving both. National Science Foundation Grant DMS-1301665. "Ramsey Theory, Set Theory, and Tukey Order." September 1,2013 - August 31, 2016. $114,368. Support for Travel and Living Expenses to participate in the Fields Institute Thematic Program in Forcing and Its Applications, National Science Foundation, July 2012 - December 2012. $5000. Simons Foundation Collaboration Grant #245286. "Classification of Tukey types of ultrafilters." September 1, 2012 - August 31, 2017. $35,000. (Terminated September 1, 2013 due to NSF grant.) Association for Women in Mathematics/National Science Foundation Mentoring Travel Grant. November/December 2010 (Paris). $2800. University of Denver Faculty Research Fund grant. "Subtle Measurements on Large Spaces." July/August 2010 (Paris). $3000. Natasha Dobrinen. "Generalized weak distributive laws in Boolean algebras and issues related to a problem of von Neumann" thesis.pdf Natasha Dobrinen. "Games and general distributive laws in Boolean algebras," Proc. Amer. Math. Soc. 131 (2003) 309-318. Distributive_laws.pdf Natasha Dobrinen. "Errata to `Games and general distributive laws in Boolean algebras'," Proc. Amer. Math. Soc. 131 (2003) 2967-2968. Distributive_laws_errata Natasha Dobrinen. "Complete embeddings of the Cohen algebra into three families of c.c.c., non-measurable Boolean algebras," Pacific Jour. Math. 214(2) (2004) 201-222. Cohen_algebras.pdf Natasha Dobrinen and Stephen G. Simpson. "Almost everywhere domination," Jour. of Symbolic Logic 69(3) (2004) 914-922. Almost everywhere domination.pdf Natasha Dobrinen and Sy-David Friedman. "Co-stationarity of the ground model," Jour. Symbolic Logic 71(3) (2006) 1029-1043. co-stationarity.pdf James Cummings and Natasha Dobrinen. "The hyper-weak distributive law and a related game in Boolean algebras," Annals of Pure and Applied Logic 149 (2007), no. 1-3, 14--24 hyper-weak.pdf. Natasha Dobrinen. "More ubiquitous undetermined games and other results on uncountable length games in Boolean algebras," Note di Matematica 27 (2007), suppl. 1, 65--83. unctbl_games.pdf. Natasha Dobrinen. "Co-stationarity of the ground model and new $\omega$-seqences," Proc. Amer. Math. Soc. 136 (2008), no.5, 1815--1821. new_omega_seq.pdf. Natasha Dobrinen. "$\kappa$-stationary subsets of $\mathcal{P}_{\kappa^+}\lambda$, infinitary games, and distributive laws in Boolean algebras," Journal of Symbolic Logic 73 (2008), no. 1, 238--260. Natasha Dobrinen and Sy-David Friedman. "Internal consistency and co-stationarity of the ground model," Journal of Symbolic Logic 73 (2008), no. 2, 512--521. Internal Consistency Costationarity.pdf. Natasha Dobrinen and Sy-DavidFriedman. "Homogeneous iteration and measure one covering relative to HOD," Archive for Mathematical Logic 47 (2008), no. 7-8, 711--718. Homogeneous Iterations Natasha Dobrinen and Sy-David Friedman. 
"The consistency strength of the tree property at the double successor of a measurable," Fundamenta Mathematicae 208 (2010), 123--153. treeproperty.pdf. Natasha Dobrinen and Stevo Todorcevic. "Tukey types of ultrafilters," Illinois Journal of Mathematics 55(3) (2011), 907--951. (This paper now has an official publication date of 2011, even though it is appearing in 2013 due to the journal's backlog.) Here is a revised and improved version: tukey.pdf. Natasha Dobrinen. "Continuous cofinal maps on ultrafilters," (2010) 23 pp, submitted. continuous_maps.pdf Natasha Dobrinen and Stevo Todorcevic. "A new class of Ramsey-Classification Theorems and their applications in the Tukey theory of ultrafilters, Part 1," Transactions of the American Mathematical Society, 26 pp, to appear. A Ramsey Classification Theorem Natasha Dobrinen and Stevo Todorcevic. "A new class of Ramsey-classification Theorems and their applications in the Tukey theory of ultrafilters, Part 2," Transactions of the American Mathematical Society, 34 pp, to appear. Ramsey-Classification Theorems "Natasha Dobrinen and Stevo Todorcevic. "A new class of Ramsey-classification Theorems and their applications in the Tukey theory of ultrafilters, Parts 1 and 2," (Extended Abstract from poster at the Erdos Centennary conference, Budapest, July 2013), Electronic Notes in Discrete Mathematics, 43 (2013) 107--112. Andreas Blass, Natasha Dobrinen, and Dilip Raghavan. "The next best thing to a p-point," (2013) 35 pp, submitted. Natasha Dobrinen. "Survey on the Tukey theory of ultrafilters," Mathematical Institutes of the Serbian Academy of Sciences," (2013) 29 pp, submitted. Natasha Dobrinen. "The Abstract Nash-Williams Theorem and applications to initial structures in the Tukey types of non-p-points satisfying weak partition properties," preprint. Natasha Dobrinen, Jose Mijares, and Timothy Trujillo. "A general framework for topological Ramsey spaces, Ramsey-classification theorems, and initial structures in the Tukey types of p-points, Part 1," in preparation. Please note: If there are any problems with the links to papers, the latter ones may be found at the Mathematics Department Preprint Series http://www.du.edu/nsm/departments/mathematics/research/
{"url":"http://web.cs.du.edu/dobrinen/research.html","timestamp":"2014-04-16T10:11:14Z","content_type":null,"content_length":"9414","record_id":"<urn:uuid:a2c02549-425e-48d2-afff-879b804c1eb1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Burnham, IL Precalculus Tutor Find a Burnham, IL Precalculus Tutor ...If you are considering applying for Chicago’s selective enrollment high schools or are already engaged in the admissions process, I am happy to assist you with any and every aspect of this endeavor. I went through the process myself, successfully gaining admission to Northside College Prep, and ... 38 Subjects: including precalculus, Spanish, reading, statistics ...After high school, I continued on and played tennis for my college, Rose-Hulman Institute of Technology, a division 3 school. During the four season with the team, I played both singles and doubles at every match. By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. 13 Subjects: including precalculus, chemistry, algebra 1, algebra 2 Do you need some help with your math homework? I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. 26 Subjects: including precalculus, Spanish, geometry, chemistry ...I am currently a student at Purdue University Calumet, and I am majoring in math education. One day I plan on being a high school math teacher. I specifically I want to tutor in math for elementary, middle, or high school students. 9 Subjects: including precalculus, calculus, vocabulary, phonics ...I taught Web Design for three years in my current position as a Mathematics Teacher. I have experience with HTML, XHTML, JavaScript, PHP, MySQL, Cascading Style Sheets, and Search Engine Optimization. I began learning PHP several years ago in an effort to write web pages for processing web-based form submissions. 14 Subjects: including precalculus, calculus, algebra 2, algebra 1
{"url":"http://www.purplemath.com/Burnham_IL_Precalculus_tutors.php","timestamp":"2014-04-21T13:11:17Z","content_type":null,"content_length":"24087","record_id":"<urn:uuid:7ecfb0c2-a1e7-470d-aa6d-5a4b70824f24>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
3Ms for Instruction: Reviews of Maple, Mathematica, and Matlab MAY/JUNE 2005 (Vol. 7, No. 3) pp. 7-13 1521-9615/05/$31.00 © 2005 IEEE Published by the IEEE Computer Society 3Ms for Instruction: Reviews of Maple, Mathematica, and Matlab Article Contents Undergraduate Education Proven Applications Paradigmatic Applications Download Citation Download Content DOWNLOAD PDF Most CiSE readers have probably used Maple, Mathematica, or Matlab for several years. With this review series, our goal is to help you now decide whether one of the others is better suited to your temperament and current practice than your original choice. For those of you new to integrative computing packages, our goal is to enable you to make an informed first choice. In this installment, we begin to examine how these tools serve the professional work of undergraduate education. Within this context, we'd like to raise several significant issues for those teaching undergraduates to be scientists and engineers. We point to some exemplary materials and offer our own paradigms for major educational uses, which provide a framework for discussing the packages and drawing some implications for those issues in a concluding installment. In subsequent issues, we'll explore how the tools serve scientific and engineering research and communication. We begin with the premise that science and engineering undergraduates should have experience in using modern computational tools. Indeed, this is already an explicit criterion for engineering schools' curricula in the US as prescribed by the Accreditation Board for Engineering and Technology (ABET; www.abet.org/criteria.html). In this article, we examine the extent to which these tool packages so qualify: What kinds of computational experiences with them are appropriate for undergraduate students? We're aware of the multiple goals that educational uses of computing technology must serve, as well as the challenge they present to a fair evaluation of computing software. Foremost in our minds as instructors experienced in the design of electronic instructional materials is the importance of appearance, simplicity, and user-interface functionality to the success of such materials. Yet, there are several types of user interfaces that connect users to different computing tasks according to different educational goals. This begs several questions: What are some major educational goals for science and engineering undergraduates? How are specific computing tasks related to those goals? How does each of the three productivity packages realize the required computations? Undergraduates have a variety of learning styles and abilities, and they must simultaneously master material while learning how to learn. Ease of use in the packages' user interfaces as well as their adaptability to the variety of interactive mechanisms used in educational applications are key issues. Keep in mind, however, that the way and degree to which these are important depends on who the students are as well as the goals of the applications. College and university instructors must be judicious in the type and intensity of development projects they undertake in creating educational materials, with respect to both the time and resources they dedicate. How well do these tool packages serve for materials-development work that faculty will likely perform alone? How efficient are they when fast response times are required for modifications? How expensive are they to purchase and, equally important, maintain? 
Our approach to this review series is to describe the functions, features, and other elements these packages support and allow you to judge their value based on your values and objectives. To do this, we depart from making lists of features devoid of use-contexts, instead setting contexts in a variety of examples, both real and idealized. We've principally drawn the real examples from each company's Web site, but each is implemented in only one of the three packages. By examining explicit application software, albeit developed for an educational purpose that might not match your own, we hope to present examples that help you envision how applications you create could work in each package. From our perspective, these examples provide a concrete feature set to which we can refer when discussing how each package would implement an idealized example. We define each example as a paradigmatic application directed to one of the following educational roles for computational productivity packages in science and engineering undergraduate education: content tutorials, simulations, and computational programming. Maplesoft, Wolfram Research, and MathWorks all show evidence of wanting to maintain a position in the educational workplace. This is true in spite of the companies' self-histories in which only Maple claims to have had an academic birth and, as reported in the introductory article to this review series ( CiSE, Jan./Feb. 2005, pp. 8–16), has what most feels like an academic "personality." All three have an extensive number of exemplary educational applications available online: • Maple ( www.maplesoft.com/academic/teaching/index.aspx) • Mathematica ( http://library.wolfram.com/infocenter/Courseware/) • Matlab ( www.mathworks.com/academia/faculty_center/curriculum/) These examples are resources for instructors to borrow as models or for students to use to supplement their learning opportunities on their own. The significance of these examples goes beyond evidence of the companies' commitment to education and is broader than their use as classroom materials. They also illustrate a great deal about each package's range of computing power, and new developers can learn how to harness that power by examining the code. If you're already using one package, you might also find it valuable to "test drive" applications built with the others. In evaluating the packages' look, feel, and capabilities through these examples, we didn't feel a need to select the same type of examples from each. Rather, we chose examples that demonstrated interesting features, strengths, peculiar characteristics, and so on. For simplicity, we present the examples in alphabetical order by company name. We selected precalculus and calculus tutorial applications because they reflect Maple's mathematical character, and a rather extensive range of such lessons are available on the Maple site for free download. Many educators involved in the New Calculus movement have adopted Maple for implementing their material, which means that using it inducts you into a community defined by its novel approach to teaching calculus, gives you access to comprehensive materials, and provides an opportunity for a major overhaul of your mathematics curriculum. The Precalculus Study Guide is an exercise-based electronic textbook developed by Maplesoft and available for sale on the company's Web site. The educational material in the lessons fall into a category that we'll call "tutorials." 
They don't stretch Maple's computational power, but they could conceivably be used for remedial work in colleges. Among the subjects this study guide covers are graphing of lines, polynomials, and rational functions; roots and rational powers; and transcendental and piecewise-defined functions. The free downloads are less elegant versions of the textbook lessons, but they nicely illustrate the standard Maple worksheet format. This permits interleaving comments and questions (displayed in developer-formatted text) with Maple expressions and functions (displayed in system-formatted text and shown, following command prompts [>], in red in Figure 1 The latest version of the Precalculus Study Guide includes 20 new tutorials, each using a GUI (Maplet) with buttons, input windows, graphical output windows, and so on. A GUI significantly alters the study guide's style, making it more congenial to exploration, but it lacks the more extensive text narrative possible with a worksheet because a GUI's advantages are diminished if the user must scroll to navigate it. Notice the complete absence of explanatory text in the window in Figure 2 Two of the educational issues we'll develop in this review are the costs and benefits of several material-delivery methods, including worksheets and GUIs. Distributing material through GUIs and worksheets is one way to take advantage of both. The Maple worksheet interface provides the user with immediate access to the Maple code and allows the user to change the code and explore additional options. The code's accessibility also provides a way to learn how to produce worksheets. However, the code for the Maplet tutorial lessons is one level removed from the user. Thus, the GUI protects the Maplet code from students' casual tampering (although it's fairly easy to access the code by moving to the Maple worksheet that generated the Maplet GUI). The Calculus Study Guide, also available for purchase on the Maple site, is an electronic book to help those students taking their first course in calculus. The guide contains 31 Maple worksheets that give extensive coverage to five major subjects—limits, derivatives, application of derivatives, integrals, and applications of integrals—and 17 Maplet tutors that use the same GUI style as the Precalculus Study Guide. Other Maple course materials (including multivariable calculus, differential equations, partial differential equations, complex analysis, and matrix algebra) are also available for free download. Maplet tutors (for example, the Calculus 1 step-by-step differentiation problem solver) are useful as stand-alone student practice modules or for classroom demonstrations. An educational advantage of modularity in the packages is the ability for quick and responsive personalization of learning materials. Mathematica has its own calculus tutor package, but we were prompted by the package's formalistic mathematical character and Wolfram's background as a physicist to select some physics applications. The company's Web site lists more than 750 Mathematica-related physics references (articles, books, demos, and courseware) available for download, so our application selection was somewhat arbitrary, but we did narrow the choices by eliminating examples written in earlier versions of Mathematica. In the end, we chose two examples from the quantum physics section: particle in a box and hydrogen These examples are part of a category of materials we call "visualizations." 
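For readers who want to see what such a visualization computes under the hood, the standard textbook solution for the infinite square well is easy to reproduce outside of Mathematica. The following Python sketch is ours, not part of the reviewed courseware; the well width and particle mass are assumed values chosen only for illustration:

```python
# Energies and wavefunctions of a particle in a 1-D infinite square well:
#   E_n = n^2 * pi^2 * hbar^2 / (2 m L^2),   psi_n(x) = sqrt(2/L) * sin(n pi x / L)
import numpy as np

HBAR = 1.054571817e-34      # J*s
M_E  = 9.1093837015e-31     # kg (electron mass, assumed particle)
L    = 1.0e-9               # m  (assumed 1 nm well width)

def energy(n, m=M_E, width=L):
    return (n**2 * np.pi**2 * HBAR**2) / (2.0 * m * width**2)

def psi(n, x, width=L):
    return np.sqrt(2.0 / width) * np.sin(n * np.pi * x / width)

x = np.linspace(0.0, L, 200)
for n in (1, 2, 3):
    e_ev = energy(n) / 1.602176634e-19
    print(f"n={n}: E = {e_ev:.3f} eV, max |psi|^2 = {np.max(psi(n, x)**2):.3e}")
```

This is essentially the calculation a student explores interactively in the packaged GUI, with the quantum number and well parameters as the user-specified values.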
They don't use much of Mathematica's computational power, but they demonstrate an important component of all the packages: rendering of results. Moreover, the contents of these examples are standard topics in most college physics and quantum mechanics courses. Figure 3 shows the GUIs for these two visualizations. Each example calculates and plots the respective quantum mechanical solutions to the infinite-square potential well problem and the hydrogen atom from user-specified values. The examples include text explanations using standard mathematical notation for the equations together with output-graphing windows. The Mathematica site also includes other interesting visualization examples. The optics example, for instance, displays a detailed solution to Maxwell's equations for electromagnetic waves sustained in a homogeneous and isotropic dielectric medium, expressing this solution in both Cartesian and spherical coordinates. This example also shows the solution as an animated wave traveling through this medium, bringing out some subtle features of traveling waves that really require animation to be apparent to the novice. This capability thus has high value in lessons for first-time quantum mechanics students. These lessons incorporate the recently developed GUI with its Web-based method of delivery—WebMathematica. (We briefly discuss alternative delivery systems for all three packages, emphasizing Web delivery, at the end of this section.) We first looked for engineering-like applications, such as systems operations, to test Matlab, which has been widely adopted by the professional engineering community. However, during our initial Google search of existing Matlab examples, we found an even better choice in Erik Cheever's Visualizing Phasors (the Matlab file is available for download directly from a Swarthmore College Web site). We selected this application for the Matlab example because it serves to mediate a particularly thorny learning task—understanding phasors—but also because our paradigmatic simulation example will be an exercise based on the behavior of AC electrical circuits analyzed using phasors. Visualizing Phasors is a tool for depicting the relations between phasor and time graph representations of sinusoids. The user selects input parameters (voltage, impedance, and frequency) and then observes the corresponding time-dependent graphs of current and voltage (see Figure 4).
For the lab experiment, the instructor furnishes students with a Matlab (M-code) file as a program for simulating the experiment they will perform. Students write their own Matlab programs to reduce their experimental data, and then use the M-code simulation to test their own computational algorithms. This application thus serves the dual educational roles of simulation and computational programming. The M-code and lab instructions are available for download at www.computer.org/cise/techreviews/. Application Delivery Except for the Mathematica Optics example, users would normally need to have the application package to execute the cases we've described here. However, Mathematica's Web-based delivery system lets us use the applications without buying the package—albeit with restricted ability to explore the code and no ability to modify it. Maple and Matlab have their own Web-based delivery systems, which are similar to Mathematica's though different in functionality. The Matlab Web Server lets developers deploy Web-based Matlab applications. The user sends data from a Web browser to the Matlab application running on a server for computation, and the server returns results for display by the user's Web browser. Maple TA is a Web-based system designed for creating tests, assignments, and exercises; it automatically assesses student responses and performance—a useful feature for educational users. Web-based delivery is a relatively new feature in all three packages. It's especially useful for educational applications, such as tutorials and simulations, because it favors instructional developers who value borrowing and swapping materials freely. Moreover, it gives a more consistent, although constrained, look to user interfaces across packages and obviates students' need to have the tool packages to operate the applications. We'll discuss some of the differences between—and implications of—these Web-based interfaces more fully in the next installment. Paradigmatic Applications Earlier, we singled out three roles for educational applications. To compare common application examples that compare how the packages work in educational settings, we'll invent one paradigm for each. Given our limited space, we'll develop the simulation role in considerably more depth than the tutorials and computations. The simulation example will be built around a lab-based scenario and will have features that spill over into the other two roles. It will serve as the baseline case. We start with a definition of educational objectives: • Enable students to collaborate in small groups as design and development teams. • Serve as a medium for laboratory teaching assistants to interact with students in conducting "just in time" learning experiences. • Provide properly motivated students with the opportunity for open-ended experimentation beyond the exercise's immediate requirements. We then designed a paradigmatic exercise based on these goals. The flowchart in Figure 5 outlines such an exercise based on our objectives, specifications, and assumptions. This baseline case uses features that instructional applications generally need most. Our choices were animated visualizations, GUIs, text using standard mathematical notation, and interactive graphical output. With these features in mind, we chose to examine the behavior of AC electrical circuits, analyzed via phasors. The application should support the lab exercise by providing a prelab demonstration, in-lab design and simulation, and post-lab analysis and homework. 
The application should visually display passive elements that students can arrange in circuits of their choice. The application should simulate the circuit's current/voltage behavior under a choice of applied AC voltages. This is similar to the popular SPICE (simulation program with integrated circuit emphasis) technology, which encodes a circuit-performance simulation directly from circuits designed in the form of a schematic. In addition, the application should provide the capability for graphical comparison of the results from the computer's simulation to data obtained by students from measurements on the physical circuits based on the design. Finally, it should have at least one Web-based implementation so that students can conduct it remotely from dorm rooms or public computer labs, as well as locally in the electricity lab. We assume that, while in the lab, students will work collaboratively in groups and have access to a teaching assistant for help and advice. These assumptions favor graphics rather than text because graphical displays can, in the first instance, mediate group discussion more effectively. It is easier for several people to simultaneously look at and discuss graphical objects rather than text. In the second instance, graphical controls facilitate intervention by laboratory teaching assistants, who usually work over the shoulders of the group. In the tutorial paradigm, we had a similar, but separate, set of educational objectives: • Enable students to work individually. • Serve as a medium for interactive learning. • Provide properly motivated students with the opportunity for open-ended experimentation beyond the exercise's immediate requirements. Here, individual activity replaces group work, and the interaction is between an individual learner and the organized material rather than between a group and a set of laboratory tools. In contrast to the baseline case, this situation favors easy readability and structural transparency of textual information. In consort with the baseline case, interactivity and open-endedness require embedded active objects, such as modifiable command statements and data containers. With tutorials, our experience shows that the principal elements most needed are readable text using standard mathematical notation and interactive graphical output. In addition, sequential segments of the material should be organized by a clustering mechanism that lets the user alternatively "drill down" (open) and "reprise" (collapse) those segments, thus overlaying coarse-grained overviews on fine-grained details. The New User Guides for these three packages are good paradigms for tutorials. They conveniently furnish us with three realizations of the ideal and obviate the need for providing a set of detailed specifications. We can then simply evaluate the three package tutorials on how well they support our educational objectives. Computational Programming Proceeding to the last case, we developed the following educational objectives for the computational programming paradigm: • Enable students to work individually or in groups. • Serve as a medium for interactive learning of how to design, implement, and test computational algorithms. • Provide properly motivated students with the opportunity for project-type experimentation—for example, collecting algorithms or adding appropriate input and output mechanisms—to produce comprehensive applications. 
Here, we must accommodate both individual and group work with multiple interactions—among students, among students and programming tools, and among students and numerical analysis methods and algorithms. In contrast to the baseline case, this situation favors a congenial code development environment, a full and flexible programming "language," and the existence of and access to numerical analysis information and tools. This case also demands ease of transport for code segments and modules among students and with the broader user community. In computational programming, a paradigm is actually a complete program development environment. In this case, we conveniently have three realizations of this ideal—namely, the development environments for Maple, Mathematica, and Matlab, themselves. Again, we can simply evaluate these package environments on how they support our educational objectives. This review project has grown in size beyond our original estimate. Happily, this means that we've uncovered lots of interesting material. Unfortunately, it also means that we had to split the print edition into two installments. In the July/August issue of CiSE, we'll continue this discussion by examining the inner workings of these tool packages. We'll describe the tools' flexibility, facility, and accessibility from the instructor-developer's standpoint. We'll also discuss the experiences of instructor-users, referring to specific parts of the paradigmatic applications described here. This should provide a comparison of the effort involved and the outcomes achievable using each of the three packages for common end applications. We hope these observations and conclusions will carry some implications about the cost and benefits of using each package in a variety of educational contexts. Researchers and developers, remember that we'll be continuing this series in September/October with a review of the packages from the perspective of scientists and engineers at work. Also, keep in mind that the project of constructing this review series is an experiment in determining what is useful to you, our readers, for use in your own work. As we promised earlier, CiSE will soon be soliciting your feedback on the new Tech Review format through an online "usability" evaluation—part of the magazine's renewed effort to evolve along with those whom we serve. We look forward to your cooperation in this effort. Norman Chonacky , most recently a senior research scientist in environmental engineering at Columbia, is currently a research fellow in the Center for UN Studies at Yale. Chonacky received a PhD in physics from the University of Wisconsin, Madison. He is a member of the American Association of Physics Teachers (AAPT), the American Physical Society (APS), the IEEE Computer Society, the American Society for Engineering Education (ASEE), and the American Association for the Advancement of Science (AAAS). Contact him at cise-editor@aip.org. David Winch is emeritus professor of physics at Kalamazoo College. His research interests are focused on educational technologies. Winch received a PhD in physics from Clarkson University. He is coauthor of Physics: Cinema Classics (ZTEK, videodisc/CD 1993, DVD/CD 2004). Contact him at dmwinch@kitcarson.net.
{"url":"http://www.computer.org/csdl/mags/cs/2005/03/c3007.html","timestamp":"2014-04-23T22:57:03Z","content_type":null,"content_length":"71202","record_id":"<urn:uuid:8ce34034-a6cd-4490-835d-dfb56f45b38e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Smellyforearmandfeet's Blog ABOUT US We are a group of students from Occupational therapy in Nanyang Polytechnic! Hope you enjoy our research presentation:) INTRODUCTION This site is created to blog about our Statistics project on: Is there a relationship between the length of our foot and the length of our forearm? There has always been a belief about how different parts of our body proportions are similar to each other like the circumference of our neck is equivalent to waist size or the length of our foot is equivalent to the length of our forearm. Goals for research: To prove any relationship that maybe present between the variables To understand the process of gathering data and analysing relationship between valuables To reflect on the research process, evaluate and explore room for improvements in future data gathering processes. Importance of this finding: 1. To find a suitable feet size for a prosthetic leg that looks proportionate to the body in cases of bilateral below knee-amputation(BKA). 2. We will be able to see if our forearm length for shoe size testing when going shopping! If it is true, then no more trying for shoes outside! Our Hypothesis: H[1]: There is a positive relationship between the length of the foot and the length of the forearm H[0] : There is no significant relationship between the length of the foot and the length of the forearm We’ll be collecting data on these “vital statistics” from friends around our cohort, analyse them and come up with a conclusion. We hope that the data and the analysis gathered at the end of this project will let us see how true this is. Leonardo di ser Piero da Vinci was a famous Renaissance Italian man who is celebrated for his unquenchable curiosity which was equalled only by his invention and painting. Some of the world renowned paintings such as the ‘Mona Lisa’ and ‘The Vitruvian Man’ are illustrious even till today. During the 1480s, when he was working on the equestrian monument to Francesco Sforza, Leonardo also embarked on the first time on extensive groups of studies on the proportion of the human body, on anatomy and physiology. Thus in April 1489 he began a book with the title “On the human figure”. During the project, he made systematic studies of two young men. After what must have been months of taking measurements, as he was doing at almost exactly the same time with the horses belonging to his patron Ludovico il Moro, he arrived at a systematic overview of human proportions, at which point he then started to look at the proportion of sitting and kneeling figures. He then compared the results of his anthropometric studies – taking human measurements – with the only surviving theory of proportions from Antiquity, namely the Vitruvian man. Vitruvius, a moderately successful architect and engineer during the days of the Roman Empire had written a treatise on architect which included in its third book a description of the complete measurements of the human body. These led him to conclude that a man with legs and arms outstretched would fit into the square and the circle, perfect geometric figures. Vitruvius mentioned a formula of body proportion as fashioned by nature in his book; de Architectura book III chapter three. He proposed the use of this formula in the construction of the sacred temple of the immortal gods to truly call it a well-designed, symmetrical, proportional beauty of a building (Pollio, n.d., p30). 
With the formula described by Vitruvius, da Vinci produced the Vitruvian Man which clearly illustrates the body proportion of different parts of the body to one another. With the Vitruvian Man, da Vinci came up with a “golden proportion”, a relationship between two ratios which was expressed mathematically as 1: 1.61803, commonly known to the Greek as phi (Place, 2000). A comparison of two body part proportion ratio also mentioned by Place, which will be the focus of our research is: The length of foot is the same as the length of forearm, demonstrating a golden relationship between the hand and the foot (Place, 2000). Interestingly, other literature review also mentioned this hypothesis. In the article “The King’s Foot”, dewitt mentioned the evolution of measurement based on correspondences to various parts of the body, usually the King’s body. He said: “One apocryphal but likely account of the origins of the foot is that it was the distance from the king’s elbow to his wrist–whoever this king may have been.” (n.d.). METHODOLOGY This is how we carried out our research! Sample population: 30 randomly chosen subjects from the occupational therapy cohort (Cluster sampling) Variables: Dependent – forearm length (cm) Independent – feet length (cm) Measuring instruments/ tools: • Measuring tape to measure foot and forearm (x4) • Masking tape to secure the measuring tape to the floor(x1) • Chairs for subjects to sit when measuring forearm (x2) • Thick books for accurate measurement (x2) • List of subjects that we are to measure (x2) How did we carry out measurements? In order to collect data for the most accurate length, instructions for measurements must be standardized among the data collectors Factors to ensure validity, reliability and consistency of data • Start and end point of measurements were standardised among data collectors • Position of subjects when measurements were taken: □ All forearm measurement done in sitting □ Feet measurements done standing □ Subjects were required to remove all accessories from their arm for accurate data collection □ Time of the day- to cater to the expansion of the feet throughout the day ☆ All data was collected on one occasion and at the same time of the day ☆ Tools were standardised as well: ○ We compared the tape measures to make sure there are no discrepancies in the units and also to start from the 10cm mark to increase inter-rater reliability It has to be taken into account that the measurements may vary across the 4 data collectors. Thus, prior to the data collection, data collectors did a “trial and error” to maximize inter-rater reliability. We assigned 4 data collectors for measurement, 2 for forearm length and 2 for feet length. The data collectors collected data for left and right sides of the forearm and foot. 2 other data collectors were assigned to the forearm and feet sections respectively to help with data recording. Data collected for foot and hand were then collated by 2 individuals respectively. Figure 1 below shows the set up of the station for data collection. Figure 2 below shows the headings for the record sheet that we used for data collection To increase accuracy, we took the average of the 2 readings per side of the forearm and feet. The tedious process of our group members… Figure 4. 
shows the variable list. Variables are defined as:
Sex: 0 = male, 1 = female (nominal)
Age: (scale)
Left forearm: first and second reading, average reading (scale)
Right forearm: first and second reading, average reading (scale)
Left foot: first and second reading, average reading (scale)
Right foot: first and second reading, average reading (scale)

Therefore, in order to find out whether the length of our forearm is the same as our foot, our group used Pearson's correlation coefficient as a symmetric measure of association for interval-level variables.

Assumptions of Pearson's r:
• Assumption 1 - All observations must be independent of each other
• Assumption 2 - The dependent variable should be normally distributed at each value of the independent variable
• Assumption 3 - The dependent variable should have the same variability at each value of the independent variable
• Assumption 4 - The relationship between the dependent and independent variables should be linear

In our case, Assumption 1 is fulfilled because the samples were taken independently and randomly. Assumption 2 needs to be verified by doing a QQ plot. From the QQ plots, it can be seen that they are all normal and do not violate this assumption; most of the points are close to the best-fit line. For Assumptions 3 and 4, scatterplots are used to confirm homogeneous variance and a linear relationship between right forearm and right foot, and between left forearm and left foot.

From the scatter plot, the data appear to follow a general positive linear trend. From table 1, a Pearson's correlation coefficient of 0.749 indicates a moderately strong relationship between the length of the left forearm and foot. There is thus a positive, moderate, and significant association between left forearm and foot (r = 0.749, p < 0.05, N = 30). From the scatter plot, the data again appear to follow a general positive linear trend. From table 2, a Pearson's correlation coefficient of 0.778 indicates a moderately strong relationship between the length of the right forearm and foot. There is thus a positive, moderate, and significant association between right forearm and foot (r = 0.778, p < 0.05, N = 30).

The stem and leaf plot is useful as it provides a summary of the data and also shows extra details, in this case gender, and also shows the outliers clearly if needed. From the stem and leaf plots, some outliers may be identified in the males' data for the left forearm. This might be because of their proportion to their height or other genetic factors. The graph for the males also tends to have a higher but narrower range of values as compared to the females' graph. This is due to the fact that the males tend to be bigger in size and thus their forearms and feet are bigger too. Our group decided to see whether the distinctions still exist if we plot the data on a scatter plot with markers. From the scatter plots, there is a positive linear trend regardless of gender, thus there is still a linear relationship between the length of the forearm and foot. It has to be taken into consideration that our group only took a random sample of 30, thus a more accurate analysis to prove our hypothesis should include more subjects.

From our findings, we conclude that there is a positive relationship between the length of the forearm and foot, and thus we can accept H1 and reject H0.
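For readers who want to reproduce this kind of analysis outside of SPSS, the same correlation and significance test can be run in a few lines. The arrays below are placeholders, not our actual 30 measurements, so this is only a sketch of the procedure:

```python
# Pearson's r between averaged forearm and foot lengths (values are illustrative only)
from scipy import stats

right_forearm = [24.1, 25.3, 23.8, 26.0, 24.7, 25.9, 23.5, 26.4]   # cm, hypothetical
right_foot    = [23.9, 25.0, 23.2, 25.8, 24.1, 25.5, 23.0, 26.1]   # cm, hypothetical

r, p = stats.pearsonr(right_forearm, right_foot)
print(f"r = {r:.3f}, p = {p:.4f}")

# Decision rule used in the project: reject H0 (no association) when p < 0.05
if p < 0.05 and r > 0:
    print("Significant positive association; accept H1.")
else:
    print("No significant positive association; retain H0.")
```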
Even though there is a positive relationship between the two variables, it does not show that the forearm length is equal to the foot length; thus it can only serve as a good gauge when it comes to measurements of shoe sizes and prosthetics for patients.

Our group felt that there are some areas that we could improve on. However, due to limitations in time and resources, we were not able to maximise the potential of our research findings. Areas of improvement include:
1. A larger sample group for a better representation of data
2. Substituting standardised electronic measuring instruments in place of subjective manual measuring instruments.
Further research can be done in areas such as applying the results of this statistical analysis to clinical practice.

Benjamin's reflection: The statistics module gave me a better understanding of the process of research and acts as a step up from the research methodology module we undertook in year one. It is definitely very helpful to understand the SPSS programme, which is an asset in statistical data analysis. This project has stimulated our creative thinking, incorporating the knowledge we learned last year in research methodology to come up with an appropriate methodology for our research. The thinking process involved is definitely very fulfilling and will aid us in the future.

Samantha's reflection: I have learnt a lot from doing this project. I thought using a blog instead of a presentation was quite interesting. This project has certainly enriched me by allowing me to explore statistics and apply what I have learnt in class. Working with classmates through the various processes was tedious but exciting. SPSS definitely helped us quite a bit in our project. Hopefully what we have learnt will help us in our FYP next year!

Shamaine's reflection: I have learnt a lot while doing this project, from choosing an agreeable topic to work on with my group members, to gathering the data and analyzing it, leading to our final conclusion. SPSS helped us a lot with our statistical analysis. This has been an enriching experience and certainly prepared us with the basics for the FYP next year. Maybe in future it might be able to help us with our evidence-based practice as well!

Szesze's reflection: As our group aimed to improve inter-rater reliability, 2 data collectors took turns to measure each subject; the tedious process needed to prove the reliability of our project made me feel that careful data collection is extremely important. I enjoyed the process with my other group mates thoroughly and gained much knowledge using SPSS.

Nirma's reflection: This project has allowed me to see the application of statistical methods to a real research topic. The idea of setting up a blog is a creative way to allow students to share their research findings. It was a good experience in that we had to find ways to collate data in the most effective way possible, and it also gave us an opportunity to work on SPSS as a form of practice. I hope to be able to effectively integrate the knowledge received in upcoming research projects.

Mohsin's reflection: From our project I felt that we needed better and more accurate ways to measure the feet of our subjects. What could be done better was to ensure proper positioning of the heel. Instead of using a book to rest the heel on, we could have used the wall to judge heel position. This method would have increased the accuracy of measurement for heel length.
I also feel that briefing the data collectors is important, so as to ensure that the readings collected could be used for our measurements as accurately as possible.
{"url":"http://smellyforearmandfeet.wordpress.com/","timestamp":"2014-04-19T02:23:13Z","content_type":null,"content_length":"50908","record_id":"<urn:uuid:6781e53c-5a55-472e-a796-d4d26164b693>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Eigenvectors

November 12th 2011, 06:08 AM, #1
Help with Eigenvectors
Was wondering if anyone could help. I am trying to work out eigenvectors after working out the 3 eigenvalues for the below matrix:
[5 3 2 9 7 3]
My eigenvectors are correct, however for 2 of the eigenvalues my eigenvectors have opposite signs. I have checked these in MathCAD so I know that the eigenvectors are correct, however for some reason some of the signs are opposite. I have attached my workings for the eigenvalue 1.65; the eigenvectors shown have opposite signs, can anyone spot why? Thanks so much for your help. Attachment 22720, Attachment 22719

November 12th 2011, 07:33 AM, #2
Re: Help with Eigenvectors
(quoting post #1) If x is an eigenvector then so is –x. Both answers are equally correct.

November 12th 2011, 07:56 AM, #3
Re: Help with Eigenvectors
Thanks so much for your reply. Do you know, if I've worked out the eigenvectors for all three eigenvalues and only two of the sets have opposite signs, and I've followed the same method, would the other not come out with opposite signs too? Sorry if this is an obvious question, but why does the sign of the eigenvector not matter? Thanks again... Hayley

November 12th 2011, 01:22 PM, #4 (MHF Contributor)
Re: Help with Eigenvectors
Suppose v is an eigenvector for the matrix A. Then Av = λv, for some eigenvalue λ. But A(cv) = c(Av) = c(λv) = λ(cv), for any scalar c ≠ 0, so cv is likewise an eigenvector for A. In other words, Au = λu for all u in span({v}). In particular, all of the above is true when c = -1, as -1 is a non-zero scalar (in any field where char(F) ≠ 2).
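The sign ambiguity is easy to see numerically as well. A small Python sketch with a made-up symmetric matrix (not the matrix from the original post, which is not fully reproduced above) shows that any rescaling of an eigenvector, including negation, still satisfies Av = λv:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # made-up example matrix

eigvals, eigvecs = np.linalg.eig(A)      # eigenvectors are the columns of eigvecs

v = eigvecs[:, 0]                        # eigenvector as returned by the library
lam = eigvals[0]

# Both v and -v satisfy the eigenvector equation A v = lambda v
print(np.allclose(A @ v, lam * v))          # True
print(np.allclose(A @ (-v), lam * (-v)))    # True

# More generally, any nonzero scalar multiple works, e.g. 2.5*v
print(np.allclose(A @ (2.5 * v), lam * (2.5 * v)))  # True
```

Different tools (MathCAD, NumPy, a hand calculation) are free to normalize or flip the sign differently, which is why two correct answers can disagree by a factor of -1.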
{"url":"http://mathhelpforum.com/advanced-algebra/191704-help-eigenvectors.html","timestamp":"2014-04-17T06:11:32Z","content_type":null,"content_length":"40647","record_id":"<urn:uuid:14358ea6-bf5c-4702-b868-b0f145d74090>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Point at which a line intersects a plane

So I know the equation of a plane: Ax + By + Cz = D, where Normal is the normal vector to the plane:
A = normal.x
B = normal.y
C = normal.z
p1 and p2 are 2 points on the line (which will intersect the plane at some point); the .x, .y and .z refer to their respective components of the vector. The line is parameterized as:
X = (p2.x - p1.x)*T + p1.x
Y = (p2.y - p1.y)*T + p1.y
Z = (p2.z - p1.z)*T + p1.z
I also know what D equals; I solved for that by moving stuff around. The problem is I need a general solution for T. It should be something like
A*((p2.x - p1.x)*T + p1.x) + B*((p2.y - p1.y)*T + p1.y) + C*((p2.z - p1.z)*T + p1.z) = D
except isolated for T (I believe). In case you're curious, this is for a programming function; that's why I'm using nothing but variables. I can solve every equation but the one for T. While I can solve for T by myself given specific numbers, I'm not sure how to isolate it even if I expand out the equation into terms like A*p2.x*T - A*p1.x*T + A*p1.x ... I'm thinking maybe I'm going down the wrong path here or something. Any help would be much appreciated. :)
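One way to see the isolation step is to expand the substituted equation, collect every term that multiplies T, and divide. Below is a hedged Python sketch of that function; it assumes the plane constants A, B, C, D and the two points are already known, and that the line is not parallel to the plane:

```python
def line_plane_intersection(A, B, C, D, p1, p2):
    """Return (T, point) where the line through p1, p2 meets the plane Ax+By+Cz=D."""
    # direction of the line
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]

    # Substituting X, Y, Z into the plane equation and collecting T gives:
    #   (A*dx + B*dy + C*dz) * T + (A*p1x + B*p1y + C*p1z) = D
    denom = A * dx + B * dy + C * dz
    if denom == 0.0:
        raise ValueError("Line is parallel to the plane (no unique intersection).")

    T = (D - (A * p1[0] + B * p1[1] + C * p1[2])) / denom
    point = (p1[0] + dx * T, p1[1] + dy * T, p1[2] + dz * T)
    return T, point

# Example with made-up numbers: plane z = 5, line from (0,0,0) to (0,0,10)
print(line_plane_intersection(0.0, 0.0, 1.0, 5.0, (0.0, 0.0, 0.0), (0.0, 0.0, 10.0)))
# -> (0.5, (0.0, 0.0, 5.0))
```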
{"url":"http://www.physicsforums.com/showthread.php?p=4158594","timestamp":"2014-04-17T21:33:17Z","content_type":null,"content_length":"24179","record_id":"<urn:uuid:16871f8e-7276-46a6-8145-046e11c898e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamical Chaos and the Volume Gap (Haggard's ILQGS talk)

The discussion has reminded me of and/or clarified several points:

The LQG volume operators have discrete spectrum. Already demonstrated. A discrete set of positive numbers does not have to be bounded away from zero, e.g. {1/n | n=1,2,3,...}. So the volume gap is a separate issue from discreteness.

It seems OK for there to be several volume operators. They agree in certain basic cases and this agreement is sufficient--people should use whichever best suits the application.

Ashtekar observed that the LQG area gap has been proven and ensures finiteness in the applications he's interested in. He seemed to be saying at one point that LQG does not need a volume gap---it's interesting but not something to get worked up about.

Classical chaos can have regions of phase space which are unstable surrounding small islands of stability. The maps are visually interesting. Having discrete islands of classical stability amidst chaos seems to correspond to having discrete quantum spectra.

If anyone wants to take another look at the color-coded phase-space maps in the Coleman-Smith + Müller paper, here's the link: Scroll quickly down to Figures 15, 17, 18, 19 at the VERY END. These are a lot of little SLICES of the phase space. Like thin slices of an exotic sausage that would make you wonder what was in it and lose your appetite. What Hal Haggard did was to take ONE SLICE FROM EACH of those figures, an interesting central slice that you could sort of explain and see what was going on. And blow it up---enlarge that one slice---and discuss that. Whoever's idea that was, Hal's, or Coleman-Smith's, or Berndt Müller's, it was a good communication idea. You get more out of Hal Haggard's slides version, focusing on a small manageable amount of information, than you get out of Coleman-Smith and Müller's Figures 15-19, or so I think. COMPARE those figures with Haggard's slide #29, which has the enlarged figures.

We should talk a bit about the two-digit adjacency code for classifying pentahedra, to make sense of the adjacency "orbitals" that the pentahedron can jump around among by various Pachner-like moves. The two digits designate the faces that AREN'T in contact with all the others. Three of the faces are quadrilateral and share an edge with all four others, but two are triangular, and don't.

All this geometry is getting a bit overwhelming, I'm going to take a break.
{"url":"http://www.physicsforums.com/showthread.php?p=4269121","timestamp":"2014-04-20T23:41:09Z","content_type":null,"content_length":"30176","record_id":"<urn:uuid:be4853df-987d-4fb6-8a77-427a463dec2d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Compressors and Turbines Explained: Part 1

Learn more about compressors and turbines. Video Tutorials

Turbochargers, while simple in design, can get very complex in theory. Everything from deciding what compressor trim is desired to which turbine housing to use is confusing to most people who enter the field of forced induction. This article will hopefully take away all the confusion in turbocharger selection. Before I can begin explaining turbochargers, some terminology must be learned. Here is a list of the more commonly used turbocharger terms:

Compressor – Essentially a fan that spins and compresses air within an enclosed area (the compressor housing). In order for the air to compress and build pressure within the housing, the fan must be spun at certain rpm levels.

Compressor Housing – The housing that encloses the compressor. Pictured is a compressor housing.

Compressor Map – A map that plots compressor pressure ratio against engine airflow. An "island" shape on the plot shows where the compressor is efficient.

Compressor Efficiency – A compressor's efficiency is its ability to produce the lowest possible temperature in the air it delivers. When air is compressed, heat is generated; within a certain range of compressor speeds that heat can be kept to a minimum. This is what is known as "being in the efficiency range of a compressor." Higher efficiency means lower outlet temperatures, and the highest practical compressor efficiencies are around 78–82%. A lower outlet temperature means a lower intake air temperature, which means a denser charge with more oxygen available to burn in the combustion. More oxygen burned with the fuel releases more energy, and more energy means more heat, more torque, and more power.

Compressor Trim – The trim of a compressor wheel is the squared ratio of the smaller diameter divided by the larger diameter, multiplied by 100. The smaller diameter of the wheel is known as the inducer, and the larger diameter as the exducer (see the short worked sketch below).

Compressor Families – Beyond trim levels there are compressor wheel families. In the Garrett line of older-technology compressors there are T22, T25, T3, T350, T04b, T04e, T04s and T04r families, and each family has its own trim levels. Although there is a 60 trim in both the T3 and T04e families, the main difference is the inducer diameter of the wheels. The trim is only the ratio of inducer to exducer, so while the inducer sizes of the two wheels are vastly different, the ratio between inducer and exducer is the same.

Turbine – A fan that uses exhaust energy to rotate. The rotation of the turbine is transmitted through a shaft connected to the compressor: the faster the turbine spins, the faster the compressor spins. The compressor relies on the rpm delivered through that connecting shaft, and that shaft speed dictates compressor flow.

Turbine Housing – The housing that encloses the turbine wheel. Turbine housing size affects the turbine's ability to transmit rpm to the compressor wheel. A smaller turbine housing gives quicker spool-up because rpm reaches the compressor sooner; the trade-off is increased low-end response at the cost of high-end response from the turbocharger. Pictured below is a turbine housing.
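A rough sketch of the trim arithmetic described above, in Python; the wheel diameters below are invented example numbers, not specs for any actual Garrett wheel:

```python
def wheel_trim(inducer_mm, exducer_mm):
    """Trim = (smaller diameter / larger diameter)^2 * 100.
    For a compressor wheel the smaller diameter is the inducer."""
    return (inducer_mm / exducer_mm) ** 2 * 100

# Two hypothetical wheels from different families can carry the same trim
# number even though their inducer sizes differ, because trim is only a ratio.
print(round(wheel_trim(46.5, 60.0)))   # ~60 trim on a smaller wheel
print(round(wheel_trim(58.1, 75.0)))   # ~60 trim on a larger wheel
```

The point of the example is the one made under Compressor Families: two wheels of very different physical size can share a trim number, so the family matters as much as the trim.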
Turbine Trim – The trim of the turbine wheel is the squared ratio of the smaller diameter divided by the larger diameter, multiplied by 100. (On a turbine wheel the exhaust enters at the larger diameter, the inducer, and exits at the smaller diameter, the exducer.)

Turbine Families – As with compressor families, there are turbine families. The most common examples are the T31, T350 and T04 wheels used in the T3 and T3/T4 turbos sold on the market. Precision offers the T31, aka the stage 3 blade, in their smaller line of sport-compact series turbochargers. The T31 comes in two trim levels, 69 and 76. The T350, aka the stage 5 blade, also comes in two trim levels, 69 and 76. The T31 will spool faster than the T350 due to the physical size difference (the T31 being smaller). The smaller the trim level, the quicker the spool, but the less top end. Essentially you are changing the turbine pressure ratio when you select the family and trim level of the turbine wheel. The larger the family and trim level you choose, the more power the turbocharger will produce, at the expense of lag. As with the compressor trims, both the T31 and T350 come in 69 and 76 trims, but the wheels are not the same: the turbine trim is only the ratio of exducer to inducer size, so the actual inducer and exducer sizes can be completely different.

Turbine Map – A map that plots turbine expansion ratio against engine airflow. An "island" shape on the plot shows where the turbine is efficient.

A/R – The ratio of the cross-sectional area of the compressor/turbine housing passage to the distance from that area's centroid to the center of the compressor/turbine wheel. To find the A/R of a housing, pick a point along the scroll and measure the cross-sectional area there; for a roughly circular passage, A = π·r². Next measure the distance between the center of that area and the center of the compressor/turbine wheel; this is the R measurement. If you choose a different point on the housing and remeasure the area and distance, you'll find that the ratio stays constant, because the housing gets steadily smaller in diameter as it gets closer to the wheel. When you upgrade from a .48 to a .63, or a .63 to a .82 A/R, you are essentially increasing the area of the housing, and increasing the area increases the amount of airflow to the turbine wheel. The smaller area of a smaller turbine housing builds pressure quickly and transmits that pressure to the turbine; the pressure gives the turbine enough rpm to let the compressor compress air at lower engine speeds (less engine speed, less airflow from the engine). The trade-off is that while pressure builds up quickly in the housing to give quick spool-up, it soon becomes too great and backpressure builds. That backpressure is the restriction that limits the shaft speed, and as rpm increases (engine airflow increases) the torque curve begins to drop off because the volumetric efficiency of the engine decreases. Think of turbine housing sizing as increasing or decreasing the inlet pressure to the housing in order to gain low-end, midrange, or top-end response from the turbocharger. A smaller turbine housing won't carry the torque curve to high rpm, limiting peak whp, but excellent low-end and midrange gains are felt through smaller housings.
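A minimal sketch of the A/R measurement just described, assuming a roughly circular passage cross-section; the measurements are invented for illustration (and taken in inches, since the familiar .48/.63/.82 figures are customarily quoted that way):

```python
import math

def a_over_r(passage_radius, centroid_distance):
    """A/R = cross-sectional area of the housing passage at a chosen point,
    divided by the distance from that area's centroid to the center of the
    wheel.  Assumes a circular passage, so A = pi * r^2."""
    area = math.pi * passage_radius ** 2
    return area / centroid_distance

# Hypothetical measurements at two points along the same scroll: the passage
# shrinks as it nears the wheel, so the ratio stays roughly constant.
print(round(a_over_r(0.55, 1.51), 2))   # ~0.63
print(round(a_over_r(0.40, 0.80), 2))   # ~0.63
```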
Compressor/Turbine Mismatch – When "matching" a compressor and a turbine you are seeking to balance the turbine's characteristics against the compressor's. When you increase the size of the turbine wheel you decrease the pressure ratio across the turbine, which essentially decreases the speed of the shaft connecting compressor and turbine. When pairing a large turbine wheel with a small compressor wheel, the problem is that the smaller the compressor wheel, the higher the rpm it has to spin at to compress the air, and the large turbine cannot generate enough shaft speed to drive it there. The same holds when pairing a huge compressor with a small turbine wheel: the larger compressor needs less shaft speed to move a given airflow, but the small turbine wheel will spin it at a much higher rpm than necessary. The result is crossing over the choke or surge line on the compressor map (this will be explained in Part 2 of this article). Note the two different compressor maps, one for a 60 trim T3 compressor wheel, the other for a T64 compressor wheel. Part 2 will explain compressor and turbine maps, and all the terminology that goes along with those topics.
{"url":"http://www.evans-tuning.com/tech-articles/compressors-turbines-explained/","timestamp":"2014-04-16T07:50:59Z","content_type":null,"content_length":"23564","record_id":"<urn:uuid:a9708e9f-afa6-4d60-bea1-c7b69454a378>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00465-ip-10-147-4-33.ec2.internal.warc.gz"}