Dataset Viewer
Columns:
url: string, lengths 36 to 386
fetch_time: int64, 1,368,856,729B to 1,726,893,809B
content_mime_type: string, 1 class
warc_filename: string, lengths 108 to 138
warc_record_offset: int64, 4.49M to 1.03B
warc_record_length: int64, 1.31k to 88.5k
text: string, lengths 191 to 46k
token_count: int64, 70 to 19.8k
char_count: int64, 191 to 46k
metadata: string, lengths 439 to 443
score: float64, 3.5 to 4.97
int_score: int64, 4 to 5
crawl: string, 74 classes
snapshot_type: string, 2 classes
language: string, 1 class
language_score: float64, 0.1 to 1
prefix: string, lengths 90 to 5.28k
target: string, lengths 1 to 25.3k
url: https://math.stackexchange.com/questions/4591726/fracab4-fracbc4-fraccd4-fracde4-fracea4
fetch_time: 1,721,677,653,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2024-30/segments/1720763517915.15/warc/CC-MAIN-20240722190551-20240722220551-00176.warc.gz
warc_record_offset: 318,427,534
warc_record_length: 37,511
# $(\frac{a}{b})^4+(\frac{b}{c})^4+(\frac{c}{d})^4+(\frac{d}{e})^4+(\frac{e}{a})^4\ge\frac{b}{a}+\frac{c}{b}+\frac{d}{c}+\frac{e}{d}+\frac{a}{e}$ How exactly do I solve this problem? (Source: 1984 British Math Olympiad #3 part II) $$\begin{equation*} \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 + \bigl(\frac{e}{a}\bigr)^4 \ge \frac{b}{a} + \frac{c}{b} + \frac{d}{c} + \frac{e}{d} + \frac{a}{e} \end{equation*}$$ There's not really a clear-cut way to use AM-GM on this problem. I've been thinking of maybe using the Power Mean Inequality, but I don't exactly see a way to do that. Maybe we could use harmonic mean for the RHS? • someone please explain why this is closed. I think I have adequately explained some strategies that I've tried. I believe I've provided enough context. Commented Dec 10, 2022 at 19:54 • I'm kinda new around here, but I was also surprised to see it closed. Also I found the accepted solution to be very nice. Commented Dec 10, 2022 at 21:48 Applying the AM-GM $$LHS - \bigl(\frac{e}{a}\bigr)^4 = \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 \ge4 \cdot \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{d} \cdot\frac{d}{e} = 4\cdot\frac{a}{e}$$ Do the same thing for these 4 others terms, and make the sum $$5 LHS - LHS \ge 4 RHS$$ $$\Longleftrightarrow LHS \ge RHS$$ The equality occurs when $$a=b=c=d=e$$ • I think in the end you should have $5LHS-LHS\geq 4RHS$, since you repeat the procedure 5 times, not 4. Then everything works. :) Commented Dec 5, 2022 at 9:33 • @Freshman'sDream You're right, I just corrected this typo. Thanks! – NN2 Commented Dec 5, 2022 at 9:34 • ohhhhh ok thanks! Commented Dec 9, 2022 at 23:08 NN2 gave a simple and very elegamt proof. I tried another way. What is the minumum of the function $$f(x_1,x_2,x_3,x_4,x_5)=\sum_{i=1}^{5}(x_i^4-x_i^{-1})$$ with domain $$\Bbb{R}^{5+}$$, subject to the constraint equation $$x_1x_2x_3x_4x_5=1$$? The system of a Lagrange multplier $$\lambda$$ gives the equations $$4x_i^3+x_i^{-2}=\lambda x_i^{-1}$$ for all $$i=1,2,3,4,5$$. From these equations we have $$4x_ix_j(x_i^4-x_j^4)=x_i-x_j$$ for all $$i,j.$$ I am stuck. Any ideas?
token_count: 874
char_count: 2,261
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.90625
int_score: 4
crawl: CC-MAIN-2024-30
snapshot_type: latest
language: en
language_score: 0.820644
# $(\frac{a}{b})^4+(\frac{b}{c})^4+(\frac{c}{d})^4+(\frac{d}{e})^4+(\frac{e}{a})^4\ge\frac{b}{a}+\frac{c}{b}+\frac{d}{c}+\frac{e}{d}+\frac{a}{e}$ How exactly do I solve this problem? (Source: 1984 British Math Olympiad #3 part II) $$\begin{equation*} \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 + \bigl(\frac{e}{a}\bigr)^4 \ge \frac{b}{a} + \frac{c}{b} + \frac{d}{c} + \frac{e}{d} + \frac{a}{e} \end{equation*}$$ There's not really a clear-cut way to use AM-GM on this problem. I've been thinking of maybe using the Power Mean Inequality, but I don't exactly see a way to do that. Maybe we could use harmonic mean for the RHS? • someone please explain why this is closed. I think I have adequately explained some strategies that I've tried. I believe I've provided enough context. Commented Dec 10, 2022 at 19:54 • I'm kinda new around here, but I was also surprised to see it closed. Also I found the accepted solution to be very nice. Commented Dec 10, 2022 at 21:48 Applying the AM-GM $$LHS - \bigl(\frac{e}{a}\bigr)^4 = \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 \ge4 \cdot \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{d} \cdot\frac{d}{e} = 4\cdot\frac{a}{e}$$ Do the same thing for these 4 others terms, and make the sum $$5 LHS - LHS \ge 4 RHS$$ $$\Longleftrightarrow LHS \ge RHS$$ The equality occurs when $$a=b=c=d=e$$ • I think in the end you should have $5LHS-LHS\geq 4RHS$, since you repeat the procedure 5 times, not 4. Then everything works. :) Commented Dec 5, 2022 at 9:33 • @Freshman'sDream You're right, I just corrected this typo. Thanks! – NN2 Commented Dec 5, 2022 at 9:34 • ohhhhh ok thanks! Commented Dec 9, 2022 at 23:08 NN2 gave a simple and very elegamt proof. I tried another way. What is the minumum of the function $$f(x_1,x_2,x_3,x_4,x_5)=\sum_{i=1}^{5}(x_i^4-x_i^{-1})$$ with domain $$\Bbb{R}^{5+}$$, subject to the constraint equation $$x_1x_2x_3x_4x_5=1$$? The system of a Lagrange multplier $$\lambda$$ gives the equations $$4x_i^3+x_i^{-2}=\lambda x_i^{-1}$$ for all $$i=1,2,3,4,5$$.
From these equations we have $$4x_ix_j(x_i^4-x_j^4)=x_i-x_j$$ for all $$i,j.$$ I am stuck.
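As a quick numerical sanity check of the AM-GM argument in this record, the short Python sketch below (my own addition, not part of the dataset) evaluates both sides of the inequality for random positive inputs and at the equality case a = b = c = d = e:

```python
import random

def lhs(a, b, c, d, e):
    return (a/b)**4 + (b/c)**4 + (c/d)**4 + (d/e)**4 + (e/a)**4

def rhs(a, b, c, d, e):
    return b/a + c/b + d/c + e/d + a/e

# random positive inputs: the LHS should never fall below the RHS
for _ in range(10_000):
    vals = [random.uniform(0.1, 10) for _ in range(5)]
    assert lhs(*vals) >= rhs(*vals) - 1e-9

# equality case a = b = c = d = e: both sides equal 5
print(lhs(1, 1, 1, 1, 1), rhs(1, 1, 1, 1, 1))
```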
url: https://math.stackexchange.com/questions/767888/math-for-future-value-of-growing-annuity
fetch_time: 1,597,311,740,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2020-34/segments/1596439738964.20/warc/CC-MAIN-20200813073451-20200813103451-00073.warc.gz
warc_record_offset: 395,301,204
warc_record_length: 34,016
Math for Future Value of Growing Annuity Am I working this out correctly? I need to verify that my code is correct... $$1000 \cdot \left(\frac{(1 + 0.1 / 12)^{40 * 12} - (1 + 0.06 / 12)^{40 * 12}}{(0.1 / 12) - (0.06 / 12)}\right)$$ Something like this: 53.700663174244 - 10.957453671655 ( = 42.7432095026 ) / 0.0083333333333333 - 0.005 ( = 0.00333333333 ) * 1000 = 12822 962.8636 ps. could someone please help me with the tag selection * blush* EDIT: Sorry I know this is a mouthful, but if the math don't add up the code can't add up plus I'm actually a designer... not equal to programmer or mathematician. I'm a creative logician :) Below is part A which must be added (summed) to part B (original question). A: $$Future Value (FV) of Lumpsum = PV \cdot (1+i/12)^{b*12}$$ B: $$FV of Growing Annuity = R1 \cdot \left(\frac{(1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right)$$ • Current savings for retirement (Rands) = PV • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • Current monthly contribution towards retirement (Rands) = R1 • 6/100 (Annual Growth rate of annuities) = g This is all I have to offer except for the more complicated formula to work out the rest of "Savings for Retirement", but if my example B is correct then the B they gave me is wrong and it's driving me nuts because I'm also having trouble with: C: $$PV of an Growing Annuity = \left(\frac{R2 \cdot(1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right) \cdot \left(1- \left( \frac{(1 + g / 12)^{b * 12}}{(1 + i / 12)^{n * 12}}\right)\right)$$ • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • 95 (Assumed age of death) - Retirement age (years) = n • Monthly income need at retirement (Rands) = R2 • 6/100 (Annual Growth rate of annuities) = g Which then must be: $$C-(A+B)$$ And finally, let me just give it all... D: $$FV of Growing Annuity = R3 \cdot \left(\frac{((1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12} )}{(i / 12) - (g / 12)}\right)$$ • Answer of C-(A + B) = FV of Growing Annuity • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • 6/100 (Annual Growth rate of annuities) = g • You should state what problem you are trying to solve. It appears you are starting with a deposit of 1000 that draws some amount of interest for some time, but what the subtractions mean I can't guess. I think the first term is $10\%$ annual interest compounded monthly for 40 years. Then you should write it mathematically-we don't necessarily know what the arguments for Math.Pow are. – Ross Millikan Apr 24 '14 at 21:05 • To elaborate on what @RossMillikan meant, you gave a series of numbers and asked "Is this correct?" without specifying what those numbers mean and the goal of the calculation. For instance, $1000(1+0.1/12)^{40*12}$ gives your total money with an initial investment of \$1000, a rate of 10%, monthly compounding and 40 years of time. Why are you then subtracting the same calculation but with a 6% rate? Why are you dividing by the difference of these rates? We can't know if what you're doing is correct if we don't know what you're trying to do. – RandomUser Apr 24 '14 at 21:27
token_count: 1,030
char_count: 3,194
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 4.0625
int_score: 4
crawl: CC-MAIN-2020-34
snapshot_type: latest
language: en
language_score: 0.778724
Math for Future Value of Growing Annuity Am I working this out correctly? I need to verify that my code is correct... $$1000 \cdot \left(\frac{(1 + 0.1 / 12)^{40 * 12} - (1 + 0.06 / 12)^{40 * 12}}{(0.1 / 12) - (0.06 / 12)}\right)$$ Something like this: 53.700663174244 - 10.957453671655 ( = 42.7432095026 ) / 0.0083333333333333 - 0.005 ( = 0.00333333333 ) * 1000 = 12822 962.8636 ps. could someone please help me with the tag selection * blush* EDIT: Sorry I know this is a mouthful, but if the math don't add up the code can't add up plus I'm actually a designer... not equal to programmer or mathematician. I'm a creative logician :) Below is part A which must be added (summed) to part B (original question).
A: $$Future Value (FV) of Lumpsum = PV \cdot (1+i/12)^{b*12}$$ B: $$FV of Growing Annuity = R1 \cdot \left(\frac{(1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right)$$ • Current savings for retirement (Rands) = PV • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • Current monthly contribution towards retirement (Rands) = R1 • 6/100 (Annual Growth rate of annuities) = g This is all I have to offer except for the more complicated formula to work out the rest of "Savings for Retirement", but if my example B is correct then the B they gave me is wrong and it's driving me nuts because I'm also having trouble with: C: $$PV of an Growing Annuity = \left(\frac{R2 \cdot(1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right) \cdot \left(1- \left( \frac{(1 + g / 12)^{b * 12}}{(1 + i / 12)^{n * 12}}\right)\right)$$ • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • 95 (Assumed age of death) - Retirement age (years) = n • Monthly income need at retirement (Rands) = R2 • 6/100 (Annual Growth rate of annuities) = g Which then must be: $$C-(A+B)$$ And finally, let me just give it all... D: $$FV of Growing Annuity = R3 \cdot \left(\frac{((1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12} )}{(i / 12) - (g / 12)}\right)$$ • Answer of C-(A + B) = FV of Growing Annuity • Rate of return = i/100 • Retirement age (years) – Current age (years) = b • 6/100 (Annual Growth rate of annuities) = g • You should state what problem you are trying to solve.
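To make the closed form in part B concrete, here is a small Python sketch of mine that plugs in the question's numbers (R1 = 1000, i = 0.1, g = 0.06, b = 40, taken purely as illustrative inputs) and compares the formula against a month-by-month accumulation:

```python
def fv_growing_annuity(R1, i, g, b):
    r, q, n = i / 12, g / 12, b * 12
    return R1 * ((1 + r) ** n - (1 + q) ** n) / (r - q)

def fv_brute_force(R1, i, g, b):
    r, q, n = i / 12, g / 12, b * 12
    balance, payment = 0.0, R1
    for _ in range(n):
        balance = balance * (1 + r) + payment  # accrue interest, then add this month's deposit
        payment *= 1 + q                       # the deposit grows each month
    return balance

print(fv_growing_annuity(1000, 0.10, 0.06, 40))
print(fv_brute_force(1000, 0.10, 0.06, 40))    # agrees with the closed form
```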
url: https://mathematica.stackexchange.com/questions/216293/phase-portrait-for-ode-with-ivp?noredirect=1
fetch_time: 1,696,284,912,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2023-40/segments/1695233511021.4/warc/CC-MAIN-20231002200740-20231002230740-00732.warc.gz
warc_record_offset: 414,570,837
warc_record_length: 42,266
# Phase Portrait for ODE with IVP I'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x[0]=-1 & x'[0]=0. I know how to solve the ODE and find the integration constants; the solution comes out to be x(t) = -cos(4t) and x'(t) = 4sin(4t). But I don't know how to make a phase portrait out of it. I've looked at this link Plotting a Phase Portrait but I couldn't replicate mine based off of it. Phase portrait for any second order autonomous ODE can be found as follows. Convert the ODE to state space. This results in 2 first order ODE's. Then call StreamPlot with these 2 equations. Let the state variables be $$x_1=x,x_2=x'(t)$$, then taking derivatives w.r.t time gives $$x'{_1}=x_2,x'{_2}=x''(t)=-16 x_1$$. Now, using StreamPlot gives StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -2, 2}] To see the line that passes through the initial conditions $$x_1(0)=1,x_2(0)=0.1$$, add the option StreamPoints StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}] To verify the above is the correct phase plot, you can do ClearAll[x, t] ode = x''[t] + 16 x[t] == 0; ic = {x[0] == 1, x'[0] == 1/10}; sol = x[t] /. First@(DSolve[{ode, ic}, x[t], t]); ParametricPlot[Evaluate[{sol, D[sol, t]}], {t, 0, 3}, PlotStyle -> Red] The advatage of phase plot, is that one does not have to solve the ODE first (so it works for nonlinear hard to solve ODE's). All what you have to do is convert the ODE to state space and use function like StreamPlot If you want to automate the part of converting the ODE to state space, you can also use Mathematica for that. Simply use StateSpaceModel and just read of the equations. eq = x''[t] + 16 x[t] == 0; ss = StateSpaceModel[{eq}, {{x[t], 0}, {x'[t], 0}}, {}, {x[t]}, t] The above shows the A matrix in $$x'=Ax$$. So first row reads $$x_1'(t)=x_2$$ and second row reads $$x'_2(t)=-16 x_1$$ The following can be done to automate plotting StreamPlot directly from the state space ss result A = First@Normal[ss]; vars = {x1, x2}; (*state space variables*) eqs = A . vars; StreamPlot[eqs, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}] • Can you method plot y''[x]+2 y'[x]+3 y[x]==2 x? – yode Mar 27, 2022 at 8:59 • @yode Phase portrait are used for homogeneous ode's. Systems of the form $x'=A x$ and not $x'=A x + u$. Since it shows the behaviour of the system itself, independent of any forcing functions (the stuff on the RHS). This behavior is given by phase portrait diagram. The reason is, it is only the $A$ matrix eigenvalues and eigenvectors that determines this behaviour, and $A$ depends only on the system itself, without any external input being there. Mar 27, 2022 at 14:08 • Can we plot your ss in MMA directly? – yode Mar 29, 2022 at 10:59 • @yoda Yes. I've updated the above with what I think you are asking for. Hope this helps. Mar 29, 2022 at 15:23 EquationTrekker works for me, but if you are not interested in looking at a range of solutions, it might be easier to just do it with ParametricPlot x[t_] := -Cos[4 t] ParametricPlot[{x[t], x'[t]} // Evaluate, {t, 0, 2 π}, Axes -> False, PlotLabel -> PhaseTrajectory, Frame -> True, FrameLabel -> {x[t], x'[t]}, GridLines -> Automatic] • What version is this on, Bill? Someone in the QA that OP links to says EquationTrekker is broken for them on v11.0 Mar 15, 2020 at 6:04 • This plot is from ParametricPlot, not EquationTrekker, but in v12.0 EquationTrekker gives me plots, although I do get PropertyValue errors. Mar 15, 2020 at 7:40
token_count: 1,140
char_count: 3,557
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.65625
int_score: 4
crawl: CC-MAIN-2023-40
snapshot_type: longest
language: en
language_score: 0.844324
# Phase Portrait for ODE with IVP I'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x[0]=-1 & x'[0]=0. I know how to solve the ODE and find the integration constants; the solution comes out to be x(t) = -cos(4t) and x'(t) = 4sin(4t). But I don't know how to make a phase portrait out of it. I've looked at this link Plotting a Phase Portrait but I couldn't replicate mine based off of it. Phase portrait for any second order autonomous ODE can be found as follows. Convert the ODE to state space. This results in 2 first order ODE's. Then call StreamPlot with these 2 equations. Let the state variables be $$x_1=x,x_2=x'(t)$$, then taking derivatives w.r.t time gives $$x'{_1}=x_2,x'{_2}=x''(t)=-16 x_1$$. Now, using StreamPlot gives StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -2, 2}] To see the line that passes through the initial conditions $$x_1(0)=1,x_2(0)=0.1$$, add the option StreamPoints StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}] To verify the above is the correct phase plot, you can do ClearAll[x, t] ode = x''[t] + 16 x[t] == 0; ic = {x[0] == 1, x'[0] == 1/10}; sol = x[t] /. First@(DSolve[{ode, ic}, x[t], t]); ParametricPlot[Evaluate[{sol, D[sol, t]}], {t, 0, 3}, PlotStyle -> Red] The advatage of phase plot, is that one does not have to solve the ODE first (so it works for nonlinear hard to solve ODE's). All what you have to do is convert the ODE to state space and use function like StreamPlot If you want to automate the part of converting the ODE to state space, you can also use Mathematica for that. Simply use StateSpaceModel and just read of the equations. eq = x''[t] + 16 x[t] == 0; ss = StateSpaceModel[{eq}, {{x[t], 0}, {x'[t], 0}}, {}, {x[t]}, t] The above shows the A matrix in $$x'=Ax$$. So first row reads $$x_1'(t)=x_2$$ and second row reads $$x'_2(t)=-16 x_1$$ The following can be done to automate plotting StreamPlot directly from the state space ss result A = First@Normal[ss]; vars = {x1, x2}; (*state space variables*) eqs = A . vars; StreamPlot[eqs, {x1, -2, 2}, {x2, -5, 5}, StreamPoints -> {{{{1, .1}, Red}, Automatic}}] • Can you method plot y''[x]+2 y'[x]+3 y[x]==2 x? – yode Mar 27, 2022 at 8:59 • @yode Phase portrait are used for homogeneous ode's.
Systems of the form $x'=A x$ and not $x'=A x + u$.
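The record's answer uses Mathematica's StreamPlot; as an illustration of the same state-space idea in Python (my sketch, using matplotlib rather than anything from the original post), the code below draws the vector field of x1' = x2, x2' = -16 x1 and overlays the exact trajectory for the question's initial conditions x(0) = -1, x'(0) = 0:

```python
import numpy as np
import matplotlib.pyplot as plt

# state space for x'' + 16 x = 0:  x1' = x2,  x2' = -16 x1
x1, x2 = np.meshgrid(np.linspace(-2, 2, 30), np.linspace(-5, 5, 30))
plt.streamplot(x1, x2, x2, -16 * x1, color="0.6")

# exact solution for x(0) = -1, x'(0) = 0:  x = -cos(4t), x' = 4 sin(4t)
t = np.linspace(0, 2 * np.pi, 400)
plt.plot(-np.cos(4 * t), 4 * np.sin(4 * t), "r")

plt.xlabel("x1 = x"); plt.ylabel("x2 = x'")
plt.show()
```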
url: http://math.stackexchange.com/questions/tagged/vector-spaces+vector-analysis
fetch_time: 1,398,348,055,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00167-ip-10-147-4-33.ec2.internal.warc.gz
warc_record_offset: 210,644,211
warc_record_length: 24,883
# Tagged Questions 23 views ### Why should we expect the divergence operator to be invariant under transformations? A lot of the time with vector calculus identities, something that seems magical at first ends up having a nice and unique proof. For the divergence operator, one can prove that it's invariant under a ... 3 views ### Gentle introduction to discrete vector field [closed] I am looking for a gentle introduction to discrete vector field. Thanks in advance. 26 views ### Vectors and Planes Let there be 2 planes: $x-y+z=2, 2x-y-z=1$ Find the equation of the line of the intersection of the two planes, as well as that of another plane which goes through that line. Attempt to solve: the ... 25 views 63 views ### Extrema of a vector norm under two inner-product constraints. If $\langle\vec{A},\vec{V}\rangle=1\; ,\; \langle\vec{B},\vec{V}\rangle=c$, then: \begin{align} max\left \| \vec{V} \right \|_{1}=?\;\;\;min\left \| \vec{V} \right \|_{1}=? \end{align} Consider the ... 129 views ### How to rotate two vectors (2d), where their angle is larger than 180. The rotation matrix $$\begin{bmatrix} \cos\theta & -\sin \theta\\ \sin\theta & \cos\theta \end{bmatrix}$$ cannot process the case that the angle between two vectors is larger than $180$ ... 53 views ### Is this statement about vectors true? If vectors $A$ and $B$ are parallel, then, $|A-B| = |A| - |B|$ Is the above statement true? 822 views ### Collinearity of three points of vectors Show that the three vectors $$A\_ = 2i + j - 3k , B\_ = i - 4k , C\_ = 4i + 3j -k$$ are linearly dependent. Determine a relation between them and hence show that the terminal points are collinear. ... 92 views 135 views ### Vectors transformation Give a necessary and sufficient condition ("if and only if") for when three vectors $a, b, c, \in \mathbb{R^2}$ can be transformed to unit length vectors by a single affine transformation. This is ... 56 views ### To show the inequality $\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$ Let $A\in$ $\mathbb{C}^{p\times q}$ with column $u_1,\ldots,u_q$ and rows $\vec{v_1},\ldots,\vec{v_p}$. show that $$\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$$ and ... 160 views ### Find the necessary and sufficient conditions on $A$ such that $\|T(\vec{x})\|=|\det A|\cdot\|\vec{x}\|$ for all $\vec{x}$. Consider the mapping $T:\mathbb{R}^n\mapsto\mathbb{R}^n$ defined by $T(\vec{x})=A\vec{x}$ where $A$ is a $n\times n$ matrix. Find the necessary and sufficient conditions on $A$ such that ... 58 views ### Dot products of three or more vectors Can't we construct a mapping from $V^3(R^1)$ to $R$ such that $a.b.c = a_{x}b_{x}c_{x}+a_{y}b_{y}c_{y}+a_{z}b_{z}c_{z}$ (a,b,c are vectors in $V^3(R^1)$ ) and more generally $a^n$ , $a.b.c.d.e...$ ... 365 views 50 views
token_count: 900
char_count: 2,846
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.84375
int_score: 4
crawl: CC-MAIN-2014-15
snapshot_type: longest
language: en
language_score: 0.833623
# Tagged Questions 23 views ### Why should we expect the divergence operator to be invariant under transformations? A lot of the time with vector calculus identities, something that seems magical at first ends up having a nice and unique proof. For the divergence operator, one can prove that it's invariant under a ... 3 views ### Gentle introduction to discrete vector field [closed] I am looking for a gentle introduction to discrete vector field. Thanks in advance. 26 views ### Vectors and Planes Let there be 2 planes: $x-y+z=2, 2x-y-z=1$ Find the equation of the line of the intersection of the two planes, as well as that of another plane which goes through that line. Attempt to solve: the ... 25 views 63 views ### Extrema of a vector norm under two inner-product constraints. If $\langle\vec{A},\vec{V}\rangle=1\; ,\; \langle\vec{B},\vec{V}\rangle=c$, then: \begin{align} max\left \| \vec{V} \right \|_{1}=?\;\;\;min\left \| \vec{V} \right \|_{1}=? \end{align} Consider the ... 129 views ### How to rotate two vectors (2d), where their angle is larger than 180. The rotation matrix $$\begin{bmatrix} \cos\theta & -\sin \theta\\ \sin\theta & \cos\theta \end{bmatrix}$$ cannot process the case that the angle between two vectors is larger than $180$ ... 53 views ### Is this statement about vectors true? If vectors $A$ and $B$ are parallel, then, $|A-B| = |A| - |B|$ Is the above statement true? 822 views ### Collinearity of three points of vectors Show that the three vectors $$A\_ = 2i + j - 3k , B\_ = i - 4k , C\_ = 4i + 3j -k$$ are linearly dependent. Determine a relation between them and hence show that the terminal points are collinear. ... 92 views 135 views ### Vectors transformation Give a necessary and sufficient condition ("if and only if") for when three vectors $a, b, c, \in \mathbb{R^2}$ can be transformed to unit length vectors by a single affine transformation. This is ... 56 views ### To show the inequality $\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$ Let $A\in$ $\mathbb{C}^{p\times q}$ with column $u_1,\ldots,u_q$ and rows $\vec{v_1},\ldots,\vec{v_p}$. show that $$\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$$ and ... 160 views ### Find the necessary and sufficient conditions on $A$ such that $\|T(\vec{x})\|=|\det A|\cdot\|\vec{x}\|$ for all $\vec{x}$. Consider the mapping $T:\mathbb{R}^n\mapsto\mathbb{R}^n$ defined by $T(\vec{x})=A\vec{x}$ where $A$ is a $n\times n$ matrix.
Find the necessary and sufficient conditions on $A$ such that ... 58 views ### Dot products of three or more vectors Can't we construct a mapping from $V^3(R^1)$ to $R$ such that $a.b.c = a_{x}b_{x}c_{x}+a_{y}b_{y}c_{y}+a_{z}b_{z}c_{z}$ (a,b,c are vectors in $V^3(R^1)$ ) and more generally $a^n$ , $a.b.c.d.e...$ ... 365 views 50 views
url: https://quant.stackexchange.com/questions/68635/characteristics-of-factor-portfolios/68646
fetch_time: 1,713,245,452,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00402.warc.gz
warc_record_offset: 439,703,907
warc_record_length: 40,053
# characteristics of factor portfolios In the paper Characteristics of Factor Portfolios (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601414), when it discusses pure factor portfolios, it says that simple style factor portfolios have zero exposure to all other style, country, and industry factors. Could someone help me understand the math for why the style factor portfolios have zero exposure to all other style, country, and industry factors? So, for example, if we are interested in the return of a P/E factor and a P/B factor, we would gather the P/E and P/B for all of our stocks into a matrix of loadings $$B$$. $$B$$ would have two columns – one containing P/E and one containing P/B for all assets. We then regress $$R$$ (a vector containing the returns of all assets) on $$B$$. OLS regression gives us $$f= (B’B)^{-1} B’R$$ = the returns of the style factors for this particular period. The rows of $$(B’B)^{-1} B’$$ are considered to be the factor portfolios. So, let’s go one step further and look at the loadings of the portfolio on the individual styles by multiplying the factor portfolios with the matrix of loadings. This gives $$(B’B)^{-1} B’B = I$$ - an identity matrix. Hence, the loadings of each factor portfolio are 1 against the particular style and 0 against any other style.
token_count: 312
char_count: 1,312
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.59375
int_score: 4
crawl: CC-MAIN-2024-18
snapshot_type: latest
language: en
language_score: 0.874208
# characteristics of factor portfolios In the paper Characteristics of Factor Portfolios (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601414), when it discusses pure factor portfolios, it says that simple style factor portfolios have zero exposure to all other style, country, and industry factors. Could someone help me understand the math for why the style factor portfolios have zero exposure to all other style, country, and industry factors? So, for example, if we are interested in the return of a P/E factor and a P/B factor, we would gather the P/E and P/B for all of our stocks into a matrix of loadings $$B$$. $$B$$ would have two columns – one containing P/E and one containing P/B for all assets. We then regress $$R$$ (a vector containing the returns of all assets) on $$B$$. OLS regression gives us $$f= (B’B)^{-1} B’R$$ = the returns of the style factors for this particular period. The rows of $$(B’B)^{-1} B’$$ are considered to be the factor portfolios. So, let’s go one step further and look at the loadings of the portfolio on the individual styles by multiplying the factor portfolios with the matrix of loadings.
This gives $$(B’B)^{-1} B’B = I$$ - an identity matrix.
url: https://math.stackexchange.com/questions/851072/theorem-on-giuga-number/851114
fetch_time: 1,561,549,511,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00312.warc.gz
warc_record_offset: 522,635,721
warc_record_length: 34,745
# Theorem on Giuga number Giuga number : $n$ is a Giuga number $\iff$ For every prime factor $p$ of $n$ , $p | (\frac{n}{p}-1)$ How to prove the following theorem on Giuga numbers $n$ is a giuga number $\iff$ $\sum_{i=1}^{n-1} i^{\phi(n)} \equiv -1 \mod {n}$ ## 1 Answer The $\Rightarrow$ part. For first, a giuga number must be squarefree, since, by assuming $p^2\mid n$, we have that $p$ divides two consecutive numbers, $\frac{n}{p}$ and $\frac{n}{p}-1$, that is clearly impossible. So we have: $$n = \prod_{i=1}^{k} p_i$$ that implies: $$\phi(n) = \prod_{i=1}^{k} (p_i-1).$$ By considering the sum $$\sum_{i=0}^{n-1}i^{\phi(n)}$$ $\pmod{p_i}$ we have that all the terms contribute with a $1$, except the multiples of $p_i$ that contribute with a zero. This gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-\frac{n}{p_i}\equiv (n-1)\pmod{p_i}\tag{1}$$ that holds for any $i\in[1,k]$. The chinese theorem now give: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-1\pmod{\prod_{i=1}^{k}p_i}$$ that is just: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv -1\pmod{n}$$ as claimed. For the $\Leftarrow$ part, we have that the congruence $\!\!\!\pmod{n}$ implies the congruence $\!\!\!\pmod{p_i}$, hence $(1)$ must hold, so we must have: $$\frac{n}{p_i}\equiv 1\pmod{p_i}$$ that is equivalent to $p_i\mid\left(\frac{n}{p_i}-1\right).$
token_count: 504
char_count: 1,311
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 4.15625
int_score: 4
crawl: CC-MAIN-2019-26
snapshot_type: latest
language: en
language_score: 0.746232
# Theorem on Giuga number Giuga number : $n$ is a Giuga number $\iff$ For every prime factor $p$ of $n$ , $p | (\frac{n}{p}-1)$ How to prove the following theorem on Giuga numbers $n$ is a giuga number $\iff$ $\sum_{i=1}^{n-1} i^{\phi(n)} \equiv -1 \mod {n}$ ## 1 Answer The $\Rightarrow$ part. For first, a giuga number must be squarefree, since, by assuming $p^2\mid n$, we have that $p$ divides two consecutive numbers, $\frac{n}{p}$ and $\frac{n}{p}-1$, that is clearly impossible. So we have: $$n = \prod_{i=1}^{k} p_i$$ that implies: $$\phi(n) = \prod_{i=1}^{k} (p_i-1).$$ By considering the sum $$\sum_{i=0}^{n-1}i^{\phi(n)}$$ $\pmod{p_i}$ we have that all the terms contribute with a $1$, except the multiples of $p_i$ that contribute with a zero. This gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-\frac{n}{p_i}\equiv (n-1)\pmod{p_i}\tag{1}$$ that holds for any $i\in[1,k]$. The chinese theorem now give: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-1\pmod{\prod_{i=1}^{k}p_i}$$ that is just: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv -1\pmod{n}$$ as claimed.
For the $\Leftarrow$ part, we have that the congruence $\!\!\!\pmod{n}$ implies the congruence $\!\!\!\pmod{p_i}$, hence $(1)$ must hold, so we must have: $$\frac{n}{p_i}\equiv 1\pmod{p_i}$$ that is equivalent to $p_i\mid\left(\frac{n}{p_i}-1\right).$
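A short Python sketch (my addition) that checks both sides of the stated equivalence on small cases: 30 and 858 are Giuga numbers, 36 is not; sympy is assumed to be available for the totient and the prime factorisation.

```python
from sympy import primefactors, totient

def is_giuga(n):
    # definition: every prime factor p of n divides n/p - 1
    ps = primefactors(n)
    return len(ps) > 1 and all((n // p - 1) % p == 0 for p in ps)

def congruence_holds(n):
    # sum_{i=1}^{n-1} i^phi(n) == -1 (mod n)
    phi = int(totient(n))
    return sum(pow(i, phi, n) for i in range(1, n)) % n == n - 1

for n in (30, 858, 36):
    print(n, is_giuga(n), congruence_holds(n))   # True/True, True/True, False/False
```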
url: https://math.meta.stackexchange.com/questions/31929/is-there-any-stack-exchange-site-that-allows-sharing-review-of-interesting-obse/31930
fetch_time: 1,620,777,485,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00461.warc.gz
warc_record_offset: 412,254,493
warc_record_length: 31,820
# Is there any Stack Exchange site that allows sharing, review of interesting observations\results made\obtained by students in Mathematics? I'm a 10th grader who is extremely interested in Mathematics and I frequently come across some interesting (at least, to me) results while doing some Math problem and sometimes I want to get an expert-level opinion on that result. For example, I was recently thinking about how one would go on about defining a function that gives a graph like the one given below : I successfully defined such a function using a combination of the floor function, ceiling function, fractional part function and the signum function. It was pretty interesting for me. Another time, I discovered a simple derivation for the quadratic formula and once, a derivation for the compound angle identities in Trigonometry These are some examples of when I wanted to share these and get some reviews/opinions about the results that I had obtained. So, basically, is there a website for Mathematics like Code Review for Coding in the Stack Exchange Community? Thanks! PS : If you're wondering what the functions is, it's given below : $$f(x) = \text{Sign}\Bigg(\Bigg\{\dfrac{\lceil x \rceil}{2} \Bigg \} - a \Bigg) \text{, where } 0 < a < 0.5$$ $$\text{Here, }\{ x \} \text { is the fractional part function which is defined as } \{ x \} = x - \lfloor x \rfloor$$ $$\text{And Sign}(x) \text{ is the signum function, which gives the sign of the input, and } 0 \text{ in case the input is }0$$ Edit : I recently thought of a much simpler version of the function that I talk about above. It is : $$f(x) = \cos(\lfloor x \rfloor \cdot \pi)$$ • A better and clearer definition would be to not insist that it be given by a single formula and to say that $$f(x)=\begin{cases}1&\mbox{ if }2n<x\leq 2n+1\\-1&\mbox{ otherwise}.\end{cases}$$ – Matt Samuel Jun 16 '20 at 20:47 • An addition though : "Where $n \in \Bbb Z$". Actually, the reason that I insisted on a Mathematical definition of the function was so that it can be graphed using a graphing calculator and embedded in a computer program with a mathematical approach. Thanks for the suggestion! – Rajdeep Sindhu Jun 16 '20 at 20:52 • Is "check my work" question not allowed? We have a tag specified for that. – Arctic Char Jun 16 '20 at 20:55 • I am familiar with the solution-verification tag and in fact, have used it a few times too. As far as I know, this question : math.stackexchange.com/questions/3704308/… was closed till some time ago for the reason : Homework and check my work type questions not allowed. It's re-opened now though. Also, wouldn't a separate site (like Code Review for reviewing programs) be nice? – Rajdeep Sindhu Jun 16 '20 at 21:03 • I looked at the timeline and it doesn't appear that the question was ever closed. The reason is invalid in any case, because both homework and check-my-work questions are allowed. – Matt Samuel Jun 16 '20 at 21:34 • @MattSamuel I'm sorry for the misleading info. Looks like I can't recall the question which was closed for being a 'check my work' type question. Maybe (and most probably), it wasn't even on Mathematics SE. – Rajdeep Sindhu Jun 16 '20 at 21:43 • Then get rid of the first sentence of your post, @RajdeepSindhu ! – amWhy Jun 16 '20 at 23:30
token_count: 859
char_count: 3,289
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.96875
int_score: 4
crawl: CC-MAIN-2021-21
snapshot_type: latest
language: en
language_score: 0.924377
# Is there any Stack Exchange site that allows sharing, review of interesting observations\results made\obtained by students in Mathematics? I'm a 10th grader who is extremely interested in Mathematics and I frequently come across some interesting (at least, to me) results while doing some Math problem and sometimes I want to get an expert-level opinion on that result. For example, I was recently thinking about how one would go on about defining a function that gives a graph like the one given below : I successfully defined such a function using a combination of the floor function, ceiling function, fractional part function and the signum function. It was pretty interesting for me. Another time, I discovered a simple derivation for the quadratic formula and once, a derivation for the compound angle identities in Trigonometry These are some examples of when I wanted to share these and get some reviews/opinions about the results that I had obtained. So, basically, is there a website for Mathematics like Code Review for Coding in the Stack Exchange Community? Thanks! PS : If you're wondering what the functions is, it's given below : $$f(x) = \text{Sign}\Bigg(\Bigg\{\dfrac{\lceil x \rceil}{2} \Bigg \} - a \Bigg) \text{, where } 0 < a < 0.5$$ $$\text{Here, }\{ x \} \text { is the fractional part function which is defined as } \{ x \} = x - \lfloor x \rfloor$$ $$\text{And Sign}(x) \text{ is the signum function, which gives the sign of the input, and } 0 \text{ in case the input is }0$$ Edit : I recently thought of a much simpler version of the function that I talk about above.
It is : $$f(x) = \cos(\lfloor x \rfloor \cdot \pi)$$ • A better and clearer definition would be to not insist that it be given by a single formula and to say that $$f(x)=\begin{cases}1&\mbox{ if }2n<x\leq 2n+1\\-1&\mbox{ otherwise}.\end{cases}$$ – Matt Samuel Jun 16 '20 at 20:47 • An addition though : "Where $n \in \Bbb Z$".
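For the simplified form mentioned in the edit, a tiny numeric check of my own shows that cos(⌊x⌋·π) simply alternates between +1 and -1 on consecutive unit intervals:

```python
import numpy as np

x = np.arange(-3, 3, 0.25)
f = np.cos(np.floor(x) * np.pi)   # equals (-1)**floor(x)
print(np.column_stack([x, f]))    # +1 on [0,1), [2,3), ...; -1 on [-1,0), [1,2), ...
```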
url: https://math.stackexchange.com/questions/2792471/linearization-of-system-of-odes-around-operating-point-transfer-functions-and
fetch_time: 1,726,799,609,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00678.warc.gz
warc_record_offset: 340,249,568
warc_record_length: 37,653
# Linearization of System of ODEs around Operating Point / Transfer Functions and State Space I have this system of ODEs and I'm trying to get a linearized version of it around the "operating point" $\overline{x}_1 = 1$ $$\left\{\begin{matrix} \ddot{x_1}(t)+2\dot{x_1}(t)+2x_1^2(t)-2\dot{x_2}(t)=0 \\ 2\ddot{x_2}(t)+2\dot{x_2}(t)-2\dot{x_1}(t)=f(t) \end{matrix}\right.$$ So I define a small perturbation $\delta x_1$, $\delta x_2$ and $\delta f$ around the operating point $\overline{x}_1$, $\overline{x}_2$ and $\overline{f}$ $$\delta x_1 = x_1 - \overline{x}_1 \Rightarrow \dot{x_1} = \dot{\delta x_1} \Rightarrow \ddot{x_1} = \ddot{\delta x_1}$$ $$\delta x_2 = x_2 - \overline{x}_2 \Rightarrow \dot{x_2} = \dot{\delta x_2} \Rightarrow \ddot{x_2} = \ddot{\delta x_2}$$ $$\delta f = f - \overline{f}$$ I use Taylor polynomial to linearize $x_1^2(t)$ around $\overline{x}_1=1$ as $$x_1^2 \approx \overline{x}_1^2 + 2\overline{x}_1 \delta x_1 = 1 + 2\delta x_1$$ I replace all in the original equations: $$\left\{\begin{matrix}\delta\ddot{x_1}(t)+2\delta\dot{x_1}(t)+2\left [1+2\delta x_1(t) \right ] - 2 \delta \dot{x_2}(t)=0 \\ 2\delta \ddot{x_2}(t)+2\delta \dot{x_2}(t)-2\delta \dot{x_1}(t)=\overline{f}+\delta f(t) \end{matrix}\right.$$ This system is "linear", but not homogeneous, because it has constant terms $2$ and $\overline{f}$. In fact, through force balance we get that $\overline{f}=2$, so the constant terms should mathematically cancel out somehow. How can I get rid of this constant terms? Is there another (better) way to linearize this system of ODEs around $\overline{x}_1=1$ By the way, I got this systems of ODEs from this physical system: • How did you determine that $x_1=1$ is the operating point? You will need a non-constant $\bar f$, as with a constant one you get $x_1=0$ as equilibrium point, just from physical considerations. Note that $$\frac{d}{dt}\left[\frac12 \dot x_1(t)^2+\dot x_2(t)^2+\frac23x_1(t)^3\right]=f(t)\dot x_2(t)-2(\dot x_1(t)-\dot x_2(t))^2,$$ where the last term continuously loses energy, leading to $x_1 \to 0$. You will need a very specific $f$ to continuously replace that lost energy. Commented May 23, 2018 at 7:24 • This is a problem from a textbook. It specifically ask to linearize about $x_1=1$. Thank you Commented May 23, 2018 at 7:35 When linearising a non-linear system of the form $\dot{x} = g(x,f)$ at an operating point $\bar{x}$ and $\bar{f}$ requires that $g(\bar{x},\bar{f})=0$. Since $\bar{x}_1$ is given and $g(x,f)$ is not a function of $x_2$, then $g(\bar{x},\bar{f})=0$ only has a solution when $x_2$ is omitted from the state space vector, so $x$ only contains $x_1$, $\dot{x}_1$ and $\dot{x}_2$ and no $x_2$. So $\bar{\dot{x}}_2$ can then be a non-zero constant, which can be chosen such that $g(\bar{x},\bar{f})=0$ can be satisfied.
token_count: 1,011
char_count: 2,827
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.84375
int_score: 4
crawl: CC-MAIN-2024-38
snapshot_type: latest
language: en
language_score: 0.709077
# Linearization of System of ODEs around Operating Point / Transfer Functions and State Space I have this system of ODEs and I'm trying to get a linearized version of it around the "operating point" $\overline{x}_1 = 1$ $$\left\{\begin{matrix} \ddot{x_1}(t)+2\dot{x_1}(t)+2x_1^2(t)-2\dot{x_2}(t)=0 \\ 2\ddot{x_2}(t)+2\dot{x_2}(t)-2\dot{x_1}(t)=f(t) \end{matrix}\right.$$ So I define a small perturbation $\delta x_1$, $\delta x_2$ and $\delta f$ around the operating point $\overline{x}_1$, $\overline{x}_2$ and $\overline{f}$ $$\delta x_1 = x_1 - \overline{x}_1 \Rightarrow \dot{x_1} = \dot{\delta x_1} \Rightarrow \ddot{x_1} = \ddot{\delta x_1}$$ $$\delta x_2 = x_2 - \overline{x}_2 \Rightarrow \dot{x_2} = \dot{\delta x_2} \Rightarrow \ddot{x_2} = \ddot{\delta x_2}$$ $$\delta f = f - \overline{f}$$ I use Taylor polynomial to linearize $x_1^2(t)$ around $\overline{x}_1=1$ as $$x_1^2 \approx \overline{x}_1^2 + 2\overline{x}_1 \delta x_1 = 1 + 2\delta x_1$$ I replace all in the original equations: $$\left\{\begin{matrix}\delta\ddot{x_1}(t)+2\delta\dot{x_1}(t)+2\left [1+2\delta x_1(t) \right ] - 2 \delta \dot{x_2}(t)=0 \\ 2\delta \ddot{x_2}(t)+2\delta \dot{x_2}(t)-2\delta \dot{x_1}(t)=\overline{f}+\delta f(t) \end{matrix}\right.$$ This system is "linear", but not homogeneous, because it has constant terms $2$ and $\overline{f}$. In fact, through force balance we get that $\overline{f}=2$, so the constant terms should mathematically cancel out somehow. How can I get rid of this constant terms? Is there another (better) way to linearize this system of ODEs around $\overline{x}_1=1$ By the way, I got this systems of ODEs from this physical system: • How did you determine that $x_1=1$ is the operating point? You will need a non-constant $\bar f$, as with a constant one you get $x_1=0$ as equilibrium point, just from physical considerations. Note that $$\frac{d}{dt}\left[\frac12 \dot x_1(t)^2+\dot x_2(t)^2+\frac23x_1(t)^3\right]=f(t)\dot x_2(t)-2(\dot x_1(t)-\dot x_2(t))^2,$$ where the last term continuously loses energy, leading to $x_1 \to 0$. You will need a very specific $f$ to continuously replace that lost energy. Commented May 23, 2018 at 7:24 • This is a problem from a textbook. It specifically ask to linearize about $x_1=1$. Thank you Commented May 23, 2018 at 7:35 When linearising a non-linear system of the form $\dot{x} = g(x,f)$ at an operating point $\bar{x}$ and $\bar{f}$ requires that $g(\bar{x},\bar{f})=0$. Since $\bar{x}_1$ is given and $g(x,f)$ is not a function of $x_2$, then $g(\bar{x},\bar{f})=0$ only has a solution when $x_2$ is omitted from the state space vector, so $x$ only contains $x_1$, $\dot{x}_1$ and $\dot{x}_2$ and no $x_2$.
So $\bar{\dot{x}}_2$ can then be a non-zero constant, which can be chosen such that $g(\bar{x},\bar{f})=0$ can be satisfied.
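As a quick check of the operating-point bookkeeping discussed above (a sympy sketch of my own; the symbol names are mine), substituting x1 = 1 with vanishing accelerations and vanishing x1-dot into the two equations recovers the constant drift x2-dot = 1 and the force balance f-bar = 2 mentioned in the question:

```python
import sympy as sp

x1d, x1dd, x2d, x2dd, f = sp.symbols("x1d x1dd x2d x2dd f")
x1 = 1   # operating point

eq1 = sp.Eq(x1dd + 2 * x1d + 2 * x1**2 - 2 * x2d, 0)
eq2 = sp.Eq(2 * x2dd + 2 * x2d - 2 * x1d, f)

# steady operating point: accelerations and x1' vanish, x2' settles to a constant
steady = {x1dd: 0, x1d: 0, x2dd: 0}
print(sp.solve([eq1.subs(steady), eq2.subs(steady)], [x2d, f]))   # {x2d: 1, f: 2}
```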
url: https://math.stackexchange.com/questions/633757/order-of-conjugate-of-an-element-given-the-order-of-its-conjugate?noredirect=1
fetch_time: 1,627,528,506,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00452.warc.gz
warc_record_offset: 370,976,788
warc_record_length: 40,303
# Order of conjugate of an element given the order of its conjugate Let $G$ is a group and $a, b \in G$. If $a$ has order $6$, then the order of $bab^{-1}$ is... How to find this answer? Sorry for my bad question, but I need this for my study. • Hint: conjugation is an automorphism – dani_s Jan 10 '14 at 14:27 • @dani_s: that would be too much... it is a basic question and you are proposing to look at the automorphism.. Not a good idea i guess.. – user87543 Jan 10 '14 at 15:32 $$|bab^{-1}|=k\to (bab^{-1})^k=e_G$$ and $k$ is the least positive integer. But $e_G=(bab^{-1})^k=ba^kb^{-1}$ so $a^k=e_G$ so $6\le k$. Obviously, $k\le 6$ (Why?) so $k=6$. Two good pieces of advice are already out here that prove the problem directly, but I'd like to decompose and remix them a little. For a group $G$ and any $g\in G$, the map $x\mapsto gxg^{-1}$ is actually a group automorphism (self-isomorphism). This is a good exercise to prove if you haven't already proven it. Intuitively, given an isomorphism $\phi$, $\phi(G)$ looks just like $G$, and $\phi(g)$ has the same group theoretic properties as $g$. (This includes order.) This motivates you to show that $g^n=1$ iff $\phi(g)^n=1$, and so (for minimal choice of $n$) they share the same order. Here's a slightly more general statement for $\phi$'s that aren't necessarily isomorphisms. Let $\phi:G\to H$ be a group homomorphism of finite groups. Then for each $g\in G$, the order of $\phi(g)$ divides the order of $g$. (Try to prove this!) If $\phi$ is an isomorphism, then so is $\phi^{-1}$, and so the order of $\phi(g)$ divides the order of $g$, and the order of $\phi^{-1}(\phi(g))=g$ divides the order of $\phi(g)$, and thus they're equal. • @Andreas It seems this question (and variants) are destined to be prototypical examples of an abstract duplicate (e.g. recall the recent question). In fact, even the comments are becoming duplicate! – Bill Dubuque Jan 12 '14 at 17:56 • @BillDubuque, an optimistic view of the fact that the comments are becoming duplicates is that we are reaching a consensus on a canonical form for answers and comments ;-) – Andreas Caranti Jan 12 '14 at 18:08 Note that $(bab^{-1})^2 = bab^{-1}bab^{-1} = ba^2b^{-1}$. Similarly $(bab^{-1})^n = ba^nb^{-1}$ for any $n$. When will $ba^nb^{-1} = 1$ using the information about $a$? Then you just have to check to see that $ba^mb^{-1} \not = 1$ for any $1 \leq m < n$. • I still don't get it. – Yagami Jan 10 '14 at 14:58 In general, let $o(a)=n$ and $o(bab^{-1})=k$, then $(bab^{-1})^k=ba^kb^{-1}=e$, by Cancellation Law in group, we can get $a^k=e$, since $o(a)=n$, then $k \geq n$ (in fact we can get $n|k$, but in this proof $k \geq n$ is enough). Easy to see that if $k=n$ then $(bab^{-1})^n=ba^nb^{-1}=beb^{-1}=e$, hence $k=n$. CONCLUSION: $o(a)=o(bab^{-1})$.
token_count: 911
char_count: 2,811
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.921875
int_score: 4
crawl: CC-MAIN-2021-31
snapshot_type: latest
language: en
language_score: 0.887827
# Order of conjugate of an element given the order of its conjugate Let $G$ is a group and $a, b \in G$. If $a$ has order $6$, then the order of $bab^{-1}$ is... How to find this answer? Sorry for my bad question, but I need this for my study. • Hint: conjugation is an automorphism – dani_s Jan 10 '14 at 14:27 • @dani_s: that would be too much... it is a basic question and you are proposing to look at the automorphism.. Not a good idea i guess.. – user87543 Jan 10 '14 at 15:32 $$|bab^{-1}|=k\to (bab^{-1})^k=e_G$$ and $k$ is the least positive integer. But $e_G=(bab^{-1})^k=ba^kb^{-1}$ so $a^k=e_G$ so $6\le k$. Obviously, $k\le 6$ (Why?) so $k=6$. Two good pieces of advice are already out here that prove the problem directly, but I'd like to decompose and remix them a little. For a group $G$ and any $g\in G$, the map $x\mapsto gxg^{-1}$ is actually a group automorphism (self-isomorphism). This is a good exercise to prove if you haven't already proven it. Intuitively, given an isomorphism $\phi$, $\phi(G)$ looks just like $G$, and $\phi(g)$ has the same group theoretic properties as $g$. (This includes order.) This motivates you to show that $g^n=1$ iff $\phi(g)^n=1$, and so (for minimal choice of $n$) they share the same order. Here's a slightly more general statement for $\phi$'s that aren't necessarily isomorphisms. Let $\phi:G\to H$ be a group homomorphism of finite groups. Then for each $g\in G$, the order of $\phi(g)$ divides the order of $g$. (Try to prove this!) If $\phi$ is an isomorphism, then so is $\phi^{-1}$, and so the order of $\phi(g)$ divides the order of $g$, and the order of $\phi^{-1}(\phi(g))=g$ divides the order of $\phi(g)$, and thus they're equal. • @Andreas It seems this question (and variants) are destined to be prototypical examples of an abstract duplicate (e.g. recall the recent question). In fact, even the comments are becoming duplicate! – Bill Dubuque Jan 12 '14 at 17:56 • @BillDubuque, an optimistic view of the fact that the comments are becoming duplicates is that we are reaching a consensus on a canonical form for answers and comments ;-) – Andreas Caranti Jan 12 '14 at 18:08 Note that $(bab^{-1})^2 = bab^{-1}bab^{-1} = ba^2b^{-1}$. Similarly $(bab^{-1})^n = ba^nb^{-1}$ for any $n$. When will $ba^nb^{-1} = 1$ using the information about $a$? Then you just have to check to see that $ba^mb^{-1} \not = 1$ for any $1 \leq m < n$. • I still don't get it. – Yagami Jan 10 '14 at 14:58 In general, let $o(a)=n$ and $o(bab^{-1})=k$, then $(bab^{-1})^k=ba^kb^{-1}=e$, by Cancellation Law in group, we can get $a^k=e$, since $o(a)=n$, then $k \geq n$ (in fact we can get $n|k$, but in this proof $k \geq n$ is enough). Easy to see that if $k=n$ then $(bab^{-1})^n=ba^nb^{-1}=beb^{-1}=e$, hence $k=n$.
CONCLUSION: $o(a)=o(bab^{-1})$.
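The claim o(a) = o(bab^{-1}) is easy to spot-check computationally; here is a small sketch of mine using sympy's permutation groups, with an element of order 6 in S6 chosen arbitrarily:

```python
from sympy.combinatorics import Permutation, SymmetricGroup

a = Permutation([1, 0, 3, 4, 2, 5])   # (0 1)(2 3 4): order lcm(2, 3) = 6
assert a.order() == 6

G = SymmetricGroup(6)
for _ in range(5):
    b = G.random()
    print((b * a * b**-1).order())    # conjugation preserves order: always 6
```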
url: https://gamedev.stackexchange.com/questions/138165/how-can-i-move-and-rotate-an-object-in-an-infinity-or-figure-8-trajectory/138167
fetch_time: 1,560,682,448,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00534.warc.gz
warc_record_offset: 453,195,493
warc_record_length: 34,997
# How can I move and rotate an object in an “infinity” or “figure 8” trajectory? I know that the easiest way to move an object with the figure 8 trajectory is: x = cos(t); y = sin(2*t) / 2; but I just don't know how to rotate it, lets says with a new variable r as rotation, how do I merge it into the above formula ? can anyone please advise me on what is the simplest and cheapest way/formula to move and rotate the figure 8 trajectory ? ## 1 Answer The object should point in the direction of the derivative, which is [-sin(t), cos(2t)]. Its angle is atan2(cos(2t), -sin(t)). Edit: OP is apparently asking how to rotate the "trajectory," not the object itself. To rotate the figure, choose an angle, θ, in radians, that you'd like the trajectory to be rotated. The position along this rotated figure is: x = cos(θ) * cos(t) - sin(θ) * sin(2t)/2 y = sin(θ) * cos(t) + cos(θ) * sin(2t)/2 • so how would I modify the formula to get a rotated figure of 8 ? – user1998844 Mar 3 '17 at 18:24 • That is a completely different question than the one I answered. I'll edit my answer with a solution to this question. – Drew Cummins Mar 3 '17 at 18:31
token_count: 325
char_count: 1,153
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 3.59375
int_score: 4
crawl: CC-MAIN-2019-26
snapshot_type: latest
language: en
language_score: 0.921025
# How can I move and rotate an object in an “infinity” or “figure 8” trajectory? I know that the easiest way to move an object with the figure 8 trajectory is: x = cos(t); y = sin(2*t) / 2; but I just don't know how to rotate it, lets says with a new variable r as rotation, how do I merge it into the above formula ? can anyone please advise me on what is the simplest and cheapest way/formula to move and rotate the figure 8 trajectory ? ## 1 Answer The object should point in the direction of the derivative, which is [-sin(t), cos(2t)]. Its angle is atan2(cos(2t), -sin(t)). Edit: OP is apparently asking how to rotate the "trajectory," not the object itself. To rotate the figure, choose an angle, θ, in radians, that you'd like the trajectory to be rotated. The position along this rotated figure is: x = cos(θ) * cos(t) - sin(θ) * sin(2t)/2 y = sin(θ) * cos(t) + cos(θ) * sin(2t)/2 • so how would I modify the formula to get a rotated figure of 8 ? – user1998844 Mar 3 '17 at 18:24 • That is a completely different question than the one I answered. I'll edit my answer with a solution to this question.
– Drew Cummins Mar 3 '17 at 18:31
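The answer's rotation formula is just the standard 2-D rotation matrix applied to every point of the curve; a short matplotlib sketch of mine (with θ = 45° chosen arbitrarily) shows the original and the rotated figure 8:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
x0, y0 = np.cos(t), np.sin(2 * t) / 2   # unrotated figure 8

theta = np.radians(45)                  # rotation applied to the whole trajectory
x = np.cos(theta) * x0 - np.sin(theta) * y0
y = np.sin(theta) * x0 + np.cos(theta) * y0

plt.plot(x0, y0, label="original")
plt.plot(x, y, label="rotated 45 deg")
plt.axis("equal"); plt.legend(); plt.show()
```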
url: https://engineering.stackexchange.com/questions/54395/how-much-force-is-needed-to-break-off-the-stick
fetch_time: 1,719,331,888,000,000,000
content_mime_type: text/html
warc_filename: crawl-data/CC-MAIN-2024-26/segments/1718198866143.18/warc/CC-MAIN-20240625135622-20240625165622-00431.warc.gz
warc_record_offset: 196,296,852
warc_record_length: 39,140
# How much force is needed to break off the stick Let's consider the following figure The grey box contains a blue stick which is fixed. The blue stick has a length of $$a+b+c$$ and two diameters $$f,h$$. The diameter $$h$$ describes the part $$b$$ of the stick. The stick is fixed in the plane but the plane is not connected to the grey box. A force $$F$$ is pushing against the withe plane like in the picture. How much force is needed to break off the stick in part $$b$$? • How much effort have you applied to try to obtain a proposed solution? Commented Feb 26, 2023 at 20:03 • I don't have an idea how I could solve this because I had never a mechanical problem with a notch. What I also can say is that I see two different ways how this plane could move: One way would be striaght downward if the force is close to the notch or the plane is rotated if the force comes from the outer part of the plane. Commented Feb 26, 2023 at 20:26 • I just have some knowledge about bending sticks and not about stuff like in my picture. Commented Feb 26, 2023 at 20:43 • Apply your knowledge about bending sticks to try to solve the problem. We should like to see how far that takes you. Commented Feb 26, 2023 at 20:57 • Does the white plane slide against the grey plane or does it tilt ie pivot at the lower left corner? Commented Feb 26, 2023 at 21:18 ## 1 Answer We assume the distance from F to the hinge to be $$X_F=a+b+c+d/2$$ We calculate the equivalent I of the cantilever beam, with the parallel axis. When it bends it will rotate about a point at the lower corner of the gray support, call it point A. Let's annotate the thickness of the bar, B. $$I_{Beam} =I_{stick}+ A_{stick}*Y^2_{stick}$$ $$I_{stick}= bh^3/12$$ $$I_{Beam}=bh^3/12+bh(e+f/2)^2$$ we assume the stick will break at yield stress and ignore 2nd hardening, or if we have it we plug it. $$\sigma_y=\frac{MC}{I_{Beam}}=\frac{(F*x)(e+f/2)}{bh^3/12+bh(e+f/2)^2}$$ $$F*X=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{e+f/2}$$ $$F=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{(e*f/2)*(a+b+c+d/2)}$$
token_count: 611
char_count: 2,057
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
score: 4.21875
int_score: 4
crawl: CC-MAIN-2024-26
snapshot_type: longest
language: en
language_score: 0.930916
# How much force is needed to break off the stick Let's consider the following figure The grey box contains a blue stick which is fixed. The blue stick has a length of $$a+b+c$$ and two diameters $$f,h$$. The diameter $$h$$ describes the part $$b$$ of the stick. The stick is fixed in the plane but the plane is not connected to the grey box. A force $$F$$ is pushing against the withe plane like in the picture. How much force is needed to break off the stick in part $$b$$? • How much effort have you applied to try to obtain a proposed solution? Commented Feb 26, 2023 at 20:03 • I don't have an idea how I could solve this because I had never a mechanical problem with a notch. What I also can say is that I see two different ways how this plane could move: One way would be striaght downward if the force is close to the notch or the plane is rotated if the force comes from the outer part of the plane. Commented Feb 26, 2023 at 20:26 • I just have some knowledge about bending sticks and not about stuff like in my picture. Commented Feb 26, 2023 at 20:43 • Apply your knowledge about bending sticks to try to solve the problem. We should like to see how far that takes you. Commented Feb 26, 2023 at 20:57 • Does the white plane slide against the grey plane or does it tilt ie pivot at the lower left corner? Commented Feb 26, 2023 at 21:18 ## 1 Answer We assume the distance from F to the hinge to be $$X_F=a+b+c+d/2$$ We calculate the equivalent I of the cantilever beam, with the parallel axis. When it bends it will rotate about a point at the lower corner of the gray support, call it point A. Let's annotate the thickness of the bar, B. $$I_{Beam} =I_{stick}+ A_{stick}*Y^2_{stick}$$ $$I_{stick}= bh^3/12$$ $$I_{Beam}=bh^3/12+bh(e+f/2)^2$$ we assume the stick will break at yield stress and ignore 2nd hardening, or if we have it we plug it.
$$\sigma_y=\frac{MC}{I_{Beam}}=\frac{(F*x)(e+f/2)}{bh^3/12+bh(e+f/2)^2}$$ $$F*X=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{e+f/2}$$ $$F=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{(e*f/2)*(a+b+c+d/2)}$$
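A rough numerical sketch of the breaking-force formula derived in the answer above. Every input below (yield stress, the lengths a–d, the offsets e, f, h and the bar thickness B) is a made-up example value rather than data from the question, and the denominator uses (e + f/2) as in the line preceding the final formula (the (e*f/2) in the last line looks like a typo).

```r
# Evaluate F = sigma_y * I_beam / ((e + f/2) * (a + b + c + d/2)) for example inputs.
sigma_y <- 250e6                              # assumed yield stress (Pa), e.g. mild steel
a <- 0.04; b <- 0.02; c <- 0.04; d <- 0.03    # assumed lengths along the stick (m)
e <- 0.010; f <- 0.012; h <- 0.008            # assumed offset and diameters (m)
B <- 0.010                                    # assumed thickness of the bar (m)

lever_arm <- a + b + c + d / 2                # X_F: distance from F to the pivot
y_bar     <- e + f / 2                        # distance from the pivot to the stick axis

I_beam  <- B * h^3 / 12 + B * h * y_bar^2     # parallel-axis theorem, as in the answer
F_break <- sigma_y * I_beam / (y_bar * lever_arm)
F_break                                       # force (N) at which part b reaches yield
```

Swapping in the real dimensions and the actual yield stress of the stick material is all that is needed to get a usable estimate.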
https://stats.stackexchange.com/questions/592820/how-can-i-find-the-expectation-value-to-this-problem
1,721,790,883,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763518154.91/warc/CC-MAIN-20240724014956-20240724044956-00116.warc.gz
475,454,993
38,560
# How can I find the expectation value to this problem? At a wedding reception on an evening the representative of the host is taking it as an occasion to exercise and explain a classical analytic problem. specifically, he insists that he would start serving the food only when the first table, which is arranged for 12 guests to dine together, has guests born in every twelve months of the year. assume that any given guest is equally likely to be born in any of the twelve months of the year, and that new guests were arriving at every two minutes then. what is the expected waiting time of the first arriving guest before the food gets served eventually? Since this looked like a Coupon Collector's problem variation, my initial approach was to determine the sum of the expected value of each guests of unique birth months. X ~ FS(p) [First Success Distribution] X = time needed until food gets served $$E[X] = E[X1] + E[X2] + ... + E[X12]$$ $$=> E[X] = 12/12 + 12/11 + ... + 12/1$$ However, this is where i ran into problem, since I don't know how to handle the arrival at every two minutes in my equation. Should I just multiply by 2? Or am i missing something very obvious or basic trivia? Help will be appreciated.
279
1,228
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2024-30
latest
en
0.97415
# How can I find the expectation value to this problem? At a wedding reception on an evening the representative of the host is taking it as an occasion to exercise and explain a classical analytic problem. specifically, he insists that he would start serving the food only when the first table, which is arranged for 12 guests to dine together, has guests born in every twelve months of the year. assume that any given guest is equally likely to be born in any of the twelve months of the year, and that new guests were arriving at every two minutes then. what is the expected waiting time of the first arriving guest before the food gets served eventually? Since this looked like a Coupon Collector's problem variation, my initial approach was to determine the sum of the expected value of each guests of unique birth months.
X ~ FS(p) [First Success Distribution] X = time needed until food gets served $$E[X] = E[X1] + E[X2] + ... + E[X12]$$ $$=> E[X] = 12/12 + 12/11 + ... + 12/1$$ However, this is where i ran into problem, since I don't know how to handle the arrival at every two minutes in my equation.
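A quick numerical check of the coupon-collector reasoning above. It assumes the first guest sits down at time 0 and one new guest arrives every two minutes, so the first guest waits 2·(N − 1) minutes, where N is the number of arrivals needed to cover all twelve birth months; that timing convention is my reading of the problem, not something stated explicitly.

```r
# Exact expectation via the coupon-collector sum, plus a Monte Carlo check.
H12        <- sum(12 / (12:1))     # 12/12 + 12/11 + ... + 12/1 = expected number of guests
exact_wait <- 2 * (H12 - 1)        # minutes the first guest waits, ~72.5

set.seed(1)
sim_one <- function() {
  seen <- integer(0)
  n    <- 0
  while (length(unique(seen)) < 12) {   # keep seating guests until all 12 months appear
    seen <- c(seen, sample.int(12, 1))
    n    <- n + 1
  }
  2 * (n - 1)
}
mc_wait <- mean(replicate(1e4, sim_one()))

c(exact = exact_wait, monte_carlo = mc_wait)
```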
https://math.stackexchange.com/questions/3003033/show-lim-x-to-x-0-fxx-x-0-0-when-f-mathbbr-subset-mathbbr
1,563,736,846,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00476.warc.gz
468,784,586
36,310
# Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing. Show $$\lim_{x \to x_0^+} f(x)(x-x_0) =0$$ when $$f(\mathbb{R}) \subset \mathbb{R}^+$$ & monotone increasing. Try I need to show, $$\forall \epsilon >0, \exists \delta >0 : x \in (x_0, x_0 + \delta) \Rightarrow |f(x) (x-x_0)| < \epsilon$$ I think I could find some upper bound $$M >0$$ such that $$|f(x) (x-x_0)| \le M |x - x_0|$$. Let $$M = f(x_0 + \epsilon)$$, and let $$\delta = \frac{\epsilon}{\max \{2M, 2 \}}$$, then clearly $$f(x) \le f(x_0 + \epsilon) = M$$ But I'm not sure $$|f(x) (x-x_0)| \le M |x - x_0|$$. Any hint about how I should proceed? Hint: Observe \begin{align} |f(x)(x-x_0)|\leq |f(x_0)||x-x_0| \end{align} for all $$x\leq x_0$$. Use $$M=f(x_0+1)$$ and cosider $$\delta=\min\{\frac{1}{2},\frac{\epsilon}{2M}\}$$. Fix $$\varepsilon>0$$. Let $$M=f(x_0+1)$$ and choose $$\delta=\mathrm{min}\{1,\frac{\varepsilon}{M}\}$$. For each $$x\in(x_0,x_0+\delta)$$, $$|f(x)|\leq M$$ since $$f$$ is strictly increasing. Thus, $$|f(x)(x-x_0)|\leq M|x-x_0|.
473
1,083
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
3.765625
4
CC-MAIN-2019-30
latest
en
0.547747
# Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing. Show $$\lim_{x \to x_0^+} f(x)(x-x_0) =0$$ when $$f(\mathbb{R}) \subset \mathbb{R}^+$$ & monotone increasing. Try I need to show, $$\forall \epsilon >0, \exists \delta >0 : x \in (x_0, x_0 + \delta) \Rightarrow |f(x) (x-x_0)| < \epsilon$$ I think I could find some upper bound $$M >0$$ such that $$|f(x) (x-x_0)| \le M |x - x_0|$$. Let $$M = f(x_0 + \epsilon)$$, and let $$\delta = \frac{\epsilon}{\max \{2M, 2 \}}$$, then clearly $$f(x) \le f(x_0 + \epsilon) = M$$ But I'm not sure $$|f(x) (x-x_0)| \le M |x - x_0|$$. Any hint about how I should proceed? Hint: Observe \begin{align} |f(x)(x-x_0)|\leq |f(x_0)||x-x_0| \end{align} for all $$x\leq x_0$$. Use $$M=f(x_0+1)$$ and cosider $$\delta=\min\{\frac{1}{2},\frac{\epsilon}{2M}\}$$. Fix $$\varepsilon>0$$. Let $$M=f(x_0+1)$$ and choose $$\delta=\mathrm{min}\{1,\frac{\varepsilon}{M}\}$$. For each $$x\in(x_0,x_0+\delta)$$, $$|f(x)|\leq M$$ since $$f$$ is strictly increasing.
Thus, $$|f(x)(x-x_0)|\leq M|x-x_0|.$$
https://math.stackexchange.com/questions/1762036/why-cant-you-count-up-to-aleph-null
1,702,151,622,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00443.warc.gz
417,244,395
37,938
# Why can't you count up to aleph null? Recently I learned about the infinite cardinal $\aleph_0$, and stumbled upon a seeming contradiction. Here are my assumptions based on what I learned: 1. $\aleph_0$ is the cardinality of the natural numbers 2. $\aleph_0$ is larger than all finite numbers, and thus cannot be reached simply by counting up from 1. But then I started wondering: the cardinality of the set $\{1\}$ is $1$, the cardinality of the set $\{1, 2\}$ is $2$, the cardinality of the set $\{1, 2, 3\}$ is 3, and so on. So I drew the conclusion that the cardinality of the set $\{1, 2, \ldots n\}$ is $n$. Based on this conclusion, if the cardinality of the natural numbers is $\aleph_0$, then the set of natural numbers could be denoted as $\{1, 2, \ldots \aleph_0\}$. But such a set implies that $\aleph_0$ can be reached by counting up from $1$, which contradicts my assumption #2 above. This question has been bugging me for a while now... I'm not sure where I've made a mistake in my reasoning or if I have even used the correct mathematical terms/question title/tags to describe it, but I'd sure appreciate your help. • Can you count to $\aleph_0$?. I am not even going to start to see if I can., – user328032 Apr 28, 2016 at 1:19 • It seems to me that you want this to be an ordered set, but it does not really make sense to tack on $\aleph_0$ to the end in the way that you want. Apr 28, 2016 at 1:20 • @CameronWilliams Yes, but then what would be the last element of the set? Apr 28, 2016 at 1:21 • I can count up to $\aleph_0$. Just give me $\aleph_0$ seconds added to my life and I hope I will be able to be patient enough to do this... Countable doesn't mean you can count to it, it just means it contains the whole numbers excluding all the rational decimals between them. Apr 28, 2016 at 1:22 • @Timtech That's the thing. There isn't a "last" element here. There is a maximal element, but not a last. Last implies that you can reach that element in finitely many steps. "Last" is somewhat of a colloquialism. Apr 28, 2016 at 1:22 This is a good example where intuition about a pattern breaks down; what is true of finite sets is not true of infinite sets in general. The natural numbers $\textit{cannot}$ be denoted by the set $A=\{1,2,...,\aleph_0\}$ as the set $\aleph_0$ is not a natural number. It is true that the cardinality of $A$ is $\aleph_0$ (a good exercise), but it contains more than just natural numbers. If $\aleph_0$ were a natural number then, as you point out, we would have a contradiction. However $\aleph_0$ is the $\textit{cardinality}$ of the natural numbers, and not a natural number itself. By definition, $\aleph_0$ is the least ordinal number with which the set $\omega$ of natural numbers may be put into bijection. • Both... In $ZFC$ $\textit{everything}$ is a set, but more explicitly, the definition of cardinal numbers I know is this: Let $A$ be a set. Then the cardinal number of $A$ is the least ordinal $\kappa$ such that there exists a bijection $f: \kappa \to A$. Now by definition, ordinals are transitive sets that are well-ordered by $"\in"$, and since cardinals are in particular ordinals, they are sets. Since $\aleph_0$ is a cardinal number, it is also a set. Apr 28, 2016 at 22:01 $$\{1,2,\ldots,\text{ an infinite list of numbers },\ldots , \aleph_0\}$$
950
3,329
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.75
4
CC-MAIN-2023-50
latest
en
0.947401
# Why can't you count up to aleph null? Recently I learned about the infinite cardinal $\aleph_0$, and stumbled upon a seeming contradiction. Here are my assumptions based on what I learned: 1. $\aleph_0$ is the cardinality of the natural numbers 2. $\aleph_0$ is larger than all finite numbers, and thus cannot be reached simply by counting up from 1. But then I started wondering: the cardinality of the set $\{1\}$ is $1$, the cardinality of the set $\{1, 2\}$ is $2$, the cardinality of the set $\{1, 2, 3\}$ is 3, and so on. So I drew the conclusion that the cardinality of the set $\{1, 2, \ldots n\}$ is $n$. Based on this conclusion, if the cardinality of the natural numbers is $\aleph_0$, then the set of natural numbers could be denoted as $\{1, 2, \ldots \aleph_0\}$. But such a set implies that $\aleph_0$ can be reached by counting up from $1$, which contradicts my assumption #2 above. This question has been bugging me for a while now... I'm not sure where I've made a mistake in my reasoning or if I have even used the correct mathematical terms/question title/tags to describe it, but I'd sure appreciate your help. • Can you count to $\aleph_0$?. I am not even going to start to see if I can., – user328032 Apr 28, 2016 at 1:19 • It seems to me that you want this to be an ordered set, but it does not really make sense to tack on $\aleph_0$ to the end in the way that you want. Apr 28, 2016 at 1:20 • @CameronWilliams Yes, but then what would be the last element of the set? Apr 28, 2016 at 1:21 • I can count up to $\aleph_0$. Just give me $\aleph_0$ seconds added to my life and I hope I will be able to be patient enough to do this... Countable doesn't mean you can count to it, it just means it contains the whole numbers excluding all the rational decimals between them. Apr 28, 2016 at 1:22 • @Timtech That's the thing. There isn't a "last" element here. There is a maximal element, but not a last. Last implies that you can reach that element in finitely many steps. "Last" is somewhat of a colloquialism. Apr 28, 2016 at 1:22 This is a good example where intuition about a pattern breaks down; what is true of finite sets is not true of infinite sets in general.
The natural numbers $\textit{cannot}$ be denoted by the set $A=\{1,2,...,\aleph_0\}$ as the set $\aleph_0$ is not a natural number.
https://stats.stackexchange.com/questions/591229/generating-random-variable-which-has-a-power-distribution-of-box-and-tiao-1962
1,709,588,368,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947476532.70/warc/CC-MAIN-20240304200958-20240304230958-00885.warc.gz
552,287,119
41,580
# Generating random variable which has a power distribution of Box and Tiao (1962) Box and Tiao (Biometrika 1962) use a distribution whose density has the following form: $$f(x; \mu, \sigma, \alpha) = \omega \exp\left\{ -\frac{1}{2} \Big\vert\frac{x-\mu}{\sigma}\Big\vert^{\frac{2}{(1+\alpha)}} \right\},$$ where $$\omega^{-1} = [\Gamma(g(\alpha)]\,2^{g(\alpha)}\sigma$$ is the normalizing constant with $$g(\alpha) = \frac{3}{2} + \frac{\alpha}{2},$$ $$\sigma \gt 0,$$ and $$-1 \lt \alpha \lt 1$$. When $$\alpha=0$$ this reduces to the normal distribution; when $$\alpha=1$$ it reduces to the double exponential (Laplace) distribution, and when $$\alpha \to -1^{+}$$ it tends to a uniform distribution. How can I generate random numbers from this distribution for any such value of $$\alpha$$? Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960). Because $$\mu$$ and $$\sigma$$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $$\exp(-z^p/2)$$ where $$p = 2/(1+\alpha)$$ and $$z \ge 0.$$ Changing variables to $$y = z^p$$ for $$0\lt p \lt \infty$$ changes the probability element to $$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$ Since $$p = 2/(1+\alpha),$$ this is proportional to a scaled Gamma$$(1/p)$$ = Gamma$$((1+\alpha)/2)$$ density, also known as a Chi-squared$$(1+\alpha)$$ density. Thus, to generate a value from such a distribution, undo all these transformations in reverse order: Generate a value $$Y$$ from a Chi-squared$$(1+\alpha)$$ distribution, raise it to the $$2/(1+\alpha)$$ power, randomly negate it (with probability $$1/2$$), multiply by $$\sigma,$$ and add $$\mu.$$ This R code exhibits one such implementation. n is the number of independent values to draw. rf <- function(n, mu, sigma, alpha) { y <- rchisq(n, 1 + alpha) # A chi-squared variate u <- sample(c(-1,1), n, replace = TRUE) # Random sign change y^((1 + alpha)/2) * u * sigma + mu } Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $$f.$$ Generating Chi-squared variates with parameter $$1+\alpha$$ near zero is problematic. You can see this code works for $$1+\alpha = 0.1$$ (bottom left), but watch out when it gets much smaller than this: The spike and gap in the middle should not be there. The problem lies with floating point arithmetic: even double precision does not suffice. By this point, though, the uniform distribution looks like a good approximation. ### Appendix This R code produced the plots. It uses the showtext library to access a Google font for the axis numbers and labels. Few of these fonts, if any, support Greek or math characters, so I had to use the default font for the plot titles (using mtext). Otherwise, everything is done with the base R plotting functions hist and curve. Don't be concerned about the relatively large simulation size: the total computation time is far less than one second to generate these 400,000 variates. library(showtext) showtext_auto() # # Density calculation. # f <- function(x, mu, sigma, alpha) exp(-1/2 * abs((x - mu) / sigma) ^ (2 / (1 + alpha))) C <- function(mu, sigma, alpha, ...) integrate(\(x) f(x, mu, sigma, alpha), -Inf, Inf, ...)\$value # # Specify the distributions to plot. 
# Parameters <- list(list(mu = 0, sigma = 1, alpha = 0), list(mu = 10, sigma = 2, alpha = 1/2), list(mu = 0, sigma = 3, alpha = -0.9), list(mu = 0, sigma = 4, alpha = 0.99)) # # Generate the samples and plot summaries of them. # n.sim <- 1e5 # Sample size per plot set.seed(17) # For reproducibility pars <- par(mfrow = c(2, 2), mai = c(1/2, 3/4, 3/8, 1/8)) # Shrink the margins for (parameters in Parameters) with(parameters, { x <- rf(n.sim, mu, sigma, alpha) hist(x, freq = FALSE, breaks = 100, family = "Informal", xlab = "", main = "", col = gray(0.9), border = gray(0.7)) mtext(bquote(list(mu==.(mu), sigma==.(sigma), alpha==.(alpha))), cex = 1.25, side = 3, line = 0) omega <- 1 / C(mu, sigma, alpha) # Compute the normalizing constant curve(omega * f(x, mu, sigma, alpha), add = TRUE, lwd = 2, col = "Red") }) par(pars) • That's some clean code... – Zen Oct 5, 2022 at 17:20 • Nice and very fastly delivered:) answer. – Yves Oct 5, 2022 at 17:32 • Beautiful solution. Thank you! Oct 6, 2022 at 18:19 • @whuber: Can you please show us how you generated the lovely plots? Oct 6, 2022 at 18:26 • @user67724 Done. – whuber Oct 6, 2022 at 19:05
1,414
4,604
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 30, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.703125
4
CC-MAIN-2024-10
latest
en
0.750842
# Generating random variable which has a power distribution of Box and Tiao (1962) Box and Tiao (Biometrika 1962) use a distribution whose density has the following form: $$f(x; \mu, \sigma, \alpha) = \omega \exp\left\{ -\frac{1}{2} \Big\vert\frac{x-\mu}{\sigma}\Big\vert^{\frac{2}{(1+\alpha)}} \right\},$$ where $$\omega^{-1} = [\Gamma(g(\alpha)]\,2^{g(\alpha)}\sigma$$ is the normalizing constant with $$g(\alpha) = \frac{3}{2} + \frac{\alpha}{2},$$ $$\sigma \gt 0,$$ and $$-1 \lt \alpha \lt 1$$. When $$\alpha=0$$ this reduces to the normal distribution; when $$\alpha=1$$ it reduces to the double exponential (Laplace) distribution, and when $$\alpha \to -1^{+}$$ it tends to a uniform distribution. How can I generate random numbers from this distribution for any such value of $$\alpha$$? Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960). Because $$\mu$$ and $$\sigma$$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $$\exp(-z^p/2)$$ where $$p = 2/(1+\alpha)$$ and $$z \ge 0.$$ Changing variables to $$y = z^p$$ for $$0\lt p \lt \infty$$ changes the probability element to $$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$ Since $$p = 2/(1+\alpha),$$ this is proportional to a scaled Gamma$$(1/p)$$ = Gamma$$((1+\alpha)/2)$$ density, also known as a Chi-squared$$(1+\alpha)$$ density. Thus, to generate a value from such a distribution, undo all these transformations in reverse order: Generate a value $$Y$$ from a Chi-squared$$(1+\alpha)$$ distribution, raise it to the $$2/(1+\alpha)$$ power, randomly negate it (with probability $$1/2$$), multiply by $$\sigma,$$ and add $$\mu.$$ This R code exhibits one such implementation. n is the number of independent values to draw. rf <- function(n, mu, sigma, alpha) { y <- rchisq(n, 1 + alpha) # A chi-squared variate u <- sample(c(-1,1), n, replace = TRUE) # Random sign change y^((1 + alpha)/2) * u * sigma + mu } Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $$f.$$ Generating Chi-squared variates with parameter $$1+\alpha$$ near zero is problematic.
You can see this code works for $$1+\alpha = 0.1$$ (bottom left), but watch out when it gets much smaller than this: The spike and gap in the middle should not be there.
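A small usage check for the rf() sampler quoted in the answer above (repeated here so the snippet runs on its own). With α = 0 the density reduces to a normal, so the sample mean and standard deviation should land near μ and σ; the particular μ = 3, σ = 2 are arbitrary test values.

```r
rf <- function(n, mu, sigma, alpha) {        # sampler exactly as given in the answer
  y <- rchisq(n, 1 + alpha)                  # chi-squared variate
  u <- sample(c(-1, 1), n, replace = TRUE)   # random sign change
  y^((1 + alpha) / 2) * u * sigma + mu
}

set.seed(17)
x <- rf(1e5, mu = 3, sigma = 2, alpha = 0)   # alpha = 0 should give N(3, 2^2)
c(mean = mean(x), sd = sd(x))                # expect roughly 3 and 2
```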
https://math.stackexchange.com/questions/1184338/gibbs-phenomenon-and-fourier-series
1,571,861,776,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00216.warc.gz
581,563,792
32,428
# Gibbs Phenomenon and Fourier Series a) Show the partial sum $$S = \frac{4}{\pi} \sum_{n=1}^N \frac{\sin((2n-1)t)}{2n-1}$$ which may also be written as $$\frac{2}{\pi}\int_0^x\frac{\sin(2Nt)}{\sin(t)}dt$$ has extrema at $x= \frac{m\pi}{2N}$ where $m$ is any positive integer except m=2kN, k also integer. Solution: The derivative of $$S = \frac{2}{\pi}\frac{\sin(2Nx)}{\sin(x)} = 0$$ $$\text{where }\sin(2Nx)=0,$$ $\sin(x)$ cannot equal zero. $\sin(x) = 0$ where $x$ is a multiple of $\pi$. Therefore, $$\sin(2Nx)=0$$ where $x=\frac{m\pi}{2N}$ however $\sin(x)$ cannot equal zero, $$\sin(\frac{m\pi}{2N})\neq 0$$ som is any positive integer except $m=2kN$, $k$ also integer. Is this complete?! b) Consider the first extrema to the right of the discontinuity, located at $x=\frac{\pi}{2N}$. By considering a suitable small angle formula show that the value of the sum at this point $$S(\frac{\pi}{2N})≈\frac{2}{\pi}\int_0^{\pi} \frac{\sin(u)}{u}du$$ Solution: I'm not sure which small angle formula i'm meant to considering?! Taylor series of sin? or how to consider it? I see that $$S(\frac{\pi}{2N}) = \frac{4}{\pi} (\sin(\frac{\pi}{2N})+\frac{\sin(\frac{3\pi}{2N})}{3}+\frac{\frac{\sin(5\pi)}{2N}}{5}+\ldots)$$ $$=\frac{2}{\pi}(\frac{\pi}{N}(\frac{\sin(\frac{\pi}{2N})}{\frac{\pi}{2N}}+\frac{\sin(\frac{3\pi}{2N})}{\frac{3\pi}{2N}}+\frac{\sin(\frac{5\pi}{2N})}{\frac{5\pi}{2N}}+\cdots)$$ c) and by getting a computer to evaluate this numerically show that $$S(\frac{\pi}{2N})≈1.1790$$ independently of $N$. Not really sure how I could show this? Hence comment on the accuracy of Fourier series at discontinuities (also known as Gibbs phenomenon). Given that the error at $\frac{π}{2N}$ is nearly constant explain why the Fourier Convergence theorem is, or is not, valid for this problem? Where a function has a jump discontinuity, the fourier series will overshoot as it approaches the discontinuity. As the number of terms in the fouler series increases, the amount of overshoot will converge to a constant percentage (around 17.9) of the amount of the jump could someone explain this to me please? -sorry for the long winded question! In this answer, it is shown that the overshoot, on each side, is approximately $$\frac1\pi\int_0^\pi\frac{\sin(t)}{t}\mathrm{d}t-\frac12=0.089489872236$$ of the total jump. Thus, the overshoot you mention is twice that.
799
2,373
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2019-43
latest
en
0.841029
# Gibbs Phenomenon and Fourier Series a) Show the partial sum $$S = \frac{4}{\pi} \sum_{n=1}^N \frac{\sin((2n-1)t)}{2n-1}$$ which may also be written as $$\frac{2}{\pi}\int_0^x\frac{\sin(2Nt)}{\sin(t)}dt$$ has extrema at $x= \frac{m\pi}{2N}$ where $m$ is any positive integer except m=2kN, k also integer. Solution: The derivative of $$S = \frac{2}{\pi}\frac{\sin(2Nx)}{\sin(x)} = 0$$ $$\text{where }\sin(2Nx)=0,$$ $\sin(x)$ cannot equal zero. $\sin(x) = 0$ where $x$ is a multiple of $\pi$. Therefore, $$\sin(2Nx)=0$$ where $x=\frac{m\pi}{2N}$ however $\sin(x)$ cannot equal zero, $$\sin(\frac{m\pi}{2N})\neq 0$$ som is any positive integer except $m=2kN$, $k$ also integer. Is this complete?! b) Consider the first extrema to the right of the discontinuity, located at $x=\frac{\pi}{2N}$. By considering a suitable small angle formula show that the value of the sum at this point $$S(\frac{\pi}{2N})≈\frac{2}{\pi}\int_0^{\pi} \frac{\sin(u)}{u}du$$ Solution: I'm not sure which small angle formula i'm meant to considering?! Taylor series of sin? or how to consider it? I see that $$S(\frac{\pi}{2N}) = \frac{4}{\pi} (\sin(\frac{\pi}{2N})+\frac{\sin(\frac{3\pi}{2N})}{3}+\frac{\frac{\sin(5\pi)}{2N}}{5}+\ldots)$$ $$=\frac{2}{\pi}(\frac{\pi}{N}(\frac{\sin(\frac{\pi}{2N})}{\frac{\pi}{2N}}+\frac{\sin(\frac{3\pi}{2N})}{\frac{3\pi}{2N}}+\frac{\sin(\frac{5\pi}{2N})}{\frac{5\pi}{2N}}+\cdots)$$ c) and by getting a computer to evaluate this numerically show that $$S(\frac{\pi}{2N})≈1.1790$$ independently of $N$. Not really sure how I could show this? Hence comment on the accuracy of Fourier series at discontinuities (also known as Gibbs phenomenon). Given that the error at $\frac{π}{2N}$ is nearly constant explain why the Fourier Convergence theorem is, or is not, valid for this problem? Where a function has a jump discontinuity, the fourier series will overshoot as it approaches the discontinuity. As the number of terms in the fouler series increases, the amount of overshoot will converge to a constant percentage (around 17.9) of the amount of the jump could someone explain this to me please? -sorry for the long winded question!
In this answer, it is shown that the overshoot, on each side, is approximately $$\frac1\pi\int_0^\pi\frac{\sin(t)}{t}\mathrm{d}t-\frac12=0.089489872236$$ of the total jump.
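Both constants mentioned above can be verified with a one-line numerical integration; this is only a verification sketch, with the sinc integrand extended by hand to avoid the 0/0 at the origin.

```r
sinc  <- function(u) ifelse(u == 0, 1, sin(u) / u)   # continuous extension at u = 0
Si_pi <- integrate(sinc, 0, pi)$value                # Si(pi) ~ 1.8519

peak      <- 2 / pi * Si_pi      # value of the partial sums at x = pi/(2N), ~1.1790
overshoot <- Si_pi / pi - 1 / 2  # per-side overshoot fraction, ~0.0895 as quoted above

c(peak = peak, overshoot = overshoot)
```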
https://electronics.stackexchange.com/questions/423463/will-linear-voltage-regulator-step-up-current/423464
1,566,195,672,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027314667.60/warc/CC-MAIN-20190819052133-20190819074133-00021.warc.gz
441,024,035
31,916
# Will linear voltage regulator step up current? I have a regulated 9 volt 300mA power supply I want to step it down to 5 volt using Linear Voltage Regulator LM7805 , I want to know how much current can I can draw at 5 volts, will it be 300mA or will it be close to 540mA, since power = voltage * current. • With a 9V*0.3A=2.7W supply you can only achieve >90% efficiency with an SMPS to store energy and transfer with rapid switching. – Sunnyskyguy EE75 Feb 20 at 20:32 No. A linear regulator works by burning off excess voltage as heat, therefore current in equals current out. The linear regulator is essentially throwing away the excess energy in order to regulate, rather than converting it to the output. You need a switching regulator if you want to take advantage of power in equals power out in order to convert a high input voltage, low input current into a lower output voltage, higher output current. $$\P_{in} = P_{out}\$$ but for a linear regulator it looks like this: $$\V_{in} \times I_{in} = (V_{out} \times I_{out}) + [(V_{in} - V_{out}) \times I_{out}]\$$ The last term in square brackets is the excess voltage being converted to heat. If we expand and simplify the right hand side, a bunch of things cancel out and we get: $$\V_{in} \times I_{in} = V_{in} \times I_{out}\$$ Therefore: $$\I_{in} = I_{out}\$$ No, it won't step up current. You can think of a regulator as a resistor that adjusts it's resistance to keep the voltage stable. However, you can buy DC to DC converters that 'boost' the current. But DC to DC converters are usually called by what they do to the voltage, not the current. A boost converter 'boosts' or steps up the voltage from a lower voltage to a higher one (at the expense of current and a small loss in power) A buck converter or step down converter takes a higher voltage into a lower one (with potentially more current than is on the input of the converter, also with a small loss) They actually make 78XX series DC to DC converters that are drop in compatible with linear regulators that buck or boost voltage. • "drop in compatible" - whee, thanks! that's an improvement a hobbyist like can easily overlook – quetzalcoatl Feb 21 at 10:16 since power = voltage * current. It is true when applied to both sides of devices that transform electricity (as AC transformers, or more sophisticated devices known as "DC-DC converters"). These devices do transform voltages/currents, so if the output voltage is lower, the output current might be higher. Keep in mind that these devices do the transformation with certain efficiency (80-90%), so the "output power" = "input power" x 0.8 practically. In the case of linear regulators it is not true, the regulators don't "transform", they just regulate output by dissipating the excess of voltage (drop-out voltage) in its regulating elements (transistors). Therefore whatever current comes in, the same current goes out, and even a bit less, since the regulation takes some toll. For example, the old LM7805 IC will consume within itself about 4-5 mA for its "services", so if your input is strictly 300 mA, you might get only 295 mA out.
765
3,150
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.703125
4
CC-MAIN-2019-35
latest
en
0.911949
# Will linear voltage regulator step up current? I have a regulated 9 volt 300mA power supply I want to step it down to 5 volt using Linear Voltage Regulator LM7805 , I want to know how much current can I can draw at 5 volts, will it be 300mA or will it be close to 540mA, since power = voltage * current. • With a 9V*0.3A=2.7W supply you can only achieve >90% efficiency with an SMPS to store energy and transfer with rapid switching. – Sunnyskyguy EE75 Feb 20 at 20:32 No. A linear regulator works by burning off excess voltage as heat, therefore current in equals current out. The linear regulator is essentially throwing away the excess energy in order to regulate, rather than converting it to the output. You need a switching regulator if you want to take advantage of power in equals power out in order to convert a high input voltage, low input current into a lower output voltage, higher output current. $$\P_{in} = P_{out}\$$ but for a linear regulator it looks like this: $$\V_{in} \times I_{in} = (V_{out} \times I_{out}) + [(V_{in} - V_{out}) \times I_{out}]\$$ The last term in square brackets is the excess voltage being converted to heat.
If we expand and simplify the right hand side, a bunch of things cancel out and we get: $$V_{in} \times I_{in} = V_{in} \times I_{out}$$ Therefore: $$I_{in} = I_{out}$$ No, it won't step up current.
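A worked-numbers sketch of the comparison above. The ~5 mA quiescent current comes from the answer's remark about the old 7805; the 90 % buck-converter efficiency is an assumed round figure, not a datasheet value for any particular part.

```r
V_in <- 9; I_in <- 0.300; V_out <- 5           # 9 V, 300 mA regulated supply into a 7805
I_q  <- 0.005                                  # assumed quiescent current of the regulator

I_out_linear <- I_in - I_q                     # linear: current in ~ current out, ~295 mA
P_heat       <- (V_in - V_out) * I_out_linear  # ~1.18 W burned off as heat

eff        <- 0.90                             # assumed buck-converter efficiency
I_out_buck <- eff * V_in * I_in / V_out        # power in ~ power out, ~486 mA

c(linear_mA = 1000 * I_out_linear, heat_W = P_heat, buck_mA = 1000 * I_out_buck)
```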
https://matheducators.stackexchange.com/questions/18576/ramanujan-results-for-middle-school/18599
1,702,123,804,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00269.warc.gz
421,662,281
40,822
# Ramanujan results for middle school? Pls I wonder what Ramanujan's results could be explained to middle school level audience, ie without using integral etc that is up to university curriculum? For example Ramanujan's infinite radicals could be explained easily $$3=\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}$$ • Why Ramanujan specifically? Jul 17, 2020 at 22:52 • just his results have many infinite forms, which sound fun! @ChrisCunningham Jul 18, 2020 at 12:54 ## 1 Answer You can try the Rogers–Ramanujan identities: • The number of partitions of $$n$$ in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 1,4,6,9. • The number of partitions of $$n$$ without 1 in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 2,3,7,8. For example, taking $$n=10$$: • Partitions in which adjacent parts are at least 2 apart: $$10 = 10 = 9 + 1 = 8 + 2 = 7 + 3 = 6 + 4 = 6 + 3 + 1$$ • Partitions in which each part ends with 1,4,6,9: $$10 = 9 + 1 = 6 + 4 = 6 + 1 + 1 + 1 + 1 = 4 + 4 + 1 + 1 = 4 + 6\times 1 = 10 \times 1$$ • Partitions without 1 in which adjacent parts are at least 2 apart: $$10 = 10 = 8 + 2 = 7 + 3 = 6 + 4$$ • Partitions in which each part ends with 2,3,5,8: $$10 = 8 + 2 = 7 + 3 = 3 + 3 + 2 + 2 = 2 + 2 + 2 + 2 + 2$$
490
1,378
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.828125
4
CC-MAIN-2023-50
longest
en
0.936146
# Ramanujan results for middle school? Pls I wonder what Ramanujan's results could be explained to middle school level audience, ie without using integral etc that is up to university curriculum? For example Ramanujan's infinite radicals could be explained easily $$3=\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}$$ • Why Ramanujan specifically? Jul 17, 2020 at 22:52 • just his results have many infinite forms, which sound fun! @ChrisCunningham Jul 18, 2020 at 12:54 ## 1 Answer You can try the Rogers–Ramanujan identities: • The number of partitions of $$n$$ in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 1,4,6,9. • The number of partitions of $$n$$ without 1 in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 2,3,7,8.
For example, taking $$n=10$$: • Partitions in which adjacent parts are at least 2 apart: $$10 = 10 = 9 + 1 = 8 + 2 = 7 + 3 = 6 + 4 = 6 + 3 + 1$$ • Partitions in which each part ends with 1,4,6,9: $$10 = 9 + 1 = 6 + 4 = 6 + 1 + 1 + 1 + 1 = 4 + 4 + 1 + 1 = 4 + 6\times 1 = 10 \times 1$$ • Partitions without 1 in which adjacent parts are at least 2 apart: $$10 = 10 = 8 + 2 = 7 + 3 = 6 + 4$$ • Partitions in which each part ends with 2,3,7,8: $$10 = 8 + 2 = 7 + 3 = 3 + 3 + 2 + 2 = 2 + 2 + 2 + 2 + 2$$
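The two counts can be checked by brute force for small n. The generator below is a naive recursive sketch (fine for n = 10, far too slow for large n), and the helper names are mine, not standard library functions.

```r
partitions <- function(n, max_part = n) {      # all partitions of n, parts non-increasing
  if (n == 0) return(list(integer(0)))
  out <- list()
  for (k in seq_len(min(n, max_part))) {
    for (rest in partitions(n - k, k)) out <- c(out, list(c(k, rest)))
  }
  out
}
gap_ok  <- function(p) all(diff(p) <= -2)                 # adjacent parts at least 2 apart
ends_in <- function(p, digits) all((p %% 10) %in% digits) # every part ends in one of `digits`

P <- partitions(10)
c(first_gap    = sum(sapply(P, gap_ok)),                               # 6
  first_digit  = sum(sapply(P, ends_in, digits = c(1, 4, 6, 9))),      # 6
  second_gap   = sum(sapply(P, function(p) gap_ok(p) && !(1 %in% p))), # 4
  second_digit = sum(sapply(P, ends_in, digits = c(2, 3, 7, 8))))      # 4
```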
https://math.stackexchange.com/questions/1469820/area-under-quarter-circle-by-integration
1,638,088,187,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00013.warc.gz
466,137,620
34,006
# Area under quarter circle by integration How would one go about finding out the area under a quarter circle by integrating. The quarter circle's radius is r and the whole circle's center is positioned at the origin of the coordinates. (The quarter circle is in the first quarter of the coordinate system) From the equation $x^2+y^2=r^2$, you may express your area as the following integral $$A=\int_0^r\sqrt{r^2-x^2}\:dx.$$ Then substitute $x=r\sin \theta$, $\theta=\arcsin (x/r)$, to get \begin{align} A&=\int_0^{\pi/2}\sqrt{r^2-r^2\sin^2 \theta}\:r\cos \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{1-\sin^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{\cos^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\cos^2 \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\frac{1+\cos(2\theta)}2 \:d\theta\\ &=r^2\int_0^{\pi/2}\frac12 \:d\theta+\frac{r^2}2\underbrace{\left[ \frac12\sin(2\theta)\right]_0^{\pi/2}}_{\color{#C00000}{=\:0}}\\ &=\frac{\pi}4r^2. \end{align} • Yes, we have, for $0<x<r$, $\frac{d\theta}{dx}=\frac{1}{\sqrt{r^2-x^2}}>0$, $0=\arcsin (0/r) \leq \theta (r)\leq \arcsin (r/r)=\pi/2$. Thanks! Oct 8 '15 at 18:45 Here is a quicker solution. The area can be seen as a collection of very thin triangles, one of which is shown below. As $d\theta\to0$, the base of the triangle becomes $rd\theta$ and the height becomes $r$, so the area is $\frac12r^2d\theta$. The limits of $\theta$ are $0$ and $\frac\pi2$. $$\int_0^\frac\pi2\frac12r^2d\theta=\frac12r^2\theta|_0^\frac\pi2=\frac14\pi r^2$$ let circle: $x^2+y^2=r^2$ then consider a slab of area $dA=ydx$ then the area of quarter circle $$A_{1/4}=\int_0^r ydx=\int_0^r \sqrt{r^2-x^2}dx$$ $$=\frac12\left[x\sqrt{r^2-x^2}+r^2\sin^{-1}\left(x/r\right)\right]_0^r$$ $$=\frac12\left[0+r^2(\pi/2)\right]=\frac{\pi}{4}r^2$$ or use double integration: $$=\iint rdr d\theta= \int_0^{\pi/2}\ d\theta\int_0^R rdr=\int_0^{\pi/2}\ d\theta(R^2/2)=(R^2/2)(\pi/2)=\frac{\pi}{4}R^2$$
801
1,934
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
4.375
4
CC-MAIN-2021-49
latest
en
0.641209
# Area under quarter circle by integration How would one go about finding out the area under a quarter circle by integrating. The quarter circle's radius is r and the whole circle's center is positioned at the origin of the coordinates. (The quarter circle is in the first quarter of the coordinate system) From the equation $x^2+y^2=r^2$, you may express your area as the following integral $$A=\int_0^r\sqrt{r^2-x^2}\:dx.$$ Then substitute $x=r\sin \theta$, $\theta=\arcsin (x/r)$, to get \begin{align} A&=\int_0^{\pi/2}\sqrt{r^2-r^2\sin^2 \theta}\:r\cos \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{1-\sin^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{\cos^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\cos^2 \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\frac{1+\cos(2\theta)}2 \:d\theta\\ &=r^2\int_0^{\pi/2}\frac12 \:d\theta+\frac{r^2}2\underbrace{\left[ \frac12\sin(2\theta)\right]_0^{\pi/2}}_{\color{#C00000}{=\:0}}\\ &=\frac{\pi}4r^2. \end{align} • Yes, we have, for $0<x<r$, $\frac{d\theta}{dx}=\frac{1}{\sqrt{r^2-x^2}}>0$, $0=\arcsin (0/r) \leq \theta (r)\leq \arcsin (r/r)=\pi/2$. Thanks! Oct 8 '15 at 18:45 Here is a quicker solution. The area can be seen as a collection of very thin triangles, one of which is shown below. As $d\theta\to0$, the base of the triangle becomes $rd\theta$ and the height becomes $r$, so the area is $\frac12r^2d\theta$. The limits of $\theta$ are $0$ and $\frac\pi2$.
$$\int_0^\frac\pi2\frac12r^2d\theta=\frac12r^2\theta|_0^\frac\pi2=\frac14\pi r^2$$ let circle: $x^2+y^2=r^2$ then consider a slab of area $dA=ydx$ then the area of quarter circle $$A_{1/4}=\int_0^r ydx=\int_0^r \sqrt{r^2-x^2}dx$$ $$=\frac12\left[x\sqrt{r^2-x^2}+r^2\sin^{-1}\left(x/r\right)\right]_0^r$$ $$=\frac12\left[0+r^2(\pi/2)\right]=\frac{\pi}{4}r^2$$ or use double integration: $$=\iint rdr d\theta= \int_0^{\pi/2}\ d\theta\int_0^R rdr=\int_0^{\pi/2}\ d\theta(R^2/2)=(R^2/2)(\pi/2)=\frac{\pi}{4}R^2$$
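As a quick numerical cross-check of the integral above (r = 3 is just an arbitrary example radius):

```r
r <- 3
quarter <- integrate(function(x) sqrt(r^2 - x^2), 0, r)$value  # area under the quarter arc
c(integral = quarter, pi_r2_over_4 = pi * r^2 / 4)             # both ~7.0686
```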
https://stats.stackexchange.com/questions/475242/how-can-i-estimate-the-probability-of-a-random-variable-from-one-population-bein
1,718,620,562,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861701.67/warc/CC-MAIN-20240617091230-20240617121230-00479.warc.gz
503,642,805
41,098
# How can I estimate the probability of a random variable from one population being greater than all other random variables from unique populations? Lets assume I have samples from 5 unique populations. Let's also assume I have a mean and standard deviation from each of these populations, they are normally distributed and completely independent of one another. How can I estimate the probability that a sample of one of the populations will be greater than a sample from each of the other 4 populations? For a example, if I have 5 types of fish (the populations) in my pond, such as bass, catfish, karp, perch and bluegill, and i'm measuring the lengths (the variables) of the fish, how do can I estimate the probability that the length of a bass I catch will be greater than the length of all the other types of fish? I think I understand how to compare 2 individual populations but can't seem to figure out how to estimate probability relative to all populations. As opposed to the probability of the bass to a catfish, and then a bass to a karp, etc., I'd like to know if its possible to reasonably estimate the probability of the length of the bass being greater that the lengths of all other populations. Any help would be greatly appreciated! Thanks! Edit: I believe my original solution is incorrect. I treated the events [koi > catfish] and [coy > karp] as independent when they are certainly not. \begin{aligned} P(Y>\max\{X_1,...,X_n\})&=P(Y>X_1,...,Y>X_n)\\ &=\int_{-\infty}^{\infty} P(Y>X_1,...,Y>X_n|Y=y) f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ P(Y>X_i|Y=y) \right]f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ \Phi \left( \tfrac{y-\bar{x}_n}{\sigma_{x_n}} \right) \right]f_Y(y)dy \end{aligned} I do hope that someone can provide a better solution, as the above expression seems mismatched with the relative simplicity of the question. Let $$Y$$ represent the length of a fish from the population of interest, such as bass, and $$X_i$$ represent the length of fish from another population $$i$$, such as karp or catfish. You want to calculate the probability that the bass is longer than the longest non-bass fish. That is equivalent to the probability that the bass is longer than the carp, and the bass is longer than the catfish, and the bass is longer than the perch, etc. $$P(Y>\max\{X_1,...,X_n\})=P(Y>X_1,...,Y>X_n)$$ Because the lengths of your fish are independently distributed, the probability of all of these events happening is the product of the individual probabilities. $$P(Y>X_1,...,Y>X_n) =\prod_{i=1}^{n} P(Y>X_i)$$ So the probability that bass is longer than all of your other fish is found by multiplying the probabilities that the bass is larger than each other type of fish. That leaves only the problem of calculating the probability that a fish from one normal distribution is longer than a fish from another normal distribution. That is, $$P(Y>X_i)$$. To calculate this probability we rewrite it (ignoring the subscript) in the form $$P(Y>X)=P(Y-X>0)$$ Thankfully, the distribution of $$Y-X$$ is simple in the case where $$X$$ and $$Y$$ are normally distributed. That is, $$X \sim N(\mu_{X},\sigma_{X})$$ and $$Y \sim N(\mu_{Y},\sigma_{Y})$$. We can use the following facts: • Any linear combination of independent normal random variables (ie. $$aX+bY$$) is itself a normal random variable. • $$\mathbb{V}(aX+bY)=a^2\mathbb{V}(X)+b^2\mathbb{V}(Y)$$ for any uncorrelated random variables $$X$$ and $$Y$$. 
• $$\mathbb{E}(aX+bY) = a\mathbb{E}(X)+b\mathbb{E}(Y)$$ for any random variables $$X$$ and $$Y$$. In this problem, the difference in the lengths of the two fish $$D=Y-X=(1)X+(-1)Y$$ is a linear combination of the two lengths, $$X$$ and $$Y$$. Therefore, using the facts above, we find that the distribution of the difference in lengths is $$D\sim N(\mu_Y-\mu_X,\sigma^2_X+\sigma^2_Y)$$ The probability that this difference is greater than zero is $$P(D>0)=1-P(D<0)=1-F_D(0)=1-\Phi \left(\frac{0-\mu_D}{\sigma_D} \right)$$ In terms of $$X$$ and $$Y$$ this is $$P(Y-X>0)=1-\Phi \left(\frac{\mu_X-\mu_Y}{\sqrt{\sigma^2_X+\sigma^2_Y}}\right)$$ The final solution, in all its glory, would then be: $$P(Y>\max\{X_1,...,X_n\})=\prod_{i=1}^{n} 1-\Phi \left(\frac{\mu_{X_i}-\mu_Y}{\sqrt{\sigma^2_{X_i}+\sigma^2_Y}}\right)$$ • Presumably your operator "$\cap$" means ordinary multiplication of numbers, because both its arguments (being probabilities) are numbers. Maybe there's a typo there? "This extends to" hides the content of the answer--it needs elaboration. The meaning of "alternatively" is not evident and so needs elaboration, too. – whuber Commented Jul 2, 2020 at 20:05 • Thanks @whuber. Hopefully, the edited answer is clearer. Commented Jul 2, 2020 at 20:45 • It is, thank you (+1). I can't help thinking, though, that the OP might welcome some words about how the individual probabilities $P(Y\gt X_i)$ might be estimated or calculated. – whuber Commented Jul 2, 2020 at 20:59 • One thing i'm struggling to understand, is after I find the product that the bass is larger than the karp, the catfish, etc., I do the same for each fish (the karp being larger than all others, the catfish being larger than all others, etc.). Wouldn't the sum of the probabilities of each fish being larger than all others be equal to 1? i'm not getting anywhere close to that, maybe i'm not understanding why it wouldn't equal 1? Surely one of the fish will be larger than all others? I can provide numbers and show what i'm coming up with if that helps. Commented Jul 9, 2020 at 15:43 • @mc_chief Thank you for that excellent observation. My answer is very likely mistaken. I believe I treat the case where [koi > catfish] and [coy > karp] are independent events. In reality, they are not. I'll correct this in a new answer ASAP. Commented Jul 9, 2020 at 17:31
1,641
5,845
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2024-26
latest
en
0.928478
# How can I estimate the probability of a random variable from one population being greater than all other random variables from unique populations? Lets assume I have samples from 5 unique populations. Let's also assume I have a mean and standard deviation from each of these populations, they are normally distributed and completely independent of one another. How can I estimate the probability that a sample of one of the populations will be greater than a sample from each of the other 4 populations? For a example, if I have 5 types of fish (the populations) in my pond, such as bass, catfish, karp, perch and bluegill, and i'm measuring the lengths (the variables) of the fish, how do can I estimate the probability that the length of a bass I catch will be greater than the length of all the other types of fish? I think I understand how to compare 2 individual populations but can't seem to figure out how to estimate probability relative to all populations. As opposed to the probability of the bass to a catfish, and then a bass to a karp, etc., I'd like to know if its possible to reasonably estimate the probability of the length of the bass being greater that the lengths of all other populations. Any help would be greatly appreciated! Thanks! Edit: I believe my original solution is incorrect. I treated the events [koi > catfish] and [coy > karp] as independent when they are certainly not. \begin{aligned} P(Y>\max\{X_1,...,X_n\})&=P(Y>X_1,...,Y>X_n)\\ &=\int_{-\infty}^{\infty} P(Y>X_1,...,Y>X_n|Y=y) f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ P(Y>X_i|Y=y) \right]f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ \Phi \left( \tfrac{y-\bar{x}_n}{\sigma_{x_n}} \right) \right]f_Y(y)dy \end{aligned} I do hope that someone can provide a better solution, as the above expression seems mismatched with the relative simplicity of the question. Let $$Y$$ represent the length of a fish from the population of interest, such as bass, and $$X_i$$ represent the length of fish from another population $$i$$, such as karp or catfish. You want to calculate the probability that the bass is longer than the longest non-bass fish. That is equivalent to the probability that the bass is longer than the carp, and the bass is longer than the catfish, and the bass is longer than the perch, etc. $$P(Y>\max\{X_1,...,X_n\})=P(Y>X_1,...,Y>X_n)$$ Because the lengths of your fish are independently distributed, the probability of all of these events happening is the product of the individual probabilities. $$P(Y>X_1,...,Y>X_n) =\prod_{i=1}^{n} P(Y>X_i)$$ So the probability that bass is longer than all of your other fish is found by multiplying the probabilities that the bass is larger than each other type of fish. That leaves only the problem of calculating the probability that a fish from one normal distribution is longer than a fish from another normal distribution. That is, $$P(Y>X_i)$$. To calculate this probability we rewrite it (ignoring the subscript) in the form $$P(Y>X)=P(Y-X>0)$$ Thankfully, the distribution of $$Y-X$$ is simple in the case where $$X$$ and $$Y$$ are normally distributed. That is, $$X \sim N(\mu_{X},\sigma_{X})$$ and $$Y \sim N(\mu_{Y},\sigma_{Y})$$. We can use the following facts: • Any linear combination of independent normal random variables (ie. $$aX+bY$$) is itself a normal random variable. • $$\mathbb{V}(aX+bY)=a^2\mathbb{V}(X)+b^2\mathbb{V}(Y)$$ for any uncorrelated random variables $$X$$ and $$Y$$. 
• $$\mathbb{E}(aX+bY) = a\mathbb{E}(X)+b\mathbb{E}(Y)$$ for any random variables $$X$$ and $$Y$$. In this problem, the difference in the lengths of the two fish $$D=Y-X=(1)X+(-1)Y$$ is a linear combination of the two lengths, $$X$$ and $$Y$$.
Therefore, using the facts above, we find that the distribution of the difference in lengths is $$D\sim N(\mu_Y-\mu_X,\sigma^2_X+\sigma^2_Y)$$ The probability that this difference is greater than zero is $$P(D>0)=1-P(D<0)=1-F_D(0)=1-\Phi \left(\frac{0-\mu_D}{\sigma_D} \right)$$ In terms of $$X$$ and $$Y$$ this is $$P(Y-X>0)=1-\Phi \left(\frac{\mu_X-\mu_Y}{\sqrt{\sigma^2_X+\sigma^2_Y}}\right)$$ The final solution, in all its glory, would then be: $$P(Y>\max\{X_1,...,X_n\})=\prod_{i=1}^{n} 1-\Phi \left(\frac{\mu_{X_i}-\mu_Y}{\sqrt{\sigma^2_{X_i}+\sigma^2_Y}}\right)$$ • Presumably your operator "$\cap$" means ordinary multiplication of numbers, because both its arguments (being probabilities) are numbers.
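A sketch of the corrected calculation from the question's edit: condition on the bass length, multiply the four conditional probabilities, integrate against the bass density, and compare with a Monte Carlo estimate. All the means and standard deviations below are invented illustration values (in cm), not data from the question.

```r
mu_bass <- 40; sd_bass <- 8                                        # assumed bass lengths
mu_other <- c(catfish = 45, carp = 35, perch = 25, bluegill = 15)  # assumed rival means
sd_other <- c(catfish = 10, carp = 7,  perch = 5,  bluegill = 3)   # assumed rival sds

integrand <- function(y) {                    # f_Y(y) * prod_i P(X_i < y)
  p <- dnorm(y, mu_bass, sd_bass)
  for (i in seq_along(mu_other)) p <- p * pnorm(y, mu_other[i], sd_other[i])
  p
}
p_integral <- integrate(integrand, -Inf, Inf)$value

set.seed(42)
n      <- 1e5
bass   <- rnorm(n, mu_bass, sd_bass)
others <- mapply(rnorm, n = n, mean = mu_other, sd = sd_other)   # n x 4 matrix of rivals
p_mc   <- mean(bass > apply(others, 1, max))

c(integral = p_integral, monte_carlo = p_mc)   # the two estimates should agree closely
```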
https://stats.stackexchange.com/questions/372895/how-parameters-formulated-for-simple-regression-model
1,717,074,164,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971667627.93/warc/CC-MAIN-20240530114606-20240530144606-00683.warc.gz
473,174,625
39,760
# How parameters formulated for Simple Regression Model I am reading Simple Regression Model from this book, Section 6.5 (page 267 in downloaded pdf, 276 if viewed online). The author starts with below equation for a simple linear regression model, $$Y_i = \alpha_1 + \beta x_i + \varepsilon_i$$ And then after few lines, he lets for conveience that, $$\alpha_1 = \alpha - \beta\overline{x}$$ so that, $$Y_i = \alpha + \beta(x_i - \overline{x}) + \varepsilon_i$$ where $$\overline{x} = \dfrac{1}{n}\sum\limits_{i=1}^nx_i$$ My questions: 1. It is not convincing to bring in $$\overline{x}$$ just for convenience sake in the equation. Can any one please explain the logic behind bring that in the equation? 2. After above equation, the author says, $$Y_i$$ is equal to a nonrandom quantity, $$\alpha + \beta(x_i - \overline{x})$$, plus a mean zero normal random variable $$\varepsilon_i$$. Does that mean, $$\alpha + \beta(x_i - \overline{x})$$ has no randomness involved in that? Kindly help. 1. $$\alpha_1$$s in two equations are different. Let $$\alpha_2$$ be the $$\alpha$$ in the second equation, then $$\alpha_1 = \alpha_2 + \beta \bar x$$ At the time that the computer was not popular or had no computer, the line was fit by using calculators. Bringing in $$\bar x$$ is really simplified the computation. 1. From the first equation, $$\epsilon$$ is the only random component. So source of randomness of $$Y$$ is $$\epsilon$$, the other parts $$\alpha + \beta x$$ are known or unknown constant. • I just corrected $\alpha_1$ to $\alpha$ in 2nd equation. Still the reason is not convincing that it simplified the computation. Can you kindly elaborate further? How could $\overline{x}$ suddenly enter the equation without an associated mathematical logic. Oct 20, 2018 at 18:32 • Let $z_i=x_i-\bar x$, then (1) $\sum z_i = 0$ vs calculating $\sum x_i$, (2) $\sum z_i^2$ is easier easier than $\sum x_i^2$, and (3) $\sum z_iY_i$ is easier easier than $\sum x_iY_i$. introducing $\bar x$ into equation does not change anything in equation, similar to $+ a - a$ , which we used to proof something in math. Oct 20, 2018 at 18:43 • $Y_i = \alpha_1 + \beta x_i + \varepsilon_i$ ==> $Y_i = \alpha_1 + \beta x_i + \varepsilon_i - \beta \bar x + \beta \bar x$ ==> $Y_i = (\alpha_1 +\beta \bar x) + \beta (x_i - \bar x) + \varepsilon_i$ ==> $Y_i = \alpha + \beta (x_i - \bar x) + \varepsilon_i$ Oct 20, 2018 at 18:56
731
2,417
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.71875
4
CC-MAIN-2024-22
latest
en
0.873219
# How parameters formulated for Simple Regression Model I am reading Simple Regression Model from this book, Section 6.5 (page 267 in downloaded pdf, 276 if viewed online). The author starts with below equation for a simple linear regression model, $$Y_i = \alpha_1 + \beta x_i + \varepsilon_i$$ And then after few lines, he lets for conveience that, $$\alpha_1 = \alpha - \beta\overline{x}$$ so that, $$Y_i = \alpha + \beta(x_i - \overline{x}) + \varepsilon_i$$ where $$\overline{x} = \dfrac{1}{n}\sum\limits_{i=1}^nx_i$$ My questions: 1. It is not convincing to bring in $$\overline{x}$$ just for convenience sake in the equation. Can any one please explain the logic behind bring that in the equation? 2. After above equation, the author says, $$Y_i$$ is equal to a nonrandom quantity, $$\alpha + \beta(x_i - \overline{x})$$, plus a mean zero normal random variable $$\varepsilon_i$$. Does that mean, $$\alpha + \beta(x_i - \overline{x})$$ has no randomness involved in that? Kindly help. 1. $$\alpha_1$$s in two equations are different. Let $$\alpha_2$$ be the $$\alpha$$ in the second equation, then $$\alpha_1 = \alpha_2 + \beta \bar x$$ At the time that the computer was not popular or had no computer, the line was fit by using calculators. Bringing in $$\bar x$$ is really simplified the computation. 1. From the first equation, $$\epsilon$$ is the only random component. So source of randomness of $$Y$$ is $$\epsilon$$, the other parts $$\alpha + \beta x$$ are known or unknown constant. • I just corrected $\alpha_1$ to $\alpha$ in 2nd equation. Still the reason is not convincing that it simplified the computation. Can you kindly elaborate further? How could $\overline{x}$ suddenly enter the equation without an associated mathematical logic. Oct 20, 2018 at 18:32 • Let $z_i=x_i-\bar x$, then (1) $\sum z_i = 0$ vs calculating $\sum x_i$, (2) $\sum z_i^2$ is easier easier than $\sum x_i^2$, and (3) $\sum z_iY_i$ is easier easier than $\sum x_iY_i$. introducing $\bar x$ into equation does not change anything in equation, similar to $+ a - a$ , which we used to proof something in math.
Oct 20, 2018 at 18:43 • $Y_i = \alpha_1 + \beta x_i + \varepsilon_i$ ==> $Y_i = \alpha_1 + \beta x_i + \varepsilon_i - \beta \bar x + \beta \bar x$ ==> $Y_i = (\alpha_1 +\beta \bar x) + \beta (x_i - \bar x) + \varepsilon_i$ ==> $Y_i = \alpha + \beta (x_i - \bar x) + \varepsilon_i$ Oct 20, 2018 at 18:56
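A small simulated example of the reparameterisation discussed above; the generating values (intercept 2, slope 0.5, noise sd 0.3) are arbitrary. Centring x changes only the intercept, by βx̄, and leaves the slope and the fitted values untouched.

```r
set.seed(1)
x <- runif(30, 0, 10)
y <- 2 + 0.5 * x + rnorm(30, sd = 0.3)

fit_raw     <- lm(y ~ x)                  # Y = alpha_1 + beta * x + eps
fit_centred <- lm(y ~ I(x - mean(x)))     # Y = alpha + beta * (x - xbar) + eps

coef(fit_raw); coef(fit_centred)                # same slope, shifted intercept
coef(fit_raw)[1] + coef(fit_raw)[2] * mean(x)   # equals the centred intercept alpha
all.equal(unname(fitted(fit_raw)), unname(fitted(fit_centred)))  # TRUE: identical fit
```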
https://math.stackexchange.com/questions/487102/why-is-the-map-gl-nk-times-gl-nk-to-gl-nk-regular
1,627,633,953,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046153934.85/warc/CC-MAIN-20210730060435-20210730090435-00078.warc.gz
401,291,441
38,180
# Why is the map: $GL_n(K)\times GL_n(K) \to GL_n(K)$ regular? Let $K$ be a field and $GL_n(K)$ the set of all invertible $n$ by $n$ matrices over $K$. Let $m: GL_n(K)\times GL_n(K) \to GL_n(K)$ be the usual multiplication of matrices. Why the map $m$ is regular? Thank you very much. • The maps are just polynomials in the entries? In particular, if you identify $\text{GL}_n(K)$ as the set of pairs $(A,B)$ in $K^{n^2}$ such that $AB=1$ (where you use regular matrix multiplication), then this is obvious. – Alex Youcis Sep 8 '13 at 5:02 • @AlexYoucis, thank you very much. But what is the product of $(A, B)$ and $(C, D)$ in $K^{n^2}$? – LJR Sep 8 '13 at 5:07 • I am not saying you should. If $R$ is an algebraic ring, then you show that $R^\times$ is a variety by identifying it with the set $(x,y)$ in $R^2$ with $xy=1$. For example, $k^\times$ is an affine $k$-variety, isomorphic to the set $xy=1$ in $\mathbb{A}^2$. – Alex Youcis Sep 8 '13 at 5:12 ## 1 Answer First forget $GL_n(K)$ and work in $M_n(K)$. The multiplication map $$M_n(K)\times M_n(K)\to M_n(K)$$ is polynomial in the entries: $$((x_{ij})_{ij}, (y_{kl})_{kl})\mapsto (\sum_{r} x_{ir}y_{rl})_{il},$$ so it is a regular map. When you restrict to $GL_n(K)$, you get a regular map $$GL_n(K)\times GL_n(K)\to M_n(K).$$ As the multiplication lands in $GL_n(K)$, you get the statement you want to prove. • thank you very much. But the multiplication is the multiplication of matrices. Why it is a polynomial? – LJR Sep 8 '13 at 11:13 • @IJR: a matrix $(x_{ij})_{ij}$ is viewed as element of $K^{n^2}$. – Cantlog Sep 8 '13 at 11:41
549
1,601
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.671875
4
CC-MAIN-2021-31
latest
en
0.777803
# Why is the map: $GL_n(K)\times GL_n(K) \to GL_n(K)$ regular? Let $K$ be a field and $GL_n(K)$ the set of all invertible $n$ by $n$ matrices over $K$. Let $m: GL_n(K)\times GL_n(K) \to GL_n(K)$ be the usual multiplication of matrices. Why is the map $m$ regular? Thank you very much. • The maps are just polynomials in the entries? In particular, if you identify $\text{GL}_n(K)$ as the set of pairs $(A,B)$ in $K^{n^2}$ such that $AB=1$ (where you use regular matrix multiplication), then this is obvious. – Alex Youcis Sep 8 '13 at 5:02 • @AlexYoucis, thank you very much. But what is the product of $(A, B)$ and $(C, D)$ in $K^{n^2}$? – LJR Sep 8 '13 at 5:07 • I am not saying you should. If $R$ is an algebraic ring, then you show that $R^\times$ is a variety by identifying it with the set $(x,y)$ in $R^2$ with $xy=1$.
For example, $k^\times$ is an affine $k$-variety, isomorphic to the set $xy=1$ in $\mathbb{A}^2$.
http://math.stackexchange.com/questions/tagged/linear-algebra+determinant
1,406,757,197,000,000,000
text/html
crawl-data/CC-MAIN-2014-23/segments/1406510271654.40/warc/CC-MAIN-20140728011751-00388-ip-10-146-231-18.ec2.internal.warc.gz
177,665,836
23,837
# Tagged Questions 30 views ### The determinant of adjugate matrix Why does $\det(\text{adj}(A)) = 0$ if $\det(A) = 0$? (without using the formula $\det(\text{adj}(A)) = \det(A)^{n-1}.)$ 65 views ### Determinant of the linear map given by conjugation. Let $S$ denote the space of skew-symmetric $n\times n$ real matrices, where every element $A\in S$ satisfies $A^T+A = 0$. Let $M$ denote an orthogonal $n\times n$ matrix, and $L_M$ denotes the ... 65 views ### Maximum determinant of a $m\times m$ - matrix with entries $1..n$ I want to find the maximal possible determinant of a $m\times m$ - matrix A with entries $1..n$. Conjecture 1 : The maximum possible determinant can be achieved by a matrix only ... 64 views ### Surprising necessary condition for a “shift-invariant” determinant Let $A$ be a $4\ x\ 4$ binary matrix and $Z=\pmatrix {s&s&s&s \\ s&s&s&s \\s&s&s&s \\s&s&s&s}$ Then $\det(A+Z)=\det(A)=1\$ (independent of s, so ... 87 views ### Simple proof that a $3\times 3$-matrix with entries $s$ or $s+1$ cannot have determinant $\pm 1$, if $s>1$. Let $s>1$ and $A$ be a $3\times 3$ matrix with entries $s$ or $s+1$. Then $\det(A)\ne \pm 1$. The determinant has the form $as+b$ with integers $a$,$b$ and it has to be proven that $a>0$ if ... 32 views ### Determinant of a matrix shifted by m Let $A$ be an $n\times n$ matrix and $Z$ be the $n\times n$ matrix, whose entries are all $m$. Let $S$ be the sum of all the adjoints of $A$. Then my conjecture is $\det(A+Z)=\det(A)+Sm$ , in ... 31 views ### Relation on the determinant of a matrix and the product of its diagonal entries? Let $A$ be a $3\times 3$ symmetric matrix, with three real eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and diagonal entries $a_1,a_2,a_3$, is it true that \begin{equation*} \det ... 105 views ### Prove that if the sum of each row of A equals s, then s is an eigenvalue of A. [duplicate] Consider an $n \times n$ matrix $A$ with the property that the row sums all equal the same number $s$. Show that $s$ is an eigenvalue of $A$. [Hint: Find an eigenvector] My attempt: By definition: ... 34 views ### How to factor and reduce a huge determinant to simpler form? Linear Algebra So, I have learned about cofactor expansion. But the cofactor expansion I know doesn't reduce the number of rows and colums to one matrix. I usually pick a colum, multiply each element in the column ... 48 views 37 views ### Determinant (or positive definiteness) of a Hankel matrix I need to prove that the Hankel matrix given by $a_{ij}=\frac{1}{i+j}$ is positive definite. It turns out that it is a special case of the Cauchy matrices, and the determinant is given by the Cauchy ... 85 views ### Find the expansion for $\det(I+\epsilon A)$ where $\epsilon$ is small without using eigenvalue. I'm taking a linear algebra course and the professor included the problem that prove $$\rm{det}(I+\epsilon A) = 1 + \epsilon\,\rm{tr}\,A + o(\epsilon)$$ Since the professor hasn't covered the ... 17 views ### Bound on the degree of a determinant of a polynomial matrix I want to implement a modular algorithm for computing the determinant of a square Matrix with multivariate polynomials in $\mathbb{Z}$ as components (symbolically). My idea is first to reduce the ... In order get the determinant of\begin{pmatrix} \lambda-n-1 & 1 & 2 & 2 & 1 & 1 & 1& 1 & \cdots &1 & 1 \\ 1 & \lambda-2n+4 & 1 & 2 & 2 &2 ... ### Prove or disprove : $\det(A^k + B^k) \geq 0$ This question came from here. As the OP hasn't edited his question and I really want the answer, I'm adding my thoughts. 
Let $A, B$ be two real $n\times n$ matrices that commute and \$\det(A + ...
1,081
3,648
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0}
3.625
4
CC-MAIN-2014-23
latest
en
0.712426
# Tagged Questions 30 views ### The determinant of adjugate matrix Why does $\det(\text{adj}(A)) = 0$ if $\det(A) = 0$? (without using the formula $\det(\text{adj}(A)) = \det(A)^{n-1}. )$ 65 views ### Determinant of the linear map given by conjugation. Let $S$ denote the space of skew-symmetric $n\times n$ real matrices, where every element $A\in S$ satisfies $A^T+A = 0$. Let $M$ denote an orthogonal $n\times n$ matrix, and $L_M$ denotes the ... 65 views ### Maximum determinant of a $m\times m$ - matrix with entries $1..n$ I want to find the maximal possible determinant of a $m\times m$ - matrix A with entries $1..n$. Conjecture 1 : The maximum possible determinant can be achieved by a matrix only ... 64 views ### Surprising necessary condition for a “shift-invariant” determinant Let $A$ be a $4\ x\ 4$ binary matrix and $Z=\pmatrix {s&s&s&s \\ s&s&s&s \\s&s&s&s \\s&s&s&s}$ Then $\det(A+Z)=\det(A)=1\$ (independent of s, so ... 87 views ### Simple proof that a $3\times 3$-matrix with entries $s$ or $s+1$ cannot have determinant $\pm 1$, if $s>1$. Let $s>1$ and $A$ be a $3\times 3$ matrix with entries $s$ or $s+1$. Then $\det(A)\ne \pm 1$. The determinant has the form $as+b$ with integers $a$,$b$ and it has to be proven that $a>0$ if ... 32 views ### Determinant of a matrix shifted by m Let $A$ be an $n\times n$ matrix and $Z$ be the $n\times n$ matrix, whose entries are all $m$. Let $S$ be the sum of all the adjoints of $A$. Then my conjecture is $\det(A+Z)=\det(A)+Sm$ , in ... 31 views ### Relation on the determinant of a matrix and the product of its diagonal entries? Let $A$ be a $3\times 3$ symmetric matrix, with three real eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and diagonal entries $a_1,a_2,a_3$, is it true that \begin{equation*} \det ... 105 views ### Prove that if the sum of each row of A equals s, then s is an eigenvalue of A. [duplicate] Consider an $n \times n$ matrix $A$ with the property that the row sums all equal the same number $s$. Show that $s$ is an eigenvalue of $A$. [Hint: Find an eigenvector] My attempt: By definition: ... 34 views ### How to factor and reduce a huge determinant to simpler form? Linear Algebra So, I have learned about cofactor expansion. But the cofactor expansion I know doesn't reduce the number of rows and colums to one matrix. I usually pick a colum, multiply each element in the column ... 48 views 37 views ### Determinant (or positive definiteness) of a Hankel matrix I need to prove that the Hankel matrix given by $a_{ij}=\frac{1}{i+j}$ is positive definite. It turns out that it is a special case of the Cauchy matrices, and the determinant is given by the Cauchy ... 85 views ### Find the expansion for $\det(I+\epsilon A)$ where $\epsilon$ is small without using eigenvalue.
I'm taking a linear algebra course and the professor included the problem that prove $$\rm{det}(I+\epsilon A) = 1 + \epsilon\,\rm{tr}\,A + o(\epsilon)$$ Since the professor hasn't covered the ... 17 views ### Bound on the degree of a determinant of a polynomial matrix I want to implement a modular algorithm for computing the determinant of a square Matrix with multivariate polynomials in $\mathbb{Z}$ as components (symbolically).
https://math.stackexchange.com/questions/3274426/prove-sum-k-1n-frac-lefth-kp-right2kp-frac13h-np3-h
1,652,948,047,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00385.warc.gz
448,028,897
65,653
# Prove $\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13((H_n^{(p)})^3-H_n^{(3p)})+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$ Find $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}\,,$$ where $$H_k^{(p)}=1+\frac1{2^p}+\cdots+\frac1{k^p}$$ is the $$k$$th generalized harmonic number of order $$p$$. Cornel proved in his book, (almost) impossible integral, sums and series, the following identity : $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$ using series manipulations and he also suggested that this identity can be proved using Abel's summation and I was successful in proving it that way. other approaches are appreciated. I am posting this problem as its' importance appears when $$n$$ approaches $$\infty$$. using Abel's summation $$\ \displaystyle\sum_{k=1}^n a_k b_k=A_nb_{n+1}+\sum_{k=1}^{n}A_k\left(b_k-b_{k+1}\right)$$ where $$\displaystyle A_n=\sum_{i=1}^n a_i$$ letting $$\ \displaystyle a_k=\frac{1}{k^p}$$ and $$\ \displaystyle b_k=\left(H_k^{(p)}\right)^2$$, we get \begin{align} S&=\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\sum_{i=1}^n\frac{\left(H_{n+1}^{(p)}\right)^2}{i^p}+\sum_{k=1}^n\left(\sum_{i=1}^k\frac1{i^p}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^n\left(H_k^{(p)}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\left(H_{k-1}^{(p)}\right)^2-\left(H_{k}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n}\left(H_{k}^{(p)}-\frac1{k^p}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)\\ &=\underbrace{\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)}_{\Large\left(H_n^{(p)}\right)^3}-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}-H_n^{(3p)}\\ &=-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}+\left(H_n^{(p)}\right)^3-H_n^{(3p)} \end{align} which follows $$S=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$ • Keeping track of the summation indices it seems like there is small typo or error while going from the upper bound $n$ to $n+1$ as the lower bound remains uneffected afterall. Is this intended, since I cannot make sense out it right now. Jun 26, 2019 at 7:29
1,237
2,690
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
4.15625
4
CC-MAIN-2022-21
latest
en
0.42594
# Prove $\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13((H_n^{(p)})^3-H_n^{(3p)})+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$ Find $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}\,,$$ where $$H_k^{(p)}=1+\frac1{2^p}+\cdots+\frac1{k^p}$$ is the $$k$$th generalized harmonic number of order $$p$$. Cornel proved in his book, (almost) impossible integral, sums and series, the following identity : $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$ using series manipulations and he also suggested that this identity can be proved using Abel's summation and I was successful in proving it that way. other approaches are appreciated. I am posting this problem as its' importance appears when $$n$$ approaches $$\infty$$.
using Abel's summation $$\ \displaystyle\sum_{k=1}^n a_k b_k=A_nb_{n+1}+\sum_{k=1}^{n}A_k\left(b_k-b_{k+1}\right)$$ where $$\displaystyle A_n=\sum_{i=1}^n a_i$$ letting $$\ \displaystyle a_k=\frac{1}{k^p}$$ and $$\ \displaystyle b_k=\left(H_k^{(p)}\right)^2$$, we get \begin{align} S&=\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\sum_{i=1}^n\frac{\left(H_{n+1}^{(p)}\right)^2}{i^p}+\sum_{k=1}^n\left(\sum_{i=1}^k\frac1{i^p}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^n\left(H_k^{(p)}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\left(H_{k-1}^{(p)}\right)^2-\left(H_{k}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n}\left(H_{k}^{(p)}-\frac1{k^p}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)\\ &=\underbrace{\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)}_{\Large\left(H_n^{(p)}\right)^3}-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}-H_n^{(3p)}\\ &=-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}+\left(H_n^{(p)}\right)^3-H_n^{(3p)} \end{align} which follows $$S=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$ • Keeping track of the summation indices it seems like there is small typo or error while going from the upper bound $n$ to $n+1$ as the lower bound remains uneffected afterall.
https://math.stackexchange.com/questions/1575088/find-the-value-of-the-series-sum-limits-n-1-infty-fracn2n
1,653,555,834,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00269.warc.gz
455,171,661
60,563
# Find the value of the series $\sum\limits_{n=1}^ \infty \frac{n}{2^n}$ [duplicate] Find the value of the series $\sum\limits_{n=1}^ \infty \dfrac{n}{2^n}$ The series on expanding is coming as $\dfrac{1}{2}+\dfrac{2}{2^2}+..$ I tried using the form of $(1+x)^n=1+nx+\dfrac{n(n-1)}{2}x^2+..$ and then differentiating it but still it is not coming .What shall I do with this? • This might help – user297008 Dec 14, 2015 at 12:36 • Looks like the derivative of a geometric series to me Dec 14, 2015 at 12:37 • See this for other ideas. Dec 14, 2015 at 12:38 • Just differentiate $\frac{1}{2(1-x)}=\frac12\sum x^n$ and set $x=\frac12$. Dec 14, 2015 at 12:39 $$\sum_{n=1}^{\infty}\frac{n}{2^n}=\lim_{m\to\infty}\sum_{n=1}^{m}\frac{n}{2^n}=\lim_{m\to\infty}\frac{-m+2^{m+1}-2}{2^m}=$$ $$\lim_{m\to\infty}\frac{-2^{1-m}+2-2^{-m}m}{1}=\frac{0+2-0}{1}=\frac{2}{1}=2$$
369
864
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2022-21
latest
en
0.62588
# Find the value of the series $\sum\limits_{n=1}^ \infty \frac{n}{2^n}$ [duplicate] Find the value of the series $\sum\limits_{n=1}^ \infty \dfrac{n}{2^n}$ The series on expanding is coming as $\dfrac{1}{2}+\dfrac{2}{2^2}+..$ I tried using the form of $(1+x)^n=1+nx+\dfrac{n(n-1)}{2}x^2+..$ and then differentiating it but still it is not coming .What shall I do with this? • This might help – user297008 Dec 14, 2015 at 12:36 • Looks like the derivative of a geometric series to me Dec 14, 2015 at 12:37 • See this for other ideas. Dec 14, 2015 at 12:38 • Just differentiate $\frac{1}{2(1-x)}=\frac12\sum x^n$ and set $x=\frac12$.
Dec 14, 2015 at 12:39 $$\sum_{n=1}^{\infty}\frac{n}{2^n}=\lim_{m\to\infty}\sum_{n=1}^{m}\frac{n}{2^n}=\lim_{m\to\infty}\frac{-m+2^{m+1}-2}{2^m}=$$ $$\lim_{m\to\infty}\frac{-2^{1-m}+2-2^{-m}m}{1}=\frac{0+2-0}{1}=\frac{2}{1}=2$$
https://math.stackexchange.com/questions/2455714/prove-int-0-pi-2-x-left-sin-nx-over-sin-x-right4-mathrmdxn2-pi2
1,563,631,112,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00153.warc.gz
470,548,607
35,894
# Prove $\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}$ Prove $$\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}.$$ My attempt: \begin{align} \int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x & =\sum_{k=1}^n \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x\\ & \leq\sum_{k=1}^n \left({\pi\over 2}\right)^4 \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}\left({\sin^4nx\over x^3}\right)\mathrm{d}x \quad (\text{use } \sin x\geq {2\over \pi}x ) \tag{1}\label{1}\\ &= \left({\pi\over 2}\right)^4 n^2\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x \quad (\text{use } x\to {x\over n}).\\ \end{align} Is my direction right? If right, how can I prove the following $$\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x\leq\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x \leq {2\over \pi^2}.$$ I use Mathematica to calculate the integral $\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x\simeq 0.7>{2\over\pi^2}$, hence my process (\ref{1}) seems to be wrong. • I supposed $n$ positive integer. Is it right? – Raffaele Oct 3 '17 at 12:54 • @Raffaele Yes it is. – yahoo Oct 3 '17 at 13:01 The term $\left(\frac{\sin nx}{\sin x}\right)^4$ is associated with the Jackson kernel. Your inequality is indeed just a minor variation on Lemma 0.5 in the linked notes, and it can be proved through the same technique: expand $|x|$ as a Fourier cosine series over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, do the same for $\left(\frac{\sin nx}{\sin x}\right)^4$, then apply orthogonality/Bessel's inequality. • Great answer! The question is the first exercise of a chapter about integral, I thought it would be easy for me. By reading the material you give, I should estimate the $\int_0^{\pi\over 2n}$ part by using $\sin x\geq {2\over\pi}x$ and the rest using $x\geq {k\pi\over 2n}$. The first part it self still larger than the right hand side. So I need to give a more concise estimate rather than $\sin x\geq {2\over\pi} x$. By the way, if I accept the answer, how can I ask more people to find if there is a more simpler answer? – yahoo Oct 3 '17 at 14:38
909
2,274
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
4.1875
4
CC-MAIN-2019-30
latest
en
0.606609
# Prove $\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}$ Prove $$\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}.$$ My attempt: \begin{align} \int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x & =\sum_{k=1}^n \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x\\ & \leq\sum_{k=1}^n \left({\pi\over 2}\right)^4 \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}\left({\sin^4nx\over x^3}\right)\mathrm{d}x \quad (\text{use } \sin x\geq {2\over \pi}x ) \tag{1}\label{1}\\ &= \left({\pi\over 2}\right)^4 n^2\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x \quad (\text{use } x\to {x\over n}).\\ \end{align} Is my direction right? If right, how can I prove the following $$\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x\leq\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x \leq {2\over \pi^2}.$$ I use Mathematica to calculate the integral $\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x\simeq 0.7>{2\over\pi^2}$, hence my process (\ref{1}) seems to be wrong. • I supposed $n$ positive integer. Is it right? – Raffaele Oct 3 '17 at 12:54 • @Raffaele Yes it is. – yahoo Oct 3 '17 at 13:01 The term $\left(\frac{\sin nx}{\sin x}\right)^4$ is associated with the Jackson kernel.
Your inequality is indeed just a minor variation on Lemma 0.5 in the linked notes, and it can be proved through the same technique: expand $|x|$ as a Fourier cosine series over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, do the same for $\left(\frac{\sin nx}{\sin x}\right)^4$, then apply orthogonality/Bessel's inequality.
https://cstheory.stackexchange.com/questions/46185/computing-3d-viewpoint-of-a-set-of-non-intersecting-segments
1,621,182,458,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00603.warc.gz
226,243,171
36,906
# Computing 3D viewpoint of a set of non-intersecting segments Consider the following problem: we are given a finite set of bounded line-segments in $${\mathbb R}^3$$, and we want to decide whether there exists a point $$p\in {\mathbb R}^3$$ from which no two segments obscure one another. Can this be done efficiently? ### Problem statement: More precisely and formally: we are given $$n$$ line segments $$\ell_1,\ldots,\ell_n$$, where each segment is defined as $$\ell_i=\{tu_i+(1-t) v_i: t\in [0,1]\}$$ with $$u_i,v_i\in {\mathbb Q}^3$$ (we assume rational coordinates). We wish to decide whether there exists a point $$p\in {\mathbb R}^3$$ such that the lines connecting $$p$$ with each point on the lines are distinct, and if so, compute it. Is there an efficient solution? Is there a hardness lower bound? UPDATE: Given the lack of answers so far, what about the case where the line segments connect two adjacent points in a 3D $$k\times k\times k$$ grid? Then, they are all parallel to some axis, they are all of length 1, etc. Does this make it significantly easier? ### Inefficient solution: Observe that for each pair of lines $$\ell_1,\ell_2$$, the points from which the lines do obscure each other can be described as a polyhedron defined as the intersection of 4 half-spaces: for every 3 points in $$\{u_1,v_1,u_2,v_2\}$$, the hyperplane defined by them is a boundary such that on one side of it, the lines do not obscure each other. Thus, we can represent the set of "bad points" (those from which at least one pair obscure each other) as a union of $$n^2$$ polyhedra (not necessarily disjoint). Then, all we need is to test its complement for emptiness. This can be done e.g. using Fourier-Motzkin quantifier elimination, whose complexity is quite bad. On top of this, we first need to convert a CNF representation to DNF, which may involve an exponential blowup.
490
1,878
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.53125
4
CC-MAIN-2021-21
latest
en
0.908188
# Computing 3D viewpoint of a set of non-intersecting segments Consider the following problem: we are given a finite set of bounded line-segments in $${\mathbb R}^3$$, and we want to decide whether there exists a point $$p\in {\mathbb R}^3$$ from which no two segments obscure one another. Can this be done efficiently? ### Problem statement: More precisely and formally: we are given $$n$$ line segments $$\ell_1,\ldots,\ell_n$$, where each segment is defined as $$\ell_i=\{tu_i+(1-t) v_i: t\in [0,1]\}$$ with $$u_i,v_i\in {\mathbb Q}^3$$ (we assume rational coordinates). We wish to decide whether there exists a point $$p\in {\mathbb R}^3$$ such that the lines connecting $$p$$ with each point on the lines are distinct, and if so, compute it. Is there an efficient solution? Is there a hardness lower bound? UPDATE: Given the lack of answers so far, what about the case where the line segments connect two adjacent points in a 3D $$k\times k\times k$$ grid? Then, they are all parallel to some axis, they are all of length 1, etc. Does this make it significantly easier? ### Inefficient solution: Observe that for each pair of lines $$\ell_1,\ell_2$$, the points from which the lines do obscure each other can be described as a polyhedron defined as the intersection of 4 half-spaces: for every 3 points in $$\{u_1,v_1,u_2,v_2\}$$, the hyperplane defined by them is a boundary such that on one side of it, the lines do not obscure each other.
Thus, we can represent the set of "bad points" (those from which at least one pair obscure each other) as a union of $$n^2$$ polyhedra (not necessarily disjoint).
https://math.stackexchange.com/questions/3186954/to-find-an-orthonormal-basis-for-the-row-space-of-a
1,566,754,839,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027330786.8/warc/CC-MAIN-20190825173827-20190825195827-00532.warc.gz
546,275,213
30,621
# To find an orthonormal basis for the row space of $A$. To find an orthonormal basis for the row space of $$A = \begin{bmatrix} 2 & -1 & -3 \\ -5 & 5 & 3 \\ \end{bmatrix}$$. Let $$v_1 = (2\ -1 \ -3)$$ and $$v_2 = (-5 \ \ \ 5 \ \ \ 3)$$. Using the Gram-Schmidt Process, I found an orthonormal basis $$e_1 = \frac{1}{\sqrt{14}} (2\ -1 \ -3)$$ and $$e_2 = \frac{1}{\sqrt{5}} (-1 \ \ \ 2 \ \ \ 0)$$. So an orthonormal basis for the row space of $$A =\{ e_1,e_2\}$$ . IS the solution correct? • Did you try checking if the two vectors you obtained are orthogonal (i.e. their dot product is $0$)? You should also probably show us the steps in your working, so we can see where you went wrong. – Minus One-Twelfth Apr 14 at 2:45 • Even more importantly, have you checked that $v_1$ and $v_2$ are actually elements of the row space? – amd Apr 14 at 3:31 ## 1 Answer Verify your Gram-Schmidt process again. Note that we have $$V_1=X_1$$ and $$V_2 = X_2-\frac {X_2.V_1}{V_1.V_1}V_1$$ My calculations did not match with yours.
371
1,026
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5
4
CC-MAIN-2019-35
latest
en
0.843742
# To find an orthonormal basis for the row space of $A$. To find an orthonormal basis for the row space of $$A = \begin{bmatrix} 2 & -1 & -3 \\ -5 & 5 & 3 \\ \end{bmatrix}$$. Let $$v_1 = (2\ -1 \ -3)$$ and $$v_2 = (-5 \ \ \ 5 \ \ \ 3)$$. Using the Gram-Schmidt Process, I found an orthonormal basis $$e_1 = \frac{1}{\sqrt{14}} (2\ -1 \ -3)$$ and $$e_2 = \frac{1}{\sqrt{5}} (-1 \ \ \ 2 \ \ \ 0)$$. So an orthonormal basis for the row space of $$A =\{ e_1,e_2\}$$ . IS the solution correct? • Did you try checking if the two vectors you obtained are orthogonal (i.e. their dot product is $0$)? You should also probably show us the steps in your working, so we can see where you went wrong. – Minus One-Twelfth Apr 14 at 2:45 • Even more importantly, have you checked that $v_1$ and $v_2$ are actually elements of the row space? – amd Apr 14 at 3:31 ## 1 Answer Verify your Gram-Schmidt process again.
Note that we have $$V_1=X_1$$ and $$V_2 = X_2-\frac {X_2.V_1}{V_1.V_1}V_1$$ My calculations did not match with yours.
https://math.stackexchange.com/questions/1827111/probability-with-coins
1,560,662,721,000,000,000
text/html
crawl-data/CC-MAIN-2019-26/segments/1560627997731.69/warc/CC-MAIN-20190616042701-20190616064701-00038.warc.gz
527,510,068
33,314
Probability with coins I'm self learning and I stumbled upon the following task, but I struggle to find the solution: Two players flip coins. The first player flips 3 coins, the second player flips 2 coins. The player that gets the most tails wins 5 coins. If both players get the same amount of tails, the game starts over. 1. What is the probability of the first player to win on the first attempt? 2. What is the probability of the first player to win the game? 3. How is the prize distributed? My solution: if H=heads, T=tails then on the first attempt the following outcomes are possible: {(HHH, HH), (HHH, HT), (HHH, TH), (HHH, TT), (HHT, HH), (HHT, HT), (HHT, TH), (HHT, TT), (HTH, HH), (HTH, HT), (HTH, TH), (HTH, TT), (THH, HH), (THH, HT), (THH, TH), (THH, TT), (HTT, HH), (HTT, HT), (HTT, TH), (HTT, TT), (THT, HH), (THT, HT), (THT, TH), (THT, TT), (TTH, HH), (TTH, HT), (TTH, TH), (TTH, TT), (TTT, HH), (TTT, HT), (TTT, TH), (TTT, TT)} Total cases: 32; First player wins in 16; Second player in 6; Game is repeated in 10. 1. The probability of the first player to win the game on the first attempt is $\frac {16} {32} = \frac 12$. 2. The probability of the first player to win the game is $\frac {16}{32}\frac {10}{32} = \frac {5}{32}$ ?? I'm not very sure if the second is correct. Is it right to conclude that if the game is repeated $n$ times the chance of the first player to win is the same as if the game is repeated 1 time? • You've got to be careful. For the first player, $HHT$ is a different throw from $HTH$, and they need to be separate entries. If they aren't treated as separate, then the probabilities aren't uniform, they go like this: $P(3H) = 1/8, P(2H) = 3/8, P(1H) = 3/8, P(0H) = 1/8$. – Arthur Jun 15 '16 at 12:27 • Thanks! Modified my question. The cases should be correct now, but is it so with my answer? – Ivan Prodanov Jun 15 '16 at 12:39 • You do not need to list all solutions. Player 1 and player 2 are independently distributed, meaning, the outcome of player 1 does not affect the probability of the outcomes of player 2, and conversely. Player 1 plays a Binomial distribution with $n=3$ attempts and probability of success $p=\frac{1}{2}$. Player 2 plays also a binomial distribution with $p = \frac{1}{2}$, but with $n=2$ attempts. – Lærne Jun 15 '16 at 12:46 • Using binomial distribution for the first answer looks interesting. So in order for the first player to win on the first trial it would be $\binom 3 3p^3(1-p)^0 + \binom 3 2p^2(1-p)^1(1-\binom 2 2p^2(1-p)^0) + \binom 3 1p(1-p)^2\binom 2 0p^0(1-p)^2$ – Ivan Prodanov Jun 15 '16 at 13:13 About the second part, you can think this way: Firstly, in each trial the probability that the first player wins is $\frac{1}{2}$, as you have calculated. The probability of the second person to win a trial is $\frac{3}{16}$. The probability of a draw is $1-\frac{1}{2}-\frac{3}{16}=\frac{5}{16}$.
Having the probabilities for a single trial, the probability that the first person wins, in total, is calculated considering the probabilities of the following scenarios: 1- the first person wins in the first trial ($\frac{1}{2}$) 2- the first trial ends in a draw and in the second trial, the first person wins ($(\frac{5}{16})(\frac{1}{2})$) 3- in general, we need to have $n$ draws and one win (for the first person) at the end, which happens with the probability $(\frac{5}{16})^n(\frac{1}{2})$ Since the mentioned scenarios are disjoint, they can be added up to give the final answer $\frac{1}{2}\sum_{i=0}(\frac{5}{16})^i=\frac{1}{2}\frac{1}{1-\frac{5}{16}}=\frac{8}{11}$ For the third part, I think it should be noted what prize distribution is. • Thanks! The third part I believe is related to probability distribution. Any clue on this one? – Ivan Prodanov Jun 15 '16 at 13:26 • We need to have a random variable defined first, so we can calculate the probability distribution. So, the distribution of prize is not well defined. – Med Jun 15 '16 at 13:36
1,245
3,977
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.125
4
CC-MAIN-2019-26
latest
en
0.937384
Probability with coins I'm self learning and I stumbled upon the following task, but I struggle to find the solution: Two players flip coins. The first player flips 3 coins, the second player flips 2 coins. The player that gets the most tails wins 5 coins. If both players get the same amount of tails, the game starts over. 1. What is the probability of the first player to win on the first attempt? 2. What is the probability of the first player to win the game? 3. How is the prize distributed? My solution: if H=heads, T=tails then on the first attempt the following outcomes are possible: {(HHH, HH), (HHH, HT), (HHH, TH), (HHH, TT), (HHT, HH), (HHT, HT), (HHT, TH), (HHT, TT), (HTH, HH), (HTH, HT), (HTH, TH), (HTH, TT), (THH, HH), (THH, HT), (THH, TH), (THH, TT), (HTT, HH), (HTT, HT), (HTT, TH), (HTT, TT), (THT, HH), (THT, HT), (THT, TH), (THT, TT), (TTH, HH), (TTH, HT), (TTH, TH), (TTH, TT), (TTT, HH), (TTT, HT), (TTT, TH), (TTT, TT)} Total cases: 32; First player wins in 16; Second player in 6; Game is repeated in 10. 1. The probability of the first player to win the game on the first attempt is $\frac {16} {32} = \frac 12$. 2. The probability of the first player to win the game is $\frac {16}{32}\frac {10}{32} = \frac {5}{32}$ ?? I'm not very sure if the second is correct. Is it right to conclude that if the game is repeated $n$ times the chance of the first player to win is the same as if the game is repeated 1 time? • You've got to be careful. For the first player, $HHT$ is a different throw from $HTH$, and they need to be separate entries. If they aren't treated as separate, then the probabilities aren't uniform, they go like this: $P(3H) = 1/8, P(2H) = 3/8, P(1H) = 3/8, P(0H) = 1/8$. – Arthur Jun 15 '16 at 12:27 • Thanks! Modified my question. The cases should be correct now, but is it so with my answer? – Ivan Prodanov Jun 15 '16 at 12:39 • You do not need to list all solutions. Player 1 and player 2 are independently distributed, meaning, the outcome of player 1 does not affect the probability of the outcomes of player 2, and conversely. Player 1 plays a Binomial distribution with $n=3$ attempts and probability of success $p=\frac{1}{2}$. Player 2 plays also a binomial distribution with $p = \frac{1}{2}$, but with $n=2$ attempts. – Lærne Jun 15 '16 at 12:46 • Using binomial distribution for the first answer looks interesting. So in order for the first player to win on the first trial it would be $\binom 3 3p^3(1-p)^0 + \binom 3 2p^2(1-p)^1(1-\binom 2 2p^2(1-p)^0) + \binom 3 1p(1-p)^2\binom 2 0p^0(1-p)^2$ – Ivan Prodanov Jun 15 '16 at 13:13 About the second part, you can think this way: Firstly, in each trial the probability that the first player wins is $\frac{1}{2}$, as you have calculated. The probability of the second person to win a trial is $\frac{3}{16}$. The probability of a draw is $1-\frac{1}{2}-\frac{3}{16}=\frac{5}{16}$.
Having the probabilities for a single trial, the probability that the first person wins, in total, is calculated considering the probabilities of the following scenarios: 1- the first person wins in the first trial ($\frac{1}{2}$) 2- the first trial ends in a draw and in the second trial, the first person wins ($(\frac{5}{16})(\frac{1}{2})$) 3- in general, we need to have $n$ draws and one win (for the first person) at the end, which happens with the probability $(\frac{5}{16})^n(\frac{1}{2})$ Since the mentioned scenarios are disjoint, they can be added up to give the final answer $\frac{1}{2}\sum_{i=0}(\frac{5}{16})^i=\frac{1}{2}\frac{1}{1-\frac{5}{16}}=\frac{8}{11}$ For the third part, I think it should be noted what prize distribution is.
https://math.stackexchange.com/questions/3087270/proof-verification-the-orthogonal-complement-of-the-column-space-is-the-left-nu
1,620,285,752,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00489.warc.gz
417,044,126
38,592
# Proof Verification: the orthogonal complement of the column space is the left nullspace Can someone please check my proof and my definitions. Let $$A \in \mathbb{R}^{n \times m}$$ be my matrix. The left null space of $$A$$ is written as, $$\mathcal{N}(A^\top) = \{x \in \mathbb{R}^n| A^\top x = 0\}$$ The orthogonal complement of the column space $$\mathcal{C}(A)$$ is written as, $$\mathcal{C}(A)^\perp = \{x \in \mathbb{R}^n | x^\top y = 0, \forall y \in \mathcal{C}(A)\}$$ We want to show that $$\mathcal{N}(A^\top) = \mathcal{C}(A)^\perp$$ First, we show, $$\mathcal{N}(A^\top) \subseteq \mathcal{C}(A)^\perp$$ Let $$x \in \mathcal{N}(A^\top)$$, then $$A^\top x = 0 \implies x^\top A = 0^\top \implies x^\top Av= 0^\top v, \forall v \in \mathcal{C}(A) \implies x^\top y = 0 , y = Av$$, $$\implies x \in C(A)^\perp$$. Next, we show, $$\mathcal{N}(A^\top) \supseteq \mathcal{C}(A)^\perp$$ Let $$x \in C(A)^\perp$$, then $$x^\top y = 0$$, forall $$y \in C(A)$$. But $$y = Av, \forall v \in \mathbb{R}^n$$. Hence, $$x^\top y = x^\top Av = v^\top A^\top x.$$ For all $$v \neq 0, A^\top x = 0$$, hence $$x \in \mathcal{N}(A^\top)$$. I'm pretty confident about the first proof. But the second proof is a bit more rough. Can someone please check for me. $$y \in C(A)$$ means that there exists (at least one) $$v$$ of appropriate dimension such that $$y = Av$$. So we can say: For $$x \in C(A)^{\perp}$$, then $$x^T y = 0$$ for every $$y \in C(A)$$. For every $$y \in C(A)$$, we can express $$y = Av$$ for some (nonzero) $$v$$. So we can always express $$x^T y$$ as $$x^T Av$$. So $$x^T y = x^T (A v) = (x^T A) v = (A^T x)^T v = 0^T v = 0$$ for $$v \neq 0$$, so we must have $$A^T x = 0$$, i.e., $$x \in N(A^T)$$.
694
1,724
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.953125
4
CC-MAIN-2021-21
latest
en
0.551844
# Proof Verification: the orthogonal complement of the column space is the left nullspace Can someone please check my proof and my definitions. Let $$A \in \mathbb{R}^{n \times m}$$ be my matrix. The left null space of $$A$$ is written as, $$\mathcal{N}(A^\top) = \{x \in \mathbb{R}^n| A^\top x = 0\}$$ The orthogonal complement of the column space $$\mathcal{C}(A)$$ is written as, $$\mathcal{C}(A)^\perp = \{x \in \mathbb{R}^n | x^\top y = 0, \forall y \in \mathcal{C}(A)\}$$ We want to show that $$\mathcal{N}(A^\top) = \mathcal{C}(A)^\perp$$ First, we show, $$\mathcal{N}(A^\top) \subseteq \mathcal{C}(A)^\perp$$ Let $$x \in \mathcal{N}(A^\top)$$, then $$A^\top x = 0 \implies x^\top A = 0^\top \implies x^\top Av= 0^\top v, \forall v \in \mathcal{C}(A) \implies x^\top y = 0 , y = Av$$, $$\implies x \in C(A)^\perp$$. Next, we show, $$\mathcal{N}(A^\top) \supseteq \mathcal{C}(A)^\perp$$ Let $$x \in C(A)^\perp$$, then $$x^\top y = 0$$, forall $$y \in C(A)$$. But $$y = Av, \forall v \in \mathbb{R}^n$$. Hence, $$x^\top y = x^\top Av = v^\top A^\top x.$$ For all $$v \neq 0, A^\top x = 0$$, hence $$x \in \mathcal{N}(A^\top)$$. I'm pretty confident about the first proof. But the second proof is a bit more rough. Can someone please check for me. $$y \in C(A)$$ means that there exists (at least one) $$v$$ of appropriate dimension such that $$y = Av$$. So we can say: For $$x \in C(A)^{\perp}$$, then $$x^T y = 0$$ for every $$y \in C(A)$$. For every $$y \in C(A)$$, we can express $$y = Av$$ for some (nonzero) $$v$$. So we can always express $$x^T y$$ as $$x^T Av$$.
So $$x^T y = x^T (A v) = (x^T A) v = (A^T x)^T v = 0^T v = 0$$ for $$v \neq 0$$, so we must have $$A^T x = 0$$, i.e., $$x \in N(A^T)$$.
https://math.stackexchange.com/questions/874631/finding-cut-off-point-for-utility-function/874967
1,718,852,005,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861880.60/warc/CC-MAIN-20240620011821-20240620041821-00426.warc.gz
335,602,211
36,859
# Finding cut-off point for utility function OK, so apologies for the easy question, but I'm new to this! This is somewhere between elementary algebra, and beginner's game theory. The question comes from a paper I read here (see p. 193): http://home.uchicago.edu/~sashwort/valence.pdf The following is a utility function for an individual comparing two alternatives (call them L and R). The individual, $i$, prefers L to R when: $V_L - (x^* - x_L)^2 > V_R - (x^* - x_R)^2$ So far so good. The difficulty I'm having is figuring out how we can get from here to a cutoff rule, such that $i$ will prefer L if and only if: $x^* < \hat{x}(x_L,x_R,v_L,v_R)$ The paper says that this can be accomplished via "straightforward algebra" to reach: $\hat{x}(x_L,x_R,v_L,v_R) = \frac{1}{2}(x_R + x_L) + \frac{V_L - V_R}{2(X_R-X_L)}$ Sadly, for me, this algebra ain't so straightforward. If anyone could walk me through the steps to reach this point (or point out how I should approach this) that'd be great. Of course, in the SO tradition, anything more general that can help make this question more applicable to others is also very welcome. Thanks! -- EDIT: posted this q this morning, and have had some views but no nibbles... anyone got any suggestions? Thanks so much! • 1. Expand both sides. 2. Cancel the $(x^*)^2$ that appears on both sides. 3. Solve for $x^*$. 4. Simplify, remembering that $(x_L^2-x_R^2)=(x_L+x_R)(x_L-x_R)$. 5. Drop the "algebraic-geometry" tag! :) Commented Jul 22, 2014 at 16:49 • Thanks so much - that's really great! Commented Jul 22, 2014 at 17:20 Just to avoid cumbersome effects, use $x_*$ instead of $x^*$. Then we have (step by step) $V_R - (x_* - x_R)^2 < V_L - (x_* - x_L)^2$, $V_R - x_{*}^2 - x_{R}^2 + 2x_* x_R < V_L - x_{*}^2 - x_{L}^2 + 2x_* x_L$, $V_R - x_{R}^2 + 2x_* x_R < V_L - x_{L}^2 + 2x_* x_L$, $2x_* x_R - 2x_* x_L + x_{L}^2 - x_{R}^2 < V_L - V_R$, $2x_* ( x_R - x_L) < (V_L - V_R) + (x_{R}^2 - x_{L}^2)$, $x_* < \frac{(V_L - V_R)}{2 ( x_R - x_L)} + \frac{1}{2}(x_{R} + x_{L})$. As somebody suggested, drop the algebraic topology tag. ;) I hope it helps! • Many thanks for this! It's great. Just one thing: in the last step, when you divide $(V_R - V_R)$ by $2(X_R - X_L)$, why is the other term $(X_R^2 - X_L^2)$ not also divided by $(X_R - X_L)$? I understand the $\frac{1}{2}$ part but don't understand where the other bit goes! Apologies for confusion. Many thanks. Commented Jul 22, 2014 at 17:23 • Set $x_R = a$ and $x_L =b$. Then you have $\frac{a^2 - b^2}{2(a-b)}$. But this is nothing more than $\frac{(a - b)(a+b)}{2(a-b)}$, and you simplify to get $\frac{(a+b)}{2}$. Commented Jul 22, 2014 at 17:36 • Ah I see. In which case, I think the last term in the last line ought to be $\frac{1}{2}(x_R + x_L)$ i.e., without the square term on $x_R$ and $x_L$? Commented Jul 22, 2014 at 22:35 • Indeed, I corrected the typo. Commented Jul 23, 2014 at 6:23 • Great, many thanks again. Commented Jul 23, 2014 at 8:35
1,035
2,971
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.984375
4
CC-MAIN-2024-26
latest
en
0.849944
# Finding cut-off point for utility function OK, so apologies for the easy question, but I'm new to this! This is somewhere between elementary algebra, and beginner's game theory. The question comes from a paper I read here (see p. 193): http://home.uchicago.edu/~sashwort/valence.pdf The following is a utility function for an individual comparing two alternatives (call them L and R). The individual, $i$, prefers L to R when: $V_L - (x^* - x_L)^2 > V_R - (x^* - x_R)^2$ So far so good. The difficulty I'm having is figuring out how we can get from here to a cutoff rule, such that $i$ will prefer L if and only if: $x^* < \hat{x}(x_L,x_R,v_L,v_R)$ The paper says that this can be accomplished via "straightforward algebra" to reach: $\hat{x}(x_L,x_R,v_L,v_R) = \frac{1}{2}(x_R + x_L) + \frac{V_L - V_R}{2(X_R-X_L)}$ Sadly, for me, this algebra ain't so straightforward. If anyone could walk me through the steps to reach this point (or point out how I should approach this) that'd be great. Of course, in the SO tradition, anything more general that can help make this question more applicable to others is also very welcome. Thanks! -- EDIT: posted this q this morning, and have had some views but no nibbles... anyone got any suggestions? Thanks so much! • 1. Expand both sides. 2. Cancel the $(x^*)^2$ that appears on both sides. 3. Solve for $x^*$. 4. Simplify, remembering that $(x_L^2-x_R^2)=(x_L+x_R)(x_L-x_R)$. 5. Drop the "algebraic-geometry" tag! :) Commented Jul 22, 2014 at 16:49 • Thanks so much - that's really great! Commented Jul 22, 2014 at 17:20 Just to avoid cumbersome effects, use $x_*$ instead of $x^*$. Then we have (step by step) $V_R - (x_* - x_R)^2 < V_L - (x_* - x_L)^2$, $V_R - x_{*}^2 - x_{R}^2 + 2x_* x_R < V_L - x_{*}^2 - x_{L}^2 + 2x_* x_L$, $V_R - x_{R}^2 + 2x_* x_R < V_L - x_{L}^2 + 2x_* x_L$, $2x_* x_R - 2x_* x_L + x_{L}^2 - x_{R}^2 < V_L - V_R$, $2x_* ( x_R - x_L) < (V_L - V_R) + (x_{R}^2 - x_{L}^2)$, $x_* < \frac{(V_L - V_R)}{2 ( x_R - x_L)} + \frac{1}{2}(x_{R} + x_{L})$. As somebody suggested, drop the algebraic topology tag. ;) I hope it helps! • Many thanks for this! It's great. Just one thing: in the last step, when you divide $(V_R - V_R)$ by $2(X_R - X_L)$, why is the other term $(X_R^2 - X_L^2)$ not also divided by $(X_R - X_L)$? I understand the $\frac{1}{2}$ part but don't understand where the other bit goes! Apologies for confusion. Many thanks.
Commented Jul 22, 2014 at 17:23 • Set $x_R = a$ and $x_L =b$.
https://dsp.stackexchange.com/questions/54772/system-function-h-omega-relationship-to-odd-and-even-components-of-hn
1,631,972,780,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00451.warc.gz
284,420,026
38,467
# system function $H(\omega)$ relationship to odd and even components of h[n] What qualities of $$h[n]$$ are necessary for: $$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$ Do all real / causal h[n] have the property that: $$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$ where: $$h_{even}[n] = \frac{1}{2}(h[n] + h[-n])$$ $$h_{odd}[n] = \frac{1}{2}(h[n] - h[-n])$$ The DTFT relationships $$x_{even}[n]=\frac12\left(x[n]+x^*[-n]\right)\Longleftrightarrow\textrm{Re}\left\{X(e^{j\omega})\right\}$$ and $$x_{odd}[n]=\frac12\left(x[n]-x^*[-n]\right)\Longleftrightarrow j\,\textrm{Im}\left\{X(e^{j\omega})\right\}$$ hold for any sequence $$x[n]$$ for which the DTFT exists. There is no assumption about $$x[n]$$ being real-valued or causal (note the complex conjugation $$^*$$ in the definition of even and odd signals). If $$x[n]$$ is real-valued you can leave out the conjugation. Note that the DTFT of the odd part $$x_{odd}[n]$$ equals $$j$$ times the imaginary part of the DTFT $$X(e^{j\omega})$$, so you have $$X(e^{j\omega})=\textrm{DTFT}\{x_{even}[n]\}+\textrm{DTFT}\{x_{odd}[n]\}$$ (without a $$j$$ on the right-hand side). • thanks, makes sense now. any suggestion for title? Jan 12 '19 at 15:55 • @MrCasuality: If your question has been answered you can accept this answer by clicking on the green check mark to its left, thanks. Jan 12 '19 at 17:07
501
1,404
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2021-39
latest
en
0.733083
# system function $H(\omega)$ relationship to odd and even components of h[n] What qualities of $$h[n]$$ are necessary for: $$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$ Do all real / causal h[n] have the property that: $$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$ where: $$h_{even}[n] = \frac{1}{2}(h[n] + h[-n])$$ $$h_{odd}[n] = \frac{1}{2}(h[n] - h[-n])$$ The DTFT relationships $$x_{even}[n]=\frac12\left(x[n]+x^*[-n]\right)\Longleftrightarrow\textrm{Re}\left\{X(e^{j\omega})\right\}$$ and $$x_{odd}[n]=\frac12\left(x[n]-x^*[-n]\right)\Longleftrightarrow j\,\textrm{Im}\left\{X(e^{j\omega})\right\}$$ hold for any sequence $$x[n]$$ for which the DTFT exists. There is no assumption about $$x[n]$$ being real-valued or causal (note the complex conjugation $$^*$$ in the definition of even and odd signals). If $$x[n]$$ is real-valued you can leave out the conjugation.
Note that the DTFT of the odd part $$x_{odd}[n]$$ equals $$j$$ times the imaginary part of the DTFT $$X(e^{j\omega})$$, so you have $$X(e^{j\omega})=\textrm{DTFT}\{x_{even}[n]\}+\textrm{DTFT}\{x_{odd}[n]\}$$ (without a $$j$$ on the right-hand side).
http://math.stackexchange.com/questions/99199/solution-of-fredholm-integral-equation-of-the-first-kind-with-symmetric-rational
1,419,124,574,000,000,000
text/html
crawl-data/CC-MAIN-2014-52/segments/1418802770554.119/warc/CC-MAIN-20141217075250-00095-ip-10-231-17-201.ec2.internal.warc.gz
181,670,018
16,170
solution of Fredholm integral equation of the first kind with symmetric rational kernel How can this Fredholm integral equation of the first kind be solved: $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}dy$$ - The equation $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}\mathrm{d}y$$ has solution \begin{align} y(x) &= \frac{1}{2 i} \lim_{\epsilon \to 0^+} \left\{f(-x-i\epsilon)-f(-x+i\epsilon)\right\} \\ &= \frac{1}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} \left(\frac{\pi}{x} \frac{\mathrm{d}}{\mathrm{d}x}\right)^{2k} \left\{\sqrt{x}f(x)\right\}. \end{align} Source: Polyanin and Manzhirov, Handbook of Integral Equations, section 3.1-3, #17. Numerous other sources are cited below the entry there. - You could try a Mellin transform. Since $\int _{0}^{\infty }\!{\frac {{x}^{s-1}}{x+y}}{dx}={y}^{s-1}\pi \,\csc \left( \pi \,s \right)$ for $y > 0$ and $0 < \Re s < 1$, the Mellin transforms of $f$ and $g$ satisfy $Mf(s) = \csc(\pi s) Mg(s)$ for $0 < \Re s < 1$. You might then try inverting $Mg(s)$ using the inversion formula $$g(y) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} Mf(s) \sin(\pi s)\, y^{-s}\ ds$$ where $0 < c < 1$, under appropriate convergence assumptions. -
472
1,204
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
3.671875
4
CC-MAIN-2014-52
latest
en
0.553492
solution of Fredholm integral equation of the first kind with symmetric rational kernel How can this Fredholm integral equation of the first kind be solved: $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}dy$$ - The equation $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}\mathrm{d}y$$ has solution \begin{align} y(x) &= \frac{1}{2 i} \lim_{\epsilon \to 0^+} \left\{f(-x-i\epsilon)-f(-x+i\epsilon)\right\} \\ &= \frac{1}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} \left(\frac{\pi}{x} \frac{\mathrm{d}}{\mathrm{d}x}\right)^{2k} \left\{\sqrt{x}f(x)\right\}. \end{align} Source: Polyanin and Manzhirov, Handbook of Integral Equations, section 3.1-3, #17. Numerous other sources are cited below the entry there. - You could try a Mellin transform. Since $\int _{0}^{\infty }\! {\frac {{x}^{s-1}}{x+y}}{dx}={y}^{s-1}\pi \,\csc \left( \pi \,s \right)$ for $y > 0$ and $0 < \Re s < 1$, the Mellin transforms of $f$ and $g$ satisfy $Mf(s) = \csc(\pi s) Mg(s)$ for $0 < \Re s < 1$.
You might then try inverting $Mg(s)$ using the inversion formula $$g(y) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} Mf(s) \sin(\pi s)\, y^{-s}\ ds$$ where $0 < c < 1$, under appropriate convergence assumptions.
http://math.stackexchange.com/questions/435026/algebraic-divison
1,469,759,211,000,000,000
text/html
crawl-data/CC-MAIN-2016-30/segments/1469257829325.58/warc/CC-MAIN-20160723071029-00053-ip-10-185-27-174.ec2.internal.warc.gz
157,137,543
17,591
# Algebraic Division Is there a way to break the left hand side expression such that it takes the right hand side form? $(a+b)/(c+d)=a/c+b/d+k$ Where $k$ is some expression. - Yes, and that expression would be $(a+b)/(c+d) - a/c - b/d$. Are you looking for something less stupid or more specific? – Patrick Da Silva Jul 3 '13 at 3:17 Solve for $k$, as Patrick indicated: \begin{align} k&=\frac{a+b}{c+d}-\frac{a}{c}-\frac{b}{d}\\ &=\frac{cd(a+b)-ad(c+d)-bc(c+d)}{cd(c+d)}\\ &=\frac{acd+bcd-acd-ad^2-bc^2-bcd}{cd(c+d)}\\ &=\frac{-ad^2-bc^2}{cd(c+d)} \end{align} In the words of lots of movie cops over the years, "Move along, folks, there's nothing to see here." @jessica: one additional thing to note is that you need $c$ and $d$ non-zero. – James Jul 3 '13 at 13:21 @James: and also $c\ne-d$, else the original expression is undefined. – Rick Decker Jul 3 '13 at 13:42 Here's an old chestnut related to your problem. Take $64/16$, cancel the 6s, and you get $4/1$ which happens to be the right answer. Too bad it doesn't work in general. – Rick Decker Jul 4 '13 at 14:05
366
1,079
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
4.09375
4
CC-MAIN-2016-30
latest
en
0.823834
# Algebraic Division Is there a way to break the left hand side expression such that it takes the right hand side form? $(a+b)/(c+d)=a/c+b/d+k$ Where $k$ is some expression. - Yes, and that expression would be $(a+b)/(c+d) - a/c - b/d$. Are you looking for something less stupid or more specific?
– Patrick Da Silva Jul 3 '13 at 3:17 Solve for $k$, as Patrick indicated: \begin{align} k&=\frac{a+b}{c+d}-\frac{a}{c}-\frac{b}{d}\\ &=\frac{cd(a+b)-ad(c+d)-bc(c+d)}{cd(c+d)}\\ &=\frac{acd+bcd-acd-ad^2-bc^2-bcd}{cd(c+d)}\\ &=\frac{-ad^2-bc^2}{cd(c+d)} \end{align} In the words of lots of movie cops over the years, "Move along, folks, there's nothing to see here."
https://math.stackexchange.com/questions/2025090/finding-the-area-bounded-by-two-curves
1,571,601,951,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00157.warc.gz
602,732,631
32,525
# Finding the area bounded by two curves Find the area of the region bounded by the parabola $$y = 4x^2$$, the tangent line to this parabola at $$(2, 16)$$, and the $$x$$-axis. I found the tangent line to be $$y=16x-16$$ and set up the integral from $$0$$ to $$2$$ of $$4x^2-16x+16$$ with respect to $$x$$, which is the top function when looking at the graph minus the bottom function. I took the integral and came up with $$\frac{4}{3}x^3-8x^2+16x$$ evaluated between $$0$$ and $$2$$. This came out to be $$\frac{32}{3}$$ but this was the incorrect answer. Can anyone tell me where I went wrong? Hint: After drawing it, note that you have to calculate $\int_0^1 4x^2\;dx + \int_1^2 4x^2-16x+16\;dx$. • I got $\frac{8}{3}$. I'm sorry but did you do it right? – Rodrigo Dias Nov 22 '16 at 0:22 • Any time! ${}{}$ – Rodrigo Dias Nov 22 '16 at 0:28 The tangent crosses the $x$ axis at $x=1$, so your integral is including (with the plus sign) also the triangle made by the tangent below the $x$ axis. The correct way is to integrate only the parabola for $x=0 \cdots 2$ (which is $32/3$ and then subtract the area of the triangle$(1,0),(2,16),(2,0)$, which is $8$, so the net area is $8/3$ .
406
1,191
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.953125
4
CC-MAIN-2019-43
latest
en
0.923894
# Finding the area bounded by two curves Find the area of the region bounded by the parabola $$y = 4x^2$$, the tangent line to this parabola at $$(2, 16)$$, and the $$x$$-axis. I found the tangent line to be $$y=16x-16$$ and set up the integral from $$0$$ to $$2$$ of $$4x^2-16x+16$$ with respect to $$x$$, which is the top function when looking at the graph minus the bottom function. I took the integral and came up with $$\frac{4}{3}x^3-8x^2+16x$$ evaluated between $$0$$ and $$2$$. This came out to be $$\frac{32}{3}$$ but this was the incorrect answer. Can anyone tell me where I went wrong? Hint: After drawing it, note that you have to calculate $\int_0^1 4x^2\;dx + \int_1^2 4x^2-16x+16\;dx$. • I got $\frac{8}{3}$. I'm sorry but did you do it right? – Rodrigo Dias Nov 22 '16 at 0:22 • Any time! ${}{}$ – Rodrigo Dias Nov 22 '16 at 0:28 The tangent crosses the $x$ axis at $x=1$, so your integral is including (with the plus sign) also the triangle made by the tangent below the $x$ axis.
The correct way is to integrate only the parabola for $x=0 \cdots 2$ (which is $32/3$ and then subtract the area of the triangle$(1,0),(2,16),(2,0)$, which is $8$, so the net area is $8/3$ .
End of preview.
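The rows above are a linearized rendering of the underlying Parquet shards, so the natural next step is to load a shard programmatically. The sketch below is only an illustration under stated assumptions: the file name is a placeholder, and the column names (`url`, `text`, `score`, `prefix`, `target`) are inferred from the values visible in the preview rather than taken from an authoritative schema.

```python
# Minimal sketch (not an official loader for this dataset). Assumptions:
# - "shard.parquet" is a placeholder path for one downloaded Parquet shard.
# - Column names ("url", "text", "score", "prefix", "target") are guesses
#   based on the values shown in the preview and may differ in practice.
import pandas as pd

df = pd.read_parquet("shard.parquet")   # read the shard into a DataFrame
print(df.shape)                         # number of rows and columns
print(df.columns.tolist())              # confirm the real column names

# Keep only rows whose quality score is at least 3.5, mirroring the kind
# of score values shown in the preview rows.
good = df[df["score"] >= 3.5]

# Each row pairs a long "prefix" with a short "target" continuation.
for _, row in good.head(3).iterrows():
    print(row["url"])
    print(row["prefix"][-120:])         # tail of the prefix
    print("->", row["target"][:120])    # start of the continuation
```

If the shards are pulled from the Hub instead of a local download, `datasets.load_dataset("parquet", data_files=...)` offers an equivalent route; pandas is used here only to keep the sketch self-contained.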