12 September 2017 Version
This material is not, and will not be,
patented nor copyrighted by the authors.
I did my
undergraduate work in Engineering Physics at Cornell University. I did my
graduate work in Nuclear Engineering at the University of Michigan. I taught for four years at the U.S. Naval
Nuclear Power School in five different courses, of which I partially or fully
revised four. I advanced from newly
commissioned Ensign to full Lieutenant (O-3;
called Captain in the other services) in a brisk three years.
I taught
Physics and Advanced Algebra at Lewis S. Mills High School, Region 10, CT, and
was named Best New Teacher, but returned to Engineering for financial
reasons.
I have
worked in or visited 49 states and 35 countries. I have held Secret, Top Secret, L, and
Q-Weapons clearances. I have been shot
at and traveled the former USSR with my own armed KGB “minder”. I have managed or significantly contributed
to some of this nation’s most successful power projects, including Calvert
Cliffs (Maryland) and Palo Verde (Arizona).
I have been
at Governor’s House, a nursing home in Simsbury CT, for seven years because of
spinal stenosis. I am now serving my
fifth term as President of the Residents Council. I am working on about a dozen projects
including Grading, Cosmology, and Pyramids.
Building Pyramids
Using Very Long Ropes
12 September 2017 Version
Simsbury CT 06070
Flint MI 48503
For literally thousands of years, people have speculated
about how the ancient Egyptians built the pyramids at Giza. Credible speculations have included long
ramps, ramps spiraled around the pyramid itself, plus varieties of lever
systems and cranes. Other speculations
have run from anti-gravity devices to assistance by extraterrestrials. We offer here an engineering speculation and
invite comment.
Making rope is a fully "scalable"
activity: if you know how to make a 20
foot long rope, you also know how to make a 200 or a 2000 foot long rope. Suppose you ran such a very long rope up one
side of a partially completed pyramid, across the top, down the other side, and
out to a large pulling team, say with 200 men in the pulling team. Suppose each man could pull 50 pounds
force. (This is not very
challenging: if facing in the direction
of pulling, just lean forward; or better yet, as in a tug of war, turn toward
the pyramid and lean backward. An even
easier way to sustain this force would be to attach cross boards to the pulling
rope, and have the pullers pull it from a nearly sitting position, much like a
rowing crew, but with no strain on their arms or backs; Pharaoh's OSHA
inspector will be pleased!) However
done, there is then a 10,000 pound force pull available at the other end of the
rope, which will easily pull an average 5000 pound block up the partially
complete side, across the partially complete top, and directly into place, even
with a generous allowance for friction.
(More realistically, 80 pullers per rope should suffice.)
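As a rough sanity check of these numbers (not a calculation from the paper itself), the pull required to slide a block up the face can be estimated from the slope angle and a friction coefficient; both values below are our assumptions, not the authors'.

```python
# Rough check of the rope-pull numbers in the text. The slope angle and
# friction coefficient are assumed values, not figures from the paper.
import math

pullers, pull_each = 200, 50.0   # men, pounds-force per man (from the text)
block_weight = 5000.0            # pounds, average block (from the text)
slope_deg = 52.0                 # assumed: approximate Giza face angle
mu = 0.2                         # assumed: wet limestone-on-limestone friction

theta = math.radians(slope_deg)
available = pullers * pull_each  # total pull at the far end of the rope
required = block_weight * (math.sin(theta) + mu * math.cos(theta))

print(f"available pull: {available:.0f} lbf")
print(f"required pull up the face: {required:.0f} lbf")
```

With these assumptions the 200-man team has roughly a factor-of-two margin over the required pull, consistent with the text's "generous allowance for friction."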
Suppose the pullers move at 2 feet/second (1.36
miles/hour). The block will then move
from the staging area at the base, up the side, and into place in a few
minutes. (A representative pull would be
300 feet up the side and 200 feet along the top, consuming less than five
minutes.) At most, the men will each be
expending (2 feet/second) x (50 pounds force) / (550 foot
pounds/sec/horsepower) = 0.18 horsepower, which is sustainable a few minutes at
a time. Many pulling teams can work
parallel ropes at the same time, so that net emplacement rates of at least one
block per minute could readily be
achieved. At this emplacement rate, working
12 hours per day, 365 days per year, they could have completed the estimated
2.5 million block structure in 10 years, even allowing for inevitable
delays. Oversize blocks could be
emplaced using multiple ropes.
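The speed, power, and schedule arithmetic above can be reproduced directly; all inputs are taken from the text.

```python
# Check of the speed, power, and schedule arithmetic in the text.
speed = 2.0                              # ft/s walking pace (from the text)
mph = speed * 3600 / 5280                # convert to miles/hour
hp = speed * 50 / 550                    # 50 lbf per man -> horsepower per puller
pull_minutes = (300 + 200) / speed / 60  # representative 500 ft pull

blocks_per_year = 1 * 60 * 12 * 365      # 1 block/min, 12 h/day, 365 day/yr
years = 2_500_000 / blocks_per_year      # estimated total block count

print(f"{mph:.2f} mph, {hp:.2f} hp/puller, "
      f"{pull_minutes:.1f} min/pull, {years:.1f} years")
```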
Even allowing for 20 pulling teams, only 1600 to 4000
pullers would be needed. If available,
horses or oxen could also be used for pulling.
Since the primary work is walking back and forth on the Giza plateau,
conditions for personnel need not be murderous:
it could be done by ordinary workers or by soldiers between
campaigns. Since the same pulling paths
could be used year after year, someone would probably decide to install ridged
paving blocks to improve traction.
Lightweight sun shades and abundant water for evaporative cooling in the
summer would probably also appear.
Quick release wooden strongbacks would allow quick
engagement of blocks in the staging area and quick release upon
emplacement. Direction changes of the
ropes at both upper edges of the pyramid and at the far base could be achieved
with limited friction by passing them over gently curved, polished granite
bearing blocks, probably with water as a lubricant. Friction between the moving block and the
partially completed side and top would probably be reduced by pouring water
ahead of the moving block, but no special skids or rollers are necessary: the new block would slide directly against
the finish blocks on the sides and the lower blocks on the top.
This method requires no exotic technology: the ancient Egyptians clearly had heavy
ropes. Nor does it require forgetting
anything exotic: merely forgetting a
"trick of the trade". Neither
is there anything exotic to find: when
complete, the very long ropes would simply be cut into shorter lengths and used
for other purposes. The method works
identically from the base to the capstone.
It seems likely that most of the blocks were quarried
and shaped using copper or bronze saws and abrasives in an adjoining limestone
quarry. This activity pace is harder to
estimate, but it is also clearly achievable using technology they had, and it
can also be done in parallel by many teams.
If the block preparation process can be done by a comparably sized
group, then the overall project should require no more than 8000 men, which is
considerably more manageable and economically sustainable than the 100,000 man
force sometimes postulated.
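The workforce bound follows from the team sizes already given: twenty teams of 80 to 200 pullers, plus a comparably sized block-preparation group.

```python
# Workforce bound implied by the text: 20 teams of 80 to 200 pullers,
# plus a comparably sized block-preparation group.
teams = 20
pullers_low, pullers_high = teams * 80, teams * 200
total_high = pullers_high * 2   # pulling plus preparation
print(pullers_low, pullers_high, total_high)   # 1600 4000 8000
```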
Footnotes
1. Due credit to
others -- it's another ramp theory: Work in engineering is
inevitably based on prior work and experience, and this speculation is no
exception: it's another ramp
theory. In this case, the side of the
pyramid itself is the ramp for the blocks.
This avoids the time and material for building separate ramps, ensures
that the ramp is always ready for use for a new layer, and avoids the delays
inherent in narrow ramps. Short ramps at
the base reported by others would be useful for the first few layers, and
thereafter could serve to transition blocks from horizontal to the slope of the
pyramid side. If the casing blocks are
placed during assembly on all four sides, and if the ropes are periodically
shifted from side to side, the wear on the casing blocks will be evenly
distributed, and will actually serve to polish their outer layer.
Several experts, including
Jean-Philippe Lauer and Dr. J.D. Degreef, have proposed pulling from some
location on the pyramid remote from the block itself. This is clearly possible, but we believe that pulling from the plateau instead has
significant advantages, most notably avoiding needlessly walking up and down
the pyramid, allowing simultaneous use of as many pulling teams as desired and
of any size desired, and using a physiologically efficient pulling
posture. Together with using the pyramid
sides as ramps, we believe that the ancient Egyptians would have realized this
early in construction. As best we can
tell, however, we are the first authors to
suggest this particular combination of techniques. We would be interested in seeing any earlier
discussions of this combination.
2. Sliding friction
of the block: Since limestone is a relatively soft material, any
bottom surface irregularity on the sliding block will quickly erode to a smooth
finish. Thereafter, there will be smooth
limestone sliding against smooth limestone, lubricated by a slurry of limestone
dust and water, which is pretty slippery stuff.
We plan on experimental verification, but our engineering judgment is
that the resulting coefficient of sliding friction will be relatively low.
3. Sliding friction
and wear on the rope: Our guess is that the pyramid builders provided a wear
layer on the outside of the ropes, possibly as a tightly wrapped spiral of
small diameter rope, replaced as needed.
A somewhat higher technology alternative to polished granite bearing
blocks for rope direction changes would be large wooden or granite pulleys or
drums, probably with lubricated copper or bronze sleeve bearings. These would be similar in concept and
technology to chariot wheels, which were well known then, although designed and
sized for far larger loads. Such pulleys
or drums would substantially reduce both friction and rope wear.
4. Power expenditure
by pullers: A key element of our proposal is that the pullers just
walk back and forth on the Giza plateau, rather than needlessly struggling up
and down a ramp themselves. This means
that we can estimate their maximum power expenditure fairly closely by looking
simply at the sustained power delivered to friction and to elevating the block
as it moves up the side of the pyramid.
The estimate provided above is a maximum output of 0.18 horsepower for a
period of several minutes at a time. (A
very brief period of higher force and higher output to get the block moving
would not be significant, and indeed, would be further eased by the natural
elasticity of a long rope as it takes up tension at the start of a pull.)
To put this in perspective, consider the case of a
modern, 180 pound man walking up typical seven inch rise steps at the measured
pace of one step a second. (Go try this
if it isn't obvious that such a pace is easily sustained; the exercise will do
you more good than thinking about it!)
Such an individual is doing (180 pounds) x (7/12 foot/step) x (1
step/second) / (550 foot-pounds/second/horsepower) = 0.19 horsepower. In other words, he is working harder than
even the maximum required of pyramid block pullers in this model.
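The footnote's stair-climbing comparison, computed directly from the figures in the text:

```python
# The stair-climbing comparison from the footnote, computed directly.
weight = 180.0      # pounds
rise = 7.0 / 12.0   # feet per 7-inch step
rate = 1.0          # steps per second
stair_hp = weight * rise * rate / 550.0   # foot-pounds/s -> horsepower
puller_hp = 2.0 * 50.0 / 550.0            # from the main text
print(f"stair climber: {stair_hp:.2f} hp, block puller: {puller_hp:.2f} hp")
```

The stair climber's 0.19 hp indeed slightly exceeds the 0.18 hp maximum asked of a puller.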
In practice, this means that the actual pulling crews
could have been smaller, perhaps much smaller depending on the exact friction
coefficients, or that assembly could have been even faster. This would certainly have been sorted out
early in construction by experience, and thereafter done at an efficient overall
pace.
5. Sequence: They reset the main rope using a trailing rope behind the new block and a small group of men in the staging area at the base of the face.
29 December 2016 Version
Robert M. Hartranft
Simsbury CT 06070
How many rays emanate from her crown,
and what do they represent? The easy
part of the answer is SEVEN.
Not knowing that value, I assumed that the rays were somehow
tied to the Tired and the Poor. The usual reference is to the seven continents
and the seven seas, but I find the smooth uniformity of the rays highly
dissimilar to the highly irregular continents and seas. A better match, I think, is the seven colors picked by Isaac Newton in
reducing the visible light spectrum to just:
1. Red  2. Orange  3. Yellow  4. Green  5. Blue  6. Indigo  7. Violet
A Law-Abiding Cosmology
Model
16 October 2017 Version
Robert M. Hartranft, Simsbury CT 06070
Scott W. Hartranft, Aloha OR 97006
The Abstract
Suppose the
laws of physics – including conservation of mass-energy – are truly
invariant: the same at every time and
place; before, during, and after the Big Bang; inside, outside, and on the
boundaries of black holes; etc. No
clever contrivances like “Cosmic Inflation” to get the universe expanding. No physics magic like “Dark Energy” to
accelerate expansion. The laws of
physics were, are, and always will be, everywhere in the universe, as they are
here and now.
The two
Figures below then follow:
PHYSICAL PROPERTY | Positive Components | Negative Components | Net For Universe
Electric Charge   | (+) Charges         | (-) Charges         | 0
Magnetic Pole     | North Poles         | South Poles         | 0
Rotation          | Clockwise           | Counter-Clockwise   | 0
Normal Mass       | Matter              | Unmatter            | 0
Anti-Mass         | Anti-Matter         | Anti-Unmatter       | 0

Figure 1  “Cancellation” To Empty Void
PHYSICAL PROPERTY | Positive Mass             | Negative Mass
Normal Mass       |                           |
Anti-Mass         |                           |
                  | Annihilation To 2 Photons | Annihilation To 2 Unphotons

Figure 2
The
Big Bang produced two exactly concentric, mutually gravitationally repulsive,
inter-meshed, expanding spheres: one of positive mass Matter and Anti-Matter;
the other of negative mass Unmatter and Anti-Unmatter. The resulting cosmology was:
MAJOR EVENTS OF THE ERA                       | Time From Big Bang Start
Initial Condition: Empty Void                 | Undefined
Laws of Physics Created                       | No “clocks”
Big Bang Began, Strong Nuclear Repulsion      | 0
Nucleons, Atoms Appeared, Separated           | Hours, years
Initial ±Mass Segregation, Small Acceleration | Hours, years
First Generation Stars Formed                 |
Black Holes Formed, Aggregated                | ~10⁵ years?
Supermassive Black Holes Moved, Spun          |
Galaxies Formed                               |
Hartranft Speculated                          | 14×10⁹ years

Figure 3
The Paper
Suppose the
laws of physics – including conservation of mass-energy – are truly
invariant: the same at every time and
place; before, during, and after the Big Bang; inside, outside, and on the
boundaries of black holes; etc. No
clever contrivances like “Cosmic Inflation” to get the universe expanding. No physics magic like “Dark Energy” to
accelerate expansion. The laws of
physics were, are, and always will be, everywhere in the universe, as they are
here and now.
The two
Figures below then follow:
PHYSICAL PROPERTY | Positive Components | Negative Components | Net For Universe
Electric Charge   | (+) Charges         | (-) Charges         | 0
Magnetic Pole     | North Poles         | South Poles         | 0
Rotation          | Clockwise           | Counter-Clockwise   | 0
Normal Mass       | Matter              | Unmatter            | 0
Anti-Mass         | Anti-Matter         | Anti-Unmatter       | 0

Figure 1  “Cancellation” To Empty Void
PHYSICAL PROPERTY | Positive Mass             | Negative Mass
Normal Mass       |                           |
Anti-Mass         |                           |
                  | Annihilation To 2 Photons | Annihilation To 2 Unphotons

Figure 2
The
Big Bang produced two exactly concentric, mutually gravitationally repulsive,
inter-meshed, expanding spheres: one of positive mass Matter and Anti-Matter;
the other of negative mass Unmatter and Anti-Unmatter. The resulting cosmology was:
MAJOR EVENTS OF THE ERA                       | Time From Big Bang Start
Initial Condition: Empty Void                 | Undefined
Laws of Physics Created                       | No “clocks”
Big Bang Began, Strong Nuclear Repulsion      | 0
Nucleons, Atoms Appeared, Separated           | Hours, years
Initial ±Mass Segregation, Small Acceleration | Hours, years
First Generation Stars Formed                 |
Black Holes Formed, Aggregated                | ~10⁵ years?
Supermassive Black Holes Moved, Spun          |
Galaxies Formed                               |
Hartranft Speculated                          | 14×10⁹ years

Figure 3
Initial
Conditions: Endless void.
No mass-energy. No laws of
physics. Time was undefined.
Creation
Event: Creation of the laws of physics, including
conservation of mass-energy. But there were no “clocks” – periodic events – to
measure time.
Big Bang began:
Produced two exactly concentric, mutually gravitationally repulsive,
inter-meshed, expanding spheres: one of positive mass Matter and Anti-Matter,
the other of negative mass Unmatter and Anti-Unmatter.
Inertia: Caused by self-gravity; therefore present from the start of
Big Bang.
Initial
Expansion: Zero net external gravitational
force. Zero net external electromagnetic
force. Strong nuclear force – which is
repulsive at very short ranges – drove expansion.
In this era, every point in the universe – except the outermost layer, several nucleons thick – experienced the same forces, and therefore the expansion would have seemed to an internal observer to have no center.
Cancellation was rare in this era because it requires time for a
particle-unparticle pair to form and to decay down to the ground state, from
which cancellation is possible. For
comparison, the half-life of “positronium” (an electron and a positron) is
0.125 nanoseconds. While this is a
short interval under most conditions, it is a very long interval here. The pair would be knocked apart before it
could cancel.
Segregation and Acceleration. Simultaneously with the above, the positive and negative
mass material was separating into distinct regions. Because of gravitational repulsion and
cancellation, the regional boundaries were distinct, and there were no small
inclusions of opposite sign mass material in the various regions. This
separation caused repulsion between unlike regions, which caused accelerating
expansion. After a few hours, nucleons appeared and then
separated. Inertia caused expansion to
continue. After a few years, atoms appeared and then separated.
Inertia caused expansion to continue.
First generation stars formed. Because there was no spin and only small acceleration, but
high material density, they were very large and therefore short-lived. This produced an immense number of both
matter and unmatter black holes. Many of
these then agglomerated into supermassive black holes, which generally then
further grew by absorbing material in an adjacent region with a nearly equal
radius from the Big Bang center. This
caused them to accelerate laterally and to spin. The lateral motion gave them access to
additional material.
Galaxies then appeared as the spinning supermassive black
holes accumulated orbiting material.
All just straightforward application of
invariant laws of physics – neither clever Cosmic Inflation nor magic Dark
Energy!
Footnotes:
F = ma          (Inertia)
F = Gm₁m₂/r²    (Gravity)
E = mc²         (Relativity)
are all symmetric for positive and
negative values of m. This suggests a family of negative mass particles (“unmatter”
and “anti-unmatter”
here), with zero net mass-energy for the universe overall. The
positive m matter and anti-matter
gravitationally attract other positive
m material; negative m unmatter and anti-unmatter gravitationally attract other
negative m material; but positive m material gravitationally repels negative m
material. (The parallel to British
physicist P.A.M. Dirac in 1930 is clear and compelling. Dirac made a similar observation about the
electromagnetic equations. The discovery
of the positron followed quickly, followed by the whole anti-matter
family.)
Anti-matter v. unmatter:
It is important to
understand that ordinary anti-matter is still positive mass. For example, when an electron and an
anti-electron (a.k.a. a positron) “annihilate”, they create two 0.51 Mev gamma
rays, precisely equivalent (E = mc²) to the sum of the masses of the
two particles. By contrast, if an
electron and a negative mass unelectron “cancel”, they simply disappear – which
is again equal to the sum of the initial masses – zero.
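The 0.51 MeV figure for electron-positron annihilation can be checked directly from E = mc² using standard constants:

```python
# Check the 0.51 MeV gamma-ray figure from E = mc^2 with standard constants.
m_e = 9.109e-31           # electron rest mass, kg
c = 2.998e8               # speed of light, m/s
joules_per_mev = 1.602e-13

energy_mev = m_e * c**2 / joules_per_mev
print(f"electron rest energy: {energy_mev:.3f} MeV")   # ~0.511 MeV
```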
There are neither “worm holes” nor other cosmic shortcuts in this model: the speed of light limitation applies
everywhere and always except to the Creator himself.
Black holes are not singularities in this model: they are simply quantum
mechanical regions of particularly intense gravity. The laws of physics are the same inside,
outside, and at the boundary of a black hole, whether of matter or unmatter.
John 1:1
in the King James Version of the Bible
provides an elegant summary of the universe after the creation of the laws of
physics but before the Big Bang began:
“In the beginning was the
Word,
and the Word was with God,
and the Word was God.”
Or in the notation of modern mathematical
physics:
Σ_universe (m) = 0
We believe that the model proposed here can be
described in finite-element analysis computer code to explore aspects of
cosmology never before susceptible to such analysis. We hope others with more experience in
finite-element analysis will agree and do those analyses.
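Before any finite-element work, the model's bookkeeping can be encoded in a few lines. This is our own toy sketch, not the authors' code: the particle names follow the paper (Figure 1 and the footnotes), while the function names and unit masses are illustrative assumptions.

```python
# Toy encoding of the model's bookkeeping: matter and anti-matter carry
# positive mass, unmatter and anti-unmatter negative, summing to zero for
# the universe overall; like-sign masses attract, opposite-sign masses repel.
# Particle names follow the paper; unit masses and function names are ours.

masses = {"matter": +1.0, "anti-matter": +1.0,
          "unmatter": -1.0, "anti-unmatter": -1.0}

def net_mass(inventory):
    """Net mass-energy of the inventory; zero for the whole universe."""
    return sum(inventory.values())

def interaction(m1, m2):
    """Sign rule from the footnote: like signs attract, opposite signs repel."""
    return "attract" if m1 * m2 > 0 else "repel"

print(net_mass(masses))                                     # 0.0
print(interaction(masses["matter"], masses["anti-matter"])) # attract
print(interaction(masses["matter"], masses["unmatter"]))    # repel
```

A finite-element code for this model would apply the same sign rule per element pair.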
This model vs. “physics
miracles”. We also note that
most current cosmology models require three “major physics miracles”:
1. The creation of an all-positive-mass
universe, which is a huge violation of the
conservation of mass-energy.
2. “Cosmic
inflation” to allow initial expansion despite gravity. And,
3. “Dark
Energy”, an unspecified material which somehow causes accelerating expansion in the current era.
Most
current models also require “minor physics miracles” to explain:
1. Earlier appearance of stars than predicted.
2. Earlier appearance of galaxies than predicted.
3. Earlier appearance of galactic spins than predicted.
4. More sharply defined regional boundaries than predicted. And,
5. Absence of small islands of visible matter in the “voids”.
None of these
“miracles” are necessary in this model because the combination of gravitational
attraction and repulsion is more effective at agglomerating material than
attraction alone. Even the Creation
Event, which here is the creation of the laws of physics, is unopposed.
Two
verification experiments: Looking for locations in a sky map
where unphotons from an ungalaxy have “cancelled” the positive energy photon
"mist" should work. Such
locations would be small black dots in the sky map, stable in both time and
position.
A direct imaging camera may also
work. The detector pixel could be
supplied electrons at elevated energy.
Any transitions to ground state without photon emission should be mostly
from unphoton absorption. The lens would
be just a drilled block of the same material held at the elevated energy.
Unresolved:
Note that this model does not resolve why the Big Bang occurred, nor the masses of the individual spheres.
By default, these are left to the Creator to decide.
Origin
of this model: This model is based on broadly accepted
physics and cosmology work, combined with rigorous application of symmetry
principles. It is not based
on any specific prior work, nor do we know of any comparable
work. We would greatly appreciate
references to comparable work by others.
About the
authors: Both authors are graduates of the Cornell University College
of Engineering: Robert in Engineering
Physics in 1966, and Scott in Electrical and Computer Engineering in 2001. Robert is Scott’s father. This work was made
possible by the tireless support of Dr. Martha Hartranft (Robert’s wife, Scott’s
mother).
Theological
Physics
Robert M. Hartranft
Scott W. Hartranft
Simsbury CT 06070
Aloha OR 97006
In an
earlier paper, the authors proposed a model where the universe is composed of
exactly equal amounts of positive mass-energy and negative mass-energy, now
segregated into two exactly concentric, intermeshed, mutually repulsive, expanding
spheres. We consider here the theological
implications of the laws of physics.
Physics: The
laws of physics appear to be identical everywhere in the universe.
Theology: Monotheism – there is only one Creator.
Physics: In this model, the laws of physics are invariant
throughout the entire history of the universe, with no special contrivances
like “cosmic inflation” or “dark energy”. This is a profound difference between
this model and the currently popular models.
Theology: The Creator is consistent about all things at all
times.
Physics: The laws are quantum mechanical rather than Newtonian
deterministic.
Theology: Free will exists, together with its necessary
companion – evil. Each location in the
universe will evolve in a unique manner, no matter how similar initially: there will be unplanned, interesting things
to see. If the Creator wishes to change
or direct matters at a given location, He can do so in a completely
undetectable manner by changing one quantum at a time, or He can make His power
evident.
Physics: Nothing
can travel faster than the speed of light, and that speed is slow compared to
the size of even a single galaxy.
Theology: Local independence is preserved even if an advanced
civilization devotes huge resources to communication or transport. (Note that this model has no “worm holes” or
other shortcuts.) But for the same
reason, virtually the entire history of the universe is readily seen with
telescopes: the Creator’s work is on
display to all.
Physics: There
are myriad planets, but each – including the unusual Earth-moon system – is
unique because of quantum mechanics.
Only Earth is truly Earth-like.
Theology: We see nothing in Physics which directly answers Psalm
8 –
“What
is man, that thou art mindful of him?”
– but neither do we see anything which refutes the premise
of the question. It seems significant
that humans have both the ability and the technology to see and understand the
universe.
Physics: As a general
pattern, the laws of physics appear to be few, simple, and understandable.
Theology: The
Creator means His work to be understood.
Physics: The
Totalitarian Principle – “Everything not forbidden is compulsory.” Except for this, an endless perfect void
would fulfill all the laws of physics.
Theology: The Creator clearly favored action over
inaction, even amidst uncertainty and risking evil.
In this model, the true Creation Event is the creation of
the laws of physics: the Big Bang was
simply an allowed event. John 1:1 seems
an elegant summary:
“In the beginning was the
Word,
and the Word was with God,
and the Word was God.”
All this is speculation, not rigorous proof. But the pattern is fascinatingly familiar.
This work
was made possible by the tireless support of Dr. Martha Hartranft (Robert’s
wife, Scott’s mother).
30 October 2017 Version
Robert M. Hartranft, Simsbury CT 06070
Scott W. Hartranft, Aloha OR 97006
Inertial
mass and gravitational mass seem like unrelated properties, but are identical even
in extremely precise experiments. We
suggest here that inertia is simply self-gravity. We further suggest that the graviton has mass
zero.
Suppose a
very large physicist decided that the planet Earth should be treated as a
particle – the “earthon”. The earthon
has very high inertia because it has self-gravity. At the other extreme, a very small physicist,
working at Planck scale – 10⁻³⁵ meters – would have the same
conclusion about particles like electrons and quarks, whose immense relative
size would cause them to interact with their own gravitons.
In short,
inertia is the result of self-gravity.
Inertial mass is equal to gravitational mass because they derive from
the same process – gravity: self-gravity
in one case; external gravity in the other.
Consider now the photon and the postulated graviton. The photon is the carrier of the
electromagnetic force, but it has zero electric charge, and does not itself
experience the electromagnetic force. For
example, a light beam can pass through an intense magnetic field with no effect
on either the light or the field.
By analogy, the graviton should have zero net mass, even
including relativistic effects, and should not itself experience gravity.
In current models, a zero mass-energy particle has no
properties of any kind, and simply does not exist. In our model, however, it can be a composite
particle, with equal amounts of positive and negative mass material. If each part has spin 1, then the graviton is
a mass 0, spin 2 particle.
This would provide an intuitively natural basis for
gravitation: “source” mass (the sun, for
example) would emit an endless series of gravitons in random directions. The emissions would probably occur in pairs
to avoid a spin change in the emitting matter.
Since the gravitons have zero mass, this continues with no change to the
sun, exactly as observed.
We will leave for later a model of the interaction with the
“distant” mass (the Earth, for example), except to say that the gravitons must
continue in a straight line forever or until cancelled by an ungraviton.
In combination, these models show why mass increases as v approaches c. (More later.)

[Figure: forward gravitons vs. rearward gravitons]
Particle
Properties
29 December 2016 Version
Robert M. Hartranft
Scott W. Hartranft
Simsbury CT 06070
Aloha OR 97006
Like particles appear to have identical properties
everywhere and always. For example, the
charge on all electrons is exactly equal.
In the model presented earlier, the Creation Event is asserted to be the
creation of the laws of physics. These may
have included all particle properties as explicitly defined values, or they may
have been established in some measure by the process. Such a scenario would
probably have involved a forced uniformity just after the Big Bang began by
reason of the small size of the universe (say at 10⁻¹⁵ meters
diameter), with those values then “frozen” by an expansion process where every
point experienced the same forces.
However done, the effect simplifies life for physicists and
theologians alike:
Physics: As a general
pattern, the laws of physics appear to be few, simple, and understandable.
Theology: The
Creator means His work to be understood.
This work
was made possible by the tireless support of Dr. Martha Hartranft (Robert’s
wife, Scott’s mother).
7 November 2016 Version
Robert M. Hartranft
Simsbury CT 06070
Mechanical vibration of the author in the low Hertz range produces
remarkably wide-ranging, local and systemic, functionally significant
results: eliminated severe edema in both
legs; increased local flexibility; increased strength system-wide; increased endurance;
improved cognition; increased libido; and even improved visual acuity. After four years of treatment, the effects
continue, still sustained even without vibrating for a day, and still free of
any apparent side effects.
The vibration is done by a bed “massager”,
$72 from Amazon as shown. Initially the
individual vibration heads were held in close contact (one to three cloth
layers) with my legs using Velcro strips, rather than beneath the mattress.
Later, I had one set duct-taped to my bed frame, and another set duct-taped to
the frame of my wheelchair. In the current arrangement, there are three
vibration heads in each sock, with the seventh in my left hand. Based on experience, some of the connections are now strengthened with duct tape.
The heads are small, eccentric-mass units, a technology which is simple,
reliable, and easily modified for amplitude and frequency in the low Hz
range: for example, I have used a much
larger version on an 800 ton nuclear component.
The most credible explanation I can offer is that the low Hz mechanical
vibration may mix the chemicals in the inter-neuron areas, causing a prompt
(under one second) increase in the neuron-to-neuron electrical
conductivity. Sustained shaking produces
a longer term effect (“half life” of perhaps five hours), perhaps caused by
increased chemical transport across cell walls.
Design history and
considerations: About four years
ago, I had a Medtronic Baclofen pump implanted in my abdomen. When it was turned on two weeks later, it
started pumping at 50 RPM. In the “fundamental”
frequency mode – one cycle of pump rotation produces one vibration cycle --
that meant
50 Revolutions per Minute / 60 Seconds per
Minute
= 0.83 vibrations per second
which by chance, is relatively close to the resonant frequency of my
brain, which I estimate to be about 7 Hz.
In under a second, I had increased cognition and greater visual
acuity. The attending osteopath, Dr.
Matthew Raymond, of Southington CT, stated that about one quarter of his pump
patients report similar experiences. It
took me a week to realize this had to be a mechanical effect. I then showed that I could duplicate or
amplify the effect with a simple bed massage shaker.
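The pump-rate conversion above, as a one-line check:

```python
# The pump-rate conversion from the text: 50 RPM in vibrations per second.
rpm = 50.0
hz = rpm / 60.0
print(f"{hz:.2f} vibrations per second")   # ~0.83 Hz
```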
That unit transmitted enough power down the legs of the bed and into the
floor that I felt the effect even sitting in my wheelchair. That allowed a series of planned and
accidental experiments demonstrating immediately beneficial effects for a
resident with a nearly immobile leg, a resident with a broken hip, and a resident
with significant dementia.
Most comparable work has used kilohertz or megahertz electromagnetic
stimulation, often introduced using conductors placed deep in the brain. The method here is far quicker, easier,
cheaper, and less risky. I believe its effect is similar to that produced by
light exercise like walking.
Calculation of
payback period: Depending on the
specific equipment configuration selected and the payback methodology used,
the payback period is a matter of single-digit days. It is not worth calculating exactly how many
days: just buy one and try it.
High School Grades, College Admissions, and Scholarships
[Graph: Candidate's Predicted Percentile Class Rank at a specified college (roughly UConn – Storrs) vs. Candidate's HS Percentile Class Rank, with or without grade renormalization (roughly Granby Memorial HS). Each prediction is calculated using the Candidate's equivalent mean HS SAT (M+CR) and linear extrapolation of the College's published 25th and 75th percentile values.]
Summary: Existing high school transcripts do not contain enough
information to quantitatively predict a candidate’s performance at a specific
college. The easiest improvement is to
use the existing grades, but convert the class rank to percentile rank. A further
improvement
would be to retrieve all the students in all the classes taken by the candidate
and recalculate the percentile class rank based on the candidate’s actual
competitors. In either event, the basis
of the prediction is the use of the mean SAT (M + CR) for the candidate’s high
school class rank rather than the candidate’s individual scores. This change allows evaluation of the
candidate in a meaningful context.
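The first improvement named above, converting class rank to percentile rank, is a small calculation. A minimal sketch; the example numbers are hypothetical, and it counts from the bottom of the class, matching the "percentile from bottom" convention used in the sample letter later in this paper:

```python
def percentile_from_bottom(rank_from_top: int, class_size: int) -> float:
    """Percentile class rank measured from the bottom of the class.
    rank_from_top = 1 means the top student.  The (class_size - 1)
    denominator is one reasonable convention, not the only one."""
    students_below = class_size - rank_from_top
    return 100.0 * students_below / (class_size - 1)

# Hypothetical example: a student ranked 85th from the top in a class of 250.
print(round(percentile_from_bottom(85, 250), 1))
```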
The problem: At first glance, it seems that a student’s high school
grades should be a strong and easily evaluated predictor of that student’s
college grades. But there are over 25,000 high schools in the U.S., and grades
actually vary by school, by year, by teacher, and by course (a group I will
call here the grade “cohort”). There are
over 150,000 new cohorts every year, and while local cohort data is well known
to each high school, I cannot find a single instance of cohort data being made
public, let alone being provided to colleges in admission applications.
It
is therefore impossible for a college to quantitatively predict a student’s
likely performance, and even a good guess (a process some ironically call
“gut-ology”) is practical only with well-established high school/college
pairs. For the same reason, colleges
cannot put all their candidates on a single, quantitatively-sound scale.
Standardized
tests like the SAT and the ACT provide nationally consistent results, but are
relatively poor predictors of college grades.
As is frequently noted, there is a considerable difference between doing
well on a multiple choice test Saturday morning and doing well in a nine month
long course with many and varied requirements.
In
short, the entire college application/admission process relies on information –
the high school transcript – which is hard to interpret at best, and frequently
misleading.
Some useful tools and patterns: Academic records are now almost universally digitally
stored and readily retrieved for analysis.
“PowerSchool” – or its competitors – can easily retrieve all the data
for all the cohorts where a student has received a grade, allowing calculation
of a student’s class rank whenever desired, and with an algorithm which
excludes arbitrary premiums for “honors” courses.
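A minimal sketch of the kind of recalculation described, assuming the grade data has already been exported from PowerSchool or a similar system. The field layout and the 4.0 scale are assumptions; the key point is that each grade is weighted by course credits, with no extra premium for "honors" sections:

```python
def weighted_gpa(records):
    """Credit-weighted GPA over a student's cohort grades.
    records: list of (grade_points, credits) pairs on a plain 4.0
    scale -- an honors A counts the same 4.0 as any other A."""
    total_points = sum(grade * credits for grade, credits in records)
    total_credits = sum(credits for _, credits in records)
    return total_points / total_credits

# Hypothetical transcript, grades already mapped to 4.0-scale points.
transcript = [(4.0, 1.0), (3.0, 1.0), (3.7, 0.5)]
print(round(weighted_gpa(transcript), 2))
```

Ranking every student in the relevant cohorts by this same statistic would then give the recalculated percentile class rank the text calls for.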
Patterns
of class rank vs. SAT are very stable at both high schools and colleges: while individual SAT (Math + Critical
Reading) scores show considerable scatter, the best-fit SAT score line barely
changes from class to class. The mean
SAT (M+CR) of the fiftieth percentile student in a class, for example, is
typically within four SAT points of that value for both the preceding and the
following class. The shape of the curve
is similarly stable, especially at colleges:
nearly linear over all but the highest and lowest class ranks.
(This
averaging technique is precisely analogous to the method almost universally
used in quantum mechanics to find easily observed values by averaging over many
quantized values: each student being
analogous to a quantum particle like a photon.
This made the calculation trivial in my mind, but totally unfamiliar to
anyone outside quantum physics:
about 99.9% of the population, I would guess. For 16 years, I used the technique unnoted
and unexplained, until I finally realized that real people never use or
encounter this method.)
Therefore the mean equivalent
SAT values for a given percentile class rank can be taken from those of the
most recently graduated class at that high school.
Further,
most American colleges publish the mean SAT scores for their 25th
and 75th percentile first year students, and tables of these values
are updated every year. Like the earlier
values, these values are typically stable from year to year: about 200 SAT points apart, with only slowly
changing median values.
By most reports, a typical college
application gets about eight minutes of review.
Within those precious 480 seconds, the reviewer must assess both the
quantitative information like GPA, and the subjective information like essays
and recommendations. Making the quantitative
task sound and simple is surely beneficial to all in a process which literally
shapes lives, costs the nation many billions of dollars a year, and has so much
good and bad potential. This is
particularly true as both students and colleges look internationally or nationwide
rather than just statewide or regionally.
A proposed solution:
The
patterns above allow direct prediction of a candidate’s first year class rank at
a college by finding the class rank at
that college with the same mean SAT as the candidate’s high school class
rank: see the graph above. Note that this automatically adjusts for:
1. The academic aptitude of the
candidate’s actual high school competitors and potential college competitors.
2. The grading patterns of the specific
high school cohorts where the candidate earned grades, and,
3. The grading pattern at the college
under consideration.
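The prediction itself then reduces to two table lookups and one linear interpolation. The sketch below is illustrative only: the high school SAT-vs-rank line and the college's 25th/75th percentile SAT values are hypothetical numbers, and the linear form follows the "nearly linear" pattern and the roughly 200-point 25th-to-75th spread described earlier:

```python
def predict_college_percentile(hs_percentile, hs_curve,
                               college_25th_sat, college_75th_sat):
    """Predict a candidate's first year percentile class rank at a college.

    hs_percentile: candidate's HS percentile class rank (0-100).
    hs_curve: two (percentile, mean SAT M+CR) points defining the high
              school's nearly linear SAT-vs-rank line.
    college_*th_sat: the college's published mean SAT (M+CR) at its
              25th and 75th percentile first year students.
    """
    # Step 1: equivalent mean SAT for the candidate's HS class rank.
    (p1, s1), (p2, s2) = hs_curve
    sat = s1 + (s2 - s1) * (hs_percentile - p1) / (p2 - p1)
    # Step 2: the college class rank with that same mean SAT, by linear
    # extrapolation from the published 25th/75th percentile values.
    return 25 + 50 * (sat - college_25th_sat) / (college_75th_sat - college_25th_sat)

# Hypothetical example: HS line runs from (25th pct, 930) to (75th pct, 1130);
# the college publishes 1050 and 1250 at its 25th/75th percentiles.
rank = predict_college_percentile(80, [(25, 930), (75, 1130)], 1050, 1250)
print(round(rank, 1))
```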
Voila! Local murk becomes specific college clarity.
Note that
this method avoids the need to change either the overall grading pattern at the
school or the many cohort patterns.
It is
interesting to compare the tracking of potential new students against the
(non-) tracking of graduates. Districts
understandably work hard to determine exactly who will appear the first day of
school, and what each student’s program for that year will be. Parents happily cooperate, since they also
want things to go smoothly.
By contrast,
after graduation, most Districts have only casual contact with students, with
little or no quantitative reporting.
Their college performance is known merely as accumulated anecdotes,
ignoring the opportunity to measure their high school preparation by their
first year performance in college. This
makes it harder for students at low grading schools like Simsbury High School
to gain admission to colleges attended by their performance peers. And it significantly reduces their chances
for scholarships.
DRAFT — For Illustration Only

SIMSBURY HIGH SCHOOL
PRINTED LETTERHEAD ON 32 LB. BOND PAPER

*** INCLUDE IN APPLICATION TO "SOMEWHERE COLLEGE" ***
Valid only if embossed here with Official Simsbury HS School Seal.

Susan A. Example
Equivalent Nationally Normed GPA: 4.10
Projected First Year Class Rank at Somewhere College: 66th percentile from bottom

Prepared 22 June 2015.
Please see following chart for the basis of these results.

/signed/
Hope R. Eternal
Guidance Counselor
Grading Analysis Flowchart

[Flowchart: The Local Course Grades of the Applicant, together with the Grades and Class Ranks of all students in those school-year-course-teacher cohorts, weighted by course credits, yield the Local Class Rank of the Applicant. The SAT v. Local Class Rank Curve for the most recently graduated class converts that rank to an equivalent SAT Value; the National SAT v. GPA Curve from College Board data converts that value to the Applicant's Equivalent Nationally Normed GPA; and the Mean SAT Scores for the College's 25th and 75th percentiles yield the Applicant's Projected First Year College Class Rank.]