Monday, February 27, 2017
Spock
So, who did Nimoy download his memories to before he died? Let's go terraform Mars and make a cloned body so he can come back.
Friday, February 24, 2017
Tidbits
Your argument is hogwash. You get a headbanging sky rat.
Wednesday, February 22, 2017
Berenstein Bears
It's Berenstein, guys. If you remember it with an "A," you are an alt-universe quantum figment intruding into reality and distorting it, not an actual human being. If you're allowed to remain, the universe will collapse. I'm so sorry.
Monday, February 20, 2017
Left Handed
"Ha! You're out of ammo!"
"But I know something you don't."
"What?"
"I'm not left handed!"
"Huh? That doesn't work with bullets."
"It does when that's where I'm holding my spare clip. And where I've been reloading since you let me start talking."
"OH CR - " *BANG*
"Huh? That doesn't work with bullets."
"It does when that's where I'm holding my spare clip. And where I've been reloading since you let me start talking."
"OH CR - " *BANG*
Saturday, February 18, 2017
Mood Music 5
I find assembling these playlists is very therapeutic.
Friday, February 17, 2017
Tale of the Bladesmith
Years ago I used to make swords. I made swords with other bladesmiths. It was a close-knit group - one had to prove their dedication to earn entry. The reward was mentoring and access to splendid materials for sword making. One day, those bladesmiths exiled me from their forge, fearing I'd bring them dishonor.
Then, years later, they announced a change. All would be welcome in their forge to learn and use their unique ornaments and embellishments and to craft fresh blades from iron that sat dormant. Encouraged, I found others and opened a new forge, next to theirs, one that used the items they'd graciously offered to all.
The goal of making the swords was not revenge. I sought not to stab those who'd banished me, but to demonstrate my worth. For I had long practiced bladesmithing on my own, in the wilds, and found new techniques they did not have. I would combine these techniques with the untapped potential of the resources they offered freely.
But as the forge was opened, they came, anger clouding their minds. As they marched, they stumbled, and fell upon the swords I'd made. They ruined themselves for no reason other than anger and rage, perpetuating a grudge that was now one-sided. Their blood lay on the ground before their forge, frightening away all they'd hoped would join them. The fires of their furnaces went dark.
And now only the one they'd banished remained.
Hate is a sword that cuts its wielder worst of all.
Tidbits
That feeling when you tell everyone you brought your own lunch, so they go off on their own - but it turns out you forgot it at home in the fridge.
Wednesday, February 15, 2017
What Kind of Dystopia?
"What kind of dystopia is it?"
"Think tumblr, but fascist."
"So...regular tumblr, basically."
"Think tumblr, but fascist."
"So...regular tumblr, basically."
Monday, February 13, 2017
Libraries in the Apocalypse
Video game idea: managing a library in a post-apocalyptic setting. Determine what to trade for which books, based on community interest, so you can keep the library going and encourage patrons to help defend it.
There would be no money system, only bartering. You survive by allowing people access to knowledge in exchange for things. You can also trade those things to others who have books in order to get them. However, if you trade for a book that's so-so, you may not have enough for a real gem that comes to you later.
There are also raiders who want to steal what you have (not the books - your trading goods). If you're popular with your patrons and have been keeping up on fresh books they want to read, they'll help defend you. If you haven't been listening to them, they won't do as much.
Different segments of the population want different types of books and each segment will have different pluses and minuses when it comes to what they give for access and what they do for you when they're happy.
Minimal graphics - it's mostly text based. Easily doable in an 8-bit style. No real moving art or action sprites. The trickiest part would be balancing the different equations and play strategies. Identifying the minimum viable product would also be interesting.
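To make the core loop concrete, here's a minimal Python sketch of the trade decision. Everything in it - the class, the reserve threshold, the goodwill mechanic - is a hypothetical illustration of the idea above, not a spec:

class Library:
    def __init__(self, goods):
        self.goods = goods      # stock of trade goods
        self.books = []         # titles on the shelves
        self.goodwill = 0.0     # patron happiness; would drive defense

    def consider_trade(self, title, cost, interest, reserve=5):
        """Accept a trade only if the community cares enough and we can
        still afford a better book that might come along later."""
        if cost > self.goods - reserve:
            return False        # keep a reserve for future gems
        if interest < cost / (self.goods + 1):
            return False        # so-so book, not worth the goods
        self.goods -= cost
        self.books.append(title)
        self.goodwill += interest   # happy patrons help fight raiders
        return True

lib = Library(goods=20)
print(lib.consider_trade("Gray's Anatomy", cost=6, interest=0.9))   # True
print(lib.consider_trade("Old Phone Book", cost=6, interest=0.05))  # False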
Saturday, February 11, 2017
Fifty Shades of Webbing
An observation about the 50 Shades movie that will either ruin it for you or make it awesome:
Now and then, Jamie Dornan (Christian Grey) will do a half-smile expression. This makes him look remarkably like Tobey Maguire. So all through the movie, I am wondering when Grey is going to put on a Spider-Man costume.
Other jokes that ran through my mind:
Mr. Parker will see you now.
When they do the upside down kiss in this movie, it'll be Mary Jane who's suspended.
Spidey has a new use for his webbing.
The safe word is "amazing."
And so on.
Friday, February 10, 2017
Tidbits
Austin ISD wanted names for their new Social Emotional Learning module. My recommendation?
SEL-fie.
Wednesday, February 8, 2017
Acceptable Beauty
There is nothing wrong with the creation and enjoyment of beauty. That is, the appreciation of what one considers beautiful. This can vary from person to person, and there is a certain excitement when an artist or producer shares their vision of beauty with others. It both communicates their vision of the world and allows those who share that vision an outlet for what they might not have been able to express on their own.
For example, if I see an artist hold up something that is ugly and blighted and call it beautiful, I know that person has a very demented outlook on the world. Sometimes this can be fun - I was always a fan of JTHM, despite the deliberately ugly portrayals of humanity, because it was darkly humorous. But I know better than to declare it beautiful - it isn't (to me), and that's the point. There's also art I find unenjoyably ugly. Much of R. Crumb's work falls into that category, along with large chunks of modern fare.
On the flip side, I enjoy stirring epic pieces and beautiful images and the artists capable of them. And there are people who regard that the same way I see Crumb. This is fine, since it's a matter of opinion - I just wouldn't trust them to create an art history field trip for me.
What there is too much of are people who want to impose and enforce their vision of "acceptable" beauty on everyone. They want to limit what can and cannot be regarded as attractive and stirring. Some of them take the form of fashion police who want to mock women who wear glasses or who eschew greasy face paint. Others scream obscenity and oppression at artists who draw women with curved forms in provocative poses.
It's one thing to find that a piece of art doesn't stir you. It's quite another to determine that no one should ever have a chance to be impacted by that same piece. The former means you need to keep looking. The latter is the impulse of the tyrant - the mark of an ugly soul.
Monday, February 6, 2017
Adventures of Dr. Tomoe
Adventures of my Dr. Tomoe cosplay:
Ran into a Jesus cosplayer. Yelled, "You're not the Messiah!"
With Sailor Deadpool: "I'll see you in season 3!"
An encounter with Maes Hughes: "Your daughter may be cute, but mine can blow up the planet! Ha ha!"
Sunday, February 5, 2017
The Track of Intellectual Intolerance
The violent hate of ANTIFA is something we saw coming. It's the logical conclusion of the track that began years ago when college campuses began to normalize shouting down opposing viewpoints. After some thought, I've come up with a basic framework for the progression. A more expert social scientist can fill in the details.
First, you learn it's better to mock and deride than to argue. Opposing views don't have to be understood if they're "crazy" or "stupid."
Then, you move on to shouting them down and shutting them up. If they're not worth learning about, they're not worth allowing to be expressed.
Next, control the physical space, disallow the views you deem evil, and shove out anyone who dares trespass with foreign knowledge. This is where the physical violence begins, albeit cloaked as defense.
Finally, hunt down anyone who refuses to remain silent. Make them be silent. Punch them in the face to shut them up. Kill them if they persist.
Once all the thought criminals are dead, you've won. Fascism achieved.
We are sadly at that last stage. It remains to be seen how far hate groups like ANTIFA will go as they seek to beat down anyone they dislike (all of whom, so far, have not been actual Nazis).
Friday, February 3, 2017
Tidbits
What they say: "We need tax reform for corporations."
What they mean: "We want things to cost more because we hate poor people."
Wednesday, February 1, 2017
Pursuit of Happiness Maximization
Here's a theoretical construct I very loosely have in my head. It starts with functions to define happiness, creates a mechanism, and then tackles the fundamental issue of individual decision making vs. central planning. This has very likely been done before, but I find it interesting to work it out myself.
Define a lifetime happiness function. It's the average of the satisfaction resulting from every choice you make in your life span. Higher overall satisfaction, higher happiness.
Function: H(h) = (Σ h(i)) / n, for i = 1..n
h(i) is the satisfaction from a given choice, i, and n is the total number of choices you make in your lifetime.
The pursuit of happiness as a general goal is pretty standard, so maximizing this utility function is a very safe bet. Note that this is more general than profit maximization, since satisfaction with a choice doesn't necessarily entail material gain. This accounts for things like self-sacrifice.
That's the top level. For scale, h(i) varies from 0 to 42, where 42 is maximum happiness and 0 is total misery. A neutral mood is 21. (42, being the meaning of life, is clearly the best choice for number here.)
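In code, the top level is just one line of arithmetic. A minimal Python sketch (the neutral default of 21 for an empty history is my assumption, not part of the definition):

def lifetime_happiness(h_scores):
    """H = (sum of h(i)) / n over the n choices made so far, on the 0-42 scale."""
    if not h_scores:
        return 21.0  # no choices yet: call it neutral (an assumption)
    return sum(h_scores) / len(h_scores)

print(lifetime_happiness([42, 21, 0, 30]))  # 23.25 - slightly above neutral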
What determines the value of h(i)? Resolution of tension, which is defined here as the difference between the expected outcome, E, and the observed outcome, O (straight out of estimation theory). Assuming every choice is made, at least nominally, with the intent of being satisfied with the outcome (assumption of rationality), you'd get:
0 = What was observed fell well short of what was expected
21 = What was observed matched expectations
42 = What was observed exceeded expectations
This works for tallying up someone's existing happiness based on the past. Most people strive for H=21, since we're smart enough to know that it won't be sunshine and lollipops every day. We know that some choices will be 0, so we seek to maximize individual choices when possible to compensate and keep the average at 21 or higher as much as possible. So we have local maximization with a goal of influencing a moving average upwards.
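Here's one way to turn that expectation-vs-observation rule into numbers. The construct only fixes the three anchor points (0, 21, 42); the linear ramp and the "scale" width below are my assumptions:

def satisfaction(expected, observed, scale=1.0):
    """h(i): 0 when O falls well short of E, 21 at a match, 42 when O far exceeds E."""
    gap = (observed - expected) / scale  # normalized resolution of tension
    return max(0.0, min(42.0, 21.0 + 21.0 * gap))

print(satisfaction(expected=10, observed=10))            # 21.0: matched
print(satisfaction(expected=10, observed=5, scale=5))    # 0.0: fell well short
print(satisfaction(expected=10, observed=15, scale=10))  # 31.5: exceeded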
What about predictions? If the goal is to make each choice so that it maximizes happiness, we need a way of predicting what will do so.
Choices, by their nature, are games of incomplete information. There will always be things the person making the choice does not know and which could, potentially, result in an h(i)=0 situation. The good news is that for every choice similar to ones made previously, the unknowns will tend to decrease (via experience). We also know that a person who strives for the best each time they play a game of incomplete information will trend toward the maximum value over time.
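A toy simulation of that last claim: repeated similar choices behave like a bandit game with hidden payoffs, and a chooser who keeps striving for the best trends toward the best option over time. The payoff numbers and noise level here are made up for illustration:

import random

random.seed(1)
true_payoffs = [10.0, 21.0, 35.0]   # hidden satisfaction of three options
estimates = [21.0, 21.0, 21.0]      # start from neutral expectations
counts = [0, 0, 0]
rewards = []

for trial in range(500):
    if random.random() < 0.1:       # occasionally explore the unknown
        arm = random.randrange(3)
    else:                           # otherwise exploit the best estimate
        arm = max(range(3), key=lambda a: estimates[a])
    reward = true_payoffs[arm] + random.gauss(0, 5)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    rewards.append(reward)

print(sum(rewards[:50]) / 50)    # early choices: still probing, lower average
print(sum(rewards[-50:]) / 50)   # later choices: near the best payoff (~35)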
Let's take this construct and see how altering the decision structure influences it. Up to now, it's the person whose happiness is on the line making all the choices that affect h(i). Let's turn that over to an external faction. Assume a third party now controls the choices of another. We'll call that third party G, with its per-choice happiness measured by g(i). We'll refer to the person they make choices for as H, with h(i) as the happiness result.
Note that G does not directly set the value of h(i) - what they do is make the choice and H's reaction determines h(i). The expectations, E, that determine the final happiness, h(i), are set by H, not G. G, however, controls its own g(i) expectation values.
Let's set G's goals simply: they want to make H happy. When h(i) is 21 or higher, g(i) is 21 or higher. What happens? G will try to make choices that it predicts will result in a strong h(i) value. It will likely base this on communication with H about preferences and expectations. However, this process is guaranteed to be inefficient: there will be latent variables G cannot anticipate that H could, since only H is completely aware of its own history and mind.
Still, over time, G will be able to approximate a good h(i) value, since it will learn H's preferences through testing and become better at predicting. This will, however, take longer than it would for H alone. H only needs to deal with one set of unknowns - the uncertainty involving the circumstances of the choice. G has to deal with those as well as unknowns about H's expectations.
Complicate things further: now G has to manage the choices of not one H, but 100 H's (H1...H100), all of them unique. G's happiness, g(i), is now based on the aggregate happiness of those G makes decisions for. Even assuming every H makes the same choices in the same order as the others (a simplification that does not hold in the real world), G now has to deal with not just the unknowns of the choice, but also 100 sets of unknown behavioral preference variables. Every single set has to be learned individually over time through testing, consuming more bandwidth.
Now increase this to a thousand H's. A million. More.
Economy of scale requires G to make approximations. Instead of trying to perfectly learn each H, it goes for averages. After all, its own g(i) is satisfied by the overall score. Hit a bell curve with an average at 21 and G is happy. Never mind that this means 50% of the H's could very well end up with final happiness tallies of less than 21. Even if G is smart and mixes up who gets what payoffs so there isn't one subset that always gets less than 21, there will still be H's who end up with H<21 and some who are very near 0. This could, interestingly, lower G's own happiness to less than 21 as well.
Compare this to a model where every H makes their own calls. Without the extra layer of unknowns, each individual is able to trend toward 21 faster than with interference from G. This shorter time frame increases the likelihood of H=21 being the norm. At the very least, it should be sufficient (and there's hand waving here) to make it so the likelihood is greater than when G makes choices for H. This should also hold (more hand waving) when G only makes some choices for H.
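Here's a rough sketch of that two-layers-of-unknowns argument in Python. H knows its own preference exactly and only faces outcome noise; G must also estimate H's preference from feedback, so its early choices miss. The noise levels, learning rate, and linear h() rule are all assumptions for illustration, not derivations:

import random

random.seed(2)

def average_h(n_choices=100, proxy=False):
    preference = 30.0   # what H actually expects from each choice
    belief = 21.0       # G's initial guess at that preference
    total = 0.0
    for _ in range(n_choices):
        target = belief if proxy else preference
        outcome = target + random.gauss(0, 4)           # choice unknowns (both face these)
        h = max(0.0, min(42.0, 21.0 + (outcome - preference)))
        if proxy:
            feedback = preference + random.gauss(0, 6)  # second layer: reading H's mind
            belief += 0.1 * (feedback - belief)         # G slowly learns H
        total += h
    return total / n_choices

print(average_h(proxy=False))  # H choosing alone: hovers near 21
print(average_h(proxy=True))   # G choosing for H: dragged down while it learns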
Questions and additions for later:
1. What if G has more knowledge about choice outcomes than H? How much would they need to justify interference? And wouldn't communication be more ethical?
2. Ethical and moral constraints, such as disallowing harming or stealing from others to increase h(i).
3. Role of information sharing between H's and increasing the speed of optimizing h(i).
4. Could low levels of happiness within a subset indicate bad choice methods brought on by misinformation?
5. Tyranny. What happens when G's happiness is maximized for things other than H's well-being?
6. Regrets. When h(i) is maximized locally in time, but drops in value outside of that time frame as satisfaction criteria change over time.
There's obviously a lot more work needed to make this conclusive, but it's a start. A mathematical/game theory way to prove that central planning will always be more inefficient at making others satisfied with their lives than letting them make their own choices would be wonderful. I think this may already exist, but it's fun to create my own system.