‘During the brief but very interesting Q&A session, Lethem argued that internet culture brought the “closet into the open”; that is, it gave ephemera, trivialities, and everyday activities “a new kind of visibility”. “People have always been producing weird stuff and have always been engaging in arcane activities,” Lethem remarked. “What is really new is the fact that now we can see it. We can see it all. We can quantify what we do — or don’t do — online.” Lethem mentioned the uncanny ability to track, in real time, “how many books I am not selling on Amazon”. “Reality has acquired a new level of measurability.” “The activities we perform in our digital age are not necessarily new. What is new is that. We. Can. See. Them. All.”’
“Looming over Saylor’s confrontation with Bolenbaugh was the EPA’s September 27 cleanup deadline, and it appears that Enbridge and its contractors were feeling the pressure as it drew near. In early September, after the Michigan Messenger published its exposé on the use of undocumented workers by Hallmark Industrial, another group of workers employed by a different Enbridge contractor came forward with detailed stories of how they had been instructed to conceal oil at the same site. Workers would land on an island, they said, remove all vegetation, and then lay out absorbent pom-poms, all per EPA regulations. But once the top layer of oil was absorbed, they were instructed to rake dirt over the area to make it appear as though it had been dug out. One worker described his supervisor showing him the process step-by-step, concluding with sprinkling a thin layer of dirt on top. “He said, ‘There, now they can’t see it. It is clean,’” the worker told the Messenger. Another worker described being told to cover pockets of oil with leaves and sticks. As a last step, such areas were cordoned off with caution tape.”
“A number of representation schemes have been presented for use within Learning Classifier Systems, ranging from binary encodings to neural networks. This paper presents results from an investigation into using a discrete dynamical system representation within the XCS Learning Classifier System. In particular, asynchronous random Boolean networks are used to represent the traditional condition-action production system rules. It is shown that self-adaptive, open-ended evolution can be used to design an ensemble of such discrete dynamical systems within XCS to solve a number of well-known test problems.”
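To make the abstract's central object concrete, here is a minimal sketch of an asynchronous random Boolean network in plain Python. The network size, connectivity, and seed are invented for illustration; this is just the underlying dynamical system, not the paper's XCS integration.

```python
import random

def make_rbn(n, k, rng):
    """Build a random Boolean network: each of the n nodes reads k randomly
    chosen input nodes through a random truth table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.random() < 0.5 for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step_async(state, inputs, tables, rng):
    """Asynchronous update: a single randomly chosen node recomputes its
    state from its inputs; every other node is left untouched."""
    i = rng.randrange(len(state))
    idx = 0
    for j in inputs[i]:
        idx = (idx << 1) | state[j]
    new_state = list(state)
    new_state[i] = tables[i][idx]
    return new_state

rng = random.Random(42)
n, k = 8, 2
inputs, tables = make_rbn(n, k, rng)
state = [rng.random() < 0.5 for _ in range(n)]
for _ in range(100):
    state = step_async(state, inputs, tables, rng)
print(state)
```

The asynchronous update (one node at a time, in random order) is what distinguishes this from the classical synchronous Kauffman network, where all nodes fire at once.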
Sad to hear him still phrasing this simple truth so obscurely: Not
“Because, on the scale of molecular binding site recognition, say a few tens of angstroms in length, height, and width, and several other features such as polarity, van der Waals forces, and so on, there are far fewer effectively different molecular shapes than there are kinds of molecules.”
… but “Because there are fewer stories than there are facts.”
“I’ve been analyzing my process (and the process of those around me) and figuring out how best to structure code for projects on a larger scale. What I’ve found is a process that works equally well for sites small and large.
Learn how to structure your CSS to allow for flexibility and maintainability as your project and your team grows.”
“Crowd algorithms often assume workers are inexperienced and thus fail to adapt as workers in the crowd learn a task. These assumptions fundamentally limit the types of tasks that systems based on such algorithms can handle. This paper explores how the crowd learns and remembers over time in the context of human computation, and how more realistic assumptions of worker experience may be used when designing new systems. We first demonstrate that the crowd can recall information over time and discuss possible implications of crowd memory in the design of crowd algorithms. We then explore crowd learning during a continuous control task. Recent systems are able to disguise dynamic groups of workers as crowd agents to support continuous tasks, but have not yet considered how such agents are able to learn over time. We show, using a real-time gaming setting, that crowd agents can learn over time, and ‘remember’ by passing strategies from one generation of workers to the next, despite high turnover rates in the workers comprising them. We conclude with a discussion of future research directions for crowd memory and learning.”
“In sports competitions, teams can manipulate the result by, for instance, throwing games. We show that we can decide, in polynomial time, how to manipulate round robin and cup competitions, two of the most popular types of sporting competition. In addition, we show that the minimal number of games that need to be thrown to manipulate the result can also be found in polynomial time. Finally, we show that there are several different variations of standard cup competitions where manipulation remains polynomial.”
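For contrast with the paper's polynomial-time results, here is a naive brute-force sketch of the round-robin case. The standings and team names are invented, and the manipulating coalition is assumed to control the outcome of every remaining game, so the search is exponential in the number of games left.

```python
from itertools import product

def find_manipulation(wins, games, favorite):
    """Exhaustively try every assignment of winners to the remaining games
    (all assumed controllable by the coalition) and return the first list of
    winners that leaves `favorite` with strictly the most wins, else None."""
    for outcome in product(range(2), repeat=len(games)):
        totals = dict(wins)
        for (a, b), o in zip(games, outcome):
            totals[(a, b)[o]] += 1  # index 0 or 1 picks the game's winner
        if all(totals[favorite] > w for team, w in totals.items() if team != favorite):
            return [game[o] for game, o in zip(games, outcome)]
    return None

# Invented standings: three games left, favourite "A" trails by one win.
wins = {"A": 2, "B": 2, "C": 3}
games = [("A", "B"), ("A", "C"), ("B", "C")]
print(find_manipulation(wins, games, "A"))  # → ['A', 'A', 'B']
```

Here throwing the B-versus-C game to B keeps both rivals at three wins while A climbs to four.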
“If the contents of the inaugural issue—which range from an essay arguing that humanists need to understand and interpret quantitative data to a review of the WordSeer text analysis tool—fall outside your usual scholarly domain, then certainly the journal’s editorial and publishing apparatus will pique your interest. As Dan Cohen explained in a separate blog post, the journal operates under the model of catching the good—of finding substantive and valuable digital humanities work “in whatever format, and wherever, it exists.” Blogs, podcasts, Twitter conversations, slideshows, and so on: these are all venues in which significant and, though I hate to use such an ungainly word, impactful work is being done. The regular and guest editors “catch” this work, and then provide layers of evaluation and review before it appears in JDH.”
“An important problem in computational social choice theory is the complexity of undesirable behavior among agents, such as control, manipulation, and bribery in election systems. These kinds of voting strategies are often tempting at the individual level but disastrous for the agents as a whole. Creating election systems where the determination of such strategies is difficult is thus an important goal. …”
“Tetravex is a widely played one-person computer game in which you are given $n^2$ unit tiles, each edge of which is labelled with a number. The objective is to place each tile within an $n$ by $n$ square such that all neighbouring edges are labelled with an identical number. Unfortunately, playing Tetravex is computationally hard. More precisely, we prove that deciding if there is a tiling of the Tetravex board is NP-complete. Deciding where to place the tiles is therefore NP-hard. This may help to explain why Tetravex is a good puzzle. This result complements a number of similar results for one-person games involving tiling. For example, NP-completeness results have been shown for: the offline version of Tetris, KPlumber (which involves rotating tiles containing drawings of pipes to make a connected network), and shortest sliding puzzle problems. It raises a number of open questions. For example, is the infinite version Turing-complete? How do we generate Tetravex problems which are truly puzzling, as random NP-complete problems are often surprisingly easy to solve? Can we observe phase transition behaviour? What about the complexity of the problem when it is guaranteed to have a unique solution? How do we generate puzzles with unique solutions?”
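The decision problem in the abstract is easy to state in code. Below is a hypothetical backtracking solver (not taken from the paper) for small boards; its exponential worst case is consistent with the NP-completeness result, and the 2x2 instances are hand-built for illustration.

```python
def solve_tetravex(tiles, n):
    """Decide an n-by-n Tetravex board by backtracking. Each tile is a
    (top, right, bottom, left) tuple of edge labels; tiles are placed row by
    row, and each placement must match the right edge of the tile to its
    left and the bottom edge of the tile above it."""
    grid = [None] * (n * n)
    used = [False] * len(tiles)

    def fits(pos, tile):
        r, c = divmod(pos, n)
        if c > 0 and grid[pos - 1][1] != tile[3]:   # left neighbour's right edge
            return False
        if r > 0 and grid[pos - n][2] != tile[0]:   # upper neighbour's bottom edge
            return False
        return True

    def place(pos):
        if pos == n * n:
            return True
        for i, tile in enumerate(tiles):
            if not used[i] and fits(pos, tile):
                used[i], grid[pos] = True, tile
                if place(pos + 1):
                    return True
                used[i], grid[pos] = False, None
        return False

    return place(0)

# A 2x2 instance that tiles, and one whose edge labels are all distinct.
print(solve_tetravex([(1, 2, 3, 4), (5, 6, 7, 2), (3, 8, 9, 0), (7, 1, 2, 8)], 2))  # → True
print(solve_tetravex([(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 16)], 2))  # → False
```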
“We consider the age-old problem of allocating items among different agents in a way that is efficient and fair. Two papers, by Dolev et al. and Ghodsi et al., have recently studied this problem in the context of computer systems. Both papers had similar models for agent preferences, but advocated different notions of fairness. We formalize both fairness notions in economic terms, extending them to apply to a larger family of utilities. Noting that in settings with such utilities efficiency is easily achieved in multiple ways, we study notions of fairness as criteria for choosing between different efficient allocations. Our technical results are algorithms for finding fair allocations corresponding to two fairness notions: Regarding the notion suggested by Ghodsi et al., we present a polynomial-time algorithm that computes an allocation for a general class of fairness notions, in which their notion is included. For the other, suggested by Dolev et al., we show that a competitive market equilibrium achieves the desired notion of fairness, thereby obtaining a polynomial-time algorithm that computes such a fair allocation and solving the main open problem raised by Dolev et al.”
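The notion of Ghodsi et al. referenced above is Dominant Resource Fairness. Here is a toy progressive-filling sketch of DRF, not the paper's own algorithm, using the standard two-resource example of nine CPUs and eighteen gigabytes of memory.

```python
def drf(capacities, demands):
    """Progressive filling under Dominant Resource Fairness: repeatedly give
    one task's worth of resources to the user whose dominant share (largest
    fraction of any single resource they hold) is currently smallest, until
    no user's per-task demand still fits in the remaining capacity."""
    n, m = len(demands), len(capacities)
    alloc = [[0.0] * m for _ in range(n)]
    tasks = [0] * n
    while True:
        shares = [max(alloc[u][r] / capacities[r] for r in range(m)) for u in range(n)]
        placed = False
        for u in sorted(range(n), key=shares.__getitem__):
            used = [sum(alloc[v][r] for v in range(n)) for r in range(m)]
            if all(used[r] + demands[u][r] <= capacities[r] for r in range(m)):
                for r in range(m):
                    alloc[u][r] += demands[u][r]
                tasks[u] += 1
                placed = True
                break
        if not placed:
            return tasks

# Classic example: 9 CPUs and 18 GB; user 0 needs <1 CPU, 4 GB> per task,
# user 1 needs <3 CPUs, 1 GB> per task.
print(drf([9, 18], [[1, 4], [3, 1]]))  # → [3, 2]
```

Both users end up with a dominant share of 2/3: user 0 holds two thirds of the memory, user 1 two thirds of the CPUs.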
“It turns out that we don’t know the procedure. We haven’t got a clue just how difficult the procedure is. We aren’t computers. We don’t follow procedures. And so comparing the complexity of the manual task to the complexity of the procedure is invalid.
This is one of the reasons that estimates are so hard, and why we get them wrong so often. We look at a task that seems easy and estimate it on that basis, only to find that writing down the procedure is actually quite intricate. We blow the estimate because we estimate the wrong thing.”
“We investigate higher-order Voronoi diagrams in the city metric. This metric is induced by quickest paths in the L1 metric in the presence of an accelerating transportation network of axis-parallel line segments. …”
“Computer scientists make LDA seem complicated because they care about proving that their algorithms work. And the proof is indeed brain-squashingly hard. But the practice of topic modeling makes good sense on its own, without proof, and does not require you to spend even a second thinking about “Dirichlet distributions.” When the math is approached in a practical way, I think humanists will find it easy, intuitive, and empowering. This post focuses on LDA as shorthand for a broader family of “probabilistic” techniques. I’m going to ask how they work, what they’re for, and what their limits are.”
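To back up the claim that the practice is approachable, here is a toy collapsed Gibbs sampler for LDA in plain Python. It is a sketch, not an efficient or faithful reimplementation of any particular tool; the corpus and all hyperparameters are invented, and every resampling step is just "pick a topic in proportion to how much this document likes it times how much it likes this word".

```python
import random

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Collapsed Gibbs sampling for LDA on a list of token lists: every word
    token carries a topic assignment, repeatedly resampled from the current
    document-topic and topic-word counts."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]       # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[d][t] += 1; nkw[t][wid[w]] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][wid[w]] -= 1; nk[t] -= 1
                weights = [(ndk[d][s] + alpha) * (nkw[s][wid[w]] + beta)
                           / (nk[s] + V * beta) for s in range(n_topics)]
                r = rng.random() * sum(weights)
                for s, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = s
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wid[w]] += 1; nk[t] += 1
    return ndk, nkw, vocab

docs = [
    "river bank water stream".split(),
    "money bank loan credit".split(),
    "water stream river flood".split(),
    "loan credit money interest".split(),
]
ndk, nkw, vocab = lda_gibbs(docs, n_topics=2)
print(ndk)  # per-document topic counts
```

Note that the Dirichlet priors show up only as the smoothing constants `alpha` and `beta`, which is the post's point: you can run and read the procedure without ever manipulating a Dirichlet distribution explicitly.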
“No wonder life (i.e., the thing that my once 10-year-old niece referred to as “the thing that isn’t fair”) comes to us as a filigree of ash stories. Walking down the street past a couple in conversation, an overheard morpheme, a mere glance at a wrongly buttoned raincoat, sparks a narrative in our imagination. Ask any question beginning with “why?” and the answer will surely be a story, or it will be embedded in a story. Or, at the very least, it will offer a tempting thread for some story that you yourself will hold onto, embellish even, as you try to absorb the answer. We interpolate between such fragments. This is, for many of us, simply the way we think.
What about the “why questions” in science, in logic, in mathematics? We should acknowledge how they are often “what questions” or “how questions” in disguise. Or how they slide down into such questions, as the ever-elusive, ever-illusory quest for an X that actually causes a Y dissolves. Some of the more satisfying answers to scientific “why” questions involve deft rephrasing. “Why is the sky blue?” is replaced by the question “What is the function that describes scattering amplitude as dependent on wavelength?””