Commenter Archive

Comments by wonkie*

On “What to do?

Michael is the go-to person, but...
Porting Typepad to WordPress Easily

"

First up: What alternative platforms are there? What are their strengths and weaknesses? FYI, both obsidianwings.org and obsidianwings.com appear to be available. So, if we get some blogging software, we can create our own website and run it there. Just a thought.
Second: How do we migrate to whatever new platform we (probably meaning lj) decide on? I believe Michael Cain has already worked out how to back up our past posts and comments. Perhaps he has some insights and advice on the transition.
Third: Is there anything we can do to alert long-time but infrequent users as to what is happening and where we are going? Maybe there's a way to pull the email addresses out of a couple of decades of comments (see the sketch below)? Granted, it's a "nice to have", but it would be nice.
If whatever new platform we end up on charges, who pays for it? If we create our own site, the annual registration isn't that much. But if we go again with a commercial platform it might be.
Just a few thoughts off the top of my head.
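For what it's worth, here's a minimal sketch of how that address-harvesting might work, assuming Typepad's export comes out in the Movable Type format, where each comment block carries an EMAIL: line (worth verifying against a real export first; the file name here is just a placeholder):

import re

# Collect commenter email addresses from a Typepad export so that
# long-time commenters could be notified about a move. Assumes the
# Movable Type export format, where comment blocks include lines
# like "EMAIL: someone@example.com". "obwi-export.txt" is a
# placeholder name, not the actual export file.
emails = set()
with open("obwi-export.txt", encoding="utf-8") as f:
    for line in f:
        m = re.match(r"EMAIL:\s*(\S+@\S+\.\S+)\s*$", line)
        if m:
            emails.add(m.group(1).lower())

# One address per line, ready for a BCC list or a mail merge.
for address in sorted(emails):
    print(address)

Lowercasing plus the set collapses decades of comments to one entry per address; handling bounces and opt-outs would be a separate problem.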

"

Speaking (obviously) from a position of total ignorance, I'm nonetheless hoping that all the work Michael did before, when Typepad was going through a particularly erratic phase, means we don't have to lose our whole history. I feel (and I bet I'm not alone) that losing ObWi altogether would be very sad. Clearly, moving us somewhere else will mean a lot of work for somebody who knows how to do it - is there anything the rest of us can do to help?

On “I’m forever blowing bubbles

Thanks, nous - research hub lets me request a full text, so I'll try that.
I've made a post out of GftNC's depressing news.

"

I see from hilzoy that Typepad is closing down on 30th September. What is going to happen to ObWi??
https://everything.typepad.com/blog/2025/08/typepad-is-shutting-down.html

"

CharlesWT - It usually provides links to its sources. I'll have to add a source links requirement to the prompt.
Links to sources would be helpful, but the deeper issue of transparency involves the selection criteria that result in those sources being included. It's one of the questions I routinely ask my lower-division undergraduate students when it comes to their own papers: "what purpose does this citation serve in developing an understanding of the critical perspective from which you are writing?"
Newfield addresses this deeper sense of transparency in the article intro where he writes: "At the same time, critics have identified a set of operational flaws in the ML and deep learning systems now discussed under the “AI” banner. Four of the most discussed are social biases, particularly racism, that become part of both the model and its use; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than users; and privacy violations, which result from combinations of bias, opacity, and coercion focused on the surveillance, accumulation, and monetization of data." What I'm pointing to here falls under the second and third operational flaws. We don't know why the LLM chose these particular points to amplify. They are as opaque to us as the proprietary systems by which search results get ordered on search engines.
The lack of epistemological and methodological awareness is a deep problem, and it is why I scoff at Altman's comparison of interacting with the latest iteration of ChatGPT to interacting with an expert with a Ph.D. The lack of these deeper levels of awareness is more a marker of someone much earlier in their intellectual development.

"

..., and it does not provide any transparency for the sources of its information...
It usually provides links to its sources. I'll have to add a source links requirement to the prompt.

"

lj - sounds like you are thinking through some of the issues that Christopher Newfield discusses in his Critical AI article "How to Make 'AI' Intelligent; or, The Question of Epistemic Equality." [https://doi.org/10.1215/2834703X-10734076] I don't see that the article is open access, so I'll excerpt a chunk of the intro here to give y'all a sense of what Newfield is arguing...
In this article I will not provide a historical account of how AI research in its various iterations has alleged rather than specified the intelligence it aims to simulate. Instead, I will first suggest why it is so hard for most technologists and the officials they influence to care about rigorous definitions of intelligence before they attribute it to software. I will then analyze one philosopher's rigorous definition for its implications for current debates about AI.
Some of the problem is the size and wealth of the corporate platforms that have dominated the internet and its opportunities for data accumulation and that now want to dominate “AI.” But there is also a deeper and more difficult cultural problem, and that is the highly restricted role of culture itself, or the role of cultural analysis of technology. There have been frequent periods when technologists are able, intentionally or not, to keep cultural analysis from having coauthorship of the meaning, operations, and effects of the technology or of narratives of its future. “The Age of AI” is one of those periods when practitioners who bring culturally grounded skepticism to technological development are less likely to be treated as equal partners than to find themselves unemployed, as did computer scientist Timnit Gebru in 2020 (Simonite 2020a, 2020b).
This is happening in spite of the fact that the long history of asking “What is intelligence?” also belongs to historical and cultural disciplines—philosophy obviously, but also to feminist studies, which radicalized the context dependence of knowledge in standpoint theory, and ethnic studies, which demonstrated the role of ascribed race in structuring epistemological frameworks, just to name two of the many domains of philosophically informed cultural research. Of course major contributions to this question have been made by scientists and technologists, but these have been boundary crossers, traveling back and forth between cultural and technological disciplines and bringing their procedures and findings together. There is much discourse from practitioners affiliated with AI-labeled technology about the technology's power to benefit all humanity. This is not the same as a full discussion among epistemic equals about whether this assertion is actually true, whether our diverse societies want it to be, and what would make it true in ways that diverse societies might want.
This is a deep, complicated, and massively multilateral conversation that will take years or decades. It requires much better public education processes than what we see today in the mainstream media. It means continuous travel of informed people among multiple disciplines and synthesis of disparate methods and their results. This is not happening. AI discourse is largely a question of when, not if, and it assumes that technologies will be pushed out to the consuming masses by large corporations and the start-ups they fund on a schedule that they determine. A (problematic) call for a pause in AI research presumed that the revolution of superior machine intelligence is here and that proper management ensures an unspecified flourishing future (Future of Life Institute 2023). AI discourse often functions as a manifest destiny about which great minds are said to agree.
At the same time, critics have identified a set of operational flaws in the ML and deep learning systems now discussed under the “AI” banner. Four of the most discussed are social biases, particularly racism, that become part of both the model and its use; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than users; and privacy violations, which result from combinations of bias, opacity, and coercion focused on the surveillance, accumulation, and monetization of data.
Readers of Critical AI are among those increasingly focused on a fifth operational flaw: much ML research takes place in companies like Google, in which managers have authority over the publication of research results. Famous cases like the one I mentioned above, the firing of Google ethics researcher Timnit Gebru (and her co-lead Margaret Mitchell), suggest that much or most AI research is happening in the absence of academic freedom, which puts researchers at risk while also distorting research results by allowing the suppression of findings that don't fit a rollout narrative or corporate image. Corporate manipulation of research results is a known issue thanks to the automotive, tobacco, chemicals, and fossil fuel industries, among others.
Then there is a sixth issue that I'm considering here—the question of whether “AI” is intelligent in the first place. And there is the related question of why this sixth question is not central to public AI debates.
Reflecting on the work of two authors can help us address these underexamined questions. The first is C. P. Snow and his famous meditations on the divide between “Two Cultures” (which Snow described as scientific vs. literary outlooks but which I will discuss in terms of technological vs. cultural knowledge). The second is programmer and philosopher Brian Cantwell Smith's recent analysis in The Promise of Artificial Intelligence (2019) of two kinds of intelligence: reckoning and judgment. Smith sheds light on the mentality that the editors of this special issue identify as data positivism, but which Smith's notion of “reckoning” helps me talk about more explicitly as computational intelligence of a certain kind. Snow helps explain why culture-based understandings of intelligence are not part of the current debate, and Smith shows what can happen when they are. Although Snow did not intend to, he helped take humanities disciplines out of the future-defining process for several generations. Smith offers us ways of putting them back in.

In the body of the article Newfield fleshes out these points a lot and settles into a discussion of intelligence as not just a matter of number crunching and pattern recognition ('reckoning') but also of what Smith (mentioned above) calls 'registration' (a situated perception of the world and the reckoner's place in that system) and also of judgment (which I will gloss as a commitment to reconciling reckoning and registration in order to test and negotiate a shared understanding that fits both the data and the human relationships that are entangled/implicated in that data, or as Newfield interprets Smith: "'The system (knower) must be committed to the known, for starters. That is part of the deference' in which one defers to the object in order to know it. But there's a further matter, in which the knower must be 'committed to tracking things down, going to bat for what is right,' and feeling in some deep way existentially 'bound by the objects' (93). Intelligence, Smith is saying, depends on an underlying awareness of existence, both one's own existence and the existence of the world. For Smith, epistemology is prior to ontology, and we can add that a feeling of existence is prior to them both—and a fundamental precondition of intelligence.").
In terms of Charles' Veronica Mars example, Grok has "reckoned" what others have communicated about the show, but in assembling its commentary it cannot exercise judgment because it does not understand the cultural systems in which those communications gain or lose their significance, nor can it situate itself in relation to the various parties involved in the communication.
It has grokked nothing; it has only parsed and systematized according to its own training algorithms.
The Veronica Mars output is a "good" summary in that it assembles a lot of information and represents it in a way that appears faithful, but it doesn't rise to the level of any critical insight, and it does not provide any transparency for the sources of its information or the reasoning behind its selection. It feels like an intellectual cul-de-sac to me.

"

Seems like Trump could be replaced with that one, with little observable difference.
Au contraire, it would definitely be easier on the eyes. "Observable" in the literal meaning of the word.

"

Seems like Trump could be replaced with that one, with little observable difference.
Or a monkey that flings tariffs instead of shit.

"

The program was 'Racter', short for 'raconteur' (storyteller); the name was truncated to six characters because of the system's file name limit. It dates from 1984.
The difference from His Orangeness would be huge, since the program at least made no errors in grammar or spelling. And the texts published by Chamberlain were far more entertaining than the utterances (in text and speech) of Jabbabonk the Orange.
https://en.wikipedia.org/wiki/Racter

"

I'm skipping Replika, talk to me when the holosuites are available.
LOL

"

The old Eliza chatbot was generally harmless.
Wasn't there another, called something like 'Racktor', that simulated a paranoid schizophrenic?
Seems like Trump could be replaced with that one, with little observable difference.

"

It's amazing that the expenditures on AI may be propping up the economy in the face of Trump's policies on trade and the cratering of tourism to the US.
It somewhat depends on where you are. I've been seeing stories of local electric grids being overwhelmed because someone set up an AI (LLM) data center in the area. Those centers are incredible consumers of electricity, which can be a real issue if your power goes out as a result of the extra load.

"

Bubble or not, the current LLMs and other types of "AI" will have a massive impact even if development hits a wall.
I'm currently using Grok for most things because it seems to have more horsepower behind it than the free versions of other LLMs.
The wordsmiths among you may find much to criticize, but this review by Grok of the TV drama series Veronica Mars seems very good to me.
Veronica Mars: Noir, Trauma, and Justice

"

I'm skipping Replika, talk to me when the holosuites are available.

"

using the term AI for LLM was marketing genius
The phenomenon is much more widespread than just slapping the "AI" label on LLMs. Almost anything newsworthy that's done by a computer these days is hyped as "AI."
See, for example, this article about Delta's pricing. Someone more knowledgeable than I (which is probably almost everyone here) can correct me if I'm wrong, but Delta's personalized pricing seems to require a big database of personal data that shouldn't be public in the first place (a whole topic in its own right) and an algorithm for analyzing it. In other words, a bog-standard computer program.
Another example: a friend of mine recently checked out the conversational capabilities at one of the well known language teaching sites. (I don't know which one.) He labeled the conversation as "AI"-generated, which may in fact be accurate, but it reminded me of Eliza, which was written ~60 years ago.
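For a sense of how little machinery that takes, here's a toy Eliza-style responder (my own illustrative sketch, not Weizenbaum's actual DOCTOR script): a few regex rules plus pronoun reflection already feel surprisingly conversational.

import re

# Flip first- and second-person words so an echoed fragment reads
# as a reply rather than a repetition.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

# Pattern/response rules, tried in order; the catch-all comes last.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\b(mother|father|family)\b.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel ignored by my friends"))
# -> Why do you feel ignored by your friends?

That's essentially the whole trick, which is why a canned conversation feature can look a lot smarter than it is.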
And then there's this, about which I got nothin'.
The mention of travel agents and bookstores is a good reminder to skeptical me that "AI" is going to have widespread disruptive effects even if it isn't remotely what it's hyped to be.

On “The Schadenfreude Express

Dubya's regime was malign and incompetent; Bolton was especially malign. But fighting fascism is the priority now.
Thanks to bobbyp for his link on this.

"

IMO John Bolton is appalling in very many ways, and his warmongering often appeared to be (and, again IMO, was) verging on madness.
But Ubu appointed him as one of "the best people" in his first administration, and once Bolton started openly dissing and criticising Ubu, and Ubu was POTUS again and in a position to do something about it, he sicced the FBI on him.
The principle is the thing that matters. This is the sort of thing that does not happen in functioning democracies which respect the rule of law.

"

That is equivalent to supporting former RAF (Red Army Faction) members in an attempt to overthrow the German government (OK, the German government isn't as bad as the Iranian one, but the MEK/RAF comparison is apt).

"

If John Bolton appears to be comparatively reasonable, we're in trouble...
cf. e.g. his staunch support for the Mujahideen-e-Khalq, or MEK
https://www.aljazeera.com/news/2018/3/29/meks-violent-past-looms-over-us-lobby-for-regime-change-in-iran

"

Besides being a general warmonger, he was a primary cheerleader for the Iraq War. About a year before the US invaded Iraq, he wanted to go to war with Cuba over WMDs they didn't have. He stood for just about everything bad in Bush's foreign policy.

"

I wonder if Trump going after Fed governor Lisa Cook is an indication that he was having trouble finding even a fig leaf for an attack on Powell.

"

It's hard to have much sympathy for John Bolton. After his role in the Bush administration...
Charles, you've piqued my curiosity. What things did Bolton do under Bush that you disapprove of? I've got my own list, but just wondering what he's done that you disagree with.

*Comment archive for non-registered commenters assembled by email address as provided.