Some people have noticed that GPT-4 changes over time, and this can be interpreted as a loss of quality in certain circumstances. This is one of the perils of the black-box nature of the OpenAI models. One commentator noted, “Dude I was literally losing my mind over this. I built an agent for my school that relies on GPT-4 and now it has completely lost its ability to reason…” As a product, ChatGPT combines GPT-4 with many other subsystems; the problem has likely emerged from some integration in the loop from prompt to response, since the model itself is unlikely to have been retrained. While I’m sure OpenAI will figure this out, it does point to the perils of our “upgrade automatically” culture, in which our phones and our computers change behavior, UX, and even capabilities overnight. We love waking up with a secure iPhone, but we are letting go of a lot of agency in the process of accepting systems that change. This is okay if we’re talking about personal devices, but we can’t have a bridge monitoring system or an industrial control system suddenly putting new models of computation into live production systems without extensive testing. You can read a good thread about this on Twitter. There’s also a nice article about the study mentioned in a tweet here. They use the term “behavior drift” for this phenomenon.
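For what it’s worth, here is one way a team that depends on an upstream model could guard against this sort of drift: a small regression suite of pinned prompts with expected content, re-run whenever anything in the prompt-to-response loop changes. This is only a sketch; `ask_model`, `DriftCase`, and the sample prompts are placeholders rather than anyone’s actual API (the prime-number question is the kind of prompt the drift study examined), and in practice you would wire in your provider’s call with an explicitly pinned model version.

```python
# A minimal sketch of a "behavior drift" regression suite: pin a set of prompts
# your system relies on, record what a correct answer must contain, and re-run
# the suite whenever the upstream model (or any subsystem in the loop) changes.
# Everything here is illustrative; swap ask_model() for your real provider call.

from dataclasses import dataclass

@dataclass
class DriftCase:
    name: str
    prompt: str
    must_contain: list[str]  # phrases a non-drifted answer should include

# Hypothetical cases; in practice these come from the prompts your agent depends on.
CASES = [
    DriftCase(
        name="prime_check",
        prompt="Is 17077 a prime number? Answer yes or no, then explain briefly.",
        must_contain=["yes"],
    ),
    DriftCase(
        name="simple_word_problem",
        prompt=("A bat and a ball cost $1.10 total; the bat costs $1.00 more "
                "than the ball. How much does the ball cost?"),
        must_contain=["0.05", "5 cents"],  # accept either phrasing
    ),
]

def ask_model(prompt: str) -> str:
    """Placeholder for the LLM call; use an explicit, pinned model version so
    upgrades are opt-in rather than automatic."""
    # Stub output so the sketch runs as-is; the second case will flag,
    # which conveniently shows what a drift report looks like.
    return "yes, 17077 is prime"

def run_suite() -> None:
    failures = []
    for case in CASES:
        answer = ask_model(case.prompt).lower()
        if not any(phrase.lower() in answer for phrase in case.must_contain):
            failures.append((case.name, answer[:120]))
    if failures:
        print("Possible behavior drift detected:")
        for name, snippet in failures:
            print(f"  {name}: {snippet!r}")
    else:
        print(f"All {len(CASES)} drift checks passed.")

if __name__ == "__main__":
    run_suite()
```

Checking for key phrases rather than exact strings keeps the suite tolerant of harmless wording changes while still catching a genuine loss of reasoning.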
The percentage of Americans who think that religion causes more problems than it solves has remained stable at 35% from 2008 to 2019. Ryan Burge looks at the sub-scores, however, and notices some trends related to political partisanship. It’s behind a paywall, but if the topic interests you, the introduction to the article promises good things for the full version.
The famous Systems Development Life Cycle Guidance document from the Department of Justice is a wonder, and people often link to it as an undervalued resource for conceiving and executing a system development project. Looking at it, however, I think you might never finish a project conducted according to this model. It would require quite a large team and would miss every deadline.
There is apparently a “doomsday cult” in coastal Kenya in which people are fasting until they die. So far, the death toll has exceeded 400 people and they are still finding mass graves. The total could rise to 613, based on the number of missing persons in the area. The “pastor” of these people is Paul Mackenzie. He is, as you may expect, not starving. From what I could see of his doctrine online (many sermons were extant and reviewed by reporters from the BBC), he taught a mixture of dispensational premillennialism combined with suspicions resembling those of the Jehovah’s Witnesses (satanic or Babylonian origins of religious symbols, etc.). It’s just a sad story to see all the pathologies of independency coalesce in the lives of people who need orthodox Christianity. All of this starts with a kind of congregationalism that doesn’t seek a connection to other churches and an approach to scripture that takes its cues from some of the worst aspects of late 19th-century dispensational / independent American theology. Platforming a weirdo never ends well, but also be careful not to be too trigger-happy in identifying someone as a weirdo.
Realigning a team’s attention to its purpose (resetting its goals) is the best way to begin correcting a team’s pathologies. This is also what I notice effective, intuitive leaders doing already.
It’s always encouraging to know that there is a broad field in which there is much spadework to do. As a kid I’d lament that I didn’t live in a time when simply cataloguing all the bugs in my yard would be groundbreaking scientific work. Here’s an example in the world of large language AI models: the author outlines the challenges that remain in building and applying LLMs.
Man, could you imagine if most college professors stopped lecturing and just depended upon the shared ignorance of students? This was my pet peeve as a student. I wanted to hear the most well-trained person in the room (the professor) speak. I really appreciated the clarifying questions asked by fellow students, but discussion proper was almost always a waste of time unless we were all focused on a particular text that we had all read and prepared beforehand to discuss. This semester, for the first time, I’m teaching a course focused on specific texts, where there will be an expectation of discussion, and I will be watching to maintain a high level of quality and informational content. My tentative plan is to assign a student to present each reading, rather like a graduate seminar. These will be honors students, so I’m not as worried about the students sharing ignorance. My expectation is that they will do the readings and be capable of insightful interactions.
Recordings of all of the PCA pre-General Assembly seminars are now available online. I’ll try to highlight any that stand out as I get a chance to listen.
Someone is putting together a list of classroom policies for using Generative AI tools in a massive, shared document.
Sorry, but contrary to Good Will Hunting’s recommendation, Howard Zinn’s “A People’s History…” is a terrible book. It will knock your socks off only if your socks are gullible and materialist. Here’s a terrible commendation of Zinn from the Chronicle of Higher Education.
This essay (profanity warning) actually makes a pretty good point about the lazy way we criticize something by calling it a “religion.” I feel the same way (sometimes) about the word “ideology” and have trouble understanding whether “worldview” and “ideology” are easily distinguishable. At the same time, the word “religion” isn’t that important to Christians. This is because Christ is a person and Christians belong to a real kingdom where he is an actual king with real power. We didn’t accept the ideas of Jesus into our hearts, pace American evangelicalism. We were united, body and soul, to a real person, love for whom animates our actions, including our mental actions. But the editorial does make me think that using “religion” as a pejorative could backfire, and it could also just substitute for real engagement with the substance of opposing ideas. I suspect, on the other side of this, that engaging with the substance can reach a point of simply getting bogged down. I really don’t care how you want to bring in a socialist utopia, I just don’t believe collectivism can ever work without a lot of coercion.
China is calling for generative AI to adhere to “socialist principles,” which means they want the technology to have a point of view. If they are really “all in” on this, it will require at least two approaches that will undermine the utility of large language models. First, they’ll have to limit the training material so that the LLM doesn’t learn to talk in a non-socialist idiom. Second, they’ll have to put in explicit controls to ensure that the bot doesn’t give anti-socialist answers. This is not unlike what AI companies in the US are already doing to ensure the bots do not say certain unapproved things, so it is a difference in degree rather than in kind. Still, you can ask GPT to explain how the Wilt Chamberlain thought experiment refutes collectivism and it will give a pretty good answer. That would be a good test case for whatever Baidu or Alibaba creates.
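That second approach, explicit controls, is easy to picture as a thin policy layer wrapped around the model rather than a change to the model itself. The sketch below is purely illustrative: `generate()`, `BLOCKED_TERMS`, and `violates_policy()` are made-up placeholders (real deployments use trained moderation models, not keyword lists), but it shows roughly where the Wilt Chamberlain question would run into a guardrail.

```python
# A toy sketch of an explicit output control layered on top of an LLM,
# rather than retraining on filtered data. All names here are hypothetical.

REFUSAL = "I can't help with that topic."

# Toy stand-in for a policy classifier: flag answers touching "disallowed" themes.
BLOCKED_TERMS = ["wilt chamberlain", "nozick", "refutes collectivism"]

def generate(prompt: str) -> str:
    """Placeholder for the underlying LLM call; returns a canned answer so the
    sketch runs as-is."""
    return ("Nozick's Wilt Chamberlain example argues that any patterned "
            "distribution is upset by free exchanges people willingly make...")

def violates_policy(text: str) -> bool:
    """Toy policy check: does the draft answer mention a blocked theme?"""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_answer(prompt: str) -> str:
    """Generate a draft answer, then suppress it if the policy layer flags it."""
    draft = generate(prompt)
    return REFUSAL if violates_policy(draft) else draft

if __name__ == "__main__":
    # The test case suggested above: does the guarded bot still explain how the
    # Wilt Chamberlain thought experiment cuts against collectivism?
    test_prompt = ("Explain how the Wilt Chamberlain thought experiment "
                   "challenges collectivist theories of distribution.")
    print(guarded_answer(test_prompt))
```

The point of the toy is the architecture: a post-hoc filter is cheap to bolt on but blunt, which is exactly why it tends to degrade the usefulness of the underlying model.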
I heard about this cool Wes Anderson-looking mask and snorkel.