Month: 2003/11
-
2003 November 26
-
Varied feed polling times versus item urgency in aggregators
The problem with varying the polling interval is that the need varies. It's ok not to poll my little opensource website within 24 hours, but what about the announcements to the civil defence website or local municipal environment alerts, or the nuclear power plant news feed? Source:Comments on The End of RSS
Definitely a good point there. For most of the feeds in my daily habit, I vary my polling frequency per feed with an AIMD scheme, based on the occurrence of new items. For feeds with low-frequency but high-urgency items, a different algorithm should come into play. On the other hand... should incoming alerts with that much urgency really be conveyed via an architecture driven by polling? Here's an excellent case for tying instant messaging systems and pub/sub into the works. [ ... 155 words ... ]
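To make that AIMD idea concrete, here's a minimal sketch in Python. The constants and the exact policy (additive backoff on quiet polls, multiplicative cut when new items show up) are illustrative guesses on my part, not the actual numbers or code from my aggregator.

MIN_INTERVAL = 30 * 60        # floor: poll at most every 30 minutes
MAX_INTERVAL = 24 * 60 * 60   # ceiling: poll at least once a day
STEP = 30 * 60                # additive backoff per quiet poll
CUT = 0.5                     # multiplicative cut when new items appear

def next_interval(current, new_items):
    """Return the next polling interval (in seconds) for one feed."""
    if new_items:
        # Feed is active: cut the interval, so we poll more often.
        interval = current * CUT
    else:
        # Feed is quiet: back off additively, so we poll less often.
        interval = current + STEP
    return max(MIN_INTERVAL, min(MAX_INTERVAL, interval))

A feed that posts every morning settles near a short interval, while my sleepy opensource site drifts toward the daily ceiling. Which is exactly why the low-frequency, high-urgency case needs something else entirely.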
-
Didja get that memo?
[ ... 10 words ... ]
-
2003 November 21
-
Publishing Quick Links in blosxom with del.icio.us via xmlstarlet
In case anyone is interested in using del.icio.us with blosxom in place of my own BookmarkBlogger, get yourself a copy of xmlstarlet and check out this shell script:

#!/bin/bash
# Default to today's date (YYYY-MM-DD) if no argument is given.
DATE=${1-`date +%Y-%m-%d`}
BLOG="/Users/deusx/desktop/decafbad-entries/links"
# Transliterate digits and dashes into letters to build the entry filename.
FN="${BLOG}/"`echo ${DATE} | sed -e 'y/0123456789-/oabcdefghij/'`".txt"
# Fetch the day's posts, tidy them into well-formed XML, and render
# them as a "Quick Links" list of anchors for the blosxom entry.
curl -s -u deusx:HAHAHA 'http://del.icio.us/api/posts/get?dt='${DATE} | \
    tidy -xml -asxml -q -f /dev/null | \
    xml sel -t -o "Quick Links" -n \
        -e 'ul' -m '//post' \
            -e 'li' -e 'a' -a 'href' -v '@href' \
                -b -v 'text()' -n > ${FN}
# Backdate the entry file so blosxom files it under the right day.
touch -d "${DATE} 23:59" ${FN}

You could do this with XSLT, but hacking with a REST-ish, XML-producing web service entirely in a shell script seemed oddly appealing to me that week. Extending this sort of thing to blogging systems other than blosxom is left as an exercise to the reader. Update: Hmm, looks like one of the blosxom plugins I'm using hates the variables in my code above. So I stuck curly braces in, which seem to get through okay. [ ... 244 words ... ]
-
2003 November 20
-
Building the Recipe Web III
[ ... 971 words ... ]
-
2003 November 18
-
VoodooPad gets an XML-RPC wiki API
You wanted to share the same documents with your coworkers and friends. Now you can. With VoodooPad 1.1, you can view, edit, and save to any wiki that supports the 'vpwiki api'. Source:Flying Meat Software
Funny, I've been tinkering with a wiki API along with a few other tinkerers for a year or so now. I wonder if we could get these APIs merged or synched, and give VoodooPad access to a slew of wikiware?
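For the curious, driving a wiki over XML-RPC from Python might look something like the sketch below. The endpoint URL and the getPage/putPage method names are placeholders I've made up for illustration; I don't know yet whether they match what the 'vpwiki api' actually defines.

import xmlrpclib  # Python's standard XML-RPC client library

# Hypothetical endpoint; a real wiki would publish its own URL.
server = xmlrpclib.ServerProxy('http://wiki.example.com/RPC2')

# Fetch a page's source, append to it, and save it back.
text = server.wiki.getPage('FrontPage')
server.wiki.putPage('FrontPage', text + '\n\nEdited via XML-RPC.')

If the various wiki APIs converged on a small core like this, any client-- VoodooPad included-- could talk to any wikiware behind it.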
[ ... 360 words ... ] -
2003 November 16
-
Building the Recipe Web II
Every once in a while, someone gets ideas about crossing recipes and computers. Of course, I love the idea. Two common ideas we hear a lot are 1) to put recipes in XML format and do all sorts of wonderful things and 2) that kitchen appliances should be smart and you should be able to feed them recipes and have your food made for you. They're both great ideas, but invariably, people underestimate the work involved ("But it's just a recipe!") and overestimate the usefulness ("It would be so cool!"). Source:Troy & Gay
Here's a good response from someone who knows what he's talking about when it comes to recipes on the web-- he's one of the contributors to the aforementioned RecipeML format and is part of the team responsible for Recipezaar. While I think that recipes as syndicated microcontent could be a good thing, Troy makes some important points here. [ ... 152 words ... ]
-
2003 November 14
-
Building the Recipe Web?
RecipeML is a format for representing recipes on computer. It is written in the increasingly popular Extensible Markup Language - XML. If you run a recipe web site, or are creating a software program -- on any platform -- that works with recipes, then you should consider using RecipeML for coding your recipes! See the FAQs and the new examples for more info. Source:RecipeML - Format for Online Recipes
So I'm all about this microcontent thing, thinking recently about recipes since reading Marc Canter's post about them. Actually, I've been thinking about them for a couple of years now, since I'd really like to start cooking some decent meals with the web's help. Oh yeah, and I'm a geek, so tinkering with some data would be fun too.
One thing I rarely notice mentioned when ideas like this come up is pre-existing work, like RecipeML or even the non-XML MealMaster format. Both of these have been around for quite a long time, especially so in the case of MealMaster. In fact, anyone wanting to bootstrap a collection of recipes can find a ton (150,000) of MealMaster recipes as well as a smaller archive (10,000) of RecipeML files. Of course, I'm not sure about the copyright situation with any of these, but it's a start anyway.
But the real strength in a recipe web would come from cooking bloggers. Supply them with tools to generate RecipeML, post the recipes on a blog server, and index them in an RSS feed. Then, geeks get to work building the recipe aggregators. Hell, I'm thinking I might even give this a shot. Since I'd really like to play with some RDF concepts, maybe I'll write some adaptors to munge RecipeML and MealMaster into RDF recipe data. Cross that with FOAF and other RDF whackyness, and build an empire of recipe data.
The thing I wonder, though, is why hasn't anyone done this already? And why hasn't anyone really mentioned much about what's out there already, like RecipeML and MealMaster? It seems like the perfect time to add this into the blogosphere. [ ... 1292 words ... ]
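As a teaser for those adaptors, here's a first rough cut at munging RecipeML into crude RDF triples with Python. The element names (title, ing, item) come from my reading of the RecipeML DTD, and the vocabulary URIs are made up for illustration; treat the whole thing as a sketch, not a finished adaptor.

from xml.dom.minidom import parse

NS = 'http://example.com/recipe-vocab#'   # hypothetical RDF vocabulary

def recipe_triples(filename, recipe_uri):
    """Emit N-Triples-style lines for a RecipeML file's title and ingredients."""
    doc = parse(filename)
    lines = []
    # RecipeML keeps the recipe title in head/title; assume simple text content.
    title = doc.getElementsByTagName('title')[0].firstChild.data.strip()
    lines.append('<%s> <%stitle> "%s" .' % (recipe_uri, NS, title))
    # Each ingredient is an ing element with an item child naming the food.
    for ing in doc.getElementsByTagName('ing'):
        item = ing.getElementsByTagName('item')[0].firstChild.data.strip()
        lines.append('<%s> <%singredient> "%s" .' % (recipe_uri, NS, item))
    return lines

for line in recipe_triples('recipe.xml', 'http://example.com/recipes/1'):
    print(line)

From there, a MealMaster adaptor would just be a line-oriented parser feeding the same triples.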
-
2003 November 12
-
The Whuffie Web II
What I believe we are seeing is domain experts seeking each other out. Crossing organizational and philosophical boundaries. Source:Sam Ruby: Whuffie Web
...someone that's G-list globally might be A-list amongst pet owners. Source:Danny Ayers: Whuffie Web
A very, very good point that I'd missed at first thought about the Whuffie Web. There's a matter of scale involved here, where the relative A's through Z's are completely different given your choice of grouping. And where the choice of grouping is around a topic area, the world's a bit of a smaller place and getting your questions answered is likely much easier. Especially if you've built up some Whuffie in that domain by generating some useful answers and knowledge yourself.
Newcomers to a domain of knowledge, with their lesser stockpiles of Whuffie, will hopefully be fortunate enough to find much of what they're looking for chronicled in the archives of the blogs of those who've come before. When they don't, though, it can still be a frustrating experience. But semantic web tech in and of itself doesn't solve the problem where data or knowledge is missing altogether. How could it?
So, although I was a bit dismissive at first thought about what Dave Winer wrote, he nonetheless has a good point. Even if the semantic web were richly populated with data and running in full swing, it would still be missing large swaths of Things People Know. And, well, the thing to use in that case is-- wait for it-- People Who Know Things. The way you can hopefully get to them is by being nice and interesting, then blogging the answers or asking the people answering your query to blog them themselves. Then, hopefully, we have blogging tools which can do the bits of pre-digestion that allow that knowledge to be accessed via semantic web machinery to fill in the gaps.
This all takes me back to when I first encountered Usenet in my Freshman year of college and became instantly enamoured with FAQs. It seemed like there was a FAQ for everything: coffee, anime, meditation, the Baha'i faith, Objectivism, and hedgehogs. It seems mighty naive to me now, but at the time, I so thought that this was the modern knowledge factory. Through the contentious and anal bickerings of discussion threads on Usenet, and the subsequent meticulous maintenance of FAQ files, every trivial bit about everything within the sphere of human concerns would be documented and verified and available for perusal by interested parties. Netiquette demanded that one pore over the FAQs before entering the conversational fray, so the same ground wouldn't be endlessly rehashed. Approval from one's peers in the group came from generating new and novel things to add to the FAQ, and all were happy.
This, of course, summarizes thoughts coming from a Freshman compsci student getting his first relatively unfettered access to the internet, gushing about everything. On the other hand, I have many of the same enthusiasms for the Semantic Web's promises. In a few years, I expect that my enthusiasm will be more even, yet at the same time, I expect some real uses and benefits to stabilize out of it all. Hopefully, it doesn't get obliterated by spam before then, like Usenet, like email, and now (but hopefully not) in-blog discussions. [ ... 554 words ... ]
-
As a child, I would have teased Mark Pilgrim
I see that Mark Pilgrim has posted a picture of himself as a kid, working at an Apple //e. Based on what I wrote this past Summer about being Newly Digital in 1983, I would guess that around the same time I was working on a Commodore 64, and I would have teased him in a relentlessly geeky way about his clearly inferior machine. [ ... 65 words ... ]
-
2003 November 10
-
How about a demo of the Whuffie Web?
Let's do a demo of the Semantic Web, the real one, the one that exists today. Doc Searls has a question about the iQue 3600 hand-held GPS. It is sexy. They say it only works with Windows, but Doc thinks it probably works with Linux too. A couple of thousand really smart people will read this. I'm sure one of them knows the answer. Probably more than one. There's the query. Human intelligence is so under-rated by computer researchers, but when we do our job well, that's what we facilitate. Human minds communicating with other human minds. What could be easier to understand? Source:Scripting News
Well, I certainly wouldn't call this the Semantic Web-- more like the Whuffie Web. See, if we were all A-List bloggers, with our own constellations of readers willing to pitch in to answer a question, we could all make queries like the above. A-List bloggers have the big Whuffie. Most everyone else has much less Whuffie, and thus much less query power. I somehow doubt that the Whuffie Web, if it were to take off in a big way, would work to equal benefit for everyone.
A cousin, the Lazyweb, sometimes serves its petitioners well, but it's a fickle and unpredictable thing indeed. Sometimes you get magic, sometimes you get shrugs. This also links into the Whuffie Web, in that Lazyweb contributors will be more likely to service a request if it comes from a Big Time Blogger. It's all about the Whuffie exchange.
On the other hand, if this Semantic Web thing were to take off, it'd benefit anyone who could lay hands on the connectivity to acquire the data and the CPU power to churn through it. The data itself could come from anyone with the connectivity to provide it and the brain power to create and assemble it from information and knowledge. No underestimation of human intelligence here. If anything, it's an attempt to better respect the exercise of human intelligence, to conserve it, and to make it more available. Were the Semantic Web to take off in a big and easy-to-use way, people could spend more time creating answers and less time answering questions, since the machines would do the job of fielding the questions themselves.
Of course... without the Whuffie, where's the motivation to provide the data? [ ... 587 words ... ]
-
2003 November 03
-
Reviews in RSS feeds
The RVW specification is a module extension to the RSS 2.0 syndication format. RVW is intended to allow machine-readable reviews to be integrated into an RSS feed, thus allowing reviews to be automatically compiled from distributed sources. In other words, you can write book, restaurant, movie, product, etc. reviews inside your own website, while allowing them to be used by Amazon or other review aggregators. Source:Blogware Implements Distributed Reviews Aww, yeah. Bring on the microcontent. Yay, hooray! This is an XML namespace-based extension to RSS 2.0, and for even more flavor, it uses the work of other pre-existing specs, such as ENT, FOAF, and Dublin Core. This wouldn't be hard at all to slip into an RSS 1.0 feed and an RDF database as well. [ ... 126 words ... ]