It’s 2005 now, soon to be 2006. In the four years since Radio 8, there have been lots of aggregators, but honestly, not so many new ideas.
Source: Dave’s Wordpress Blog » Why I’m working on an aggregator

Dave's got it completely right, and I'm eager to see what he's got brewing. I lived a lot of hours within Radio UserLand, so he was certainly on to something. This seems like another good opportunity to rant a bit about my wishes for aggregators...

...a river of news approach is more discardable, sorta like a daily newspaper. Does anyone get itchy if they don’t read every last story in a newspaper? No! You read what you have time for, which is why there’s an editor who decides what the most important story of the day is, and why journalists are trained to write in reverse-pyramid style (the important facts of the story are always at the beginning).
Source: Scobleizer - Microsoft Geek Blogger » Dave Winer working on new RSS aggregator?

I've brought up the notion of the journalist's inverted pyramid before with respect to news aggregators. This is what I think the River of News (i.e. Radio UserLand / newsRiver) is missing, and the 3-Pane Aggregator (i.e. NetNewsWire / FeedDemon) can provide. There is no priority in the River of News, whereas in a 3-Pane Aggregator you can at least sort feeds into folders.

So, in my use of NetNewsWire, I've got a sort of prioritized Swamp of News. I have organized my feeds into a set of folders in prioritized order. I start from the first folder, and work my way down—but if I haven't read everything by the end of the day, I hit "Mark All as Read" with no regrets.

Unfortunately, these articles don't go away automatically, and the place can get a little smelly & stagnant. I've got an idea percolating to write an AppleScript to sweep the stale items out for me, but it's a slight pain. Also unfortunately, I can't quite get NNW's combined outline view to work to my satisfaction, so my view is still subdivided into headline and preview pane chunks, somewhat more like a marshy delta than a rushing river.
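
Lacking that, the rule I'd want such a script to enforce is simple enough. Here's the gist of it sketched in Python rather than AppleScript; the item shape and the two-day cutoff are placeholders for illustration, not NetNewsWire's actual scripting interface:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=2)  # anything older than this has gone stagnant

def sweep(items, now=None):
    """Mark stale unread items as read, the way yesterday's paper hits the bin."""
    now = now or datetime.utcnow()
    for item in items:
        if not item["read"] and now - item["published"] > MAX_AGE:
            item["read"] = True  # no regrets
    return items
```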

What I really want is a River of News view that's got prioritized layers. That is, there should be strata to this River. Tasty foam up toward the top, attention-consuming undertow toward the bottom.

Near the surface, I can sip from all my highest-value feeds—daily comics and amusements, my synthetic popular links feed, server monitor feeds, ego-surfing search feeds, high-signal news feeds.

Further down, I can slip into the top layers of the blogosphere, if I have time. Then, should I happen to get through everything interesting from there, I can start dipping into the high-noise segment of my feeds.

So, some days, I might only have time to sample a few comics and catch the day's hot memes. Other days, I might be able to graze all the way down to my random Feedster search feeds on "Ray Kurzweil" and "Tinderbox". In any case, the river should flow without my urging it on. For each of these strata, I want a River of News user interface, but I want the segmentation provided by folders in a 3-Pane Viewer.
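
To make that a little more concrete, here's a rough sketch in Python of how I imagine the strata fitting together. The folder names, priorities, and item shape are just placeholders, not any particular aggregator's data model:

```python
from operator import itemgetter

# Priority 0 is the tasty foam near the surface; higher numbers sink deeper.
# These folder names and the {'title', 'date'} item shape are made up.
FOLDERS = [
    ("Surface",     ["comics", "popular-links", "server-monitors", "ego-searches"]),
    ("Blogosphere", ["high-signal-blogs"]),
    ("Depths",      ["noisy-blogs", "feedster-searches"]),
]

def stratified_river(items_by_feed):
    """Build the river: one reverse-chronological layer per prioritized folder."""
    river = []
    for priority, (stratum, feeds) in enumerate(FOLDERS):
        layer = [item for feed in feeds for item in items_by_feed.get(feed, [])]
        layer.sort(key=itemgetter("date"), reverse=True)  # newest first within a layer
        river.append((priority, stratum, layer))
    return river  # read from the top, stop whenever the day runs out
```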

Still, though, there's something missing even from a Stratified River of News. Sometimes, there are things I'd really like to catch from the very bottom of the river that I'll never personally have time to find. It'd be nice if this River had some turbulence and mixing between the strata.

But that's a subject for a follow-up rant involving machine learning, filtering, and adaptivity built into the aggregator...

The problem with feed readers is that, although they help us to organize our information feeds, they don't address the root problem of reducing the amount of information we have to take in. For that, we need so much more: aggregation, filtering, deduplication and probably a bit of old-fashioned human editing as well, all delivered with intuitive, drag-and-drop simplicity into applications that help us organize and respond to the information that matters to us.
Source: Death of the RSS reader | Software as services | ZDNet.com

I'll leave it at that, for now.

Archived Comments

  • One idea I had was to simply prepackage RSS in feed-reader form for people. Since so few people use it, or know how to, or will ever bother, one alternative is to simply skip over the learning-how part and give it to them.

    I've done this with regular, big M, media at 180n.com. I added user voting on top stories to make it a bit more interesting, but you see the idea. I call it WebRSS.

  • Your reading method is just like the one I use.

  • Me, I'm quite happy with River-of-News style readers. What I'd really like is if I could mark items as 'boring' so that similar ones (you could use some form of statistical filtering here) would be banished henceforth. Oh, I'd love that, if only so that I could get away from some of the pointless junk that I end up having to scan through on US politics and religion.

  • Yeah, I have Bloglines set up with prioritized folders.

    And there can be quite a backlog in the lower-priority folders. So I wish there were a way to just "show the latest 50 entries across all the feeds in this folder" so I could read what's fresh without having to read or throw away the backlog.

  • I follow a tremendous number of mom blogs with an incredibly low signal to noise ratio. I don't want to miss the few good posts but what a pain to read all that dreck. I could really use what you're talking about. Are you thinking of building it? Building that in Python might be more useful than porting your Hacking RSS and Atom code to PHP...

  • Besides the inverted pyramid within stories, there's also an inverted pyramid of sorts controlling what you see in the newspaper.

    When I worked for my college paper, I loved to "surf" the AP newswire (on our minicomputer terminals!). Around that time, I read "The Media Lab" by Stewart Brand, who pointed out that even a big newspaper prints no more than about 10 percent of the AP stories on their wire.

    Then, maybe 10 percent of what gets into the paper actually catches the reader's eye. That means that 1 percent of the potential information flow gets to the average reader, and generally, it's not the right 1 percent.

    When I worked for CNN.com, we tried to create a custom news product (called, strangely enough, CustomNews, launched around 1999) that let you twiddle a few dials to tune the news that got your attention. You could, for instance, tell the application what sports teams you followed, and set up individualized keywords (I might choose "Apple", "photography", "Honda", "Atlanta", "Tucker, GA"), and stories featuring those keywords would come up in your news.

    CNN had a big advantage: we could rely on metadata included in the newswire. When you're pulling newsfeeds from the web, you have a much richer tapestry, but you don't know much about the threads.

    The River of News approach looks to concentrate on maximizing flow, but what we really need, as you suggest, is a way to find the fish in the River of News.

    I rely on some keyword feeds from Google News, Feedster, and Blogdigger to get a similar end result, but none of them matches up to a human editor. It's getting to the point, for instance, that I know I can skip surfing Mac news, and rely on John Gruber at Daring Fireball to catch almost anything I would be interested in.

  • It'll likely be quite a while before I get around to implementing it myself (and I'm going to, even though I don't know how well it'll work), so I suppose I'll just drop my own idea for a next-gen aggregator here.

    Aggregator + Bayesian Filter

    Each time you read a post, you can mark it as interesting or not. The filter has two buckets, appropriately: the interesting bucket and the uninteresting bucket. Interesting posts should share certain features, usually in the form of buzzwords I would guess, which the filter should be reasonably good at picking out. The interesting-vs-uninteresting sorting would act as a way of making the "foam" float to the top. And then you should be able to feel totally guilt-free on those busy days when you hit "Mark Uninteresting As Read."

  • Bob: Actually, I included an aggregator with feedback buttons to train a Bayesian filter (Reverend) in Chapter 15 of Hacking RSS and Atom. It's a no-frills implementation of the idea with room for improvement, but it might be a starting point (there's a rough sketch of the shape of it just after these comments). The output of the system is another feed, containing the items whose scores rise above a pre-defined threshold.

    Personally, I think Bayesian filtering is interesting, but I've been trying off and on to find something a little more effective...
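
For what it's worth, here's a bare-bones sketch of the feedback loop described in the last two comments, using the Reverend classifier mentioned in my reply. The item fields, the threshold, and the helper functions are made up for illustration; this is not the Chapter 15 code.

```python
from reverend.thomas import Bayes

THRESHOLD = 0.75  # arbitrary cut-off for what makes it into the output feed

guesser = Bayes()

def train(item, interesting):
    """Called when the reader hits the 'interesting' or 'boring' button."""
    pool = "interesting" if interesting else "uninteresting"
    guesser.train(pool, item["title"] + " " + item["summary"])

def score(item):
    """The classifier's guess at how interesting this item is (0.0 to 1.0)."""
    guesses = dict(guesser.guess(item["title"] + " " + item["summary"]))
    return guesses.get("interesting", 0.0)

def output_feed(items):
    """Keep only the items that rise above the threshold, best guesses first."""
    scored = [(score(item), item) for item in items]
    scored = [pair for pair in scored if pair[0] >= THRESHOLD]
    return [item for _, item in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```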