After checking out Bill Humphries' approach to scraping yesterday, I recalled the various things Jon Udell has written about URL-as-command-line and the various places I've seen the W3C XSLT Servlet used in XSLT tinkering. I also remembered that there's an HTML Tidy service offered by the W3C as well.
1. http://www.thenation.com
2. http://cgi.w3.org/cgi-bin/tidy?docAddr=http%3A%2F%2Fwww.thenation.com
3. http://www.w3.org/2000/06/webdata/xslt?xslfile=http%3A%2F%2Fwww.whump.com%2Fdropbox%2Fnationrss%2Fnation.xsl&xmlfile=http%3A%2F%2Fcgi.w3.org%2Fcgi-bin%2Ftidy%3FdocAddr%3Dhttp%253A%252F%252Fwww.thenation.com&transform=Submit
Unfortunately, this doesn't work. In particular, step #2 fails, the Tidy service reporting a failure in processing the original HTML. I imagine, had that worked, the whole process at step #3 would be producing RSS. On my command line, HTML Tidy works fine, so I've been thinking of throwing together my own web interface to that program and seeing if that works.
If it works, this approach, with the addition of a cache at each stage, could allow for what I think is a pretty nifty, all-web-based means of scraping news items from web sites.
What would really be nice for apps like this is a better way to express the URLs-within-URLs without escaping and escaping and escaping and... I'm thinking some very lightweight scripting here, or some LISP-ish expressions, would help.
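Just to make the escaping headache concrete, here's a quick Python sketch of how those nested URLs get built; the service URLs and parameter names are the ones from the example above, and every layer of nesting costs another round of percent-encoding:

```python
# A minimal sketch of the URL-in-URL chaining above, using the W3C Tidy and
# XSLT service URLs and parameter names from the example. Each layer of
# nesting needs its own round of percent-encoding, which is exactly the
# annoyance being described.
from urllib.parse import urlencode

PAGE = "http://www.thenation.com"
TIDY = "http://cgi.w3.org/cgi-bin/tidy"
XSLT = "http://www.w3.org/2000/06/webdata/xslt"
XSL = "http://www.whump.com/dropbox/nationrss/nation.xsl"

# Step 2: ask the Tidy service to clean the page into well-formed markup.
tidy_url = TIDY + "?" + urlencode({"docAddr": PAGE})

# Step 3: feed the tidied page to the XSLT service; the Tidy URL (already
# containing one escaped URL) gets escaped again as a parameter value.
xslt_url = XSLT + "?" + urlencode(
    {"xslfile": XSL, "xmlfile": tidy_url, "transform": "Submit"}
)

print(xslt_url)
```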
[ ... 322 words ... ]
-
Continuing with making it easier for "Big Pubs" to create RSS feeds. I'm assuming that they have a publishing system, but it wasn't built with RSS in mind, but they want on the bandwagon.
Source: More Like This WebLog: Thursday, 21 August 2003
Using curl, tidy, and XSL to scrape content from HTML pages into an RSS feed. This is basically what I do now with a half-baked Java app using JTidy, XPath, and BeanShell. I keep meaning to release it, but it's too embarrassing to share so far. Yet, it's been working well enough to scrape what sites I'm interested in that I haven't been too motivated to tidy it up and tarball it. One thing I like better about Bill Humphries' approach, though, is that it doesn't use Java :)
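For anyone who'd rather run the pipeline locally than lean on the W3C services, here's a rough sketch of the same idea glued together from Python; it assumes tidy and xsltproc are on the PATH, and nation.xsl stands in for whatever stylesheet you're scraping with:

```python
# A rough sketch of the curl + tidy + XSL pipeline run locally instead of
# through the W3C services. Paths and the stylesheet name are placeholders;
# it assumes tidy and xsltproc are installed and on the PATH.
import subprocess
from urllib.request import urlopen

html = urlopen("http://www.thenation.com").read()

# Tidy the page into well-formed XHTML; tidy exits nonzero on warnings,
# so we deliberately don't check its return code.
tidy = subprocess.run(
    ["tidy", "-asxml", "-q", "-utf8"],
    input=html, capture_output=True
)

# Transform the tidied XHTML into RSS with a local copy of the stylesheet.
rss = subprocess.run(
    ["xsltproc", "nation.xsl", "-"],
    input=tidy.stdout, capture_output=True, check=True
)

print(rss.stdout.decode("utf-8"))
```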
[ ... 195 words ... ]
-
Let's face it, email has become unuseable, the latest worm to strike is likely only the tip of the iceberg we're about to collide with. I've never liked the metaphore of an 'inbox', certainly not one that fills up and can't accurately be filtered.
Source: Email is Dead, Long Live Email!
I linked to D. J. Bernstein's Internet Mail 2000 project a little while back, and I think what Adam Curry says here is along a similar path.
Internet Mail 2000 starts off with the assumption, "Mail storage is the sender's responsibility." So, you want to send me an email? Post it on your server and tell me to come & get it. When I get the notification, I'll then decide whether or not I want to bother. There are a lot of details to fill in here, such as secure posting and retrieval, trust and identity, notification mechanisms. But, it certainly would seem to balance out the equation a bit.
How to do it, though, so that things are still at least as simple to use as existing email, such as it is?
[ ... 873 words ... ]
-
In case it had been an annoyance to anyone, I've finally gotten around to adding a "Remember my personal info" cookie to my comment forms. Let me know if it breaks. Otherwise, carry on!
[ ... 35 words ... ]
-
Despite reading an entry by Srijith
discussing Bayes-based classification as unsuitable
for use in news aggregators, I tied SpamBayes
into my homebrew news aggregator
and have been trying it out this week. I know I've been talking about it
for awhile, but procrastination and being busy all round kept me from getting
to it. Funny thing is, when I finally got a chance to really check things out,
the integration was a snap. I'd anticipated a bit of work, but was pleasantly
surprised. I doubt that any other aggregator written in
Python would have a hard time with it.
If, that is, anyone else wants to do it. I already knew it wasn't
magic pixy dust
but I figured it might be worth a try. I will be eating my dogfood
for awhile with this, but I'm thinking already that what's good for spam
might not be so good for news aggregators.
Srijith's post
mentions some snags in ignoring some of the semantics of a news item,
such as whether a word appears in the item's title or information about
the item's source. I don't think that this completely
applies to how I'm doing classification, since SpamBayes appears to
differentiate between words found in email headers and the body itself.
When I feed an item to SpamBayes for training and scoring, I represent
it as something like an email message, with headers like date, subject,
from, and an "X-Link" header for the link. However, even with this,
I think Srijith's got a point when he writes that this method will miss
a lot of available clues for classification.
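For the curious, the representation is about this simple; a minimal sketch using Python's email module, with the actual SpamBayes call reduced to a hypothetical classifier object, since the wiring will vary with however you've got it set up:

```python
# A rough sketch of representing a feed item as an email-like message so a
# spam filter can see header words and body words separately. The classifier
# object here is hypothetical; substitute whatever SpamBayes-style interface
# you have wired up for training and scoring.
from email.message import Message

def item_to_message(item):
    msg = Message()
    msg["Date"] = item.get("date", "")
    msg["Subject"] = item.get("title", "")
    msg["From"] = item.get("source", "")
    msg["X-Link"] = item.get("link", "")
    msg.set_payload(item.get("description", ""))
    return msg

def score_item(classifier, item):
    # Returns a spam-style probability, reinterpreted here as "interest".
    return classifier.score(item_to_message(item).as_string())
```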
Unlike Srijith's examples, though, I'm not trying to train my
aggregator to sift entries into any specific categories. So far, I've
been trying to get it to discriminate between what I really want to
read, and what I'm not so interested in. So, I figured that something
which can learn the difference between spam and normal email could help.
But, although it's early, I'm noticing a few things about the results and
I've had a few things occur to me.
See, in the case of ham vs spam, I really want all the ham and none of
the spam. A method to differentiate between these two should be
optimized toward one answer or the other. SpamBayes offers "I don't
know" as a third answer, but it's not geared toward anything else
in-between. However, in measuring something like "interest",
in-between answers are useful. I want all of the interesting stuff,
some of the sort-of interesting stuff, and a little of the rest.
This is also a problem for me in deciding to what I
should give a thumbs up and what gets the thumbs down. Even though
I've subscribed to a little over 300 feeds, every item from each of
them is somewhat interesting to me. I wouldn't have subscribed to the
feed if there wasn't anything of interest there, so I've already
biased the content of what I receive. Some items are more interesting
than others, but the difference between them is nowhere near the
difference of wanted ham vs unsolicited spam. So, I find myself
giving the nod to lots of items, but only turning down a few.
SpamBayes would like equal examples of both, if possible.
I'll still be playing with this for awhile, but I need to look
around at other machine learning tech. I'm just hacking around,
but the important thing is to try to understand the algorithms
better and know how they work and why. Bayes is in vogue right now,
but as Mark Pilgrim intimated, it's not magic. It's just "advanced" :)
In the immortal words of Mark Jason Dominus: "You can't just make shit
up and expect the computer to know what you mean, retardo!"
[ ... 731 words ... ]
-
16:06:23 [neo85] DO I NEED TO GO TO A SPECIFIC FOLDER TO LOAD
THE HTMAL?
...
16:07:37 [Ash] neo85: you may need to clear out the old HTMAL
files first with DELTREE C:\ /y
16:08:10 [Ash] Anyway, then type 'LOAD HTMAL'
16:09:11 [Ash] neo85: Did that work?
16:09:30 [neo85] I PUT IN /Y?
16:09:36 [Ash] Yes.
16:10:02 [neo85] THATS ALL?
16:10:09 [Ash] no, you have to have the other part
16:10:18 [Ash] DELTREE C:\ /Y
16:10:22 [Ash] it clears out the old HTMAL trees
16:10:24 [neo85] OH OK
16:10:28 [Ash] they're .TREE files
16:10:59 [neo85] IT SAYS DELETE SUHDLOG.DAT
16:11:37 [neo85] DETLOG.TXT?
16:11:47 [Ash] yeah, just delete all the trees
...
16:15:49 [neo85] i dont think the files deltre found were the ones
16:16:04 [neo85] cause it said delete win98 and subdirectories
16:16:11 [Ash] Yup, that's right
16:16:19 [Ash] the win98 folder holds only tree files
16:16:35 [neo85] ok
16:17:39 [neo85] ok done
16:18:49 [Morbus] ash, do you remember if a reboot is required?
16:18:58 [Morbus] i keep forgetting, and all my notes are on my
other machine.
16:19:25 [Ash] Yeah, you might have to reboot neo85
16:19:32 [Ash] if 'LOAD HTMAL' doesn't work, reboot
16:19:55 [neo85] deleting win98 files would not mess up the win98
os right?
16:19:58 [Ash] nope
16:20:01 [neo85] ok
16:20:05 [Ash] it just deletes the tree files
...
16:26:43 [Morbus] neo, having any luck with the LOAD command?
16:45:09 [neo85] *** neo85 has quit (Read error: 110 (Connection
timed out))
Source: IRC log of swhack on 2002-04-05
Heh, heh.
[ ... 265 words ... ]
-
23:58:35 [Ash] MorbusIff: Got any tree files?
23:58:39 [MorbusIff] heh
23:58:45 [MorbusIff] uh, tree files?
23:58:48 [MorbusIff] what are tree files?
...
23:59:39 [sbp] yes, you need to run DELTREE to get rid of them
Source: IRC log of swhack on 2002-04-23
Heh, heh.
[ ... 44 words ... ]
-
Six remote-controlled surveillance cameras have been set up to transmit live video images of crowd and traffic conditions to handheld and laptop computers carried by cops.
Source: freep.com: Police try spy cameras for better cruise control
This has privacy advocates around here worried. I'm thinking it's a tempest in a teacup, but reading a quote like this is a bit unfortunate:
"We can zoom in tight enough to read someone's watch," said Jonathan Hollander, chief technology officer for GigaTrans, which designed the system for the use of the Oakland County Sheriff's Department and local police departments along the route.
It also doesn't help that a Federal investigation into the Detroit Police found that they were "the most troubled force they have seen in 10 years of scrutinizing police nationwide". But, as a futurist geek, what I really want to know, having read David Brin's The Transparent Society, is when I get to look for traffic jams up ahead using my own wireless communicator.
[ ... 165 words ... ]
-
Voting is open. OpenPoll Names were vetted until 31 July 2003 while putting out an all-blogs call to vote. Please Blog the Vote.
Source: NameFinalVote - Atom Wiki
Is this final? Gawd, I hope so. I'm stringing too many slash-inated names together these days. :)
I voted for Feedcast, since it seems to be the least "clever" name yet identifies the concept. It could be used in corp-speak and geek-speak without too much wincing. And it's not an acronym. All good things, in my short span of experience.
[ ... 107 words ... ]
-
And the next thing: at a very specific level, mini-ITX motherboards and cases are The Way To Go. Tiny, cheap, fanless PCs with trailing-edge processors -- only 1GHz -- are nevertheless a really amazingly cool idea, especially when you start thinking in terms of turning them into personal video recorders (running things like FreeVo) or in-car GPS navigation systems. Or Beowulf clusters.
Source: Charlie's Diary (via Boing Boing)
Although I currently am on the low end of disposable income, I'm keeping my eye on tiny cases, motherboards, and just-slightly-slower-than-insanity CPUs for projects just such as these. I want a PVR, a few file servers, maybe a homebrew game console. I also wouldn't mind buying a pile of OpenBricks for general living-in-the-future purposes around the house, and to experiment with clustering and networking. It would also be neat to learn some hardware hacking again and build some clever devices like this CD-changing robot.
[ ... 152 words ... ]
-
I joined the Apache project for the software. I stayed for the community. Likewise Perl. The software is interesting, but the people are more interesting. So now that I'm really not even writing much Perl, I'm still involved with the community, to some degree, because they are cool people.
Source: DrBacchus' Journal: Software and community
I've been working with Perl for just about 10 years now, and though I've been a bit of a stranger lately, I used to be a regular on #perl on several IRC networks. And, when companies I worked for paid for travel as freely as for paper clips, I made rounds at a few conferences. I was lucky enough to meet a few other #perl regulars. I doubt most of them remember me since they're a fairly close-knit group, and I'd only made the one appearance, despite constantly swearing I'd make it to a YAPC at some point. But I always thought it was cool as hell to actually have had a beer at the same table in Boston with authors of some of my favorite O'Reilly Perl books.
But, I got busy, stopped hanging out in IRC so much, and also decided that I needed to expand my horizons and stop being so narrowly focused on one language. I got into Java, Python, shell scripting, and PHP. I started this weblog, and I tried to purposefully keep away from Perl. Of course, I can't stay away, because Perl code comes out of my fingertips as naturally as breathing when a problem presents itself for solution.
And then there's community. I've yet to find a Java community as quirky and entertaining as that surrounding Perl. Thus, Java bores me. I use it, but it's strictly business, mostly.
When what you're doing is strictly business, I guess that's desirable. But when you eat, sleep, and breathe this stuff, having a group of people constantly doing clever things and being odd certainly makes it more rewarding. It's predictability versus creativity. To get the job done, you want solid and dependable tools. To have fun, you want some challenge and unexpected results.
To me, Perl and its community offer both. I think Python might, also, but I'm not as familiar there. Java and other technologies are mostly business. Maybe this also crosses over into the difference between IT people and CS people, and whether you're here to work or here to play and get paid.
Hmm.
[ ... 566 words ... ]
-
Wow. It appears that this is the Blog of the day at
GeniusEngineer.com. I've never visited the
site before, but I'm flattered by being chosen just the same.
[ ... 29 words ... ]
-
Oh and while I'm writing about watching video files on my TV, I've
been thinking of getting myself a tape drive. Sure, I'll use it to
actually, finally, back up all the important things I have littered
around my handful of machines. Having established a backup routine
at work, I've gotten to thinking.
How bad an idea would it be to use a tape drive to store TV shows?
I've been capturing them with the VCD format, which gives me around
600MB per hour of show. This fills up my drive pretty quickly,
obviously. I know I really should take some time to revisit things and
try another video codec, since originally I used VCD because I burned
everything to CD for my DVD player, but now I'm streaming files to my
Powerbook over the network which gives me a lot more flexibility in
recording options.
However, not burning to CD leaves me with a hard drive full of video
that I'm hesitant to delete yet have no good reason to need laying around on a
high speed hard drive. But, burning all that to a spindle-worth
of blank CDs without a Lego Mindstorms based CD-changer robot leaves
me shuddering. I recall reading about a DJ-bot
that did this for playing music, and I notice via Slashdot that someone with a
decent woodshop has provided plans
for such a beast for a CD writer. But I can't afford the Legos at
the present moment, and I'll only end up hurting myself working with
power tools.
Then, I remember that high capacity tapes make backing up lots of data
easy at work. I know that the tape trade off is capacity and price
for speed of access. But, if all I need to do is skip from one file
to the next and only need relatively low bandwidth to stream the file
from the tape, this sounds like a great way to archive video.
Depending on the video compression, maybe I could fit a whole season
or two of a show onto a single tape.
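Some rough numbers, just to sanity-check that guess (the 20GB tape capacity is an assumption for illustration; adjust for whatever drive looks affordable):

```python
# Back-of-the-envelope math for how much VCD-rate video fits on one tape.
# The 20 GB tape capacity is an assumption for illustration; adjust it for
# whatever drive you actually buy.
TAPE_GB = 20
MB_PER_HOUR = 600          # roughly what VCD-rate capture costs me
EPISODE_MINUTES = 45       # a typical hour-long show minus commercials

hours_per_tape = TAPE_GB * 1024 / MB_PER_HOUR
episodes_per_tape = hours_per_tape * 60 / EPISODE_MINUTES

print("%.0f hours, or about %.0f episodes per tape"
      % (hours_per_tape, episodes_per_tape))
# ~34 hours, or about 45 episodes -- roughly two 22-episode seasons.
```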
Seems like a good idea, though pricey. But maybe the price of the drive and the cost
of media would offset the pain in the ass of any other method. I probably
should look more into the price and pain of a DVD burner, but the idea
of more disc-like things laying around worries me.
What do you think?
[ ... 995 words ... ]
-
I don't gush about it very often, but I love my 12" Powerbook. Since
I got it this past March, it has been my primary machine for both work
and home. And other than wishing that there was a 1GB memory module
out for it and grumbling that I've lost one of the rubber footies on
the bottom, I've been extremely happy with it.
And just last night, I was reminded of yet another feature that's made me
glad I got it: the included
Composite/SVHS video adapter.
I'd had the AV cable
for my iBook before it, but the use of the adapter on the Powerbook
has a very important difference: dual display mode.
See, when I connected my iBook up to my home entertainment complex,
I got reduced resolution back on the LCD, and anything I did that went
full screen (i.e. playing a DVD or a movie file) took over the machine.
But with the Powerbook, its connection to my television is just a second
desktop, not much different than the second monitor I use at work.
So, while I'm at home on the futon with my girlfriend, I often stream
videos off a PC in the next room that's been recording TV shows for
me, and present them on this second desktop. Most apps I use to view
movies, such as Quicktime Pro
and VideoLAN,
allow me to pick a monitor for fullscreen mode. Meanwhile, the LCD on the
Powerbook is still available for other work while we watch.
It's just a little thing, but it's a thing that lets me get much of the
benefit of a dedicated
Home Theater PC without
having to buy or build a box that looks nice alongside all our
video game consoles.
While I'd still like to take on the project someday, my Powerbook does
just fine for the display and audio end of things, while an aging Windows PC
in the next room snags a few TV shows
for me.
Of course, if all you want is an HTPC, the Powerbook is expensive
overkill. But, if you're shopping for a laptop and want some fringe
benefits, I think this is definitely one that doesn't get much attention.
[ ... 459 words ... ]
-
Macromedia Central provides a safe environment for developers to deploy occasionally-connected applications. Using Macromedia Central, developers can create an application and give it away for free. Or they can sell it to end users using the Try/Buy framework that is part of Central.
Source: Macromedia - DevNet : Macromedia Central: How it Works
I've had a bit of enthusiasm for Flash lately. So, this Macromedia Central thing that's been on its way for a little while now looks very interesting.
But... What differentiates it from every other "widgets on your desktop" or "widgets in a little box" technology that's come before? Remember DoDots? CNN called them the web without a browser, and there was general gushing here and there about it. At one point, I was close to being drafted to write a few promotional games using their SDK, and it seemed nifty enough. No clients bit, though. And all that remains of the company on the web are ghost pages and ex-employee photo albums and reunions. Oh, and I still have a mousepad and a clipboard from the dev kit.
And then there's Java Web Start and Konfabulator. Have any of these sorts of things really taken off? I mean, they all have their share of nifty things, but has this idea of a centralized corral of mini-apps ever paid off? Flash is yet another cool technology with which to develop these things, but will Central take off?
I'm not trying to whiz on anyone's Cheerios, since I honestly think these things are nifty, but then again I like widgets with fun buttons to push.
Update: Hmm... Mike Chambers is inviting questions about Central. Maybe I should wander over there and ask.
[ ... 536 words ... ]
-
Well, I forgot to mention it, but I emailed Google awhile back about their
rejecting my site for
Google AdSense. They got back to me and let me in the club, which is
demonstrated by the skyscraper ad to the right. So far, I seem to be on the
road to earning free hosting for the month, if my clickthroughs keep up,
which is more than I'd hoped for. I only hope that if everyone's seeing this
kind of performance, that Google makes some money at it and doesn't have to
cancel the program.
My only complaint now is this: Can I get some ad rotation? I'm not sure what
you're seeing, but I've been looking at the same 4 ads for backup solutions
since I first plopped the code in. At first I thought it was neat, since
I'd been talking about backups at the time and the ads seemed an intelligent
complement. But that story's long since scrolled off the page, and nothing
else interesting has come up since. Maybe this is by design, but I expect
my clickthroughs to stop pretty soon.
Now, I have no ambitions to get rich quick via Google. If they happen to
pay out enough to cover my hosting costs, I'm abso-frickin-lutely
ecstatic. So, I won't be spending
much time obsessing over search terms and "borrowing" public domain works to
boost my AdSense revenue,
but it seems like the service could use a little freshening.
Am I missing something?
Update: Heh, funny thing. No sooner do I post this and visit the site to
check out how things look, the Google ad appears to have rotated.
Is someone watching? Heh, heh.
[ ... 413 words ... ]
-
What I've discovered, though, is that my desktop PC, for standard development tasks, is astoundingly faster than my work laptop for just about everything.
Source: rc3.org | Developing on my game box
Personally, though I really do want a new PowerMac G5, I can't see myself investing much in desk-anchored computing anymore. Not since I got my first laptop, and later my first wireless ethernet card. What I can see myself doing, though, is maybe investing a little bit into a new PC for games, and maybe for a box with lots of storage and CPU power to stick in a closet somewhere and use via network.
Sure, a dirt cheap box tied to the spot via a dozen cables should be able to smoke my lightweight personal computing device... but what if I use that stationary box from remote with that lightweight device? It's client/server all over again, but this time I own both the server and the client.
See... that's where I really think it's at. :)
[ ... 1268 words ... ]
-
Amazon.com Syndicated Content is delivered in RSS format. RSS is a standard format (in XML) for delivering content that changes on a regular basis. Content is delivered in small chunks, generally a synopsis, preview, or headline. Selected categories, subcategories and search results in Amazon.com stores now have RSS feeds associated with them, delivering a headline-view of the top 10 bestsellers in that category or set of search results.
Source: Amazon.com Syndicated Content (via Silicon Valley - Dan Gillmor's eJournal - Amazon Does RSS, Officially)
This is very cool, though the feeds are a little hard to find at first. Don't look for the orange XML or RSS buttons -- use RSS autodiscovery to find the feed associated with a search. (In other words, the URL will be in a link tag in the header of a search results page.)
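If you've never poked at autodiscovery, it amounts to pulling the href out of a link tag like that; a quick sketch, with the page URL left as a placeholder for whatever search results page you're on:

```python
# A quick sketch of RSS autodiscovery: fetch a page and pull the feed URL out
# of its <link rel="alternate" type="application/rss+xml" ...> tag. The page
# URL below is a placeholder; point it at whatever results page you're on.
from html.parser import HTMLParser
from urllib.request import urlopen

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and (a.get("rel") or "").lower() == "alternate"
                and "rss" in (a.get("type") or "")):
            self.feeds.append(a.get("href"))

page_url = "http://www.amazon.com/"  # substitute a search results page URL
finder = FeedLinkFinder()
finder.feed(urlopen(page_url).read().decode("utf-8", "replace"))
print(finder.feeds)
```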
And though I don't really want to stir up trouble, I find it strange that Amazon uses RSS v0.91, and that they link to Netscape (an all but defunct entity) and not a spec hosted by UserLand or Harvard.
Anyway, at least they're providing feeds in some format!
[ ... 186 words ... ]
-
Privacy issues aside (for the moment), there is a request header called "FROM", RFC 2616 s14.22 describes it.
Now, it does say it should, if given, contain an Internet e-mail address for the human user who controls the requesting user agent. SHOULD isn't MUST though, so what putting the user's homepage there?
It also says "In particular, robot agents SHOULD include this header so that the person responsible for running the robot can be contacted if problems occur on the receiving end."
Source: eric scheid: Atom aggregator behavior (HTTP level) [dive into mark]
Ask a stupid question, get a smart answer.
Last year, I thought it was a good idea to abuse referers in order to leave footprints behind when I consume RSS feeds. Then, this past January, the abuse in the practice was revealed and using the User-Agent header was recommended for this.
So, just for the hell of it, I asked about the User-Agent header for use in this context over at Mark's place to see what responses I'd get. The one that seemed most informative was from Eric Scheid as quoted above, referring me to the HTTP/1.1 spec, section 14.22.
As per Eric's comment and the spec, the value of a "From" header SHOULD be an email address, but I would think that using a URL wouldn't be too much of an abuse of this header. It seems like a good idea to stick either the URL to a blog here or, even better, the URL to your FOAF file.
I'd really like to see this get built into aggregators as an option, though not turned on by default for privacy's sake. I like the idea of leaving my name or a trail back to me at the doorstep of people whose feeds I'm reading, and I like the idea of standardizing the practice as cleanly as possible. Using the "From" header seems to be the best option so far, versus Referer abuse and User-Agent overloading.
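In aggregator code it's about one line; a minimal sketch, with the feed URL and FOAF address obviously being placeholders for whatever trail you want to leave:

```python
# A minimal sketch of sending a From header (and a descriptive User-Agent)
# when polling a feed. The feed URL and FOAF address here are placeholders.
from urllib.request import Request, urlopen

req = Request(
    "http://example.com/index.rss",
    headers={
        "User-Agent": "HomebrewAggregator/0.1",
        "From": "http://example.com/foaf.rdf",  # or a plain mailto address
    },
)
feed_xml = urlopen(req).read()
```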
Man. One of these days, I really have to get around to studying those specs in full, rather than just sporadically referencing them. Thank goodness for smart guys like Mark and Eric (among others) who actually take the time to read these things and try to communicate the gist to the rest of us busy developers!
[ ... 701 words ... ]
-
For a change, I feel awake today.
It's ironic that much of my writing in journals and much of my thought goes toward the topic of consciousness and thought itself. I've been studying and contemplating issues of cognition, awareness, and self for as long as I can remember. I wolfed down self-help books and pop-psych in high school, then went on to get a minor in psychology proper in college.
I don't use drugs to tinker with my consciousness (other than caffeine, that is), but I've tried various more controlled forms of meditation, visualization, and introspection. I flirted with Dianetics & Scientology (but ran far, far away), employed psycho-cybernetics, got motivated by Anthony Robbins, twisted my inner eye around to see itself with the help of Douglas R. Hofstadter, studied concept-formation and knowledge ala Ayn Rand, considered the multiplicity of self with Marvin Minsky, and explored dreams and archetypes with C. G. Jung. With the help of each influence, I've been stitching together a rough manual to my mind. Just like I've hacked around with computing devices, I've worked to understand and tweak my own mentality.
Oh, but I probably need to explain the irony: For the past few months-- likely the past few years-- I've been suffering from sleep apnea. Like my father, and his father before him, I've developed a horrible snore and have started fighting a losing struggle with sleepiness. My dad is known for falling asleep constantly: in the midst of conversation, while eating, while getting his haircut, while using a computer. And lately, those have all been things that I've begun to "enjoy". Especially bad has been my tendency to fall asleep at work, and especially dangerous has been me falling asleep whenever I have to drive for more than 10 minutes.
This condition seems to have come upon me so gradually that it's only been recently, with the scare of losing my new job, and missadroit's persistent persuasion, that I finally acknowledged the problem and sought treatment. So, I managed to get an appointment at the University of Michigan Sleep Disorders Clinic, where one evening at the beginning of the month I was covered with wires and sent to bed. About a week later, they called me back to inform me that I had very severe sleep apnea, and was barely getting any sleep at all in a night with about 2-3 breathless episodes per hour.
Within a few days of that news-- yesterday, in fact-- I was given a new toy: The REMstar Pro CPAP System. After one night with the thing, my snoring is gone except for the occasional snort as I become accustomed to a breathing mask, and I feel quite a bit more rested than I have in recent memory. I still feel a bit tired, but that's to be expected: I've got many nights to catch up for.
I'd gone from being able to track "seven, plus or minus two" things at once down to barely one thing at a time, and that was if I didn't doze off in the middle of the task and have to rebuild the thought process when I snapped back awake. The irony of it all is similar to something I was reminded of last week: As it turns out, software needs hardware to run. So, for all my introspective experimentation on myself, and all my attention to consciousness, I've been feeling it slipping away from me lately. As a "software" guy, I can't do much with my "hardware".
So, I'm very happy that I finally-- after much denial and procrastination by me, and after much encouragement and tolerance by missadroit-- called and started the process that ended up with me sleeping through the night again.
And now maybe I can close my eyes and meditate without losing consciousness again.
Now maybe I can be myself again.
(P.S.: Thank you, missadroit. I love you and don't know what I'd do without you.)
[ ... 1117 words ... ]
-
I have my powerbook trackpad set to accept taps as mouse clicks, which makes the behavior identical to the button.
What I'd like to do is set one of them to behave as a second mouse button. That sounds like it should be possible. All of the usual suspects have thus far failed me. I'm surprised there isn't something on versiontracker -- it seems like it would be a popular hack.
Source: osxhack: new powerbook - two button mouse from trackpad?
Sounds like a great idea to me. Has it been done? Or has someone realized that it's actually a really bad idea for some reason I haven't thought of?
[ ... 241 words ... ]
-
Yesterday, I downloaded Mark Pilgrim's Python implementation of Textile and integrated it into the new hackish blog posting feature I added to my aggregator, and it works great. Now, I want Textile in my wiki. I google for it and don't find much on wikis and Textile together. I wonder how this could be most easily done? In TWiki? MoinMoin? KWiki?
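The integration itself should be tiny; the sketch below assumes a Python Textile port exposing a textile() function, and the render hook is made up, standing in for wherever a given wiki engine turns raw page text into HTML:

```python
# A tiny sketch of dropping Textile into a wiki's rendering step. It assumes
# a Python Textile port exposing textile.textile(text) -> HTML; the
# render_page() hook is hypothetical, standing in for wherever your wiki
# engine turns raw page text into markup.
import textile

def render_page(raw_text):
    # Let Textile handle the formatting; the wiki would still need its own
    # pass afterward for WikiWords and [bracketed] links.
    return textile.textile(raw_text)

if __name__ == "__main__":
    print(render_page("h1. Hello\n\nThis is *Textile* in a wiki page."))
```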
[ ... 175 words ... ]
-
Words to strike terror into the heart of the home's designated computer geek...
Source: Caveat Lector: Iulii 13, 2003 ā Iulii 19, 2003 Archives
This doesn't happen to me at home, but it strikes terror into me whether I'm at work, visiting relatives, or mistaken for an employee at some computer store. "It doesn't work" always seems to be the introduction to a great, murky mystery, which usually leads me to wishing I was either a mind reader or had a cluebat on me. :)
[ ... 400 words ... ]
-
RAM and motherboards are the least likely suspect in kernel panics, but if you just have a new system, and or just installed new memory and you get a kernel panic, that's the most likely place to start looking. ... use the Hardware Test CD ...
Source: Mac OS X Kernel Panic FAQ
In my current job as jack-of-all-trades tech guy, I have to deal with everything. Lately, it's been a 15" PowerBook that's been having random crashes and happily corrupting its hard drive. Being a software guy, I run every program I can think of: Disk First Aid, DiskWarrior, fsck. Reinstalled Photoshop. Then, tried wiping the machine and installing OS X, which was fine until the installer itself crashed. Kernel panic after kernel panic. At one point, I considered consulting Eliza.
Turns out it was the memory. We found this out by finally running the one bit of software that, as a software-focused guy, I hadn't even conceived of: the Hardware Diagnostics CD.
[ ... 165 words ... ]
-
You might notice a sudden rise in link-quote-comment entries around
here, depending on how well this works for my lazy self. I just threw
together a quick bookmarklet and aggregator-integrated posting hack
for myself, hoping it will be as easy as BookmarkBlogger for noting
down URLs of interest throughout the day. Nothing revolutionary,
just slightly new for me in daily use.
But, I was starting to wish that I could provide a little more info
around my posted links, such as why I was sharing the link and
from where I found it. So, I'll be trying a slightly different
approach. Let me know if it gets annoying.
[ ... 109 words ... ]
-
"feed" is not a very unique name, and if another format were to come
along with the same top level element we would not be able to write a
format driver for it. Our architecture keys off the top-level
element. I suggest changing the top-level element to indicate the
format, and also add a version number so that aggregators can have an
idea of what spec the content provider is using. I imagine Radio is
not the only aggregator that would like to key off the name of the
top-level element.
Source:Radio UserLand: Radio gets some kind of Echo support
Nope, "feed" seems like a pretty poor choice as a name if the goal was
uniqueness in the tag name itself. But, since we have XML namespaces
to ensure uniqueness between vocabularies, we can instead focus on a
clear and simple name that only needs to be unique within the
vocabulary. And as for versioning, why not consider different
versions of a vocabulary to be entirely different vocabularies,
each with its own namespace?
I did some quick Googling and found the following:
... documents, containing multiple markup vocabularies, pose problems
of recognition and collision. Software modules need to be able to
recognize the tags and attributes which they are designed to process,
even in the face of "collisions" occurring when markup intended for
some other software package uses the same element type or attribute
name.
These considerations require that document constructs should have
universal names, whose scope extends beyond their containing document.
This specification describes a mechanism, XML namespaces, which
accomplishes this.
Source:Namespaces in XML
One of the core features of XML is its ability to deal with changes in
the rules for data (hence the extensible in its name -- Extensible
Markup Language). As changes are made to XML vocabularies, the
creation of multiple versions is inevitable. This makes it necessary
to mark the versions clearly, for human and machine information. The
clear marking of versions can be used for driving validation, or for
branch processing according to the requirements of each version.
You can mark the version of an XML vocabulary in many ways. This
discussion focuses on the use of XML namespaces for marking versions.
Source:Tip: Namespaces and versioning
I haven't looked into Radio UserLand's feed-handling architecture,
but how difficult would it be to use the namespace and tag together
as the key, rather than the tag alone?
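For what it's worth, with ElementTree-style parsing the namespace rides along in the tag name, so keying a format driver off both is just a dictionary lookup. A rough sketch, with the handlers and the "feed" namespace URI as placeholders:

```python
# A rough sketch of dispatching on (namespace, local name) of a feed's root
# element instead of the bare tag name. ElementTree spells a namespaced tag
# as "{namespace-uri}localname", so the pair is already one key. The
# namespace URI and handler functions below are placeholders.
import xml.etree.ElementTree as ET

def handle_rss(root):
    print("looks like RSS version", root.get("version"))

def handle_feed(root):
    print("looks like a 'feed' format, root element", root.tag)

HANDLERS = {
    "rss": handle_rss,                                # RSS 0.9x / 2.0 has no namespace
    "{http://example.com/necho#}feed": handle_feed,   # placeholder namespace URI
}

def dispatch(xml_text):
    root = ET.fromstring(xml_text)
    handler = HANDLERS.get(root.tag)
    if handler is None:
        raise ValueError("no format driver for root element %r" % root.tag)
    handler(root)
```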
[ ... 484 words ... ]
-
This is an ultra-liberal feed parser, suitable for reading RSS and
Pie feeds as produced by weblogs, news sites, wikis, and many other
types of sites.
Source:Dive Into Mark: Feed Parser
As I guessed and
as Mark replied,
his ultra-liberal feed parser now
supports initial Pie (nee nEcho (nee Echo (nee Pie))) feeds.
But you know what else? He left in support for RSS. My news
aggregator remains fully able to read all my feeds even after dropping in his
new code. No breakage here.
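Dropping it in really is about this simple; a minimal sketch of how my aggregator calls it, though the exact result keys have shifted between releases of the parser, so treat them as illustrative:

```python
# A minimal sketch of consuming a feed through the ultra-liberal parser,
# regardless of whether the feed turns out to be RSS or Pie/Echo. The result
# keys shown here have varied between releases, so treat them as illustrative
# rather than gospel.
import feedparser

result = feedparser.parse("http://www.decafbad.com/index.rss")

for entry in result.entries:
    title = entry.get("title", "(untitled)")
    link = entry.get("link", "")
    print("%s\n  %s" % (title, link))
```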
[ ... 132 words ... ]
-
Okay, so at my new job I'm the Guy if it has a transistor in it. I'm
developer, sysadmin, and hardware jockey all in one. This is fun
to a certain extent, since it tests pretty much everything I know from
A through Z. And so far, I'm doing okay. Every now and then, though,
I get a bit stumped.
My most recent adventure involves developing a backup routine for the
office. I just got tape backup working on a Linux box for a big
Samba-shared directory that we all work out of. I'm currently winging
it with star and cpio in cron-scheduled scripts that manage a
6-tape rotation for me.
Full backups on alternating tapes on Fridays,
with incrementals inbetween on tapes labeled by the day. I even have
the server eject the tape and IM me a few times until I go change to
the day's tape. Tested recovery, and though it could be smoother, it
is at least possible at the moment. I figure this is pretty good
for my first personal encounter with managing serious backup. I plan
to keep researching and to upgrade software at some point soon.
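For the curious, the rotation logic isn't much more than this; a stripped-down sketch of the nightly job, with the tape device, source directory, and the IM nag left as placeholders for however your setup looks:

```python
# A stripped-down sketch of the nightly tape job: full dumps on Fridays onto
# alternating "full" tapes, incrementals the rest of the week onto a tape
# labeled by weekday, then eject and nag someone to swap it. The device path,
# source directory, and notification step are placeholders.
import datetime
import subprocess

TAPE_DEV = "/dev/nst0"          # placeholder tape device
SOURCE = "/srv/samba/shared"    # placeholder shared directory

def todays_tape():
    today = datetime.date.today()
    if today.weekday() == 4:    # Friday: full backup, alternating weeks
        return "full-%d" % (today.isocalendar()[1] % 2)
    return "incr-%s" % today.strftime("%a").lower()

def run_backup(full=False):
    find_cmd = ["find", SOURCE]
    if not full:
        find_cmd += ["-mtime", "-1"]   # only files changed in the last day
    find_proc = subprocess.Popen(find_cmd, stdout=subprocess.PIPE)
    with open(TAPE_DEV, "wb") as tape:
        subprocess.run(["cpio", "-o", "-H", "newc"],
                       stdin=find_proc.stdout, stdout=tape, check=True)
    find_proc.wait()
    # Rewind and eject so nobody overwrites tonight's tape tomorrow.
    subprocess.run(["mt", "-f", TAPE_DEV, "rewoffl"], check=True)
    print("Backup done; please load tape:", todays_tape())

if __name__ == "__main__":
    run_backup(full=(datetime.date.today().weekday() == 4))
```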
So, now my boss asks me: "Hey, can you backup this other folder for me?
I don't want to share it, though, and I don't want you to be able to
read the files." This folder contains some important yet sensitive
things like salary information and other things to which I have no
business having access.
My stumper then, is this: How do I grab (or cause to be uploaded) a
folder of files for backup, say as large as 2GB, from a WinXP machine,
without having any access myself to read the file contents. I'll be
able to install whatever I need on the WinXP machine, but the idea is
that, when the bits leave that machine for the Linux backup server,
there should be no way for me to read their contents. But, I must be
able to usefully backup and, in conjunction with the owner of the
files, restore in case of disaster.
Oh yeah, and I have no budget for software. So, I'm trying to work
this out using only free tools.
So, my first thought is some sort of encryption on the WinXP machine.
Encrypt with GPG or something, leaving my boss with the secret key
on a floppy and the passphrase in his head. Upload these files
to a special folder on our shared drive, and it all gets backed up
like everything else.
Or, since I don't even really want to know the names or number of
files in this sensitive folder, can I somehow ZIP up the whole
shebang and encrypt that before uploading?
Under Linux, none of this would be much of a problem to me. But,
under WinXP, my knowledge of available tools and means of automation
fail me.
Any hints from out there?
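For reference, here's roughly what I'm picturing so far, so feel free to poke holes in it; the paths and key ID are placeholders, and it assumes GPG on the WinXP box with only the boss's public key imported (he keeps the secret key and passphrase):

```python
# A sketch of the "zip it up and encrypt it before it ever leaves the machine"
# idea, meant to run on the WinXP box. Paths and the key ID are placeholders;
# it assumes gpg is installed and the boss's *public* key has been imported,
# while he keeps the secret key and passphrase to himself.
import subprocess
import zipfile
from pathlib import Path

SENSITIVE_DIR = Path(r"C:\Private\Payroll")            # placeholder
STAGING_ZIP = Path(r"C:\Temp\payroll.zip")             # placeholder
SHARED_DROP = Path(r"Z:\backup-drop\payroll.zip.gpg")  # placeholder
BOSS_KEY = "boss@example.com"                          # placeholder key ID

# Bundle the whole folder so even the file names stay out of sight.
with zipfile.ZipFile(STAGING_ZIP, "w", zipfile.ZIP_DEFLATED) as z:
    for path in SENSITIVE_DIR.rglob("*"):
        if path.is_file():
            z.write(path, path.relative_to(SENSITIVE_DIR))

# Encrypt to the boss's public key; only his secret key can decrypt it.
subprocess.run(
    ["gpg", "--batch", "--yes", "--recipient", BOSS_KEY,
     "--output", str(SHARED_DROP), "--encrypt", str(STAGING_ZIP)],
    check=True,
)

# Remove the plaintext archive once the encrypted copy is in the shared drop.
STAGING_ZIP.unlink()
```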
[ ... 1697 words ... ]
-
Tools will start to support necho as well as RSS. The formats will
coexist, just as RSS 0.91 and RDF and RSS 2.0 coexist
today. Furthermore, this coexistence will be transparent, just like
today. Over time, necho will, hopefully, become the standard. In the
meantime, there will not be a major catastrophe of incompatibility...
Eventually, some of the other formats might become less used, and will
be phased out (this is something that is already happening, for
example, with the transition from RSS 0.91 to RSS 2.0). And because,
currently, RSS is being almost exclusively used for updates and
regenerated constantly at each endpoint, there will be little if any
switchover cost, again, as an example of this I put forward the
transition from RSS 0.91 to RSS 2.0 that happened last year.
Obviously, it's on us, the developer community, to add necho support
without disruption, and it's not a problem. After all, we are already
doing it today, and moving most (hopefully all) tools into necho will
eventually reduce work for developers in the future, allowing us to,
finally, concentrate on improving the tools rather than on how to let
them connect to each other.
Source:d2r: why (not)echo is important -- part 2
When I read
Dave's post
that developers were trying to "rip up the pavement, break
everything and start over", I wondered what he was talking about.
(Strangely, I can't find the original posting on Dave's blog. Maybe
the statement was revised
in the face of a later endorsement of the
project?) The reason I was wondering is because nothing broke
on my desktop. Every RSS feed to which I subscribed was still feeding
me RSS, and my home-brew aggregator continued crunching and delivering
my fix.
In fact, my aggregator's RSS consumption is based on
Mark Pilgrim's Ultra-liberal RSS parser.
And, it looks like Mark's been one
of the developers involved in the (not)Echo project. Mark didn't
break anything for me, and couldn't if he wanted to. On the contrary,
he continues to offer his code, and even updated it not more than a
month ago to address link-vs-guid concerns in a useful way. Hell, even
though Mark demonstrated his break with RSS tinkering rather concretely
by implementing a very literal interpretation of the spec, I can still
download
his working RSS parser code.
I'm a user and a developer all at once: I produce RSS, I consume RSS,
I develop with RSS, and yet I'm watching (not)Echo with great interest
and welcome it when it's ready. I fully expect that, in my tinkering,
it'll take me less than a lazy evening's work to put together a
template to publish a (not)Echo feed from my blog, and to add
(not)Echo support to my aggregator. Hell, I might even get another
parser from Mr. Pilgrim to drop into my project. But, as long as
others are still producing and expecting RSS, I'll still accept and
offer RSS. No breakage here. In fact, if I get off my lazy butt,
I'll unfunkify
my own feed and upgrade it to RSS 2.0 while I'm at it.
This isn't really heavyweight stuff here.
Then, I read things like
Jon Udell's Conversation with Mr. Safe
and other worries that the whole technology of web content syndication
and management will be avoided by big money, or even more horribly,
co-opted by big money in the confusion. Has the BBC or the New York Times
expressed any change of heart with their decision to offer their content
in a syndication format? Has the basic tech stopped working? There are no
pieces of sky on my balcony, though I fully admit that I might be too
naive to see them.
See, to me, RSS ain't the thing. Content syndication and aggregation
are the thing, and that's going strong. Are the people with big money
interested in this geeky thing called RSS, or are they interested in
syndication and aggregation? You know, getting their content out
there and read? Do they know that this (not)Echo effort hasn't
actually made RSS-supporting software stop working, nor will it ever?
Just because a bunch of bloggers and tinkerers got together and decided
to start making an alternate format and API doesn't mean that the
existing, mature technology suddenly goes sproing.
In fact, unless or until this upstart (not)Echo project builds
something amazing in terms of in-spec capabilities and vendor support,
the currently working RSS-based tech is a safe bet. And, in fact, I'd
be willing to bet that RSS will still be a force to consider in years
to come, even if (not)Echo introduces some irresistible pull.
Companies like Blogger and SixApart would reveal themselves to be run
by morons if they screwed users by dumping RSS overnight. (And that's
ignoring the fact that someone would come along and whip something
up to fix their idiocy somehow.)
And, I'm sure Microsoft or some well-heeled vendor could try stepping
in with a format of their own and try to steamroll it through with
their own blogging tools and aggregation services, but you know,
they're not omnipotent. The Internet didn't go away when MSN was
introduced, and the web full of RSS feeds won't go away even if they
introduce MSNBlogs or some such. It'd take a gigantic fight, lots of
very shiny bits, or many bribes.
I mean, that's what it takes to get my cats to do anything.
[ ... 1028 words ... ]
-
Mr Safe: Tim Bray said you're all washed up, kind of like Charles Goldfarb.
Source:backend.userland.com: Checking in with Mr Safe
Dave Winer has done a tremendous amount of work on RSS and invented
important parts of it and deserves a huge amount of credit for getting
us as far as we have. However, just looking around, I observe that
there are many people and organizations who seem unable to maintain a
good working relationship with Dave.
I regularly get pissed-off at Dave but I really truly do think he's
trying to Do The Right Thing; but there are many people out there who
can't get past being pissed off. This is what life is like.
There's an uncanny echo here, for me. The thing that came before XML
was called SGML. SGML was largely invented, and its landscape
dominated, by a burly, bearded, brilliant New Yorker, Charles
Goldfarb, who is currently making a well-deserved killing bringing out
the Definitive XML Series of books for Prentice-Hall. Charles is
loquacious, persistent, smart, loud-voiced, and nearly always gets his
way.
There were a lot of people out there (still are, I guess) whom Charles
drives completely nuts and just won't work with him. Which is one of
the reasons that, when we invented XML, we felt the need to give it a
new name and a new acronym and so on. Mind you, Charles, who as I said
is no dummy, climbed on board the XML bandwagon about fifteen seconds
after it got rolling and was a major help in getting the thing
finished and delivered.
Source:ongoing: I Like Pie
I'm very confused about this. Dave (or rather, Mr Safe) says that Tim Bray
said something nasty about him here. In fact, Dave says that Tim said
he's all washed up, like Charles Goldfarb.
But as I read it, I'd love to be washed up like Charles Goldfarb,
seeing as he's "currently making a well-deserved killing bringing out
... books for Prentice-Hall", having "climbed on board the XML bandwagon
about fifteen seconds after it got rolling and was a major help
in getting the thing finished and delivered". Sounds like Mr. Goldfarb
is still very active in his community, still considered an authority,
and is being rewarded for it. I hope I'm that kind of washed up someday.
In fact, it'd be pretty keen if that's what people meant if someday
they said, "Les is Dead", though I can't find where Tim said that.
So, where's the nastiness? It's not like Tim took notes from
Mark's Spanish lessons
and told Dave to go "chinga tu madre" or "come verga". That's nasty.
Far as I can tell, Tim compared Dave to a guy in another community
who has his own contingent of haters yet is still undeniably a brilliant
guy just trying to Do The Right Thing as he sees it.
Was the nastiness in saying that some people can't "maintain a good
working relationship with Dave"? Or that Tim gets "regularly ...
pissed-off at Dave"? I mean, they're both obviously true. It would
have been nasty and untrue had Tim said that no one can maintain a
good relationship with Dave, because there are also obviously a lot of
people who do. But, Tim didn't say that.
So, as far as I see, I'd personally be happy to have Tim Bray write
about me like this in public.
[ ... 920 words ... ]
-
Just added a small script to the bottom of my weblog to run a scroll.
Source:John Robb's Radio Weblog
Not only that, but you added a scroller to my news aggregator page, too! :) Gah.
While not as dramatic as
Platypus Day, it does
have me adding an item to my TODO list to
more safely consume RSS in my aggregator. I feel like I'm tooling around the blogosphere
with my pants off.
And, it makes me want to get back to working on AgentFrank, so I can insert some
filters to block JavaScript code that hijacks my status bar. Bah. No offense,
since the message itself is worth attention, but scrollers are so... 1998.
Update: And, John Robb
has removed
the scroller. Thanks! I still need to look into securing my aggregator
though. Whether I like status bar scrollers or not, my news aggregator
should keep them out anyway.
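The fix on my end will probably be a dumb little sanitizing pass over item descriptions before they hit the page; something along these lines, which just drops script blocks and on* handlers rather than trying to be clever:

```python
# A dumb-but-effective sketch of sanitizing feed item descriptions before
# rendering them in an aggregator page: strip <script> blocks and any
# inline on*="..." event handlers. A real sanitizer should whitelist tags,
# but even this would have kept the status-bar scroller out.
import re

SCRIPT_RE = re.compile(r"<script\b.*?</script\s*>", re.IGNORECASE | re.DOTALL)
ONEVENT_RE = re.compile(r"\son\w+\s*=\s*(\"[^\"]*\"|'[^']*'|[^\s>]+)",
                        re.IGNORECASE)

def sanitize(html):
    html = SCRIPT_RE.sub("", html)
    html = ONEVENT_RE.sub("", html)
    return html

if __name__ == "__main__":
    evil = ('<p onmouseover="window.status=\'hi\'">Hello'
            '<script>scroll_my_status_bar()</script></p>')
    print(sanitize(evil))   # -> <p>Hello</p>
```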
[ ... 151 words ... ]
-
... there's excellent knowledge in blogs if only we had the tools to extract it.
What sort of tools? Relevance and reputation based feeds and
aggregators for one. The problem of quickly finding what's good from
among the great muck of the blogosphere is, if you ask me, a far more
urgent problem than seeing the correct authorship or harmonizing
dc:date and pubDate before I even read the thing.
... facilitate P2P trading of RSS from desktop to desktop as well as
server to desktop -- you subscribe to 1000 feeds, aggregate them, rate
them (explicitly or by statistical filtering based on past use
patterns) and then rebroadcast your new rated feed. Aggregators could
then /use/ redundant items from feedback loops because each RSS source
has a reputation rating that weights the contained individual item
ranking; repeated items add their rankings.
Source:TeledyN: Echos of RSS
Yes. This is it. This is what I want to see come next from aggregators
and blogs and syndication and all this mess. It's what I've been tinkering
with in small steps for most of a year. It's what I intend BookmarkBlogger
to facilitate, as well as AmphetaOutlines and the homebrew aggregator I'm
hacking around with right now.
At first thought, I'm not sure whether or not building and
republishing RSS (or Echo) feeds is where it's at. But, the more I think
about it, the more it seems perfectly elegant to me. All the elements are
there, except for an extension to capture ratings. Extend aggregators to
consume these rating-enriched feeds, and instead of just spooling the items
up into your view, extract and assimilate the ratings into a growing
matrix of rater versus rated. Apply all the various algorithms to
correlate your rating history with that of others to whose ratings you
subscribe. Mix in a little Bayes along with other machine learning.
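To make that concrete, the core of it is just bookkeeping: a matrix of who rated what, plus a reputation weight per source. A toy sketch, with the weights and ratings entirely made up since no such extension exists yet:

```python
# A toy sketch of the "rated, rebroadcast feeds" idea: each subscribed feed
# carries ratings for items, each source has a reputation weight, and an
# item's combined score is the reputation-weighted sum of the ratings it has
# picked up along the way. The rating data and weights here are made up.
from collections import defaultdict

# reputation weight per rating source (0.0 - 1.0), learned or hand-tuned
reputation = {"alice": 0.9, "bob": 0.4, "teledyn": 0.7}

# ratings[item_guid][source] = rating that source gave the item (0-5 stars)
ratings = defaultdict(dict)
ratings["http://example.com/post/42"]["alice"] = 5
ratings["http://example.com/post/42"]["bob"] = 2
ratings["http://example.com/post/99"]["teledyn"] = 4

def combined_score(guid):
    scores = ratings.get(guid, {})
    return sum(reputation.get(src, 0.1) * stars for src, stars in scores.items())

for guid in ratings:
    print("%.1f  %s" % (combined_score(guid), guid))
```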
As for the interface... well, that's a toughie. At present, I think I could
sneak ratings into my daily routine by monitoring my BookmarkBlogger use and
watching the disclosure triangle clicks and link visits in my AmphetaOutlines
based news aggregator. I could easily see adding an iTunes-like 5-star
rating interface, but unless I get some pretty significant payoff from
painstakingly rating things, I'll never use it. At least in iTunes, I get
to have playlists of my faves automatically jumbled together, if I remember
to use the ratings in the moment.
The cool thing will be when sites like
Technorati and Feedster start
using these ratings, but the even cooler thing is when all that's on
my desktop. This could be easy, though, couldn't it? What do we call
it, Syndicated Whuffie?
(Which reminds me: Eventually, we really gotta get back to the subscription
problem. All these agents polling files everywhere will get to be nasty.
Obviously. This has been talked about already, but little has happened.
We need some PubSub, maybe some caches and concentrators. All stuff that's
been mentioned in passing before, and left by the wayside as unsexy.)
[ ... 1086 words ... ]
-
I've written before that I love XML-RPC, and
that it has served me well in the past couple of years. I think it's the right tool for a broad
range of jobs. But, after having studied the spec, and after having implemented it in a handful
of languages for a handful of well-used apps, I think the spec needs just a bit of fixing.
In particular, the spec needs a tweak with regards to this "ASCII limitation". There is confusion
about this, period. I've had to hash this out with clients before; it
was an issue of note while
working out an XML-RPC based Wiki API,
and it's obviously an issue in many other projects. This, of course, includes the current
hubbub surrounding weblog APIs and whatnot.
So, please fix the spec. It shouldn't take long to make this issue a non-issue by some simple
clarification in the main XmlRpc spec to which everyone refers.
Yes, I know there's a bit of clarification at the end of the spec, involving escaping (
not encoding) < and & along
with the statement that "A string can be used to encode binary data."
Well, yeah, I do that
all the time with Base64. And, since
the spec earlier had called for "ASCII", I assume that's what encoding
binary data means in the context of this spec. To me, encoding implies a transformation
from original form to some other form later requiring decoding.
But, apparently, my interpretation and
the interpretation of others
is wrong on that score. But still, I've been confused, and so have others. Consider this a bug report.
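Here's the confusion in a few lines of Python's bundled XML-RPC library: one call ships the text as-is in a string value, the other wraps it in base64 the way I'd assumed "encoding binary data" meant (the method name is just an example):

```python
# A few lines showing the two readings of the spec side by side, using
# Python's bundled XML-RPC marshaller. The method name is just an example.
# One call sends non-ASCII text directly as a <string>; the other wraps the
# bytes in <base64>, which is what I'd assumed "encode binary data" meant.
import xmlrpc.client as xmlrpclib

text = "r\u00e9sum\u00e9s & d\u00e9tails"   # non-ASCII text plus an XML-special char

# Reading 1: just put the characters in a <string>, escaping only < and &.
print(xmlrpclib.dumps((text,), methodname="wiki.putPage"))

# Reading 2: treat it as binary, Base64 it, and ship a <base64> value.
print(xmlrpclib.dumps((xmlrpclib.Binary(text.encode("utf-8")),),
                      methodname="wiki.putPage"))
```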
I've been referred by Fredrik Lundh (via Dave Winer),
to "private conversations",
"various public fora", and "early archives for the XML-RPC mailing list". And, again by Fredrik Lundh,
I'm told:
But even if you don't know all this, it's not that hard to figure it out for
yourself. Just make sure you read and digest the entire specification, apply some common sense
to sort out the contradictions, and you'll find that it's pretty obvious that the intent is that
you can use any character allowed by XML.
Well, let's see. I read the whole spec, more than once, and what I figured out for myself with my
"common sense" is what I wrote above. I thought
the spec called for ASCII (as in: ASCII), and assumed that
encoding binary data called for something like Base64. Yes, I realize that XmlRpc is XML, but
when a spec calls for ASCII as a particular part, I assume that there's a reason for it
since that's what the specification specified.
In my experience, specifications are not about common sense, figuring it out, and
connotation. Specifications are
about declaration, clarity, and
denotation.
Yes, I understand that no
spec is perfect, and that many are steaming piles meeting none of the criteria I just mentioned,
but that doesn't alter the goal. A spec can always be made better by revising
with these things in mind, given the input of consumers of the spec. This is what a process
of communication is all about, and specifications are intended as a form of communication.
So, instead of talking about intent and things that have been talked about somewhere
at some time, with the implication that I should just go off and search for these things, can
we just get a clarifying fix to the XmlRpc spec? I don't want to send my clients off to
mailing list and discussion archives, or present XmlRpc with any corrections or caveats. I
want to say, as I have been, "Here, go to xmlrpc.com, read the spec, implement to the API
I emailed you, and get back to me." Only, it'd be nice if the first question is about my API,
not about character encoding.
I've been confused, and so have others. I consider myself a smart person, and I consider most
of the others who have been confused as even smarter. I apologize if my "common sense" is of a
different sort, but that's what you have to deal with in the world of people. As young as I am,
even I've discovered this already.
So, can we just get a clarifying revision of the spec? And if not, why not?
Update: Rock on. After catching up
on a bit of banter over at Sam's place, I see that
the same Fredrik Lundh I quoted before has already begun an
XML-RPC errata page with the goal of clarification.
(I just missed it in my daily reading thus far.) As
Mark comments, I fear bumps in the
road as any confused implementors find things weren't what they thought, but I'm happy to see
the clarification accepted.
Update again: If you've stopped rocking, resume. Dave Winer
updated the XML-RPC spec.
It was a small change, could have been more, but had not been done at all until now. I
doubt that my asking please really had much to do with it, but I couldn't guess that it
hurt. Thanks!
[ ... 1015 words ... ]
-
So yeah,
like I was saying,
I've kept my head out of the RSS fray lately. This past post about GUIDs and
their properties of rocking in RSS hadn't had much thought behind it, other
than that the idea of having something well defined and incontestably intended
for the purpose of uniquely identifying a weblog post seems like a good idea,
especially if it's a permalink. Because, you know, permalinks seem great things
to serve as both globally unique identifier and locator in one go.
I had a feeling that I was confused about the purpose of the link element in RSS
2.0, but having not really studied the spec, I just kept to maintaining a student
mind and assumed that there were Things Not Yet Understood. Now I read the spec,
curiosity sparked by the recent hubbub over at
Mark's place
and Phil's place.
Dave
wrote that
the link tag in items was "designed for something else". Cool
by me, I assume that I am not yet well informed. So, I read in the
spec, where assumedly I'll be illuminated as to its designed purpose,
that link is "The URL of the item". To me, this means that the link
tag was designed to point at the item, being the URL of that item.
And, as far as I can tell, "the item" is what is being described by
the item tag, in other words: the weblog entry.
But this seems contrary to the statement that it's been "designed for
something else". Designed when and documented where?
Jon Udell writes
that RSS is in no way broken, but I personally think it's got a funky widget
or two in it and is not free of confusion. Bah, really I
don't care. I still think a GUID for a weblog entry is a good idea,
and that maybe some people who comment on links exclusively should
have a tag devoted to that. Maybe in a separate namespace devoted
to link-blogger vocabulary.
Meanwhile, I'll be making occasional pokes at participating
over at Sam's wiki and The Echo Project.
I like the wiki approach he's
offered for participation, especially the potential for zero-ego participation
when it works. I love seeing something I contribute in a wiki eventually
float free from my attribution, to later land in the midst of a summary
elsewhere. And in the end, if it all works right, it'll be something
that everyone had a part in, yet no one owns, and further yet didn't take a
formal committee to approve.
[ ... 802 words ... ]
-
Google AdSense is for web
publishers who want to make more revenue from advertising on their site while maintaining editorial
quality. AdSense delivers text-based Google AdWords ads that are relevant to what your readers see
on your pages and Google pays you.
Source: Google AdSense
Your website is a type of website that we do not currently
accept into our program. Such websites include, but are not limited
to, chat sites, personal pages, search engines, sites that contain
predominately copyrighted material, and sites that drive traffic
through cybersquatting.
Source: Response to an AdSense application for decafbad.com
Hmph. No chat around here. I suppose things are a little personal, and there's a search
engine. But, if there be warez here, I must've been hacked, and nobody seems to want
this domain but me, so there doesn't seem to be any squatting going on.
Guess I'm not a web publisher who wants to make more revenue while maintaining editorial
quality. :) (I'm guessing I've been rejected as a web publisher.)
[ ... 403 words ... ]
-
Guids sure have a funny name, but they're quite useful. If your weblog tool supports them, use them. If not, ask the developer to add the support. It's not very hard.
Further, I strongly believe that all aggregators and readers should pay attention to guids where they are available. It's a convenience that many users will appreciate, especially people who are in a hurry
Source: Guids are not just for geeks anymore ;->
Haven't really been saying much lately about the recent plunge, albeit more amiable this time, back
into the RSS and weblog syndication fray. Mostly because I haven't had the time, and mostly because
people more eloquent than I were already saying what I thought.
In the meantime, I've been working, and puttering around with
my own aggregator as spare time comes up.
And you know, I'm tired of having to come up with some mechanism to detect new entries.
This GUID thing is what I need. I don't want to run MD5 on another RSS item, and I don't
care to track the minor edits people do on their entries, like Dave said.
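Just to make that itch concrete, here's roughly what the fallback looks like versus what a guid hands you for free -- a quick Python sketch, not anything lifted from my actual aggregator (the dict keys are just illustrative):

    import hashlib

    def item_key(item):
        # 'item' is a dict parsed from an RSS <item>.
        # With a guid, identity is handed to us. Without one, we're stuck
        # hashing whatever looks stable, and any minor edit changes the hash.
        if item.get("guid"):
            return item["guid"]
        blob = (item.get("title", "") + item.get("link", "") +
                item.get("description", "")).encode("utf-8")
        return hashlib.md5(blob).hexdigest()

    def new_items(items, seen_keys):
        # seen_keys is a set persisted between aggregator runs.
        fresh = [i for i in items if item_key(i) not in seen_keys]
        seen_keys.update(item_key(i) for i in fresh)
        return fresh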
Personally, I think the GUID should be the permalink, if at all possible. I used
to think that that was what the link of an RSS item should be, but then I never really
maintained a weblog in the quote-link-comment style for long stretches. My entries
aren't usually completely about someone else's article. But, some weblogs are
like that. So, link points to a subject under comment, GUID identifies the entry and
ideally does it via permalink.
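In RSS 2.0 terms, that shakes out to something like this (the URLs are invented, just to show the shape):

    <item>
      <title>Commentary on somebody else's article</title>
      <!-- link: the thing being commented on -->
      <link>http://example.com/some-article</link>
      <!-- guid: the weblog entry itself, doubling as its permalink -->
      <guid isPermaLink="true">http://example.org/blog/2003/06/entry123</guid>
      <description>My two cents on the article above.</description>
    </item>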
Nifty.
[ ... 719 words ... ]
-
Okay, what's wrong with this picture?
Each of those lines is from a terminal I have open, trying to find
one of my monospace fonts that works correctly and I don't hate. My
past favorite has been Monaco for the longest time, but recently (when,
I can't quite remember) it seems that it likes to combine "l" and "/"
into one symbol. So does Courier.
On the other hand, I have this font
called Monaco CY which looks close enough to my favorite Monaco, until I
discover that it mashes double dashes together.
This leaves me with only two monospace fonts on my PowerBook that don't
mangle things (however minor) in the terminal. They are Courier New and
Andale Mono, both of which I very much dislike.
So, though I've found one other person
complaining a bit about this, I can't seem to find any explanations why. Best
I can figure is that I had to dump a slew of fonts onto my system recently
in order to be able to do some client work, so maybe I clobbered an out-of-box
version of my previously favored Monaco. But that doesn't make much sense, since
I tried snagging a copy of Monaco from my girlfriend's iBook to no avail.
Anyone out there have a clue as to what this is?
[ ... 554 words ... ]
-
So, I downloaded Safari 1.0 yesterday and was very pleased to notice
a new checkbox option on the Bookmarks section of the preferences.
Notice the off state of the checkbox. Goodbye, bookmark syncing, I hardly wanted to know ya.
[ ... 59 words ... ]
-
At the moment, I'm working on a bit of HTML and form processing, but
I'm also monitoring three IRC channels and two web pages since I can't be
at WWDC for the Keynote. But, it's almost like I'm there.
Except I don't get a free iSight. Bummer.
[ ... 61 words ... ]
-
Let's say you're torn between two
worlds. You know that
one is a fevered delusion that your mind has created and the other one is reality,
but which is which? ... Apply this
algorithm in both worlds...
Source: Algorithm for Determining Imagination from Reality
So, after having seen Matrix Reloaded, and having read advice on
how to live in a simulation,
I find myself wondering: how might I determine whether I'm living in a simulation?
Well, I figure chances are, I'm not even living in a decent simulation. In fact, I could
just be hallucinating right now. So, Aaron Swartz comes to the rescue with a very
reasonable empirical test I can perform. Everything seems to check out.
Problem is, though, the test is completely dependent upon me and my perceptions.
First, I have to pick a really big number that's outside my ability to perform
a square root on it in my head. Then, someone else performs the square root
on a calculator. I then square that number by hand, and that answer should match
my first, and since I couldn't possibly perform a square root that large in
my head, the answer must've come from outside my head.
Except for this: If I'm hallucinating, then there's a subdivision of me controlling
the perceptions of another subdivision of me. Who's to say that the me who's
performing this test isn't a complete idiot, and all the math skills are in the
part producing the hallucinations? Or, hell, what if for the duration of the test
my hallucination producing side decides to make me blind to any numbers greater than
4?
Hmm. Well, just to be safe, I won't try the alternate "Step in front of a bus
and see what happens" test.
[ ... 395 words ... ]
-
Still have been busy like crazy, but
as I wrote back in April,
some of what I'm doing has been pulling me further into Flash MX and XML.
Also, in the few moments of free time I've had lately, I've been toying
with my own news aggregator. It's a
PersonalServer, written in Python, based on Twisted, and uses
SQLite via PySQLite
for storage and juggling of items.
So, today I've been thinking: How hard would it be to bundle together a desktop app
composed of a backend in Python and a GUI in Flash? Connecting the two is no problem
given whatever method of XML communication you want to pass between them. Pairing
the two together to be launched on the guest OS would seem to be a bit of an
oddity.
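To put a finger on the "no problem" part, the Python half could be as dumb as a tiny local HTTP endpoint handing the Flash movie an XML document of items. A minimal sketch -- the port, the element names, and the data are all made up for illustration:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from xml.sax.saxutils import escape

    ITEMS = [("Example entry", "http://example.com/1")]  # stand-in data

    class ItemHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serialize the items as a trivial XML document.
            body = "<items>" + "".join(
                "<item><title>%s</title><link>%s</link></item>" % (escape(t), escape(u))
                for t, u in ITEMS) + "</items>"
            data = body.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/xml; charset=utf-8")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        # The Flash side would point its XML loading at this address.
        HTTPServer(("localhost", 8008), ItemHandler).serve_forever()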
See, I like my news aggregator GUI in the browser. It seems native there. But
on the other hand, as far as interfaces go, what I want to make the browser
do tends to sound ugly. I mean, yeah, there're all sorts of DHTML and CSS tricks
and XUL looks promising, but damn have I been noticing how slick Flash is
lately. And fiddling around with ActionScript has been pretty fun.
JavaScript has gotten a pretty bad reputation via crashy implementations, but
as dynamic scripting languages go, there are some nifty elegances I can pull
off in it.
So...
I've been reading a bit about
Macromedia's Central as far as
desktop Flash goes, and I've seen the
News Aggregator sample app,
but
how about a maniacal mutant hybrid of Python and Flash?
[ ... 322 words ... ]
-
I caught Gizmodo 1983 this week,
along with the news
that NBC may be revisiting the old 1983 scifi series "V", and I was reminded that that
was right around the time I got my first computer. I've been meaning
to write something for Newly Digital,
so here goes:
My history with computers starts a few years earlier than 1983,
though. I think it was during the first grade, when I was a hyper,
easily bored kid. I would get class work done quickly and early, yet
forget to turn it in. Then, I would disrupt class until I was somehow
calmed down or sent to the principal. I seem to remember that, once,
I was caught scaling a classroom wall via the curtains. How far I
made it before being caught, I'm not sure, but it seemed like miles
to me at the time. I remember being the only one happy about it.
One day though, the usual trip to the principal changed. I remember
him as a tall, bald, and imposing man whose breath always smelled
funny. (This was long, long before I knew about coffee and had become
hopelessly addicted myself.) The man scared me, since he was known to
have spanked students in the old days, and though he wasn't allowed to
do that anymore he still had the power to call my Mom. And I'm pretty
sure most everyone knows the little-kid terror of that.
This particular visit, though, he led me back into his office, and sat
me down in front of an
Atari 800
and a TV screen. Though I had
already been introduced to video games via our Atari 2600 at home, I
had little idea what this thing was.
He showed me how to turn everything on, and introduced me to a stack
of workbooks as tall as I was. Each book was about 1/4" thick and the
cover colors were a rainbow of progressive difficulty. He told me
that he was trying to decide whether or not to start teaching
computers in the school, and that these books were what the company
sent him for classes. He wanted me to try them out for him and see
what I could do with the computer before he bought more for the
school.
From then on, when my class work was done, I had a pass to go to the
principal's office and work through the books with the computer until
either I ran out of books or the year ended. I worked mostly on my
own, with a heavy sense that it was something special I'd been trusted
with. As the principal went about his daily work, I was barely
supervised with this expensive machine, and I felt I needed to prove I
was worth it.
My grades and my behavior improved as I tore through the workbooks in
his office. There was so much to learn and play with. I remember
with unusual clarity writing a program that asked me for my birthday
and replied with my age in the year 2000. It dazzled me that
something I programmed into the computer could tell me about myself,
all grown up, in the twenty-first century. You know, the year when
all science fiction stories came true! But there I was, playing with
the stuff of sci-fi already.
And the greatest thing was that, as the books began to ask more creativity and
originality from me in my assignments, I felt my mind stretch. I'd
never quite felt that before, and it was so amazing. Part of it was,
I'm sure, just a property of the elasticity of the brain at that age,
but I'm sure my time at the computer helped. Every day, I remembered
and could do more. My thoughts were becoming more ordered and
organized, as programming the computer required it.
But, after a few months, observing my obvious enthusiasm for the work,
the principal took me out of his experiment. I was disappointed but
he told me that he'd decided to build a computer lab and turn what I'd
been doing into a real class for everyone in the school. I crossed my
fingers: There were still plenty of books left to get through, and I
was just getting to the fun things like graphics and sound.
When the school's little computer lab was finally opened, all the kids
got sorted into groups of five or so, and each rotated through a
weekly schedule of hour-long visits. When my group's turn came, I was
crushed: I found that there were no assignments, just Pac Man and
Missile Command and a smattering of math and vocabulary games. We
were handed joysticks and told not to touch anything else.
These machines were Atari 400's
and looked so much less advanced than what I'd been used to. I
remember there being an intense nervous aura radiating from the
supervising teacher on duty in the lab, just waiting for one of us to
destroy these things. And, when I asked if I could have a BASIC
cartridge to work on some of my programs, I was told that if I didn't
want to participate in the computer activities I could just go back to
class. As bitter as a first or second grader could be, I was.
See, I'd gotten teased a bit for the special treatment in the
beginning, but I didn't mind. And, now that everyone played with the
computers, I got teased for not being so special anymore. What I
couldn't get across to anyone, not even my teachers, was that they
weren't getting what I had. There was so much more they could have.
Well, I'm not sure my thoughts were so mature at the time, but I felt
like everyone, including me, had been cheated.
So that ended my education in hands-on programming, temporarily. I took to
reading more computer books, often bought from the school book fair,
like David Ahl's
BASIC Computer Games.
Lacking a computer of my own, I read and ran through the
programs in my head.
For the next year or so, I had sporadic access to computers. My Uncle had a
TRS 80 Model III
that he let me use during visits. That thing mostly
confused me though, as I was introduced for the first time to an
alternate flavor of BASIC. And still, there was the not-mine feeling
and my Uncle's protectiveness of his expensive business machine.
My grandparents also had a
VIC-20,
but sans tape drive or hard drive, so every visit was starting over
from scratch. Nothing would substitute for what I'd had: My own time
with the machine, doing things myself, building one thing atop another.
Then, the
Commodore 64
arrived at the local K-Mart. I was in love. This was it for me, and
I raved about it constantly. I never quite expected to get one,
though, since the thing was expensive, especially for a kid my age. And
besides, computers were always something that someone else had. But I
guess I must've really gotten on Santa's good side, because I was met
with this surprise on Christmas morning that year:
That first computer was really something. It was mine, given to me by
my family as a whole. No one protecting it from me, fearing I'd break
it.
So I attacked it. I learned everything about it, buried myself in
books and magazines, figured out how every bit of it worked and could
be used. More than once, I'd gone at it with a screwdriver to see
what was inside. Then I went at it with a soldering iron to add
things like a reset switch and RCA audio output. I made friends with
people at a local computer store, and they let me be a guinea pig to
test new software and hardware for the thing. At one point in fourth
grade, I learned 6502 assembly, printed out a disassembly of the
machine's kernel, and mapped out what everything did. I still have
that print-out, bound with rubber cement, and full of my scrawlings.
That Commodore 64 would be my gateway to all sorts of further hackery
and geekery, as well as a means of meeting more of my kind. After
getting a modem, it became my entry point to local (and not-so-local)
bulletin boards, and eventually my first tastes of the Internet. I
was still using that Commodore 64 up until my last year of High
School, coincidentally the year of the machine's last production
run.
I've had other computers since that Commodore 64, but it was opening
that box on Christmas Morning that let me continue the process that my
Elementary School principal had started for me, and I haven't stopped
since. I love to feel my mind stretch, and I love to take things
apart and see what's inside.
[ ... 1529 words ... ]
-
Shawn Yeager just dropped me a line to let
me know that my blog has apparently been pinging blo.gs like crazy today, and
since he's set to receive IMs on blog updates from blo.gs, he's been getting
flooded with IMs.
First off, if this is happening to you: Sorry out there!
Second thing, this
might be interesting for anyone using a blosxom plugin to ping blo.gs or weblogs.com like me.
Basically, I took the ping_weblogs_com
plugin for blosxom, replaced the weblogs.com request with one to blo.gs,
and searched for the pattern 'congratulations' instead of 'Thanks for the ping'
in the response. Finding the pattern is assumed to mean the ping was successful.
A successful ping, then, causes the plugin to update the contents and timestamp
of the status file with the response.
The status file is used by the plugin to determine whether or not a ping
should be sent. This check is made everytime an item is viewed on my blog,
and if the plugin sees a blog item whose timestamp is newer than that of
the status file, a ping is sent.
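In rough Python terms (the plugin itself is Perl, and the endpoint and parameter names here are placeholders rather than the real ones), the logic amounts to this:

    import os
    import urllib.parse
    import urllib.request

    STATUS_FILE = "/path/to/blogs_ping_status"   # hypothetical path
    PING_URL = "http://ping.blo.gs/"             # hypothetical endpoint and fields

    def maybe_ping(newest_entry_mtime, blog_name, blog_url):
        last_ping = os.path.getmtime(STATUS_FILE) if os.path.exists(STATUS_FILE) else 0
        if newest_entry_mtime <= last_ping:
            return  # nothing newer than the last recorded ping
        data = urllib.parse.urlencode({"name": blog_name, "url": blog_url}).encode("utf-8")
        response = urllib.request.urlopen(PING_URL, data).read().decode("utf-8", "replace")
        # The fragile bit: success is inferred by scraping for a magic word.
        if "congratulations" in response.lower():
            with open(STATUS_FILE, "w") as f:
                f.write(response)  # writing also bumps the status file's timestamp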
So! Onto the punch-line: The appearance of the word 'congratulations' and
the successful registration of a ping are not the same thing. Apparently,
blo.gs has been throwing up an error message in response to a ping, while
still registering a ping. This error message does not contain the word
'congratulations', and so my plugin never updates the status file, and so
it happily tries pinging blo.gs again with the next blog view.
Two lessons learned here:
When using a web service, whether a "real" service or a "scraped" service, be very sure to know and handle the difference between a valid response and an exception or error in the service itself.
When using a web service, take care with your usage pattern. That is, just how important is success? Important enough to try again and again? Or could you wait awhile, whether successful or not? Or should you even try again?
My plugin doesn't know the real meaning of the response to a ping. And further,
the fact that it's designed to try, try again in the case of apparent failure
is not the greatest choice for a usage pattern.
So... long-winded post, but I think the realizations are valuable.
[ ... 440 words ... ]
-
Just read Matt Gemmell's bit of a catalog
of his Nintendo collection, and ever since Russell professed his love
for his GameBoy Advance, I've been meaning to write something about my personal video game addiction.
For the past few months, with everything
that's been
going on
in my life,
I've not had much time for nursing my habit. But, since things have calmed
down a bit, and my girlfriend and I
both purchased GameBoy SPs, our time mashing buttons and cursing at glowing
screens has picked back up. I count myself as infinitely fortunate to have
found a girl who not only tolerates my video gaming ways, but insists that
we display the collection of consoles in the living room.
I have a photo of the entertainment rack around here somewhere, but it
may have been a casualty of the thirsty iBook
incident. (Still tinkering with getting a Linux box to mount the HFS+
partition on the apparently undamaged hard drive.) From where I'm sitting,
though, I see the following systems either connected via switchbox to
the TV, or stowed away in a mesh drawer:
Nintendo
  NES (classic frontloader)
  SNES
  N64
  GameCube
GameBoy
  Classic
  Pocket
  Color
  Advance
    purple with TV connector mod
    pink
  Advance SP (x 2 platinum)
Sega
  Genesis
  Dreamcast
XBox
PS2
Stowed away in closets and, possibly, at my Mom's house, I've also
got an Atari and ColecoVision. Also, I have a small start on a
computer collection as well, including a C64, Atari 800, Amiga 1200,
and of course a smattering of random PCs.
Eventually, I want a house, and a room in this house will be dedicated
to the display and use of these machines. Also, eventually, I want
to work on a proper collection of these things and their games and
software. (For instance, I'm in desperate need of a second generation
top-loading NES.)
The funny thing is that people still ask me occasionally if I really
need or use all this stuff. How could the answer be anything but yes? :)
[ ... 472 words ... ]
-
Oh, and I've been meaning to post a little note of thanks to everyone
who're still reading this blog. I haven't done or said much of note
around these parts in some time, with the only saving grace being my
automated BookmarkBlogger posting every night. More than I expected,
those posts have actually caught the interest of a few people.
But I never wanted this
place to turn into just another link-blog. And I also have been feeling
a bit guilty that my Quick Links give no indication of source where
I found these tidbits. They are, more often than not, gleaned from
the 320 or so sites whose RSS feeds I slurp down 12 or so times a day.
I really need to get some sort of blog roll going again, but somehow I
doubt that everyone wants to download my RSS feed list when they visit
here.
Anyway, that's all. Thanks for reading and sticking around as I get
things sorted and stitched back together in the offline world. I hope
to come back with some nifty things soon, since I'm itching to hack.
[ ... 187 words ... ]
-
RSSify is a rather horrible hack that shouldn't be needed any more. Please ask the owner of the site you're reading (...) to change to a system that generates RSS natively such as Blogger Pro or Movable Type. Alternatively consider hosting RSSify yourself rather than using my bandwidth.
Source: 22-May-03 Moving away from RSSify
Noticed this show up suddenly today as the newest item in a surprising number of
feeds to which I subscribe. I knew Julian Bond's public RSSifying service
had gotten used far and wide, but wow. The bandwidth bills must've been getting
quite annoying, having become a sort of ad hoc bit of the
Rube Goldberg blog
infrastructure. So, as for my own consumption, thanks for the use of your tool Julian, and
all apologies for being a leech!
Well, I'm still working a bit on my own Java-based transmogrifier robot
to scrape disparate sources of info into RSS feeds for me. I suppose I
should get to work trading my RSSify-based subscriptions for my own
DIY-scraped versions. If I get some time soon, I'll wrap this thing
up and release it. But first, I hope to get it working fully automatically with the
iTunes Music Store, as I've been tinkering.
[ ... 306 words ... ]
-
A copy of Programming Web Services with Perl
surprised me yesterday by arriving on my doorstep. I'd forgotten that back in March,
Paul Kulchenko
(one of the
book's two authors) had offered me a free copy of it in response to
a quick thought of mine about
a more Unix-like model for filtering web services (something I want to get back to).
Anyway, I've yet to get very much into the book, but a cursory skim tells me
that this looks like a great book. Thank you very much for sending me a copy, Paul!
[ ... 95 words ... ]
-
I've just started experimenting with integrating
Sam Ruby's autoping.py with my blogging
rig here, and discovered that I really had rushed things a bit and didn't
understand what the thing was doing. I think my caffeine intake for today is
way below baseline, so if yours happens to be one of the sites I mistakenly
vandalized or spammed with broken or erroneous trackbacks, I apologize profusely!
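For context, the ping that autoping.py automates boils down to one little HTTP POST; here's the shape of it as a Python sketch (the form field names follow the TrackBack spec, everything else is illustrative):

    import urllib.parse
    import urllib.request

    def send_trackback(tb_url, entry_url, title, excerpt, blog_name):
        # Form-encoded POST to the target's TrackBack URL.
        data = urllib.parse.urlencode({
            "url": entry_url,
            "title": title,
            "excerpt": excerpt,
            "blog_name": blog_name,
        }).encode("utf-8")
        response = urllib.request.urlopen(tb_url, data).read().decode("utf-8", "replace")
        # A successful ping comes back with <error>0</error> in the response XML.
        return "<error>0</error>" in response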
Update: Speaking of Trackback, I just duct taped an initial implementation on
the receiving end with a revision to my BlosxomDbcomments plugin. It needs some testing,
so if you see this, and don't mind, pummel this entry with trackbacks! Next, I'm
considering integrating referrers, thus bringing this new blogging rig up to
where I was a little over a year ago.
[ ... 416 words ... ]
-
I've given the aggregator a concept of whuffie. I can give any item that has been aggregated a thumbs up or thumbs down, increasing or decreasing the item and site's wuffie. I sort the sites out as I display them by their whuffie. It is a simplistic way of keeping the sites I'm interested in at the top of the list. I'd like to wire in a Bayesian classifier too, and see if that helps me get the items I like to the top.
Source: Serval, an aggregator with Whuffie via matt.griffith
Yay, an aggregator with whuffie-tech! This is very similar to what I was doing with
AmphetaOutlines for AmphetaDesk - when I click on an item from a channel, I increment
a counter for that channel. And, when I sort channels for display, I use that count as
a factor in the sort.
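In code terms it's about this simple -- a toy Python sketch of the idea, not actual AmphetaOutlines source:

    from collections import defaultdict

    clicks = defaultdict(int)   # channel URL -> number of items clicked

    def record_click(channel_url):
        clicks[channel_url] += 1

    def sort_channels(channel_urls):
        # Most-clicked channels float to the top; ties fall back to name order.
        return sorted(channel_urls, key=lambda url: (-clicks[url], url))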
And, of course, I want to use Bayesian filtering to see what I do want.
What would really be whuffie-riffic about aggregators that support this kind of
thumbs-up/down tracking, would be to have some P2P sharing of these decisions
to come up with something actually more like whuffie. That is, I would like
to see how much right-handed whuffie some item has gotten, and possibly
bubblesort up or visually tag items I've yet to read based on that whuffie.
Right-handed whuffie being, of course, the accumulation of whuffie an item
has been given by authors to whom I've given lots of whuffie.
[ ... 458 words ... ]
-
Okay, no.
This new Matrix-inspired phone is ugly as hell and not cool at all. Do they actually use this thing in the movie? I hope this isn't a sign for what the movie will be like -- clumsy, bulky, cartoonish and not at all subtle like the original. See, the first movie had a phone. I forget the model number, but I think it was a Nokia. Its sliding keypad cover was modified especially for the movie with a switchblade-springload action for extra cool factor. And, unlike a lot of phones at the time, it was slick and sleek and tiny.
So what the hell is this thing? It looks like a walkie-talkie for grade school kids, not the "ultimate conversation piece".
[ ... 942 words ... ]
-
...Personally, I'm getting SICK of running into my OWN BLOG while doing research into any of the topics that I've ranted about here. I spend a couple posts talking about a technology with questions or thoughts, then later I go to implement this tech and need specifics and yet 2 or 3 of the top ranks are filled with my annoying blather. Urgh!...
Source: Russell Beattie Notebook: Those who Live by Google...
Amen. More and more, I'm running into myself on Google. I'll be looking for expert information
on something I'm trying to tinker with, and discover that one or more of the top
search results are me writing about looking for expert information on the thing I'm
trying to tinker with. Only occasionally do I find myself having actually provided
the information that I'm currently seeking.
I mean, it's not a gigantic shocker-- my interests are relatively stable over
time, and I circle back to things after long periods of time, so this is to
be expected I suppose. But I'm starting to feel like I'm in a bad time travel
movie.
[ ... 317 words ... ]
-
I've never gotten much spam. I closely guard the email addresses I care about. Spamex makes it simple but I did it before without Spamex. My problem is information overload. I'm much more interested in seeing the same thing for RSS. Instead of blocking stuff I don't want I want it to highlight the stuff I might want.
I've been out of the loop lately because I can't keep up with all of the feeds I would like to monitor. I need help.
Source: matt.griffith: Where is RSSBayes?
Ditto. Using a Bayesian approach, or some other form of machine learning, as applied
to my aggregator and my viewing patterns is something I've been wanting for a while now.
I've done some very, very primitive self-monitoring with AmphetaOutlines, but I'd
like to get back to some machine learning research and build an enhanced
aggregator that learns what makes me click.
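To sketch what I mean by "learns what makes me click" -- and this is a crude stand-in in Python, not SpamBayes or anything from my aggregator -- count words in clicked versus skipped items and score new ones by how clicked-like they look:

    import math
    import re
    from collections import Counter

    clicked_words, skipped_words = Counter(), Counter()

    def tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    def train(text, was_clicked):
        (clicked_words if was_clicked else skipped_words).update(tokens(text))

    def score(text):
        # Log-likelihood ratio with add-one smoothing; higher means
        # "more like the items I clicked on."
        total_clicked = sum(clicked_words.values()) + 1
        total_skipped = sum(skipped_words.values()) + 1
        s = 0.0
        for w in tokens(text):
            s += math.log((clicked_words[w] + 1) / total_clicked)
            s -= math.log((skipped_words[w] + 1) / total_skipped)
        return s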
[ ... 216 words ... ]
-
Yeah, I know I gave the iTMS a 'bah'
last week in response to discovering DRM under the hood. But I've softened in
my opinion since then. And bought a few more songs that I haven't heard in years.
And burned an Audio CD. And wasn't too inconvenienced.
My girlfriend and I almost
bought iPods last night, and though we resisted the temptation this time, I expect that
we'll end up with them before long. And when that happens, I imagine we'll try sharing
tracks, and that doesn't seem to be too inconvenient either. And then, there's the
fact that the iTMS seems to have a pretty nifty set of underpinnings that look like
fun to play with.
So now, like anything I'm interested in on the interweb, I want to swallow it up with my aggregator.
Thus, I attempt a new project: ItunesMusicStoreToRss
Progress so far, but I've hit a stumbling block. Anyone want to help?
Update: A little bit of cut & paste from the wiki page:
If you spy on iTunes while browsing to a "Just Added" section of a genre, you'll find that a URL like the following is accessed:
(it's a long URL)
The response to that URL is some very interesting XML that looks like a GUI language. Buried in the GUI recipe, however, is what I want flowing into my aggregator. So, I dust off my XSL skills and have a go at mangling this content into RSS. I seem to have been successful. A test run appears to validate, and is accepted in my aggregator.
The problem, though, lies in the aforementioned URL. Everything seems pretty clear and straightforward, and I can change genres by supplying discovered IDs to the id parameter. However, the "fcid=145690" parameter is an unknown to me. It seems to change, though I haven't yet investigated its derivation or how often it changes. I was working on things yesterday, and the value was one thing, this morning it was another. If the number is not valid, unexpected results happen, sometimes resulting in HTML output describing an application exception. So, until the fcid mystery is solved, I've yet to automate this transformation.
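For the curious, the automation I'm aiming at is roughly this -- a Python sketch using lxml, with a fake placeholder URL (I'm not reproducing the real one here) and a hypothetical stylesheet name:

    import urllib.request
    from lxml import etree  # assumes lxml is available

    def genre_to_rss(genre_id, fcid):
        # Placeholder address; the real store URL is long and not shown here.
        url = "http://store.example.invalid/browse?id=%s&fcid=%s" % (genre_id, fcid)
        doc = etree.parse(urllib.request.urlopen(url))
        transform = etree.XSLT(etree.parse("itms-to-rss.xsl"))  # hypothetical XSL file
        return str(transform(doc))

    # The missing piece: where does a currently-valid fcid come from?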
Any ideas out there on the lazyweb? Visit the wiki page (ItunesMusicStoreToRss) and feel free to poke fun at my XSL skills.
[ ... 716 words ... ]
-
Posting this just in case anyone needs it. I've been getting the
following strange message lately in logs and consoles under OS X:
## Component Manager: attempting to find symbols in a component alias of type (regR/carP/x!bt)
As it turns out, I had just installed Toast. A quick Google search
leads me to blame Toast and remove a QuickTime component supporting Video CD.
That's pretty obscure. Hmph. So much for never again worrying about strange drivers and
cryptic error messages under OS X. :)
[ ... 122 words ... ]
-
How unobservant am I? It took
an article from The Register
to make me realize that this new Apple music store does indeed
use DRM to lock up purchased music. The files aren't mine. (Though, whoopie, I'm
allowed to use them on up to 3 computers.)
Crap. No thanks. I guess
that 10% of the catalog doesn't look quite so attractive at a buck-a-song now.
I mean, I've already destroyed one computer
with a tumbler of water, why would
I want to lose all my music with the next one I douse? And what happens in 10
years or so, when I want to listen to all those hypothetical Talking Heads
tunes I bought? Of course, I'm still listening to CDs and tapes I acquired back
in junior high, and I don't need to query anyone's permission to do so.
I hereby bestow this award upon the Apple Music Store:
[ ... 243 words ... ]
-
Umm... what is iSync doing? I didn't know that it had anything to do
with my bookmarks. Lately I've been using bookmarks more since I started
using BookmarkBlogger. Nearly every time I try dragging a bookmark into a
toolbar folder, though, I'm rebuffed by this dialog. What gives? The bookmarks
don't show up on my PDA, or my calendar. With what are they being synched?
I see that Scot Hacker has
discovered the same thing happening to him. Lots of comments, but still no
answer as to what's up with this. Hmm.
[ ... 230 words ... ]
-
Okay, so in between all the other hecticity currently ongoing in my life,
I managed to check out Apple's new music service. Although I'm not interested
in approximately 90% of the music offered so far, that still leaves me with
2000 songs whose "buy" buttons call my name. The process is simple, the files
are mine and not locked up with DRM, and although I hope and expect the price
structure to change (ie. maybe price based on popularity?), a dollar a song
isn't horrendous considering that I get what I want on demand and without
hopping in the car and going anywhere. So far, so good.
So... This got me to thinking in the last 10 minutes: What about an indie
clone of the Apple Music Service? One using RDF or some other XML format to
offer up the catalogues of record labels? Include all artists, albums, songs,
and any various and sundry bits of trivia about all the above. Establish a
modular mechanism for specifying payment process (ie. paypal, credit card,
free, upload a song), and make the whole interface as slick and easy as
iTunes'.
The real trick I see in this, though, is to make the file format for music
vendors fairly easy yet flexible. It should be as easy as, or easier than, an
RSS feed for a blog. Let a hundred of these mushrooms bloom, aggregate,
search, and buy. Make it distributed and not dependent on any particular
music company or technology company.
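Hand-waving about what a catalog entry in such a format might look like -- completely made up, just to show the flavor I have in mind:

    <catalog label="Hypothetical Records" updated="2003-05-01">
      <artist name="Some Band">
        <album title="First Album" year="2002">
          <track num="1" title="Opening Song" preview="http://example.com/previews/1.mp3"/>
          <track num="2" title="Second Song" preview="http://example.com/previews/2.mp3"/>
          <payment type="paypal" account="sales@example.com" price-per-track="0.99"/>
        </album>
      </artist>
    </catalog>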
Not a terribly original idea, but that's what I just thought of. Sounds
like a good semantic web app that could have some umph going for it in
the immediate future.
[ ... 366 words ... ]
-
I take rsync for granted. It's just the best way to keep stuff out there
up to date with stuff over here, and vice versa. And lately, I've been
using it to supplant my usage of scp. And it works. Brilliantly. And until
recently, I hadn't stopped to realize: Hey, you know, this thing somehow works
out differences between files out there and over here long before
what's here gets there. I know, duh, but I just hadn't considered it.
Well,
Paul Holbrook reminded me
of this tonight, with links to Andrew Tridgell's
paper on the algorithm, among other things. Damn, things like this remind me
that I'm supposed to be getting my butt back into school...
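For my own notes: the clever bit in the paper is the rolling weak checksum, which lets the receiver slide a window across its file one byte at a time and cheaply test each position against the sender's block signatures. A toy Python rendering of just that piece (real rsync pairs it with a strong per-block hash):

    M = 1 << 16

    def weak_checksum(block):
        # block is a bytes object covering one window.
        a = sum(block) % M
        b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
        return a, b

    def roll(a, b, out_byte, in_byte, block_len):
        # Slide the window one byte: drop out_byte, append in_byte.
        a = (a - out_byte + in_byte) % M
        b = (b - block_len * out_byte + a) % M
        return a, b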
[ ... 179 words ... ]
-
Was thinking about learning new tech, and well, I haven't gotten much
farther into .NET
as I'd wanted to,
given moving and a sudden appearance of work. Instead, I've had
a project hurled at me that combines Flash MX and XML, something I've
just barely touched before.
Wow, is this stuff fun. And I write that without a trace of sarcasm--
this app I've inherited for maintenance and extension, though it's a
bit grungy and shows some signs of late-night expediency, does some
neat things with actions and buttons dynamically wired up in response
to XML config files loaded via URL, which are in turn generated from a
database managed by a simple end-user admin interface. Not sure how
much more I should write about it, but I'm dying to post a link. Of
course, this isn't revolutionary stuff in general. It's just a revelation
to me.
The last time I had an opportunity to really, really dig around in
Flash was just about when Flash 5 came out. I was immersed daily in
Flash in the years between the initial acquisition by Macromedia, up
through version 4, and just started drifting away, sadly, when things
started getting really interesting with v5. That was when my daily
job swung entirely into the backend, and client-side concerns were
no longer my task.
But now I'm back in this old neighborhood, and I can see why some
people would love nothing better than to build entire websites in
Flash. Yeah, that's evil, but it's sexy. Despite some clunkiness,
there are some very nice possibilities I see now. I love Java, and
loved cobbling together funky applets 5 years or so ago, but Flash
makes me want to toy with rich client-side apps again.
And then, there's this
Sony Clie PEG-TG50
handheld I've recently started pining for, and it appears to run a
Flash 5 player. It's probably underwhelming, but who knows?
Anyway, back to work, but just had to spill some enthusiasm for
Flash.
[ ... 543 words ... ]
-
Mark Pilgrim asks: What's your Winer number?
I duck & cover under our new futon. The one that I broke already, but it's better than nothing.
[ ... 27 words ... ]
-
Wow. Missed this from Mark over
the weekend, and then further missed Sam's link to it
and the subsequent whirling mass of shitstorm that rolled past in its wake. Well, at least everyone
just ends up being an asshole in the end, and no nazis or ethnicities
or monsters were invoked in the process. And no one co-opted any acronyms, though I think
someone got ketchup on their tie. Hope it wasn't silk. And does anyone know if all us assholes
are actually alive or dead in Schroedinger's Trunk?
[ ... 127 words ... ]
-
Finally got around to reading Cory Doctorow's Down & Out in the Magic Kingdom,
and though I loved it, I wish it were longer. Or, at least,
I'd love to see more stories from the same setting or playing with the
same themes of the Bitchun Society. I have seen some of these things
in stories before, though. So, hey, I haven't posted anything here in
a few days - have some babble and book links (feel free to comment and
leave some more links):
Of course I love the notion of ubiquitous computing and personal
HUD's. I've babbled about that at length for sure. If you want more
of that, go check out Vernor Vinge's
Fast Times at Fairmont High.
Mediated reality with P2P computing woven into clothing and projected
across contact lens displays. A little less obtrusive than
seizure-inducing in-brain electronics, but just as post-human.
And then there's backup-and-restore and the cure for death. Although
in David Brin's
Kiln People,
things start with disposable
doppelgangers, survival of personality after bodily death is promised
in the ending. What could change human nature more than transcending
mortality?
As for deadheading, check out Vinge's
Across Realtime
series. In
particular, read up on bobbling in Marooned in Realtime. There's also
Orson Scott Card's The Worthing Saga.
A one-way trip into the far future
through geological periods of time seems particularly external to
known human experience, especially when combined with immortality.
One thing I've yet to see much in stories or speculations is how
society could function under post-mortality and post-scarcity
conditions. I've never been satisfied with the way Star Trek dodges
the day-to-day realities of a post-capitalistic Federation of plenty.
Walter Jon Williams' Aristoi
explores an interesting track with a
meritocratic society whose top members have godlike powers matched to
godlike creativity and self-possession (not to mention possession by
multiple selves).
But so far, Whuffie and its currency in reputation is the best game
I've seen yet. Since, even if the problems of mortality and material
scarcity are solved, human attention and cooperation will never be
gratis. So, how else do you herd the cats when you can neither
threaten nor reward them via any physical means? Seems like the
blogosphere, gift culture, and open source noosphere brought to
reality.
Kinda makes me want to get back to fiction writing meself and finally
get out the dozen or so stories I've had bouncing around in my head
these past years. Doesn't necessarily mean I'd churn out anything
good, but who knows? Maybe after some work and some stumbling I could
produce something passable. All those creative writing classes in
college and short stories in spiral-bound notebooks from high school
have to count for something. I'd even love to squat in the Bitchun
Society for a few stories, but that might be a bit presumptuous, even though
Mr. Doctorow himself has let on
that he's not likely to write more
tales from the same Bitchun universe.
Better to get some practice in before jamming in someone else's club.
[ ... 703 words ... ]
-
In my intermittent online presence, I've been happily watching Dave Winer's
ramp-up with OPML toward OSCON with things like
"How to implement an OPML directory browser".
I love outliners, and though it's been a little while since I played with Radio, I loved Instant Outlining
and the transclusion of outside outlines into the
one at hand via URL. And when Dave first introduced the idea of an OPML-based distributed web directory,
I figured it was the start of another nifty twist in the web fabric. (You know, the kind that makes wrinkles?
The kind of wrinkles that make brains smarter?)
Anyway, even given all this, OPML has always bugged me, and I'm not alone. In fact, today I found a
link to OML, the Outline Markup Language project on SourceForge,
which seeks to address many of the same things that have bugged me. That is, things like
UserLand application-specific tags, and extension via arbitrary attributes. Though I'm no master
of deep XML magic, these things struck me as grungy.
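To show what I mean, here's a bare-bones OPML file (the feed details are invented). The expansionState and window* elements in the head are the kind of application-specific baggage I'm grumbling about, and piling arbitrary attributes onto outline is the grab-bag extension mechanism:

    <?xml version="1.0"?>
    <opml version="1.0">
      <head>
        <title>My subscriptions</title>
        <!-- outliner GUI state, baked right into the interchange format -->
        <expansionState>1,3</expansionState>
        <vertScrollState>1</vertScrollState>
        <windowTop>100</windowTop>
        <windowLeft>100</windowLeft>
      </head>
      <body>
        <!-- extension happens by adding whatever attributes you like -->
        <outline text="Example feed" type="rss" xmlUrl="http://example.com/index.rss"/>
      </body>
    </opml>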
But you know, we're designing for recombinant growth with the lazyweb here
(or at least, Dave Winer was and is), and OPML looks like one of those dirty things that got
thrown together quickly in order to enable a laundry list of further projects. It works, isn't
all that hard to grasp, and has gotten adopted quickly. There's momentum there. As
Dave says, there is no wait for
tools.
So, now there's also OML starting. Hopefully this won't become another rehash of the RSS fight, because I sense many similar issues between the two. Maybe it would have been better still if OML had been named something
completely avoiding the letters O, P, M, or L. I already see mailing list charters being called out in order to quiet unwanted
discussion of fundamentals, but, hopefully we can avoid anyone claiming that they have the One True Format, all fighting for the
same name and slapping each other around with version numbers. Gah.
Anyway. I like OML but see some grunge in it as well. At the moment, I'm using an OPML-supporting tool. I can't imagine that
conversion would be more than an XSLT file away. Well, maybe more than that. Beyond that, let's agree to disagree and viva le
fork. Let the best, most demonstrably capable format win. Meanwhile, I'm still considering that Tinderbox license to see if I
might like multi-dimensional outlining...
[ ... 741 words ... ]
-
So yeah, I wanted tabs in Safari. Or something like tabs.
But since discovering that the "Window" menu in OS X is not painful to
use, and that CMD-Backquote rotates between open windows, I've not missed
tabs an incredible amount. I think my tab usage was a response to my experience
with Windows and Linux window managers, though even those have changed since
I started using tabs in my browser.
However, I still love using pwm as
my window manager under X11, with its tab collections and windowshading. Go
figure.
[ ... 91 words ... ]
-
Been out of touch with my usual online haunts as of late, and have barely had time to think straight,
let alone write.
My girlfriend and I have just moved from one end of town to the other, and we have an overlapping
month of leases between the new place and the old. In theory, this is plenty of time to accomplish moving
and cleaning and decorating. But, some things are hard to move gradually, especially when we both
drive compact cars and rental trucks cost by the hour. And then there's the fact that our new residence
is on the third floor, providing much exercise to the both of us. So, it's been a hectic couple
of weeks so far, though well worth it for us to start settling down in new and improved digs.
And then, right in the middle of things, I was offered an opportunity for a few months' work as a
contractor for a startup located almost (but not quite) within walking distance of the new apartment.
So, I decided to snatch that up, and since then things have been doubly busy. Whew. This'll give me
some breathing room to figure out what's next, hopefully.
So anyway, here's hoping things settle down a bit and stop feeling like I've jumped the tracks. After
getting life in general into some semblance of happy chaos, I hope to get a chance to catch up on
happenings and maybe even take a stab at some of my projects around here again.
[ ... 340 words ... ]
-
0xDECAFBAD (I love that name) has a new design, powered by Blosxom. I like it. You're going in my bulging blogroll, Les.
Source: Time is Tight: 0xDECAFBAD v2.0
My referrer monitoring scripts have been out of action since shortly
after I revised my site design, so I've been missing links. Today, I
fired things up again for the first time since the beginning of the
month and caught the above. Wow, and I'm an Elite Geek, at that!
Welcome to 0xDECAFBAD! You can do anything at 0xDECAFBAD! The
unattainable is unknown at 0xDECAFBAD! Yes, this is 0xDECAFBAD, and
welcome to you who have come to 0xDECAFBAD! Welcome!
(Please tell me someone out there knows what I'm going on about.)
[ ... 232 words ... ]
-
Sheesh. Okay, come back to us when you get a real phone.
Source: Mobitopia: Slashdot - Life in 3G
So says Mr. Beattie. And I say, "Hey, that's my phone!" And then I
say, "Oh yeah, that's right, that's my phone." I like my phone, it's a
nice phone. I had a Treo Communicator, but it went kaput. I thought the
A500 would be a decent 3G device. It's not. But it's a nice phone...
[ ... 75 words ... ]
-
Apologies in advance if this post-cum-essay runs a bit long...
We could conclude that modern human intelligence is an unfinished product, and something that nature hasn't quite got around to polishing yet. The problem-solving intelligence part can be tuned and revved up to high levels, but it becomes unstable like early supersonic jet prototypes that shook themselves to pieces just after reaching the sound barrier. Nature has outstripped itself, producing a freak organism with a feature that's obscenely over-developed but under-refined. We've seen examples of evolution getting ahead of itself before, like the rapid conversion to an erect, bipedal skeletal frame without properly modifying the spine to withstand the back-aching load of pregnancy. To get a better grip of human failings, and human stupidity, you have to realize that modern Homo sapiens sapiens just isn't done yet.
Source: Disenchanted: * Early prototype, expect instability
When our own instincts are inadequate, or become a hazard, and the surrogate activities to control them aren't sufficient anymore, then there certainly will be a push to change human nature to fit his new, self-crafted niche. And the answer to my original question -- that man will invent something that knocks him out of his niche with fatal consequences -- is yes. Homo sapien will die, and homo modified will inherit the earth.
Source: Disenchanted: Invent this and die
There's only an essay or two per month published over at Disenchanted,
but they're gems, each and every one. And what I read almost never
fails to resonate with something I've been thinking or musing about,
from my perspective as a geek wondering about life, the universe, and
everything and as a fan of Kurzweil, Vinge, and all of post-humanity.
But my anticipation of the Singularity is constantly swayed by things
such as the theses of the above quoted essays.
See, as an irredeemable believer in the ways of better living through
technology, I look forward to our increasing ability to further
self-improve and bootstrap to higher levels of living, longevity,
ability, understanding, and exploration. But, there's a neopagan
mystic and naturalist in me who keeps looking for the catch. There
must be natural limits we don't yet understand.
No matter the precocious cleverness of our species, there's got to be
plenty of good reasons it takes millions of years to achieve progress
in forms and patterns of life. There are lots of little subtle
details to be easily missed. We're smart, but not yet endowed with
the patience and wisdom that eternity grants. I both breathlessly
await and fear the arrival of our ability to fundamentally change
human nature directly through genetic manipulation and device
implantation.
As the first essay quoted above asserts, I believe the human species
is unfinished. But as with the second essay, I think we've outpaced
evolution in terms of changing the conditions under which the process
itself occurs.
Just look around you. You're likely indoors, in a building composed
of simple straight lines which register easily on your visual pattern
recognizers, with corridors and doorways and rooms proportioned to
your bodily dimensions. The air is conditioned to your respiratory
and temperature tolerances. Things are padded and accessible. Food
and drink are likely plentiful. The only predators you're likely to
meet up with during your day are of your own species. Nothing really
challenges your basic nature.
Yet, this is just what the universe has been doing to forms of life
throughout the history of evolution. Only now, we've jumped the
tracks, reversed the flow of control, and have reshaped our corner of
the universe to fit our status quo. So, where does that leave the
natural process of evolution with regard to us? Stopped or slowed to
a crawl, that's where. Maybe falling backward, since we have
prosthetics, glasses, and other forms of propping up imperfections
that would have otherwise been faced with disincentives by natural
selection.
So, where are we without a natural evolution? We're left as an
unfinished species, with a peculiar mix of awesome abilities matched
with amazing disabilities. Very clever people, but with a lot of
blind spots. There are certain ways in which it is very difficult and
sometimes nearly impossible for us to think. We have biases toward
grouping things by similarity, dividing them by difference - which
allows for a very elegant economy of memory and thought, but also enables
peculiarly devastating things like racism and xenophobia.
Critical thinking is counterintuitive, yet is one of our most powerful
tools.
And there are definite flaws in our perceptions of reality, as any
book of optical illusions will tell you. One thing that struck me
like a thunderbolt came from a human biology class: Ever try following
a common housefly with your eyes? Isn't it frustrating how it just
seems to vanish from your sight? I can't find a reference to back
me up, so this is just from memory: I was taught that flies have
developed a particularly zig-zaggy and erratic flight pattern to evade
just our kind of mammalian vision system. But, studies of fly-eating
frogs have shown that their vision systems appear particularly tweaked
to react to a fly's midair dance. Imagine what else slips past us, or
comes to our attention garbled because our very apparatus contains
biases of which we've yet to even conceive.
Here we are, then, flawed and incomplete yet with a growing ability to
self-modify. As an amateur computer scientist, I shudder a bit at any
code that's self-modifying. It can be done, and it can be powerfully
enabling, but it's just so damn easy to blow a foot off with the
technique. So too with ourselves, then. There's a possibility that we
can push ourselves into a richer level of thought and perception and
ability without destroying ourselves completely. But, we're going to
miss things, important things.
If we're lucky, we'll roll with it and survive. But, as the second
Disenchanted essay explores, we'll most certainly render the species
as we know it extinct, and push ourselves out of a natural niche and
into a wholly artificial niche in need of perpetual maintenance and
renewal. Maybe this artificial niche will be easily sustained and
portable enough to take with us if we want to leave the planet. On
the other hand, maybe this artificial niche will prove our undoing as
it outstrips our ability to keep it up.
So, given all this, I think the inevitable predicted verticality of
the Singularity's curve has an incredibly strong counter-force
stemming from human nature itself. What does this mean? Not sure.
What to do? Not sure. But it tells me that the Kurzweilian and
Vingian predictions of which I'm so fond face some severe reality
checks.
More thinking to do. Thanks to Disenchanted for making me think this
far today. :)
[ ... 1507 words ... ]
-
It comes out the box with an English manual, a PC link cable, the GP32
uses PC smartMedia as its Hard disk and has 8meg of internal ram + its
(upgradeable) OS, a USB port, a large hires screen (which is SO much
better than the GBA one), two stereo speakers (one on each side), a
joypad and 6 buttons (4 on front and 2 shoulder buttons), a 3v in
socket, a headphone socket, volume control, battery compartment (2xAA
for 10-14 hours) & an EXT out port which allows you to do many things
including using the gp32 on your TV or for wireless multiplayer.
...The console is open source and fully supports people making their
own programs for it, there is a GCC based devkit complete with
graphics and sound libs.
Source: GBAx.com review of the GP32
Just read a review of the GP32,
a handheld game console I'd never heard of before.
Pictures of it look amazing,
and the specs aren't too shabby either. Powerful enough to run
emulators of a sickening array of game platforms, uses SmartMedia
cards, supports wireless multiplayer via cell phone. And, oh yeah, it
looks like you can actually see the screen.
The biggest flaw I see in this thing is that it would be so easy to
pirate games for it. Supposedly there were some attempts to provide
for a mechanism to "lock" games to a particular handheld, but that
appears to already have been circumvented. So, while the thing looks
like a dream machine to me, it probably looks like a nightmare to game
producers.
Still, though, I want one. And I bet Russell Beattie wouldn't mind
one either, if he hasn't heard about it yet, given his
professed love for his GameBoy Advance.
And, speaking of Russell, I wonder just how well that wireless
multiplayer support works...
[ ... 307 words ... ]
-
Playing with a few other little widgets here and there, I thought I'd
fire off a new revision to the BookmarkBlogger for Safari I've been using
off and on. This one's a bit more OS-X-ish, and uses a properties file
for configuration instead of completely confusing command line options.
ShareAndEnjoy!
Also working on a lil DOM4J-, JTidy-, and BeanShell-based scraper
for producing RSS from arbitrary web sites. Yeah, it's been done before,
but not by me and not quite like this. And eventually I think I want to
try turning both this and the BookmarkBlogger into dual-purpose standalone
and AgentFrank plugin packages.
[ ... 132 words ... ]
-
So, while my time is mine, I've decided that I want to expand my
practical horizons. And, one of the first things I can think of is to
go lateral and approach something I've looked upon with mild disdain:
Microsoft technologies. In particular: .NET
I already understand Unix well enough to do damn near anything -- this
is not to say that there aren't still years worth of things left for
me to learn in that sphere, but I'm not nearly as adept with
Microsoft's offerings. And, besides the practical concerns with being
flexible enough to take on what work the world offers me, I also have
a hunch that this .NET thing will make me think as differently about
Microsoft as OS X made me change my mind about Apple.
Maybe. But it's still a good attitude with which a punk unixhead can
approach the subject, I think. I'm going to assume that brighter
people than myself have applied themselves to the creation of .NET and
prepare to be surprised. This attitude has always served me well in
the past when trying something new. (Take Python, for instance.)
Okay. Got a good attitude. Have installations of WinXP and Win2003
preview (which I'm kinda, grudgingly digging so far) running in
Virtual PC on my PowerBook. Could even draft a PC at home into
service running an appropriate OS if need be. Have downloaded the
.NET Framework and installed it on XP and Win2k3.
Now what? Were this Java, I'd pop open an emacs window and start
playing. I'd grab some free app servers and check some things out.
Being on a fixed budget, I don't think I can spring for any packages
like Visual Studio .NET. And being a unixhead, I'm used to being able
to find dev tools for free.
Anyway, this absolute newbie is continuing to poke around.
[ ... 1204 words ... ]
-
So, as I'm working to recover all the old tools I knew and loved on my
iBook, I see this on the
Searchling home page:
Work on Searchling has ceased to focus on its successor
-- iSeek.
...and there are no downloads for searchling available, neither binary
nor source. Harumph!
And the screenshots of iSeek don't please me much -- I see a search
field wodged into the menu bar in place of the nice, slick ghostly
search field that would materialize with a quick Cmd-Space or a click
of the magnifying glass. Gagh. My menu bar's already crowded enough
with menu entries on this 12" screen as it is.
But, if I'm completely wrong, and there ends up being a feature to
make iSeek work and feel just like my old friend the Searchling...
well, then I say congratulations to its author for cobbling together a
saleable lil widget, and I'll be waiting impatiently for its release.
:)
[ ... 237 words ... ]
-
Here's a little something I whipped up last week: BlosxomPaginate.
I've been using Blosxom and Blagg for my news aggregator lately, just
for a change, and one thing I was really missing was some way to see
entries that fell off the end of the front page.
So, I made this. It
lets me flip back and forth between pages of Blosxom entries, and I
even went so wild as to include full flavour-based template support of
the display of the navigation elements.
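The heart of it is nothing fancier than this paging arithmetic, shown here as a Python sketch for illustration (the actual plugin is Perl):

    def page_of(entries, page, per_page):
        # entries are newest-first; page is 1-based.
        start = (page - 1) * per_page
        chunk = entries[start:start + per_page]
        has_newer = page > 1
        has_older = start + per_page < len(entries)
        return chunk, has_newer, has_older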
ShareAndEnjoy
[ ... 342 words ... ]
-
I hadn't written much last week, between the job search and getting
myself hooked up with a new laptop. I'd meant to revise my initial
post announcing my being laid off - but instead, I simply lost it
when the iBook got doused.
So, to those of you who haven't heard my news: I was laid off a little
over a week and a weekend ago. No hard feelings or fireworks, just
typical bad economy reasons. Nevertheless, it took me by surprise.
So now I'm shopping my resume around.
If you're interested, my resume is available here:
http://www.decafbad.com/2003/03/l-m-orchard-resume.doc
http://www.decafbad.com/2003/03/l-m-orchard-resume.pdf
Last week was strange. Having been let go on a Friday, I had a
weekend to pretend things were all as usual. But, when Monday hit,
things were different. I still got up at the usual time, did the
usual morning things, and got out of the apartment as if I were going
to work. But instead of heading for the highway, I headed for a
coffee shop
near campus with wireless internet. Trying to keep the old
patterns as normal as possible, only now my job is finding a job and
getting myself in shape for what's out there.
It wouldn't be news to anyone to hear that the job market, at least
what I've seen of it so far, is nothing like the verdant plains and
valleys of even 3 years ago - which is about the last time I took a
serious look. After a first survey of a few job boards online, I
fired off a handful of resumes and apps, and took notes on what's
being asked for so as to prepare some semblance of a learning plan
while I'm off.
So by the end of last week, I'd accomplished these various and sundry
things:
5 cups of coffee consumed per day
1 resume updated and revised
6 resumes emailed and 4 online applications filled
5 profiles completed at online recruiting sites
1 application for unemployment filed
1 12" G4 ?PowerBook acquired and configured
3 Microsoft operating systems installed and configured under Virtual PC
1 .NET Framework installed and exploration begun
1 novel finished, Close to the Machine by Ellen Ullman
6 hours of Metroid Prime played
There's been more, but it's the amount of Metroid Prime play I'm most
proud of - had I not gotten out of the apartment in the morning, the
hours invested in that would have been immensely greater. Maybe after
I've fired off a few more resumes, I'll feel better about actually
taking a rest since my brane's been going full speed for months now at
work.
Thought a bit about striking out on my own with freelance work, but
the Ellen Ullman book has given me a bit of a strange mood. She makes
working for oneself sound both promising and desolate at once - though
the promising bits would seem to be the things that disappeared with
the 90's. So that leaves it sounding pretty unpalatable. Who knows,
though - I always wanted to work from (but not at) a coffee shop.
Well, back to searching and my first baby steps with .NET - wish me
luck. And if you happen to be in town, stop by and say haloo.
[ ... 812 words ... ]
-
Just for future reference: No matter what your cats think, iBooks
never get thirsty for a nice big tumbler of water. Nor do they
ever have a need to soak in the contents of said tumbler overnight.
Although now, I have an expensive, dead laptop that makes white
noise sounds not unlike the ocean when it's plugged in. And it
smells like the magic blue smoke when it's let out of the chip.
I just hope that the hard drive is recoverable. Updates will be
sporadic as I try to reconstruct my environment and remember passwords
and try to find serial numbers.
Oh yeah, and still on the trail of the job hunt. Going to be
tweaking some things around here as I have time, to make things
a bit more presentable for company.
[ ... 552 words ... ]
-
Simon Willison writes
that he'd read my blog more if I
pinged weblogs.com
more often. I used to, via MovableType, but my new blog doesn't. Enter
ping_weblogs_com,
a Blosxom plugin to ping weblogs.com. I've
just installed it.
Let's see if he notices. :)
[ ... 182 words ... ]
-
So on this day last year,
I was excitedly thinking about pipelining web services together like commands in a
UNIX command line shell. Lately, I've been doing quite a bit of work at the
command line level, more so than I ever have before. And for all the clunkiness
and inelegances to be found there, I think the zen has struck me.
Sure, it's an ass-ugly string of characters that connects commands
like find, sort, awk, sed, grep, and ssh together. But, in constructing such
monstrosities, I find myself generating new disposable tools at a rate
of at least one every minute or so. And, though a few have found themselves graduating
into fuller, cleaner, more general tools, I would have been stuck for
hours were it not for a quick multi-file grep across a vast plain of comma-separated
value files digested by a tag team of sed and awk. Then, like magic, I toss in
an incredibly slow yet, at the time, convenient call to mysql on another server
behind a firewall via ssh with a SQL call constructed from the regurgitations
of said sed and awk brothers.
So, I'm thinking again: How hot would this be if it were web services replacing
each of my commands? How hot would it be if there was a STDIN, STDOUT, and STDERR
for a whole class of web services? Imagine an enhanced bash or zsh piping these
beasts together. For awhile, I thought my XmlRpcFilteringPipe API was the way to
go, but lately I've been thinking more in the direction of REST. I have to admit
that the XML-RPC API is a bit clunky to use, and besides, no one's really paid
it much notice besides using it in the peculiar fashion I do to make my WeblogWithWiki.
How about this for a simpler API: Post data to a URL, receive data in response.
There's your STDIN and STDOUT. What about STDERR? Well, I suppose it's an
either-or affair, but standard HTTP header error codes can fill in there. What
about command line arguments? Use query parameters on the URL to which you're
posting. This all seems very web-natural.
Now I just have to write a shell that treats URLs as executable commands.
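To make the shape of that concrete, here's a rough sketch in Python of what such a pipeline runner might look like -- the tidy and XSLT service URLs below are invented placeholders, not real endpoints, and this is just the obvious approach rather than any particular implementation:
import sys
import urllib.error
import urllib.parse
import urllib.request

def run_pipeline(data, *stages):
    """POST data to each stage URL in turn; each response body becomes the
    next request body (STDIN/STDOUT), and HTTP error codes stand in for STDERR."""
    for url in stages:
        req = urllib.request.Request(url, data=data, method="POST")
        try:
            with urllib.request.urlopen(req) as resp:
                data = resp.read()
        except urllib.error.HTTPError as err:
            print(f"stage {url} failed: {err.code} {err.reason}", file=sys.stderr)
            raise
    return data

if __name__ == "__main__":
    # "Command line arguments" ride along as query parameters on each stage.
    tidy = "http://tidy.example.com/run?" + urllib.parse.urlencode({"indent": "yes"})
    xslt = "http://xslt.example.com/run?" + urllib.parse.urlencode(
        {"xsl": "http://example.com/scrape.xsl"})
    sys.stdout.buffer.write(run_pipeline(sys.stdin.buffer.read(), tidy, xslt))
A shell that treats URLs as executable commands would really just be this loop dressed up in nicer syntax.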
[ ... 725 words ... ]
-
Okay, I don't think I made this up: I was reading Wired Magazine a
few months ago, and I saw a phone featured in the Fetish section
that was designed like a Koosh™ ball or a sea-urchin.
The idea is that it would be used in a teleconference, thrown back
and forth across the room from speaker to speaker.
We need this at my work.
Has anyone else seen this thing, remember what it was called, or
where they're selling it? I can't seem to find it again in any of
the Wired issues I can find in my apartment and office.
[ ... 203 words ... ]
-
So, sometime around last November, my iBook started having battery
problems. It went from 3 hours of life, down to an hour, and finally
down to about 15 minutes' worth of life. Being lazy and busy, and
having my iBook mostly at desks near outlets, I put off taking it into
the shop -- I'd just taken it there to replace the hard drive, and I
didn't feel like parting with it again. Stupid, I know. Lazy, I know.
Well, since then, the problem hasn't gotten better, and I was just about
to get off my ass to do something about it when I see this:
MacNN: iBook users experience 10.2.4 battery bug
So after browsing around forums a bit, I learned how to reset my Power Management Unit,
did so, and discovered that the battery began to charge again. I left
the iBook off and watched the 4 LEDs on the battery gradually light up
over a bit of time while working on my desktop. Looks like the problem's
solved for now.
Ugh. I'm glad, at least, that it wasn't a physically dead battery.
[ ... 206 words ... ]
-
Oh yeah, and a giggle for me today:
Go to Google.
Enter "I'm Feeling Lucky".
Click "I'm Feeling Lucky".
What do you see? If you're seeing what I'm seeing, it's this very site!
Now, I'm not sure how long this will last, or whether it means someone at
Google H.Q. loves me, but it's pretty dern nifty.
Thanks to Nathan Steiner of web-graphics.com for the tip!
[ ... 439 words ... ]
-
Thanks for bearing with me out there in the blogosphere. This transition, though
smoother going than I'd thought, is still exposing some rough spots and things
I hadn't thought to check. Seems my RSS feed hasn't come through quite as intact
as I'd hoped -- and an unexpected bug in the rss10 plugin for ?Blosxom seems
to have caused some news aggregators to implode. Apologies all around!
But, I'm watching, and tweaking, and will be shortly reporting all that I've done
around here to change things. I'll be cleaning up and releasing my small pile of
blosxom plugins and patches, once I have a bit more time to do so. In the meantime,
I've got an error log rolling in one window, and I'm keeping an eye on comments and
email. Hopefully all this will be nice and smooth before the end of the week.
[ ... 151 words ... ]
-
Not much to see here yet, but I've burnt down my old weblog and
replaced it with this. Planning to start out simple and gradually
re-introduce features from the previous incarnation very slowly and
carefully. I've enjoyed many of the toys I've piled on top of this
blog, but it's time to revise and simplify.
I've also been thinking of expanding the focus around here a bit: Up
until now, this place has just been the home of my nerdy brane dumps.
But, I'd like to entertain the notion of opening the place up to more
of my writing. Assuming, that is, that I can reacquaint myself with
certain muses and notions of free time and management thereof.
I really appreciate every reader of this site, though, so I've tried
to minimize the impact of changes. Broken links are bad. Links to
individual blog entries from the old site should redirect themselves
to their newly converted counterparts. And, no matter what new trash
I start publishing here, the old RSS feed will continue to show mostly
nerdy brane dumps. Should you want to follow any expanded content I
start to spew here, you'll need to update your links and
subscriptions. It's up to you.
Anyway, thanks for reading, bear with me, and wish me luck.
[ ... 299 words ... ]
-
By the way, Namber DNS at mysteryrobot.com (found via DiaWebLog) is damn nifty. As I understand it, it works from a set of 256 very short and simple words. Assemble four of these, and you can represent any IP address. Seems like this would make for very easily remembered IP addresses, as well as fairly simple to recite over the phone.
For example: decafbad.com is sing.far.dry.today.mysteryrobot.com
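Just to illustrate the arithmetic, here's a toy version in Python with a made-up word list (I don't have Namber's actual 256 words, so the output below is not what the real service would produce):
WORDS = [f"w{i:03d}" for i in range(256)]      # stand-in for 256 short, simple words
INDEX = {word: i for i, word in enumerate(WORDS)}

def ip_to_words(ip):
    # each octet of the IPv4 address picks one word from the list
    return ".".join(WORDS[int(octet)] for octet in ip.split("."))

def words_to_ip(name):
    return ".".join(str(INDEX[word]) for word in name.split("."))

print(ip_to_words("192.0.2.1"))                # w192.w000.w002.w001
print(words_to_ip(ip_to_words("192.0.2.1")))   # 192.0.2.1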
[ ... 294 words ... ]
-
Whew. Still really busy. Times like these, I wish I had my Blosxom and link-blogger action going, because I'm still out here, grazing on the links everyone else is publishing. I haven't had much energy to write much while wrapping up this work project and getting AgentFrank lumbering about. And there hasn't been much I've wanted to say that others haven't said. So, at least nodding my head by echoing some links would make me feel like I'm still making some useful noise.
Maybe I'll have time for that burn down and rebuild next month. :)
[ ... 104 words ... ]
-
Oh, and I completely forgot to toss a link his way, but Kevin Smith of electricanvil.com is working on a Java PersonalWebProxy project also. With AgentFrank, I've been leaning toward patching the core together as quickly as possible to enable the plugins and scripting I wanted to play with. But it looks like Kevin's spending more time carefully architecting the core using Jakarta Phoenix & some homebrew proxy work. Would be nice to borrow from his work soon.
Who else has code out there that could be assimilated? :)
[ ... 218 words ... ]
-
Les has been very quiet lately, but that's because he's been heads down working on his Personal Proxy he's dubbed "Agent Frank" (it's got a little logo and everything). He just set up an Agent Frank ?WikiPage with download and install instructions.
I'm downloading now (it's pretty huge - like 11 megs), but the Wiki page has lots of good info, including Les' new acronym, PIIC. ... Very cool. I'm going to start playing right now.
...
Later... Urgh! It's GPLed! Bleh!
Source:The UPP Lives: OxDECAFBAD Launches Agent Frank .
So Russell noticed my late night release of AgentFrank. Cool! Hope it actually works for him.
Currently it's very big, because it's got everything in it, all the JARs and the kitchen sink from everything I thought I'd start using at some point. My actual original code is likely less than 100k so far, if that. Suggestions are more than welcome.
The same goes for the license - all I want out of this thing is to share it and get interested tinkerers tinkering. It'd be nice if anyone who tinkers with it gets credit for said tinkering, but that's about all I care about. Hell, if it gets incorporated into a commercial product, I'd like some credit, and some cash would be nice, but otherwise I'd just be flattered. Is there a license to cover that? Maybe I should research a ShareAndEnjoy license.
This first code dump is very much premature - I'm not even pretending that this deserves a version number. It's more a conversation piece and a tangible starting point to play with things I've been thinking about. It's 99% crap code that apparently works, at this point. I fully expect it to get rewritten before it rates a version number.
So... have at it. Play with it, make fun of it, send me patches and abuse.
[ ... 1343 words ... ]
-
I've been quiet - too quiet. Work's had me busy again, as has life in general. But I still have had something in the PersonalWebProxy works:
It's ugly, but it works and does stuff. And I was feeling pretentious enough to give it a quick logo and a wiki page. Enjoy!
[ ... 116 words ... ]
-
Of course, along with changes I want to make around here, one of the first is the design. Thinking I might follow in Mark Pilgrim's steps a bit, and just strip the thing down to essentials and then more carefully consider what I slap back on the thing. I've been meaning to pay more attention to his accessibility work for awhile now, among other things.
I'm also thinking of ditching Movable Type for pyblosxom - since although I want to tear down the hierarchical filesystem, there still are a load of decades-old tools that I know and love under UNIX to manipulate directories and text files. That, and the MT-to-blosxom converter that came with pyblosxom, along with some tweaks to the genericwiki preformatter, seems to have brought nearly all of my entries across without harm. I'll just have to work out some way to redirect requests for "old" URLs to the new content.
Of course, after that, I'll have to reconstruct my comments and trackback system, among other things... might be fun though.
Oh, and a PS to Wari Wahab of pyblosxom: It works just fine on my iBook, and I plan to use it to preview my entries before they get rsync'd up to my decafbad.com server. :)
[ ... 357 words ... ]
-
Now using Kalsey's SimpleComments MT plugin. Planning to integrate referrers into it at some point, also, along with an easy yea/nay interface via email or Jabber to ask me whether I want to allow a new referrer to be published or not. Having had my site used to advertise adult movies and anal sex this week was not appreciated.
This blog's first birthday is coming up, and though I doubt I'll have time, I've got a few things I'd like to renovate around here...
[ ... 157 words ... ]
-
Well, it doesn't look like I'm getting the new Java-based PersonalWebProxy code released last week or soon this week, but if you'd like something to poke fun at, try this...
BookmarkBlogger - a quick hack for Safari users to generate blog entries from bookmark folders.
Hope it's useful, bet it's ugly, but it was fun in the making.
[ ... 321 words ... ]
-
So... has anyone gotten to making a Perl engine for the Bean Scripting Framework? I can't seem to find a decent archive of the dev mailing lists, and the links from the Jakarta home page are broken. And, of course, Google doesn't help me much except to point me at all sorts of pages saying that BSF supports scripting languages "like Python and Perl", but without actually showing me the Perl money.
Well, if not, I have a horribly hackish and inefficient idea that might just work, involving either pipes or sockets to external perl daemons and extreme abuse of perl's AUTOLOAD and Java's reflection to build proxy objects. Yeah, yeah, someone could maybe embed Perl in a JNI-ish thing, but I'm not at the level of wizardry to be mucking about with Perl guts - nor do I want to be. But, I think this idea of mine just might work.
Why bother? Because it's depraved and possibly very fun.
[ ... 162 words ... ]
-
Mark Pilgrim is going to unplug for awhile.
Sounds like he's been going through the same woods I trudged through recently, or at least some paths in the same thorny forest. But, it sounds like he's gotten himself even more inextricably bound up in ties to work than me - so much so that he really needs to unplug even from personal net presence to escape. Not just to avoid falling into some abstract sheep-farming burnout, but to avoid the immediate reach of The Client.
So, not that I want to assume too much about you and me, but I think many of us are passionate about the things we're lucky enough to get paid to do - so much so that many of us do work-like things for play. And oftimes, actual work spills into play/personal time. Sometimes it's heroism, sometimes it starts as fun, but eventually, as Mark also recently observed, there remains no demarcation. No amount of human passion or personal love for work can survive when the demands of work inevitably grow to consume all available bandwidth. And y'know, no amount of human sanity can stand for long when one's capacity for effort is described as 'bandwidth'.
Bah. So how do you strike the balance, and where do you dig in? How many do you take for the team, and how many times do you shrug it off at five? The one thing that I saw as positive in the crash of the dot-com age was an anticipation of life-at-net-speed slowing down to something a bit more human, no longer powered by insane sums of money and crack-monkeys of hype. Are we getting there yet?
Good luck, Mark. I recommend trips to the zoo, and close observation of cats.
[ ... 329 words ... ]
-
Amen. I've always found it irritating that news aggregators insert their URL into the referrer field. ... It would be nice if there was some sort of browser header the aggregator could send to identify itself instead of using the referrer field. Oh, that's right, there is. It's called User-Agent.
The user agent field is designed for browsers, robots, and other user agents to identify themselves to the Web server. You can even add additional information, like a contact URL or email address. I'd like to see aggregators start using it.
Source:Kalsey Consulting Group: Referral Abuse.
Hmm, being mostly a standards neophyte, I thought this was a great idea, you know, NeatLikeDigitalWatches. I thought this was more a semi-clever overloading of the referer, rather than outright abuse. And this, I thought, was reasonably okay since there wasn't, I thought, anywhere else to stick a backlink to myself while consuming RSS feeds.
Well, yeah, now that I read some of the complaints against this use of referers, I agree. And, yes, now that I read the fine RFC, I see that the User-Agent string is more appropriate for this purpose.
So! From now on, hits from my copy of AmphetaDesk will leave behind a User-Agent string similar to this:
"AmphetaDesk/0.93 (darwin; http: //www.disobey.com/amphetadesk/; http: //www.decafbad.com/thanks-for-feeding-me.phtml)"
I tack my own personal thanks URL onto the end of the list within the parentheses. In addition, I no longer send a referrer string when I download RSS feeds. How did I do it? Very simply.
First, I modify my AmphetaDesk/data/mySettings.xml file by hand to supply a blank referer and a new user URL:
<user>
  ...
  <http_referer></http_referer>
  <user_url>http://www.decafbad.com/thanks-for-feeding-me.phtml</user_url>
  ...
</user>
Second, I modified AmphetaDesk/lib/AmphetaDesk/Settings.pm to account for the new setting:
...
$SETTINGS{user_http_referer} = "http://www.disobey.com/amphetadesk/";
$SETTINGS{user_user_url} = "http://www.disobey.com/amphetadesk/";
$SETTINGS{user_link_target} = "_blank";
...
Third, I modified the create_ua() subroutine in AmphetaDesk/lib/AmphetaDesk/WWW.pm to actually use the new setting:
sub create_ua {
    ...
    my $ua = new LWP::UserAgent;
    $ua->env_proxy();
    $ua->timeout(get_setting("user_request_timeout"));
    my ($app_v, $app_u, $app_o, $user_u) = (
        get_setting("app_version"), get_setting("app_url"),
        get_setting("app_os"), get_setting("user_user_url")
    );
    $ua->agent("AmphetaDesk/$app_v ($app_o; $app_u; $user_u)");
    ...
}
And voila - no more referer abuse. If you want to discover my thank-you message, examine the User-Agent string. Seems like this would be a good idea for all news aggregators to pick up. And if I get ambitious and have spare time today, I'll be sending off a patch to Morbus & friends later today.
Update: Gagh! This has been the hardest post to try to format correctly within the fancy schmancy auto-formatting widgets I have piped together. All apologies for content resembling garbage. I think I'll use this excuse in the future whenever I write something completely daft. (Which means I'll be using it a lot, most likely.)
[ ... 1145 words ... ]
-
I couldn't resist a bit of tinkering with NNTP, partly to follow up a little bit myself on RSS to/via NNTP, but mostly in fact to re-acquaint myself with the wonderfully arcane configuration of the majestic beast that is inn . In addition, there's been talk recently of aggregators moving out of the realms of satellite applications and into the browser itself. The Blagg and Blosxom powered Morning Reading page - my personal (but open) news aggregator - is already web-based, so I thought I'd have a look in the other direction.
Source:DJ's Weblog: Tinkering with RSS and NNTP .
I've been toying around with doing this with inn for quite some time now, so I'm happy to see someone else actually follow through and give it a whirl. And, using blagg with a plugin to do the posting seems just the right twist of clever.
Yeah, inn's a beast and meant for Usenet-scale beating, but it's Just There on many Linux installations. And blagg seems to do a decent job of prying content out of RSS feeds, with just a few regular expression incantations. DJ didn't have to reinvent an NNTP server, or create a brand new aggregator - just a few tweaks and glue, and two existing apps are joined in a completely new and interestingly complementary way.
Though one thing he says: "As I saw it, there are two approaches to newsgroup article creation ... Send items from all weblogs to the same newsgroup ... Send items from each weblog to a separate newsgroup." First thing I was wondering is: Why not cross-post the articles and have both?
And then there're the ideas for experimentation that come first to mind: "... Combining the various weblog trackbacking mechanisms with NNTP article IDs to link articles together in a thread; replying (to the newsgroup) to an article might send a comment to the post at the source weblog."
Kinda retro, kinda nouveau, joining the new distributed RSS net up with the semi-distributed NNTP net.
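For anyone who wants to try the gateway half of this without blagg, here's a rough Python sketch -- it assumes a local news server with a group like rss.weblogs already created, leans on the standard-library nntplib (which the newest Pythons have dropped, so treat that as an assumption too), and only understands the simplest RSS:
import io
import nntplib
import xml.etree.ElementTree as ET
from email.message import EmailMessage

def rss_items(rss_bytes):
    # pull (title, link, description) out of a plain RSS 0.9x/2.0 feed
    root = ET.fromstring(rss_bytes)
    for item in root.iter("item"):
        yield (item.findtext("title", ""),
               item.findtext("link", ""),
               item.findtext("description", ""))

def post_item(server, group, title, link, description):
    msg = EmailMessage()
    msg["From"] = "rss-gateway@localhost"
    msg["Newsgroups"] = group          # list several groups here to cross-post
    msg["Subject"] = title or link
    msg.set_content(f"{description}\n\n{link}\n")
    server.post(io.BytesIO(bytes(msg)))

with open("feed.xml", "rb") as feed, nntplib.NNTP("localhost") as news:
    for title, link, description in rss_items(feed.read()):
        post_item(news, "rss.weblogs", title, link, description)
That also suggests an answer to the one-group-or-many question above: cross-post by listing several groups in the Newsgroups header and you get both layouts at once.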
[ ... 389 words ... ]
-
So I'm thinking that I might prematurely release some code before the week is out, so anyone who's interested can point and laugh at my PersonalWebProxy exploits - this time in Java.
One thing that disturbs me a bit about this thing so far is that, for what I have in mind, I'll have built a mini-OS when all is said and done. It'll have a web server, a web proxy, telnet-able shell, scripting languages, scheduler, full text search and index engine, persistence & metadata storage, and whatever else I can eventually think to toss in. There are just so many nice toys for Java, and most are a snap to glue together. But, I can't really use any of the toys that come with the OS itself.
It's something I've rambled on about before, as has Jon Udell in his old Byte column: Zope Lessons Learned. If this thing is to run on more than one platform, it can't rely on the facilities of any particular platform. So, all these lovely things I like OS X for are somewhat off limits.
On the other hand, if I get tired of doing this thing in Java, I could always just finally embrace the platform and go straight for Cocoa. :) Yes, that would make for 3 environments tried, but hey - it's still fun for me!
[ ... 435 words ... ]
-
interestingly, it seems that besides myself there are a goodly number of people wondering about the etiquette surrounding foaf friend declaration. while it's mostly a social and not technical problem, it's precisely the sort of thing that will keep foaf from reaching any kind of critical mass.
Source:snowdeal.org, ex machina.
I've wondered a bit about this, too. If I've heard of you, can I list you as a friend? If I've emailed you once or twice? How about if I've dated your sister?
However, Eric Vitiello Jr. has an interesting schema for further specifying relationships in FOAF.
[ ... 283 words ... ]
-
Log::Log4perl is different. It is a pure Perl port of the widely popular Apache/Jakarta log4j library for Java, a project made public in 1999, which has been actively supported and enhanced by a team around head honcho Ceki Gülcü during the years.
The comforting facts about log4j are that it's really well thought out, it's the alternative logging standard for Java and it's been in use for years with numerous projects. If you don't like Java, then don't worry, you're not alone -- the Log::Log4perl authors (yours truly among them) are all Perl hardliners who made sure Log::Log4perl is real Perl.
Source:perl.com: Retire your debugger, log smartly with Log::Log4perl! .
Wow, I hadn't noticed this before. We've been looking for a Log4J-workalike for our perl-based web apps at work, and thought CPAN:Log::Agent was where it's at - and it still may be - but CPAN:Log::Log4Perl looks very keen now.
[ ... 297 words ... ]
-
A new and better way to experience the "Dive Into" empire! For only a few cents a day, you get fast, uncluttered access to your favorite "Dive Into" sites, with premium features available only to subscribers.
Source:Dive Into Premium .
Finally! All of those pop-ups, pop-unders, DoubleClick cookies, and epilepsy-inducing banners were really getting to me. And if Mark Pilgrim can do for full frontal nudity what he did for web accessibility, I'm sure we're seeing the start of something big here.
[ ... 176 words ... ]
-
And one more post for the night: I wish Safari gave AppleScript access to read and manipulate bookmarks. If it does, I can't find it. I've been playing around with AppleScript folder actions, Matt Webb's link blogging folder hack, and BlogScript. I've been thinking, for good or bad, I'd like to do more link blogging. Well, in Safari, I've created a toolbar bookmark folder called "READ/BLOG QUEUE" into which I drop links for later reading and/or blogging.
So... If I could get at that bookmark folder via AppleScript, I could schedule and generate a templated blog entry for auto-posting every night, listing just the links I've left in that bookmark folder, and clear it out when it's all done. I could do the same thing with just a Folder Action enabled desktop folder, but it's just so much more convenient to drop things on the toolbar.
And then, there's the other wild idea I'd use scriptable bookmarks for: RSS aggregation. Imagine bookmark folders dynamically generated and updated from RSS feeds. Maybe even one big bookmark folder with RSS items aggregated from many feeds. This seems somewhat appealing to me.
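Lacking the AppleScript hooks, the bookmarks file itself might be enough to get at that folder. A sketch in Python, with the caveat that the layout of Safari's Bookmarks.plist assumed here (a Title on folders, Children arrays, URLString, and a URIDictionary carrying a title) is a guess at the structure, not documented API:
import plistlib
from pathlib import Path

def find_folder(node, name):
    # depth-first hunt for a bookmark folder by its title
    if node.get("Title") == name:
        return node
    for child in node.get("Children", []):
        found = find_folder(child, name)
        if found:
            return found
    return None

def links_in(folder):
    for child in folder.get("Children", []):
        if "URLString" in child:
            title = child.get("URIDictionary", {}).get("title", child["URLString"])
            yield title, child["URLString"]

with open(Path.home() / "Library/Safari/Bookmarks.plist", "rb") as f:
    bookmarks = plistlib.load(f)

queue = find_folder(bookmarks, "READ/BLOG QUEUE")
if queue:
    for title, url in links_in(queue):
        print(f'<li><a href="{url}">{title}</a></li>')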
[ ... 230 words ... ]
-
Have I mentioned lately that I ♥ BeanShell for Java?
I haven't said much about it lately, but I'm still working on my PersonalWebProxy - only this time I'm playing with Java and all the goodies I was wishing for while in Python. I've got Jena and Lucene and HSQL and BSF and Quartz and Muffin and... well, a lot of stuff that feels pretty nice to me.
But, with respect to BeanShell in particular, I've got a lot of the nifty live hackability that I had with the things I was playing with in Python. With no more than 5 lines of code, I've tossed a live interactive shell into my running proxy, into which I can telnet or access via Java console in a browser. With this, I can get into the machinery before I take the time composing a UI, inserting/removing plugins at will, tossing together new proxy filters on the fly, composing RDQL queries adhoc, tweaking Lucene searches.
Fun stuff, and so easy. But, sorry, no further code from me yet. It's very ugly, and barely works, but it's just a sketchpad at the moment. I hope to have a little something before the end of the month to pass around, should anyone still be interested.
[ ... 235 words ... ]
-
Apple's Script Editor 2.0 for OS 10.2.3 has support for Application Services . Basically, you can hilight some valid AppleScript text in any supporting application (like Safari, for instance) and
execute the script [or]
get the result of the script [or]
put the text into Script Editor.
Source:Mac OS X Hints: Run AppleScripts via system services.
Neat! Now... can we do something about making access to those services a bit more prominent? System-wide functions buried in a menu obscurely labeled "Services" under an app-specific menu label doesn't seem very inviting or intuitive. AppleScript gets insanely better treatment than this. I'd like to see these services pushed just as far forward, and more easily discoverable. It's been awhile since I played in Cocoa, but for awhile I was wondering if I could rearrange things a bit, maybe pull the service items for the currently active app into a menu extra or something. Or, at least pull the menu up a level, maybe stick it next to the Script menu where appropriate. And, I'd love to be able to customize the keyboard shortcuts assigned to them - some have shortcuts and I'll rarely use them, while others have no shortcut and I wish I could use them all the time.
It's good stuff, Apple, show it off.
[ ... 291 words ... ]
-
Vellum is a server-hosted application to run weblogs for you. It's like MovableType or b2, in that it's hosted on your web server. And it's written in Python.
Source:Vellum: a weblogging system in Python .
Need to check this out, have been itching to revamp this place and rethink what I want to do around here since it's coming up on my 1st full year out here in the blogosphere at large.
Funny thing, too, is that the coffee-ring-like background image on that page looks exactly like some of the first designs I played around with for my site. Only his looks much better than mine did. :)
[ ... 109 words ... ]
-
Apple's Cocoa library contains two very good table controls, ?NSTableView and it's close relative, ?NSOutlineView . However, both of these controls have one large limitation: All the rows must be the same height. This is an issue when displaying table cells with content that varies in height, such as large amounts of text or images. Luckily, Apple's Cocoa controls were also very well designed, making it possible to add this functionality simply by subclassing the table views. ?RowResizableTableView is an ?NSTableView subclass which allows each row to have variable heights, and ?RowResizableOutlineView is an ?NSOutlineView subclass with the same functionality.
Source:RowResizableTableView: Variable Row Height Tables and Outlines .
Wow. Although I think my Arboretum project is very likely asleep for good, it was the want for this particular component that most discouraged me from continuing with my outliner. Nowadays what keeps me away is the absolutely brilliant quality of OmniGroup's OmniOutliner and the fact that it supports AppleScript, and furthermore, the fact that AppleScript has its tendrils in abso-freaking-lutely every app that seems to matter to me. So, I can pretty much have everything I wanted to have out of Arboretum right now.
But I'd still like to come up with a decent project in Cocoa for myself. It's just so damn comfy to develop with.
[ ... 216 words ... ]
-
I've been wasting some of my time playing the superhero role-playing computer game Freedom Force, which turns out to be done in Python.
Source:The Happiest Geek on Earth: Python for superheroes .
Swanky! I've been thinking about getting that game, off and on, wondering if my aging 600Mhz desktop PC would run it. It looks like a hoot - and if it's that nifty under the hood, it should be pretty fun to hack with occasionally. :)
[ ... 78 words ... ]
-
It's only day two, yet I have nothing to Moblog today. Yesterday I was bopping around the city from my main office to the Sun testing center to lunch with my wife. Today I'm at the office in front of my computer, where I will probably remain until after dark. I guess I can moblog my lunch... It probably won't be that exciting.
Source:Russell Beattie Notebook: Moblogging Thoughts .
Is Russell losing the Moblogging faith already? :) He raises an interesting, mostly obvious point: After all the whiz-bang setup and build up - what do you have that's so important that it's worth covering in mobile multimedia splendor?
On one hand, some would say, "Nada mucho," and hang up their camera peripheral. Me, well, I don't have the hardware yet to deluge my corner of the web with instant snaps and clips of me, my girl, the cats, and co-workers. (Though I have gotten a start on it.) But, I find what Russell's posted so far to be fascinating and amusing. Of course, he's in an exotic locale with respect to me. Why else, unless he was a photographic genius, would his lunch seem interesting to me? Then again, I might be in an exotic locale with respect to someone else.
Anyway, after all the fun of connecting various bits together, you always come back to finding a reason to use it if you hadn't had one to begin with. Sometimes the reason ends up being that it's just fun to play with bits connected together, and that someone somewhere might just find the result interesting. No one's forcing anyone to look at all the pictures of the world's cats, you know?
[ ... 369 words ... ]
-
It seems that beyond carrying syndication information, RSS is a very useful and flexible way to get all sorts of application data pushed to a user over time. In the same way that a web browser is a universal canvas upon which limitless services and information can be painted, so (in an albeit much smaller way) an RSS reader/aggregator might also find its place as an inbox for time-related delivery of all sorts of information.
Source:DJ's Weblog: The universal canvas and RSS apps .
My thoughts exactly. Not sure if I've posted here about it, but I know during the whole RSS hubbub this fall, I'd babbled something about RSS being a messaging queue from machines to humans. Or a transport for timely ephemera to people. Or something like that. Basically, I'd like to see RSS, or something like it, used beyond just headlines. This is why I've leaned toward the RDF-in-RSS camp - I want to see lots of things besides titles and excerpts hung off the individual message events, and RDF seems downright nifty to me for this.
But either way, I like the expanded notion of RSS usage as a timestream-oriented stream of messages targeted at subscribed people.
[ ... 365 words ... ]
-
I used an HP 1200bps external modem. To connect to BBSes. When I was in elementary school. I remember tearing through the latest Focke's BBS list. It was the definitive guide to DC-metro BBSes. I'd print it out on my Okidata dot-matrix printer on fan-folded continuous feed paper with the holes on the sides. Then I'd grab a pen or pencil, mark up some interesting BBSes, fire up Procomm and try to connect.
Source:postneo: 1200 bps .
Oh yeah? Well, my first experience with dialing up in Jr. High was with a 300 bps modem on a C=64. I used to pore over Horst Mann's 313 area code BBS list and sneak in calls to long-distance BBSes throughout Michigan, for which I'd later pay dearly out of my allowance. :) I remember coveting my friend's hulking Tandy PC and its 1200 baud modem (nearly a full screen of text at one time when playing BBS games). Then, I bought a 2400 baud modem with an adapter, and became the envy of everyone - until they all moved up to 2400 and then 14.4K.
I'm sure someone else can give me an oh yeah, too, and we can work up a skit ala Monty Python's "We Were Poor". ("You were lucky to have a lake! There were 15 of us living in a cardboard box in the middle of the road!")
What I really miss from the BBS days, though, is the local community. Used to be that far away places were far away, and near places were near, and you had to go through the near places first before you could visit far places. So, communities formed around BBSes, even as those around me began changing into mere portals onto the internet, and then later to become fledgling dialup ISPs. Nowadays, the distance between points on the net is measured in terms of interest, attention, and affinity, without regard to physical location. It's so much harder to get together for a cup of coffee with the people behind the keyboards these days. :)
[ ... 444 words ... ]
-
If I did have comments on my weblog it would be like Slashdot, a very low signal to noise ratio. Don't blame me for that, it comes with longevity and flow. The longer the site is around and the higher the flow, the more losers one attracts. I can see where these things work for a lower flow site, but they would never work for Scripting News, I'd have to turn it off quickly because of the low-roaders.
Source:Dave Winer in comments on Simon Willison's Weblog.
Although I'm too lazy to search for the links at the moment, I've sung the praises of automatic trackback and referrers and friends. Like Simon, I've also bemoaned the apparent lack of participation Dave has in this self-organizing chaos of blogs auto-discovering each other.
But, Dave's right. Get too much flow, piss off too many people, say too many controversial or contrary things against too many camps - in other words, assert a strong opinion, right or wrong, and get it read widely enough and do it often enough - and your weblog will turn into a cesspool with all its graciously thrown open doors clogged with trolls.
At present, I'm safe. My rating is Mostly Harmless, so all my open systems are mostly free from abuse. But, the first time I really strike a nerve somewhere, I'm a sitting duck.
I've got some pretty pretentious ideas floating in my head about how this relates to an open civilization and culture in general, but I'll save them. Basically, I don't want to give up my openness, but I want to deflect the barbarians. Need to think more, but I suspect this may cross streams with the spam crisis, eventually.
[ ... 303 words ... ]
-
Oh yeah, and further along the lines of filesystem sacrilege, my most used OS X apps are:
Searchling
LaunchBar
Both of these give me lightning fast access with keyboard-shortcut finger twitches to what's on my mind and what I want to do. I want to find more things like this.
[ ... 88 words ... ]
-
Gosh, I've been quiet lately. What could be the reason?
1) I'm sick of blogging (not likely)
2) I've been having a life away from the computer (not likely)
3) I've been heads down doing something cool that I'll shortly be blogging about? Hmmm....
Source:Russell Beattie Notebook: Quiet .
If you're careful and look hard, you'll find what he's teasing about. Go look - it's not in the quote above.
[ ... 71 words ... ]
-
Dear Operating System Vendors.
I no longer want to know where my files are stored. I no longer care. I have hordes of directories on my various computers called stuff ,downloads and documents , and the effort that it would take to organise them into a proper heirarchy is just not worth it. The heirarchical filesystem is a really wonderful thing for programmers and websites, but it just doesn't cut it for personal use.
I no longer care where my files are stored.
Source:The Fishbowl: Filesystem sacrilege.
I'll be burned at the next stake over from Charles when the time comes, for this filesystem heresy. Just the other night, a co-worker was asking me about how diligent I was in organizing my email. I told her, "Not at all. I leave it all in one pile and then run the Find command on it later." She was shocked that I, alpha geek and info freako, didn't have some intricate taxonomy of folders into which mail was sorted by carefully crafted filters.
Years ago, when I first started using email, I did indeed do this with procmail and other arcane beasties. Then, I found myself cursing that I couldn't do cross-folder searches very easily. Also, the filters and folders started making less sense as their structure represented only one possible scheme for finding what I was looking for, and I was needing many possible kinds of schemes over time. So, eventually it all ended up in one pile, and searches became my way of finding things.
I abandoned bookmarks for Google by the same principle. Now, my bookmarks consist completely of bookmarklets and a few stray links to local on-disk pages like Python documentation. In fact, I'm wishing that I could create bookmark folders that are fed by Google API powered persistent searches.
So, now I'm looking balefully upon my filesystem. I haven't had much chance to play with BeOS, but I've read about the design of the BeOS file system and drooled. I hear about Microsoft's Longhorn and its WinFS and grind my teeth - I very much dislike Microsoft, but if they pulled this off, I'd have to sing their praises. Apple? Do they have any aces up their sleeves in this regard? Don't let a new fanboy down. :)
Anyway, that's what I want to see: Storage without explicit organization, but with super-rich metadata for super-fast searches. Allow me to create views made from persistent searches - my "project folder" is simply a collection of resources tied together by a common tag, one of many. And, if I want to form a project hierarchy, make my persistent searches into file objects too.
The main thing in all this, though, is that it be woven very deeply within the OS. I don't want a helper app. I want this to replace the standard metaphor completely.
RDF triples at the inode-level anyone? Heh, heh.
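None of that exists at the OS level, of course, but the "folders are just saved searches" idea is easy to mock up. A toy sketch in Python, with paths and tags invented purely for illustration:
from collections import defaultdict

class MetadataStore:
    def __init__(self):
        self.tags = defaultdict(set)              # path -> set of free-form tags

    def tag(self, path, *labels):
        self.tags[path].update(labels)

    def search(self, *labels):
        wanted = set(labels)
        return sorted(path for path, tags in self.tags.items() if wanted <= tags)

    def saved_search(self, *labels):
        # a "project folder" that is nothing but a persistent query
        return lambda: self.search(*labels)

store = MetadataStore()
store.tag("~/stuff/proxy-notes.txt", "personalwebproxy", "notes")
store.tag("~/downloads/jena.zip", "personalwebproxy", "java")
store.tag("~/documents/resume.doc", "jobhunt")

proxy_folder = store.saved_search("personalwebproxy")
print(proxy_folder())     # both proxy-related files, no directory hierarchy involved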
[ ... 940 words ... ]
-
I've been thinking about the whole "I need tabs in Safari" issue, and have come to realize that no, in fact, I don't need tabs in Safari. ... What I need is a way to manage multiple open web pages in a single window. ... I've done a quick and dirty mockup of something that approaches what I'm thinking about.
Source:D'Arcy Norman's Weblog.
So, yeah, not that anyone needs my US$0.02 added to the cacophony around Safari, but here it is anyway. Go check out D'Arcy's mockup. I think this is precisely what I want. I've been keeping myself using Safari since it was released, and I've been disappointed with it very rarely. Instead of tabs, I've been heavily using the Window menu, wishing for some window navigation shortcuts (ie. prev, next, 0-9 for first ten windows?)
Anyway, I say: tabs can go, but give me a sidebar.
[ ... 350 words ... ]
-
Wow, take a look at The Cocoanization of Kung-Log: During the New Year holiday I started getting acquainted with Cocoa programming by converting my ?AppleScriptStudio Kung-Log app into a Cocoa version. Well, smack my ass and call me Judy, it's done!
It'd been awhile since the last time I checked out Kung-Log - when it was working for me, it was my absolute favorite way to update this site, but then it started breaking in a few places such as recent post retrieval. So I gave it up. Then tonight, on a whim, I looked. And whew, a complete rewrite, apparently released just this night. Talk about coincidence.
Nice.
[ ... 143 words ... ]
-
Lazy Mac OS X: Weblog links sidebar: ...it's going to be about turning your Mac into a weblogging machine. As easy as the links-and-commentary genre is with all the blogging apps out there, I'm too lazy for the commentary bit, and so I tend to drag-and-drop links to my desktop for later posting -- and then promptly forget about them. Consequently my desktop is a mess, and my blog is stagnating. Bad.
So... what he made, with AppleScript's Folder Actions, is a magic folder on his desktop. When links are dropped into the folder, a script is triggered which posts the link to a weblog via BloggerAPI. I've tweaked it a bit to ask me for a link title, quote, and a tiny bit of commentary, but it still needs a bit more work.
I had never heard of Folder Actions before this. This is very nifty stuff - blogging woven into the OS X desktop itself with AppleScript. Whew.
What I'd really love is to be able to drop a folder onto my dock, and then drop things onto it there. I'm thinking of a bunch of "bins" on the dock that shuttle files off to different destinations and through various transformations, right there, always in view.
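As a cross-platform stand-in for the Folder Action, a polling loop plus the old Blogger API does roughly the same job. A sketch in Python -- the XML-RPC endpoint, the credentials, and the assumption that each dropped .url file contains nothing but the address are all placeholders:
import time
import xmlrpc.client
from pathlib import Path

DROP = Path.home() / "Desktop" / "blog-this"
DONE = DROP / "posted"
API = xmlrpc.client.ServerProxy("http://example.com/mt/mt-xmlrpc.cgi")

def post_link(url, title):
    body = f'<a href="{url}">{title}</a>'
    # Blogger API 1.0: blogger.newPost(appkey, blogid, username, password, content, publish)
    API.blogger.newPost("0123456789ABCDEF", "1", "username", "password", body, True)

while True:
    DONE.mkdir(parents=True, exist_ok=True)
    for dropped in DROP.glob("*.url"):
        post_link(dropped.read_text().strip(), dropped.stem)
        dropped.rename(DONE / dropped.name)   # move it out of the way once posted
    time.sleep(60)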
[ ... 341 words ... ]
-
Using OmniGraffle as an RSS News Reader with AppleScript: To learn about the capabilities of a new application in its enhanced AppleScript capabilities, a project is created that turns ?OmniGraffle into an RSS News reader unlike any that are out there.
This rocks. Been a little quiet lately, busy at work, still tinkering with my proxy. And now I find myself poking around into AppleScript again. Wheee!
[ ... 68 words ... ]
-
Speaking of BML, what ever happened to it? I first wrote about it back in August and later got a response from Sanjiva Weerawarana, one of the original authors at IBM. Someone hinted to me that it was supposed to eventually land at Jakarta, and while the Bean Scripting Framework did land there, BML is still off the radar. Meanwhile, I'm still using it at work, still sitting on some dubiously-gotten source code, and want to use it in more public projects.
Anyone out there in Java-land besides me know about this thing and find it useful? While it's still very likely that I'm deluded, I have yet to find something equivalent to what it can do for me. On the contrary, I've seen other projects rolling their own much less functional versions of BML. But, I have to assume that I'm not really a know-it-all, and that I'm likely missing something that makes this not-so-great and relatively unadopted.
[ ... 182 words ... ]
-
I'm still not entirely sold on Python and Twisted as the foundation for my PersonalWebProxy. Yeah, I know I just released a bunch of code to that effect, but it's still just a proof of concept. While there are some impressive things in Twisted and Python, there's also a lot of flux and immaturity there. Not a bad thing, since the hackers in that camp are doing mad crazy things, but I don't want to focus on mad crazy things in my toolkit - I want to focus on mad crazy things built on top of it. The thing I've been hoping for is that some of those mad crazy things in the toolkit would enable even madder crazier things down the line for me. This may be true still - so I'm not tossing anything out, just still experimenting.
So far, this is just playing for me. For fun, I think I might do the whole thing over again in Java and play in parallel for a little while. Well, not quite all over, since I think I've found some pretty ready-made components:
Take Muffin, for example. It's a Java proxy that looks like it's been dormant for quite awhile, but seems ideal on the surface for my needs. Just today, though, I checked back in the project's CVS repository and it seems that there's new activity and checkins starting up in there. On the other hand, I've also been poking at Jetty and the proxy classes it comes with. Seems like there's a lot to work with here, and I have a better vibe about it.
Besides that, Jena seems stronger than rdflib for RDF support, and I'm just biting at the bit to pour damn near everything at Apache Jakarta into this thing. Also, I suspect I may be able to preserve the quick scripty hackability I want out of this thing by using BSF and Jython, with some assembly and config in BML.
Hmm. Still tinkering.
[ ... 486 words ... ]
-
In case anyone's interested, I've been hacking like mad on my toy proxy since New Years'. Check out PersonalWebProxy for current downloads. It's got primitive forms of proxy filters, browser-based UI, RDF-based metadata management, logging, config, plugins, and some other goodies. So far, the major plugins include:
a noisy logger;
a content archiver that captures and saves all response headers and content in a directory structure loosely based on the requested URL;
an initial metadata harvester that fills up the RDF database with triples based on headers and details encountered during browsing
It works, and does stuff, but I'm sure it demonstrates a complete lack of understanding of large portions of the Twisted framework, Python itself, and likely causes forest fires. So, I hope many people will download it, snicker at it, and maybe set me straight on a few things and contribute a few patches and plugins.
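One way that kind of "directory structure loosely based on the requested URL" can work, by the way, is a mapping along these lines -- a guess at the obvious approach in Python, not the content archiver plugin's actual code:
import os
from urllib.parse import urlparse

def archive_path(base, url):
    parts = urlparse(url)
    # hostname becomes the top directory; path segments nest beneath it
    segments = [s for s in parts.path.split("/") if s] or ["index"]
    # fold any query string into the filename so variants don't collide
    if parts.query:
        segments[-1] += "_" + parts.query.replace("&", "_").replace("=", "-")
    return os.path.join(base, parts.hostname or "unknown-host", *segments)

print(archive_path("archive", "http://www.decafbad.com/blog/index.html"))
# archive/www.decafbad.com/blog/index.html
print(archive_path("archive", "http://example.com/search?q=rss&page=2"))
# archive/example.com/search_q-rss_page-2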
ShareAndEnjoy!
[ ... 433 words ... ]
-
So... yeah. I watched the keynote, and I'm lusting over the new mini-me ?PowerBook with a plan to purchase in the Spring. I was aiming at a high end iBook but holding out for something better, and well, this new ?PowerBook is precisely what I was holding out for. It appears to be the rightful successor to my formerly beloved Sony Vaio 505TR.
Then there's Safari. Mark puts it under the microscope, as does Mena. On the other hand, Ben writes that it's almost-shit. Oh yeah, and JWZ reports: "Apple says 'fuck you' to Mozilla" But he's not bitter.
I haven't got the chops to seriously test the thing, so I'll be watching the more spec-wise out there for info. But, what I do think about it makes me sound like an Apple fanboy: I think it's great.
It's not perfect at the moment, but I've got a feeling that this will change. And fast. Mozilla's shipped with talkback, but Safari's got a bug submission button right up front. And the fact that they did snub Mozilla for a dark horse like Konqueror seems a bit provocative (at least in techie / Open Source circles), and after all that talk of innovation I doubt that they're going to let it rest as-is or back down. I expect lots of movement from here. Think different and all that. And, from my minuscule bits of Cocoa dev, I'm looking forward to poking around with ?WebCore and the ?JavaScript framework. Unless it's a complete disaster, I expect the building blocks of Safari to pop up in projects everywhere.
These things tell me to expect good things from Safari. I hope Mozilla can keep up. As for IE, well, I deleted that a long time ago.
[ ... 332 words ... ]
-
I needed and found a little help enabling readline support for Python under Mac OS X. I love a lazyweb so lazy that solutions to my problems have already been posted. :)
[ ... 33 words ... ]
-
Okay, this is getting close to outstripping my enthusiasm and invoking my laziness: Does anyone happen to have RDFLib and ZODB working under Mac OS X 10.2.3? Have also tried compiling Redland and its Python and Java APIs, but that's not been a 100% success. Or can someone recommend another decent RDF repository to play with under Python? I've had fun with Jena under Java, love using RDQL, and dig switching between MySQL and BDB stores.
I want an RDF repository I can integrate into my proxy experiments, currently implemented in Python. I've been very tempted to switch to Java, which I know better and have a better sense of tools available. But I'm still pulling for Python. I suppose I could just go with an in-memory repository at first, but I don't want to stick with that.
I'm still finishing up the PersonalWebProxy notes and plan I've been working on, but I've still got an itch to play in code. The next major thing I want to do is extract as much metadata as I can from every HTML page I visit and load the RDF repository up with statements based on what I harvest. Examples would include things like HTML title, visitation date, referring url, any meta tags, any autodiscovered RSS and FOAF URLs, and anything else I could eventually dig out. Then, I want to amass some data and play with it. I'm thinking this could give me a kind of uber-history with which to work.
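To sketch what that harvest might look like (standard library only, made-up predicate names, and plain tuples where a real version would feed rdflib or Jena):
from html.parser import HTMLParser

class Harvester(HTMLParser):
    def __init__(self, url):
        super().__init__()
        self.url = url
        self.triples = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") and a.get("content"):
            self.triples.append((self.url, "meta:" + a["name"], a["content"]))
        elif tag == "link" and a.get("type") == "application/rss+xml" and a.get("href"):
            self.triples.append((self.url, "feed:rss", a["href"]))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.triples.append((self.url, "dc:title", data.strip()))

page = ('<html><head><title>0xDECAFBAD</title>'
        '<meta name="author" content="l.m.orchard">'
        '<link rel="alternate" type="application/rss+xml" href="/index.rss">'
        '</head><body></body></html>')
harvester = Harvester("http://www.decafbad.com/")
harvester.feed(page)
print(harvester.triples)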
Update: Seems like I managed to get Python, RDFLib, and ZODB working, but I started completely from scratch and compiled everything from clean source. I guess Apple's build of Python has more hiccups in it than just the Makefile thing.
[ ... 420 words ... ]
-
Just in case this wasn't common knowledge, it seems there's a bit of a boo-boo in Jaguar's installation of Python that sticks its head up when one tries to compile extensions (like, oh say, the Standalone ZODB).
Line 62 of /usr/lib/python2.2/config/Makefile reads:
LDFLAGS= -arch i386 -arch ppc
But, I think it should read:
LDFLAGS= -arch ppc
Making this change appears to have gotten the thing compiling, though it may also cause my iBook to eventually self-format since I barely understand everything involved.
Ugh, though now that everything's compiled without hitch, the test script goes belly up with a bus error. Time to go back googling to find a solution or somewhere to whine. I may also swap back into Java, since I like Jena better than anything I've found in Python for RDF support.
[ ... 304 words ... ]
-
Teasing everyone with a solution to something we MovableType users have been hacking around with for awhile, Ben Trott writes: We envision Text Formatting options as complete, encapsulated formatters, handling both the formatting of structured text and any desired typographical details (smart quotes, etc), analogous to the way in which Textile handles quote education and its own miniature formatting language.
Plugins will be able to easily add new Text Formatting options to the menu on the New/Edit Entry screen.
Yay! Text formatting as a first class feature in MovableType - used in previews, as well as in publishing. No more including plugin tags in every single template ala MTWikiFormatPlugin and friends. Rock on! It's a good thing when hacks I write get outmoded by a more elegant treatment.
[ ... 126 words ... ]
-
Anthony Eden writes: I was so intrigued by having a proxy agent which would work for me that I wrote a little generic pluggable proxy this morning.
Show us the code! :) I want to play.
As for my code so far: This is extremely premature, and I'm not even sure if it will work anywhere besides my peculiar iBook, but here's a quick & dirty tarball of my experiments with a PersonalWebProxy (just web, not universal) with a simple plugin API, in Python using Twisted:
dbproxy-20021231.tar.gz
It's poorly commented, doesn't do much useful, but it's a few nights' work by someone just getting acquainted with Twisted - if you're even worse off than me and want to poke at this proxy thing, maybe this will help you. It does do a few things I thought were nifty, like use Mark Pilgrim's rssfinder script in a thread to dig up RSS feeds for every URL with text/html content you visit. There's no persistence yet, so they just appear in the log, but figuring out the integration and thread use so far was nifty to me.
Anyway, enjoy. I'm still tinkering, thinking, and working up a plan.
Oh yeah, and I'm watching Dick Clark and the ball drop with my girlfriend, so this post brings an end to hacking for the night.
[ ... 379 words ... ]
-
Following in Russell Beattie's outliner brain dump footsteps, I revised my PersonalWebProxy page with cut-and-paste from my own outline in the works. It's far, far from complete, especially in the planning department, but I figure it could be worth poking at just to check out the developing direction.
Having trouble getting to Russell Beattie's blog at the moment, so a link will have to wait.
[ ... 66 words ... ]
-
Continuing along with the PersonalWebProxy meme, Russell Beattie writes: I have a lot of trouble with motivation on something like this. I get excited then I flip out because of how much something like this could entail and I find other stuff to do for a while to calm down. This has been going on for months. ;-)
Yeah, and then there's that. Seeing what people are wishing for out of this class of app, we could wish it into the outer reaches of tinkerer feasibility and then get fed up and go home. That's what I'd like to avoid, personally.
I'm puttering around with a bit of a spec for myself today, laying out a limited wishlist drawn from mine and others' ponderings and rationing it out into chunks. There's no way anyone could handle creating the perfect proxy/agent for the next millennia without descending into insanity and obscurity some 1% into the process. But, if things can be worked out to where it's being cobbled together in a process of loosely structured madness, there just might be hope.
The crucial thing is that it move forward in tinkerer-sized steps, remain understood and tested at each point, and do some reasonably useful new thing after each lurch in order to remain interesting. The process should give tinkerers a chance to spend just a little bit of their non-copious free time to come up with something clever and nifty. This is how the lazyweb works.
Okay, before I start getting too pretentious here, I'd better get back to work.
[ ... 427 words ... ]
-
On the cite tag, Mark Pilgrim writes further: "All right, everybody just calm the fuck down. It's only a tag. I didn't expect the Spanish Inquisition." Don't I feel like a fan boy? :) Heh heh.
And, oh! that INS tag! Must be something in the post-holiday egg nog.
[ ... 50 words ... ]
-
Russell Beattie writes: The idea in my mind is for a project that would be a universal personal proxy (UPP) that sits between you and the internet. It would be a web proxy, email filter, spam cop, a place for agents and scheduled tasks to run and more. It would be responsible for both receiving and sending of information - as web pages, emails, web services requests, ftp file postings, etc. In the middle it would do analysis like for Spam or RSS autodiscovery, intelligent bookmark history, etc. ... This sort of app would be for people like myself who spend an inordinate amount of time on the internet.
Precisely. Exactly. Even down to the combination of P2P and desktop-to-server mix he writes about. I think we're starting to ride a meme here. This is what I want from a PersonalWebProxy. I've been trying to think of a better name for this class of app - it's more than a literal web proxy. I want an agent and an assistant - something that sits shotgun with me while I putter around and can help me study what I do and see. I want something that can eventually do things in my name for me, if I allow it. I want basically all the things Russell wants, along with everything agents do for the characters in David Brin's Earth.
So. How to do this? I think I need to spend some more time fleshing out a spec before I do much more in terms of putting gadgets together. Need to rein in the fantasy, lay out some feasible first revision features, and start. I want it all, but I want to start out with something hackable, useful, and inviting for collaboration.
Still probably too early to be thinking about implementation language, but I have been experimenting and expect some of these things to become the base for my development. My ideas on choices have become less clear-cut now. When last I wrote about this topic, and languages, Donovan Preston left a comment enlightening me with regards to my Python/Twisted vs Java/Threads consideration. In fact, threads are available in the Python/Twisted environment as well. So, now I'm back to thinking about things like free library availability, environment maturity, possible collaborators, and my own comfort level in each.
Bah. At this point, I think I know all I need to know about what various environment choices can do in order to come up with a set of features that can be reasonably implemented in either or any environment. Need to solidify this wishlist into a task list and design and get going.
[ ... 730 words ... ]
-
Although I did begin my days on the web by poring over the HTML and HTTP specs back in 1994, I soon abandoned that effort and learned how to make web pages by example like most webmonkeys and hacks came to learn it. I wasn't ready, back then, to read a document like the HTML spec. But now, Mark Pilgrim makes me want to give it another serious shot, having leveraged the CITE tag in his weblog writing to pull a view of his entries by cited source. He's good at sneaking in smarty-pants things like that - you probably never knew he was doing it, you think it's wonderful when it's revealed later, and then you wish you'd done more homework. Maybe "you" is just me. :) You think he plans these things in advance? I want the Mark Pilgrim syllabus.
Of course, the problem with the way so many webmonkeys learned to apply some semblance of web standards was the web browser. If it showed up nicely in the browser, it was Good. If ugly, it was Bad. If invisible or without apparent effect, it was Ignored. And this mindset worked great for the busy page builder up till 4:30am trying to cobble together the latest brochureware site.
But now, after all the rushing around and recontextualization of business, it seems we're in the Winter season around here on the Internet. Not so much new feverish development going on, but a lot of reexamination and introspection - and actual reading of specs. And some really nifty things are going on, like the "rediscovery" of things that were there from the start but not too many people were careful enough to pick up on them at the ass crack of dawn while trying to launch another blight on the web. (No I'm not bitter about those days at all - no wait, yes I am. :) )
So, anyway, Mark Pilgrim wrote: "Let's try pushing the envelope of what HTML is actually designed to do, before we get all hot and bothered trying to replace it, mmmkay?" I really like his point. Now that we're done rushing around trying to solve the insane demands of the moment, or trying to "add value" for the stockholders, maybe we can do more mining into what's already out there that we all trampled and stampeded past in the early years.
Or, rather, maybe I'm a "we" of few or one here, since obviously some people have already started mining the hills for neglected gems.
[ ... 423 words ... ]
-
It's not incredibly complicated, but it's something I just hadn't gotten to until now: My AmphetaDesk subscriptions are now the source for my blogroll. I noticed the opml2html script that Jeremy Zawodny wrote, snagged it, and set up a quick cronjob to run the script and upload the HTML every few hours. It's ugly right now, but maybe it will finally make me finish that new design.
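Jeremy's script is Perl, but the whole job is tiny; here's a rough sketch of the same idea in Python, assuming a standard OPML subscriptions file (AmphetaDesk keeps one as myChannels.opml) whose outline elements carry title and htmlUrl attributes:

```python
# blogroll.py -- minimal sketch of an OPML -> HTML blogroll converter.
# Assumes a standard OPML subscriptions file (e.g. AmphetaDesk's
# myChannels.opml) whose <outline> elements have title/text and htmlUrl.
import xml.etree.ElementTree as ET

def opml_to_html(opml_path):
    tree = ET.parse(opml_path)
    items = []
    for outline in tree.iter("outline"):
        title = outline.get("title") or outline.get("text")
        url = outline.get("htmlUrl")
        if title and url:
            items.append('<li><a href="%s">%s</a></li>' % (url, title))
    return "<ul>\n%s\n</ul>" % "\n".join(items)

if __name__ == "__main__":
    print(opml_to_html("myChannels.opml"))

# A cron entry along these lines would regenerate and upload it every few
# hours (paths and host are placeholders):
# 0 */3 * * * python blogroll.py > blogroll.html && scp blogroll.html example.com:www/
```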
[ ... 167 words ... ]
-
Chris Winters threw me a link to my recent ramble on completely planned perfection versus workable organic imperfection in software design. After citing a very good perspective on software as gardening from The Pragmatic Programmer, he writes: As I've mentioned before there are a number of software development practices moving toward a more humane process. And I think the ideas underpinning worse-is-better play a big part in this. The major one in my mind is this hypothesis: there is no such thing as a perfect software design. Have you ever seen or heard of one? What design hasn't been modified over time due to changing needs or conditions?
Good points. I'm still trying to find the middle ground between clean design and fecund dirt. There's no perfect design I've ever run into in my time so far, and I don't expect to. But what I have learned is that where there is no design at all, you have disaster.
So I guess the main thing I've learned so far is that this guilty love of a dirty yet elegant hack - paired with a "tsk-tsk" to myself and a hope to design it more fully next time - is useless. There will be no time that I've participated in a design that ends up being marvelously perfect. But what I'm trying to feel toward is how to get the balance right between clean and dirt, given skill levels and team sizes and project complexity.
[ ... 239 words ... ]
-
Matt Griffith proposes a virtual project: Jog. For the most part, what he wants is what I want from my PersonalWebProxy, and more.
The big difference in the writing, though, is that Matt writes from features and what he wants, where I'm already describing things in terms of implementation. That is, I started talking about "proxy" where he's talking about "my personal Google and Wayback machine". I think looking at it that way makes a more compelling case for this thing being generally useful, rather than just some nerdy toy.
Another way I'm looking at this PersonalWebProxy is as an assistant in a sidecar attached to my browser. I want this assistant to watch me, learn, and pipe up from time to time with suggestions. I also want to be able to ask it questions and have it remind me of things I vaguely remember. Eventually, I'd like this assistant to be able to drive for me from time to time, doing some info hunter-gatherer work for me while I do other things.
I'm still working on this thing. So far I've got a proxy in Python and a simple-minded plugin framework. Two plugins so far: one is a cookie jar separated from any browser - that is, cookies are managed in the proxy, not in the browser; the other is a little thing based on Mark Pilgrim's rssfinder.py that quietly seeks out and gathers RSS links from every text/* resource I view. It seems to be standing up fairly well.
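The RSS-harvesting plugin boils down to RSS autodiscovery: scan every HTML response passing through the proxy for link elements that advertise a feed. Mark Pilgrim's rssfinder.py does far more than this; the following is only a toy sketch of the core check, and the on_response() hook name is made up for illustration, not the actual plugin API:

```python
# Toy sketch of the feed-harvesting idea: scan HTML passing through the
# proxy for <link rel="alternate" type="application/rss+xml"> elements.
# The on_response() hook name is hypothetical -- not the real plugin API.
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and "alternate" in (a.get("rel") or "")
                and a.get("type") == "application/rss+xml"
                and a.get("href")):
            self.feeds.append(a["href"])

def on_response(url, content_type, html_text):
    """Hypothetical plugin hook: log any feeds advertised by an HTML page."""
    if not content_type.startswith("text/html"):
        return
    finder = FeedLinkFinder()
    finder.feed(html_text)
    for href in finder.feeds:
        print("found feed for %s: %s" % (url, href))
```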
My next steps are something along these lines: Should I continue in Python? To do so means delving deeper into Twisted, using their web app framework for the management UI and staying within their event-driven paradigm in lieu of threading. The reason I first chose Python is because I wanted something that was quickly and easily hackable and fun to contribute plugins for. Does this still apply, if things are deep in the Twisted mindset which is not quite straightforward?
On the other hand, I took a peek at Jetty in Java, which also comes with a simple and hackable HTTP proxy implementation. I could easily cobble together in Java what I have in Python using this. I would also say that I could easily make it compatible with whatever plugins were written for the Python version, using Jython, but there's also a paradigmatic difference were I to go with Java: Threads in lieu of event-driven design.
Maybe I'm thinking too much about this and should just keep doing what I'm doing. I'm trying to think and second guess a lot about what anyone who might care to play with this thing would actually care about. As for myself, I seem to be having fun with things as is.
[ ... 1391 words ... ]
-
Mark Pilgrim writes: "I am now a card-carrying associate member of the Free Software Foundation. Software is free, but lawyers are expensive."
And from the FSF membership page: You will receive a personalized, bootable, business-card-sized GNU/Linux distribution as your membership card.
This GNU/Linux distribution is based on LNX-BBC. New cards will be sent to renewing members every year if and only if there is a new major release of LNX-BBC.
So, not only can you give a little something back to a project from which most of us have benefited - you'll also get a membership card that's as useful in and of itself as carrying a swiss army knife, and as far as I know, it won't get confiscated from you at the airport.
(Oh, and this entry is a test of the NetNewsWire Pro Beta weblog editor. This is only a test.)
[ ... 178 words ... ]
-
This week, at work, I cobbled together a hack for MovableType that hooks it up with an LDAP server for author accounts: MovableTypeLDAPAuthors. This is an early, early, early release of the thing, and is likely to do very nasty things for all that I know. But, I wanted to share, and it seems to be working for the proof of concept at work (that is, MT weblogs on our intranet for every employee). Hopefully soon it'll be approved, and I'll be looking into a commercial use license for MovableType.
You know, for all the praise I've read about MovableType, something I've really not seen much attention toward is the code itself. I mean, yeah this thing is great, and it's so simple to install and use by even non-techies. But, under the hood, there're some nice things going on - like a very decent object persistence approach; templating for pretty strict code/presentation separation; a workable servlet-like app structure with facilities for localization and a dispatch-table approach to executing web app methods. There are some spots that are a bit too if/else-ful for my taste, like the CMS' edit_object() method, but hey, it works.
In other words, MovableType isn't just a cobbled together tangle of code that happens to produce weblogs. I've seen piles of well-used code on the web before that all seem to do what they advertise, but present a nightmare under the hood. (cough Matt Wright's famous scripts cough) No, MovableType looks like the result of experience, and I feel biased, because it demonstrates a lot of the same perl web app design patterns I've been employing and advocating for years now. So, my LDAP hack was a bit of enjoyable fun, instead of a chore.
Along the lines of what I'd written last week about perfection versus good enough, I think MovableType is a good example. It's something I could have written, but didn't write and didn't release and didn't support and didn't make lots of people happy along the way. All the did-nots are the important bit. It's why I have two projects dead (Iaijutsu and Iaido) after a few years' effort, and MovableType is a gigantic success today.
So, these are the kind of lessons that are an important part of what this weblog is about for me.
[ ... 608 words ... ]
-
In case anyone's been wondering, I've not slipped back into oblivion. I've been a little busier at work again, but nowhere near a death march pace. And, in the free time I've been clearing up for tinkering, I've been working semi-obsessively on the aforementioned PersonalWebProxy idea. I dove back into Python, started soaking in the Twisted framework, and just about have an operable-but-ugly first attempt at a basic HTTP proxy with plug-in filters.
Thinking that I need to knock it around some more to work out some kinks, and then bang up some initial useful filters. But I want to get something packaged up and offered for ridicule within the next week or so. Not sure if anyone else is or will be as enthusiastic about this little toy as I am, but I think I'll be able to do some nifty things with it.
[ ... 181 words ... ]
-
Today's revelation while tinkering with my PersonalWebProxy: Decompressed content is larger than compressed content.
See, I was decompressing gzipped content streams in the proxy in order to fiddle with the content before it got to the browser, but then I noticed that browsers kept only displaying part of the content. I remove the in-proxy decompression, things are fine. Put it back in, things get cut short. I poke for a few hours at various bits and parts, to no avail.
Then, finally I remember... "Hey, you know, I think browsers pay attention to that Content-Length header. And hey, you know... decompressed content is larger than compressed content." Bingo. Problem solved by keeping the Content-Length header updated with the length of modified content. This makes me feel very dumb, because it's not like I haven't written a handful or two both of HTTP clients and servers. And I've written code that uses the Content-Length header, over and over again. I'd've thought this would be something I'd remember more quickly.
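In code, the fix amounts to something like this - a stripped-down sketch of the idea, not the actual proxy code:

```python
# Sketch of the fix: if the proxy decompresses and rewrites a gzipped
# response, it must also drop Content-Encoding and recompute Content-Length,
# or the browser stops reading at the old (smaller) length.
import gzip

def rewrite_response(headers, body):
    if headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
        del headers["Content-Encoding"]
    body = body.replace(b"0xDECAFBAD", b"0x31337D00D")   # fiddle with content
    headers["Content-Length"] = str(len(body))           # the crucial part
    return headers, body
```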
More often than I'd like to admit, remembering simple and obvious things like the above is what unlocks problems that I'd been banging my head against. The last revelation of this nature I had while frustrated at a project at work was: "Not all months are 31 days." And before that, "Not all days in a year are 24 hours long, due to daylight saving time."
Trying to think of more, but I thought I'd share. Maybe I'll start a wiki topic for sage bits of the ObviousInHindsight.
[ ... 430 words ... ]
-
Okay... This really is the LazyWeb. I barely mention writing some sort of RDF-based Tinderbox-a-like, and I see a link to Ideagraph on the #rdfig scratchpad.
[ ... 98 words ... ]
-
Is it just me, or is it pretty damn ironic that Mac OS X: The Complete Reference is available in Acrobat eBook Reader format, yet the Acrobat eBook Reader is not available for MacOSX?
Yes, I'm bitter. And stupid. While I didn't buy the MacOSX reference, I did buy a book in eBook Reader format. Dumb, dumb, dumb. Can't read it now. Bastards. And no returns on eBooks. Crooks! Mental note: Next time remember to buy books whose only compatibility requirements are sight and literacy. Unless I decide to learn braille or go for audio books, that is.
[ ... 142 words ... ]
-
I hadn't had a bon mot for it until yesterday, but I've been thinking about the concept of recombinant growth for a while now and how it intersects with the LazyWeb / blogosphere.
In particular, I've been thinking about design. I'm of two minds. As a perfectionist, I want sparkling gorgeous gems of elegance. It feels so good to be playing with something polished that so obviously has been imbued with immense thought. But, as a realist and a guy trying to make a living, I also appreciate ad hoc rusty tools that still turn a screw. The thing might fall down in some cases, but otherwise it's a pretty steady companion.
Looking at it another way, though, many of those otherwise sparkling gems won't let me use them as a hammer the same way I misuse the screwdriver on occasion. And oftentimes, they don't have any control panels to open so I can reroute EPS conduits and exploit leaky abstractions. And then, there's the problem domain: on what classes of nails is this hammer intended for use? In one case, a particular hammer shines, in another, it leaves thumbs throbbing.
I see infernos of flamewars start over principle, predictions of falling skies and doom doom doom. (It's not maintainable! It won't scale! It'll be the end of the web!) And then I see mischievous wizards and cantankerous veterans pull out a much-abused old standby and knock it out of the park. (Only the feeble-minded need strong types! Goto considered harmful - to you, maybe!) And then, sometimes, when you're in the realm of recombinant growth and the lazyweb, what initially looks like a jumble of wrinkled paper takes one more origami fold and turns into a perfectly formed swan. It gets confusing sometimes.
So anyway, this all leads up to my questions as a naive, wannabe computer scientist: By what processes of design do we best facilitate recombinant growth? How deeply and to what level of detail? How dirty should it be, how unspecified or left with holes or shrugs? (Plants need dirt to grow.) How meticulously clean should it be? (We don't want to attract any bugs.) How much should be chalked up to bootstrapping, and how much should be wrangled and hogtied into perfect working order?
I doubt that there's a single fully valid answer to all this. But, I'm always interested in any of the answers I find anyway.
[ ... 403 words ... ]
-
Another little train of thought, whose conclusion will probably be obvious to anyone:
I wonder how hard it would be for me to make a little personal idea processor like Eastgate Tinderbox using RDF? Very likely much harder than it was for its author to create the original - I'm by no means smarter than Mark Bernstein, and he's got years on me in developing tools like Tinderbox. So why would I even consider rolling my own in this case? Certainly not the price - for a tool like that, the price is a steal.
No, I think it's because I don't have full access to hack the thing, and it has a few itchy spots for me. At least, that's the way it looked when I last tried a demo on MacOSX. I wish I could fix them, and rolling my own is the best way I know for that. And besides, I'm in a real RDF love fest lately.
But... Is it really so bad as it is? Bad enough to try to play catch up with what someone else has already devoted so much to? Nope. Bad idea. Best to promote recombinant growth, and rephrase the question: I wonder how long it would take me to get used to the tool's quirks as I percieve them, make suggestions to the author, and then use the extraordinary hackability already present in the tool to get it soaking in RDF?
This is a lesson that's taken me a while to resign myself to learning, so I figured this would be a good exercise to document it. And to think, I once spent a year or two trying to re-implement Zope, by myself, mostly in quiet, and with not much to show in the end.
[ ... 443 words ... ]
-
Mental note: Look into Jical, Java iCal Group Scheduler, as part of an Exchange replacement when the current server comes tumbling down when they try to upgrade it. Yeah, I know, it'll most likely be just fine. But I can dream.
[ ... 42 words ... ]
-
So I'm singing the RDF praises at work today. I've gone through creating a very small proof-of-concept task tracking vocabulary in RDF. Initially, it covers things such as clients, projects, tasks, workers, time card entries. So far, I just have a vocabulary in RDFS and a sample load of data - both built by hand - but thanks to the BrownSauce RDF browser, I've been able to show off some nifty things. I know I've linked to that project two days in a row now, but I think it was seeing things through that browser that finally turned the last tumbler in my mental lock on RDF.
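The real vocabulary and data are hand-built RDF/XML, browsed with BrownSauce and headed for Jena. Purely to show the shape of the data, here's roughly what a few of those statements look like sketched in Python with rdflib - the ex: namespace and its terms are invented for illustration, not the actual vocabulary:

```python
# Rough sketch of the task-tracking data model as RDF triples, using rdflib.
# The ex: namespace and its terms are invented for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.com/tasktrack#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Task, RDF.type, RDFS.Class))
g.add((EX.bigRedesign, RDF.type, EX.Project))
g.add((EX.bigRedesign, EX.client, EX.acmeCorp))
g.add((EX.task42, RDF.type, EX.Task))
g.add((EX.task42, EX.partOf, EX.bigRedesign))
g.add((EX.task42, EX.assignedTo, EX.alice))
g.add((EX.task42, RDFS.label, Literal("Build the login page")))

print(g.serialize(format="turtle"))
```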
As a demo to co-workers, just clicking through the linked resources, I can show who's managing what projects, who's been assigned what tasks, what a given person has worked on, etc. And I just keep drilling through links showing one view of relations after another. It's fun. Someone said it looks like how they breeze through data on fake computers in TV shows.
Eventually what we want to do, if this proves to be useful, is expand this thing from just task tracking to slurp down more and more knowledge from around the organization and form a semantic intranet. And, I think it can do it. I just started getting Jena stashing statements into a MySQL database, so my next steps are to start actually working up an application around the data.
So far so good. I hope I'm not insane to be thinking this is easy. Waiting for the enthusiasm to calm down so I can realistically take account of what warts are to be found in the current state of RDF art.
[ ... 380 words ... ]
-
Along with tweaking my RSS template today, I've been tweaking my FOAF file after perusing the FOAF vocabulary again and having spied on the FOAF files belonging to some of the bloggers in my neighborhood.
Trying not to fall prey to simple cut-and-paste copying, and trying to keep in mind the underlying RDF model as I push things around. I've been browsing things with BrownSauce, but I have to keep reminding myself not to fall for the "looks okay in the browser, must be okay" fallacies that plague the entire HTML universe. Just because it seems to render in BrownSauce doesn't mean it's okay, and just because it might not look okay doesn't mean that it isn't.
Must learn the model and the vocabulary. Repeat this 100 times.
Whew. Anyway, the more that I look at it, the more that I'm thinking that FOAF is the perfect LiveJournalFriendsReplacement format. I can't believe I hadn't seen this before and gone completely gonzo over FOAF already.
I think I'm grasping RDF and the FOAF vocabulary, so I don't think it would be a herculean task to build something similar to the user info editing pages over at LiveJournal, and to build a FOAF display page like my user info page on LiveJournal. Perfect.
In fact, I wonder if I might not be able to work out a conversion between the two, maybe create a supplemental vocabulary to supply details that LiveJournal covers yet FOAF doesn't. (i.e. bio, IM accounts, birthday, blogrolls, etc.)
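Just to get a feel for what the display side involves, here's a sketch of pulling the basic user-info-ish fields out of a FOAF file with rdflib - the file name is a hypothetical local copy, and this is only an illustration, not the page generator itself:

```python
# Sketch: pull LiveJournal-userinfo-ish fields out of a FOAF file.
# "foaf.rdf" is a hypothetical local copy of the FOAF file.
from rdflib import Graph
from rdflib.namespace import FOAF

g = Graph()
g.parse("foaf.rdf", format="xml")

for person in set(g.subjects(predicate=FOAF.name)):
    print("Name:", g.value(person, FOAF.name))
    print("Homepage:", g.value(person, FOAF.homepage))
    print("Knows:")
    for friend in g.objects(person, FOAF.knows):
        print("  -", g.value(friend, FOAF.name) or friend)
```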
More study to come...
[ ... 335 words ... ]
-
Still working on my RdfNotes and learning RDF. I hadn't realized it, but my RSS feed was still being produced from a vaguely constructed v0.91 template, so I decided to update it to a v1.0 template for MovableType I found via Bill Kearney. I also got a little brave with the RDF and decided to start tweaking, stealing a few extra metadata elements from Eric Vitiello Jr.'s RSS feed such as a pointer to my FOAF file.
If I'm lucky, everything validates and I can start poking at my own metadata with RDF tools.
Update: Yay, according to the RSS Validator, my RSS template passes:
[ ... 106 words ... ]
-
Well, it's not earth shattering, but after some research and some more feature brainstorming, I've gotten a start on a PersonalWebProxy. At the moment, it's not much code and is transparent except for the fact that it transforms every appearance of "0xDECAFBAD" on pages into "0x31337D00D". What's exciting to me, though, is that I tossed it together with an hour's hacking in Python, using the Twisted framework. So, starting work in a language and framework both mostly new to me is fun. Also, it's interesting to work in the event-driven style again, even though I still think I want threads (see: Blog:000090). If I go to threads, I think I'd swap over to Java.
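The string-swapping filter itself is trivial; the work is all in the proxy plumbing. Just to make the idea concrete, here's a tiny stand-alone sketch using only the standard library rather than Twisted - so it's not the code described above, and it skips real-world details like HTTPS, chunked encoding, and keep-alive:

```python
# Minimal stand-alone sketch of a content-rewriting HTTP proxy, using only
# the standard library (not the Twisted version described above). It ignores
# HTTPS, chunked encoding, keep-alive, and most headers -- toy code only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class RewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # When a browser is configured to use this proxy, self.path holds
        # the full absolute URL of the request.
        with urlopen(self.path) as upstream:
            body = upstream.read()
            content_type = upstream.headers.get("Content-Type", "")
        if content_type.startswith("text/html"):
            body = body.replace(b"0xDECAFBAD", b"0x31337D00D")
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RewritingProxy).serve_forever()
```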
I was tempted to start the thing in Perl, since I already know my way around the language, and I already know my way around POE, a major event-driven framework in that scene. But, I've been programming in perl almost to the exclusion of other languages for a little over 10 years now, not counting some excursions into Java or assignments for college classes. I think it's time to expand, and force my brane wider. I've touched on this topic before, but it's been ages since I had any energy to devote to this place and its projects.
At least I didn't go completely apeshit and start this in Lisp or Scheme. :) Though I'd still like to make myself an assignment using one of those two, I'd like a few collaborators on this project at some point. I figure Python would be more hackable for this at present.
[ ... 265 words ... ]
-
Speaking of BlogWalking and MoBlogging, I just noticed via following a referring link that Matt Croydon has thrown together and released a proof-of-concept WAP-based blog client app. Swanky. Check it out.
I want to do the same thing, but using kinder-gentler-HTML templates for my Treo. My previous phone, a Touchpoint, supported WAP and thus WAP interested me for a year or so and I have a few private hacks laying around (ie. controlling X10-enabled house lights from my phone, trying to set up recording times on my half-working Linux PVR). But, being a fickle geek, WAP no longer interests me since my current phone supports richer things.
[ ... 108 words ... ]
-
Okay, now that I'm healthy, my girlfriend is healthy, my job is healthy, and my iBook is back to being healthy - I might just get a chance to sneak some time in for my Projects again.
At the moment, I'm considering building a PersonalWebProxy. I've been playing with them and thinking about them off and on for years now. You can see a short list of them that I've poked at in the PersonalWebProxy wiki topic - you're welcome to add to the list if you know of more. In particular, the WBI project at IBM got me initially hooked on things. I really thought it was nifty how they edited HTML on the fly to weave in the WBI UI and add indicators on links in the page. And the idea of storing a history of one's web browsing in a local cache, available for review and later search, has always seemed incredibly attractive.
Lately, I've been thinking of a few more things that might be useful in a personal web proxy:
* Marking pages to be periodically checked for change notification.
* A browsing "shopping cart", in which to collect pages now for later browsing.
* Auto-harvest RSS, FOAF, and whatever other associated page metadata that might be useful now or later. Maybe suggest subscribing to the site after a few visits.
* Use a ubiquitous rating mechanism, machine learning, and maybe some off-peak spidering to have new content of interest suggested.
* Publish and share browsing patterns, generate "Friends who viewed this page today also viewed this today..."
* Generate RSS feeds for all notification features.
And then, of course, there are things that I've seen already in other projects:
* Rich browsing history
* Collaborative annotation
* Ad filtering & pop-up blocking
* Content summarization
I'm thinking it would be nice to put together something like WBI and its modular API, maybe in Python, and make this thing super friendly to quick hacking. Could be some fun. What do you think?
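For instance, the first item on that wishlist - periodic change checks - could start out as dumb as hashing each watched page on a schedule. Something like this sketch, with the watch list and storage hand-waved as placeholders:

```python
# Naive sketch of the change-notification idea: fetch each watched URL,
# hash the body, and report when the hash differs from last time.
# The watch list and the shelve filename are placeholders.
import hashlib
import shelve
from urllib.request import urlopen

WATCHED = ["http://www.example.com/"]

def check_for_changes(db_path="watched_pages.db"):
    with shelve.open(db_path) as seen:
        for url in WATCHED:
            digest = hashlib.sha1(urlopen(url).read()).hexdigest()
            if seen.get(url) not in (None, digest):
                print("changed:", url)
            seen[url] = digest

if __name__ == "__main__":
    check_for_changes()
```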
[ ... 881 words ... ]
-
So, I finally got my iBook into and out of the shop, and I'm back on the reinstallation road to recovery. Not only that, but I got a little bit extra - seems that since my model of iBook is no longer available, the repair tech found that he was required to replace my dying 9GB hard drive with a quieter 15GB drive. Not a gigantic improvement, but it ends up being just enough to be nifty and was free to boot. Also, I splurged a little bit before starting Christmas shopping, and replaced the 256MB of RAM with a 512MB module. The improvement from 384MB to 640MB under MacOSX was much more marked than I'd anticipated. I don't know what kept me so long.
So, my desires for a new iBook or a glitzy TiBook are dampened for now, and my pocketbook thanks me. And, if/when I decide to toss this thing up onto eBay, the extras couldn't hurt the value. Also, I imagine that my expectations of OS X will have set me up for amazement once I get to play with a machine that supports Quartz Extreme and has a more decent processor speed.
[ ... 199 words ... ]
-
Tonight, I was perplexed about a comment claiming to be from DaveWiner - though it could just as easily be from an impostor - on the final installment of my extended XmlRpc ramblings: Pretty arrogant if you ask me. What has LM Orchard contributed to the world? Add more value, less bluster and bullshit.
I suppose I should have stopped while I was ahead, refraining from rambling on about my case study to begin with, let alone responding to this comment. But, nonetheless I was bothered tonight, and responded: Your comment confuses me.
I've written that your work has helped me and given me food for thought - despite other disagreements I may have with you. And I've written that a tool of yours I've come to consider imperfect has, nonetheless, worked perfectly for me. These things remind me that I don't know it all and have much to learn and hash out. If this is bluster and bullshit, I certainly didn't intend it as such.
As for my contributions and value - I'd like to think I'm doing something right in this field, given that I still have a well-paying job and a non-zero readership of my weblog.
I was trying to give a compliment and a positive testimonial, while addressing some of the standard criticisms I'd seen before. And I wanted to back it up with my own experiences while tempering it with my admitted inexperience.
But, after having had a decent dinner and a measure of time watching soap operas with my girlfriend on the couch, I've decided that this is what tweaks me about the comment: What's a weblog for, if it doesn't make room for arrogant bluster and bullshit? My assumption in writing here is that I know enough or can figure enough out to write things valuable to someone - or at least, if I'm wrong, I can still provide myself as a foil of ignorance to someone else's enlightenment.
Attacking the process, or the village idiot himself, is not constructive.
Hope that helps. Have a nice day. Please drive through.
[ ... 784 words ... ]
-
About the difference between BlogWalking and Moblogging, Bryce writes: Moblogging is about capturing the moment with multimedia.
Moblogs exist in a separate space from traditional weblogs, and that's where the disconnect with Blogwalking occurs.
Blogwalking is about taking our weblogs with us.
Originally, we'd written about BlogWalking as literally walking with your blog, publishing & management software and all installed on a mobile device. The spirit of it, in my mind, was mobile blogging.
Then I read about Moblogging and assumed it was the same thing. But, Bryce points out an important distinction that I think I get now. BlogWalking is coming from one direction, trying to make the blog mobile. Moblogging comes from the other direction, trying to make the mobile blogged.
I'd off-handedly wondered what it would be like as a BlogWalker, if I could add a camera, GPS device, temperature sensor, and other various things to a PDA loaded with blog software - I already have the blog, but I want to add multimedia to it. Well, Mobloggers already have multimedia devices, with voice and imaging and more to come - they want to capture that content in a blog.
It's a subtle difference, but somewhat important, I think. And after experiences with my Treo, an admittedly underpowered device versus the Zaurus, I'm leaning toward Moblogging. Keep the blog software off the mobile device, but make the mobile device into a multimedia blogging terminal. I think Moblogging is really the spirit of what I want from BlogWalking.
I'm hoping for a color HipTop after my Sprint PCS contract is up. The Treo has been nice to me, but it hasn't amazed me. Although the grass is always greener, the HipTop seems the more amazing device to me.
[ ... 425 words ... ]
-
Sheesh, have I rambled on forever about this or what? I think it could have all been summed up so much more concisely, as Paul Prescod did for me: "For the 20% that XML-RPC solves, by all means use it!" With my case in particular, his 20% is 80% of my problem domain.
I guess the vehemence and volume of my reaction, which surprises me now looking back, stems from three things:
First, I've gotten used to seeing statements along the lines of "XmlRpc and its ilk are complete and utter useless shite, will be the end of the web as we know it, and what kind of brane damage have you suffered to continue using it?" So, I fully expected to be smacked around for even vaguely hinting that I'd found, in my experience, that XmlRpc is extremely useful. But, to the contrary, I got a very nice and thoughtful response from Paul Prescod and the flames never rose.
Second, on the level where my purist perfectionist self sits, I've bought into the "XmlRpc is shite" meme. So the fact that I do useful work with it from day to day introduces a bit of cognitive dissonance for me - how can it be complete shite if it's a money maker for me? I've got to justify this to myself somehow. Granted, there are warts in XmlRpc - but warts by what measure? A measure of ideal perfection, or a measure of real world experience? Well, what Paul gets across to me is that it's the latter, but my experience thus far is very much a small subset of the experiences of people who feel they need to go beyond XmlRpc. My experience with my problems is valid, and XmlRpc is useful. It's just that there's a larger domain for which XmlRpc falls down.
This, I think is the key: XmlRpc is not complete shite. It works just great for the right problems, which happen to be mine. The question to which I don't have an answer is this: How many developers' problems fall into my kind of domain?
Third and lastly, though I didn't get any flames for XmlRpc advocacy itself, I got a few private nastygrams flaming me for talking nice about DaveWiner's work after I'd talked not so nice about him some months ago. With regard to that, I have to say that I'm not on any particular crusade, other than for that which I find interesting. That said, I dislike many things Dave does and says. Though I'd rather not add fuel to fires, I might fail to resist on occasion - as noted above. However, I'm not the guy who's going to change his ways, he obviously doesn't feel a need to change, and I frequently don't have all the facts anyway. But I do like some of the guy's contributions to the world, so when occasion arises (as it did with XmlRpc), I'll say so. In any case, DaveWiner has given me much food for thought.
[ ... 642 words ... ]
-
This really confuses me. Seems that I've gotten 2 pieces of comment spam on one of my more visible blog entries (due to controversy) from back in September, More on the Monster Mash. The first one expresses confusion, then links to something about zip codes, and links again to something about business plans. The second one is apparently a strange flame at DaveWiner, then links to something about vehicle history reports.
I'll likely delete these in a lil while, but it really confuses me. Do things like this really bring in financial results to the spammers? Random links in nonsensical comments and inappropriate flames? Do they really have some metrics on this, before & after spam revenue results? This, and other spam, is really seeming like superstition in pigeons, where various kinds of spam are tried and maybe money is made so more kinds of spam are tried until the oddest forms appear.
I really do hope that, some day, the full fury of hell and recipients of spam scorned rain down on the heads of all those slimy bastards.
[ ... 801 words ... ]
-
You know you're an info freako and coding junkie when: Your trusty iBook - home to all of your projects in development, news aggregators, and blogging tools - goes into the shop for repairs, and all you're left with is a Treo 300 for convenient from-the-couch computing over the long Thanksgiving weekend. Most people would settle into a good book or maybe a James Bond marathon.
Instead, being thankful for unlimited data access and at least having a screen at which to stare (albeit tiny), you install LispMe and a handful of editors and tutorials in a sudden, inspired effort to a) learn a new language (Scheme) and b) write a news aggregator and blogging tool for PalmOS as your first project. I mean, hey, LispMe has access to net sockets on PalmOS - XmlRpc and REST, here we come!
Then, you go have Thanksgiving dinner, doze on your mother's couch in a food coma, dream about hordes of attacking parentheses (too much Scheme and NetHack), and all but forget about the whole thing. Watching the James Bond marathon isn't so bad after all.
[ ... 185 words ... ]
-
Noticed http://example.com linked to via the #rdfig weblog, and had this thought: Thank goodness that someone had thought to register that domain, or else my boss would have more likely than not been viewing porn after clicking on a link to it in our last meeting.
[ ... 169 words ... ]
-
So, while I was catching up on T Bryce Yehl's blog since missing his transition to MovableType, I caught an interesting blurb he wrote with regards to Phil Ringnalda's ponderings on FriedPages and BakedPages in weblogs: "Funky caching" could be useful for a static publishing system as well. Weblog archives can consume a great deal of space, yet those pages are infrequently requested. Why not GZip entries from previous months and use a 404 handler to extract pages as-needed?
The funky caching to which he refers involves implementing a 404 (page not found) handler that, instead of just supplying condolences for a missing page, actually digs the page up out of cold storage if it can. I think I need to look into this for my site - throw all the old blog entries away into gzipped files, or maybe a tarball per month, and have a funky 404 handler dig them out when needed.
There are issues with this - such as what happens if I want to edit old content, or I change templates, or what not - but I think there could be decent solutions for those. Hell, maybe this is an easier way to blog locally and publish globally - don't rsync directories of files, just publish locally and upload a new tarball. Then, on the remote site, delete the index, RSS files, and other affected files and watch happy updates ensue. If a massive site change is made, rebuild locally, re-tarball every thing, upload the new tarballs, and delete all remote content to trigger revivification. Scary but possibly nifty.
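The 404 handler for this could be pretty small. Assuming Apache configured with something like ErrorDocument 404 /cgi-bin/revive.cgi, and one gzipped tarball of HTML per month of archives, the handler might look roughly like this - all paths and the naming scheme are made up for illustration:

```python
#!/usr/bin/env python
# revive.cgi -- rough sketch of a "funky caching" 404 handler.
# Assumes Apache is configured with: ErrorDocument 404 /cgi-bin/revive.cgi
# and that archives are stored one gzipped tarball per month, e.g.
# archives/2002-11.tar.gz containing that month's HTML pages.
# All paths and the naming scheme here are invented for illustration.
import os
import re
import sys
import tarfile

ARCHIVE_DIR = "/home/deusx/archives"

def main():
    # Apache passes the originally requested URL in REDIRECT_URL.
    requested = os.environ.get("REDIRECT_URL", "")
    match = re.match(r"^/(\d{4})/(\d{2})/(.+\.html)$", requested)
    if match:
        year, month, page = match.groups()
        tarball = os.path.join(ARCHIVE_DIR, "%s-%s.tar.gz" % (year, month))
        if os.path.exists(tarball):
            with tarfile.open(tarball, "r:gz") as archive:
                try:
                    body = archive.extractfile(page).read()
                except KeyError:
                    body = None
            if body is not None:
                sys.stdout.write("Status: 200 OK\r\nContent-Type: text/html\r\n\r\n")
                sys.stdout.flush()
                sys.stdout.buffer.write(body)
                return
    sys.stdout.write("Status: 404 Not Found\r\nContent-Type: text/html\r\n\r\n")
    sys.stdout.write("<p>Sorry, that page seems to be gone.</p>\n")

if __name__ == "__main__":
    main()
```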
[ ... 1031 words ... ]
-
So... I want to learn and tinker with C# and .NET, but I don't have and don't want to build any Windows machines. I have some headless Linux machines, but my main daily environment is Mac OS X. So now I see that The Shared Source CLI 1.0 Release compiles for OS 10.2, and I see that the DotGNU project has some news of a 100% free software release of Portable.Net. What about Mono? Any OS X news?
Or are things still basically in the "You can compile & run Hello World, barely, but don't expect access to any of the standard framework classes or GUI building" stage? Or are they in an even less developed stage that would make my non-compiler-building branes melt out my ears?
[ ... 172 words ... ]
-
With Syndication is not Publication, Mark Pilgrim elucidates in eloquence what I'd vaguely poked at in silliness. No, I agree with Mark - syndication cannot replace publication, and publication cannot replace syndication.
Though, I think it's interesting to watch the thought experiments and the "what ifs" as Anil commented - and while I sense some eloquence in both approaches, neither answers all the demands placed on each. Hopefully, what dabbling in overloading one could provide is a little insight into enhancing the other. In the end, as Dave commented and Mark demonstrated, there will be no fight - tinkerers will route around the solution they don't like.
[ ... 108 words ... ]
-
So, from my last installment, I left off with this: My daily situation with regards to integrating the web-based services of various parties boils down to what I can explain in 30 minutes to a developer of unknown competence to whom I'm barely a priority. So far, I've been able to apply XmlRpc as a solution to this problem with a great degree of success.
About this, indulging me although I'm sure issues like mine have been hashed out a million times already on various mailing lists, Paul Prescod writes: There's nothing wrong with the way you've used XML-RPC and I've used it that way myself. I only write to ensure that you (and anyone reading) understands the costs and benefits of this approach. You've given up on many of the features of the underlying XML and HTTP technologies. If you don't need the features, then amen, let her rip. But people building more substantial apps do need them.
Yes. So, I know that my chosen solution has its blemishes and land mines awaiting my step. But, none of them have bitten me yet, nor do I expect them to for some time. On the contrary, for every practical application of XmlRpc we've deployed, we've had happy clients. While working in this industry, the 80/20 rule has been beaten into me - so if a solution that's 20% complete solves 80% of our problems, it's time to move onto the next problem.
This, however, is a dangerous thing to become complacent about - there're still 20% worth of problems out there for which this solution falls down. And each one will become a work-till-3-am nightmare when they hit. And besides, I'm a perfectionist and a purist at heart, so the dirtiness of this solution will never fail to irk me. It may be simple, but simple is not necessarily elegant. So what about the rest of Paul's points?
The first axis of flexibility you've given up is in character set. XML-RPC disallows Unicode character strings.
On one hand, my impulse is to respond to this by invoking YAGNI - since our use of XmlRpc involves messages between machines, I don't care about localizing those. I'll just keep to language neutral data in there. But, I'm naive. With a bit of pondering, I can identify at least one area where user-supplied input needs to be traded - shared registration and login for global promotions. And if I can identify one, there's bound to be more. I don't have as much experience with Unicode and handling languages other than English as I'd like, so I can't trust my assumptions here.
XML-RPC is brutally non-extensible. ... Dave Winer has been actively hostile to every effort to evolve XML-RPC.
In this case, I'd say this is a good thing, for XmlRpc in particular. It does what it does, no more and no less, and this will never change. I will never need to bring up versions of the spec in my 1/2 hour conversation with the junior engineer. And though I don't want to second guess Dave Winer, I assume this is the goal in his insistence on XmlRpc being ever frozen. The alternative at which he points, if memory serves, is SOAP.
The third aspect: XML itself. What will happen when and if your industry, whatever it is, canonizes a schema for what you do, like the hundreds of existing ones. ... Maybe your industry isn't organized enough to define a schema for whatever it is you do.
Bingo. Our industry is neither mature nor organized enough to conceive of any sort of schema. Our products and the concepts involved in our work are in constant change, not to mention the conditions of the market and what will continue to work.
Another way of attacking your interoperability problem would be to start from that point of view, defining the XML and then working back to the REST system. That's a much more natural way to define a REST-based web service and is arguably a better way to define web services in general. I would be curious whether you tried this.
Nope, haven't tried this. This is where I start calling things "ivory tower". To me, it makes elegant sense.
You say you tried "REST-ish URL schemes" but that isn't the same as trying REST. In particular, you don't say much about what your XML strategy was.
Calling our early interop schemes "REST-ish" may have been too strong an association with REST. No XML strategy. More like: "Make a GET to this URL with these query parameters, documented here. You'll get a simple text response back, documented here." This was always balked at, though I thought it was the simplest thing in the world. I might have some thoughts on the failure of this, if I think about it some more.
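Concretely, those "REST-ish" integrations amounted to calls of this flavor - the URL and parameters here are invented, but this is the whole spirit of it:

```python
# The flavor of the old "REST-ish" integration scheme: a plain GET with
# query parameters and a simple text response. URL and parameters invented.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({"promo": "summer-giveaway", "email": "player@example.com"})
with urlopen("http://promos.example.com/award-entry?" + params) as response:
    print(response.read().decode())   # e.g. "OK 1 entry awarded"
```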
I don't know your suppliers, but in the vast majority of situations I have been exposed to, they "get it" if you tell them: "here are some XML example documents. POST things like that to me when you want me to do something and GET them from me when you want to get information from me."
Whenever XML is brought up, fear rises and eyes glaze. We're the "supplier" in most situations, and it's rare that our clients are tech-centered companies. They have a web site, and maybe a team maintaining it if we're lucky.
You depicted the situation as SOAP on one side (grungy), REST on the other (ivory tower) and XML-RPC in the middle. REST is basically the same old CGI-ish stuff we've been doing for a decade so I don't see how it is very ivory towerish.
Well, I'm mixing audiences here. SOAP, to me and my lack of time with it, seems grungy and in flux. To my clients, REST seems out there, especially if I ever try to explain to them what the letters stand for. :)
"What's a siggy?"
"You mean a C-G-I?"
In explaining XmlRpc, I usually say something along the lines of "your people's boxes talk to my people's boxes," then point them at the website that says largely the same thing. Of course, this begs the question of my effort and competence in explaining REST to others, which I feel is lacking thus far. Because, well, REST is also their boxes talking to my boxes. And, I haven't found a website at which to point both techie and non-techie audiences to "get it".
...perhaps REST doesn't apply to you. Yet. ... You say that REST doesn't "feel finished" to you. Nor to me. Our industry is in the middle of a huge migration to extensible formats like RSS which builds on RDF which builds on XML.
I don't think REST quite applies to us yet. I'd like it to. Maybe once we, as a company, have formed much stronger partnerships I can get past the 30 minute barrier and get to some real discussions and establish things like XML schema.
When people are comfortable with extensible, hyperlinked formats, REST will seem like a no-brainer.
The more I read and think about REST, the more I agree with this. It's just that I find it very hard to sell the idea yet. Again, this may be the nature of the beast for whom I work, and it may be my lack of ability to describe this simply.
But for now, it seems that the ability to throw a random unknown web developer an API and a URL with less than 30 minutes' accompanying discussion, and get results, seems to work for us.
[ ... 1478 words ... ]
-
So, my little off the cuff case study with XmlRpc from yesterday got a ton of flow from Dave. Got some comments and some email, and a very good response / rebuttal from Paul Prescod. Thanks to everyone for reading & responding!
I think a few things from that case study, the response to it, and Paul's comments need some more consideration. And, I think some more clarification of my situation would work as well. So I'll start with that:
The first thing is that I think the quote Dave pulled from my case study nails my situation: "I get it, my boss gets it, his boss gets it, the sales guys get it, and the marketing guy who's always our first contact at a client gets it."
I work at a promotions company. Because we act as a sort of application service provider and host everything ourselves, we can do and have done some pretty nifty things within our walls.
On our clients' side, though, they're hiring us to do nifty things that don't require them to do much at all. In fact, the vast majority of the time, our direct client contact is a marketing/PR person who has little, if any, access to technical resources. It's a major undertaking for this person to get an image tag or a javascript pop-up window slipped into an HTML page somewhere on the company website - anything further than that leads down a trail of tears and a week's worth of conference calls with grumpy people. We've spent more than 4 years streamlining everything down to the acceptance of this fact.
So then, in walks the client who wants a lot more, like the aforementioned client for whom XmlRpc worked swimmingly for integration between their site, their commerce vendor, and us. Our direct contact is still a marketing person, but she has a bit more technical support behind her. She might be able to get up to 4 hours' time from a junior engineer over there to whip up a CGI or JSP, assuming that the team in charge of the website approves permission to do so.
As part of that junior engineer's time, I have one half-hour conference call to explain our points of integration, how everything's going to work, and where to go for resources on how to do it. I barrage the person with email, offer availability on at least 3 different instant messaging networks, but when it comes down to it this 1/2 hour phone call is all anyone remembers. We've been working on improving this situation, but the only traction comes from where we've been able to gain the confidence of a client as a long term strategy partner. Otherwise, we're just some random vendor who gets tossed away if we make too much noise.
What this all boils down to is that I've needed to develop a scheme of integration between web apps that can be explained mainly in 30 minutes to a developer of unknown competence to whom I'm barely a priority. This is my situation at present. The situation has room for improvement - but the power to make those improvements is largely beyond my influence - so I've applied XmlRpc successfully so far as a solution.
So, I'll post this, and in the next installment (if you're still with me), I'll voice my concerns with the solution I've chosen, and consider what Paul's written. As always, I encourage you to tell me where I'm full of it, and why.
[ ... 588 words ... ]
-
About SOAP vs REST vs XML-RPC, Dave writes: By and large REST interfaces don't tell you how to serialize and deserialize complex structures, so you kind of start from scratch every time. If they use a SOAP encoding, you get some of that. But there just is nothing simpler than saying "Here's the XML-RPC interface, and some sample code in a few of the popular scripting languages." If you want developers to get going quickly, avoid the religious wars, just support XML-RPC. Now even this isn't bluster-free. Think of it as evangelism. Have a nice day.
Though I don't like his remarks on REST, I gotta give Dave an amen here and thank him for having XmlRpc out there for me to stumble upon. REST seems shiny and neat to me but incomplete - I'm pulling for it because it seems warm and fuzzy to my ivory tower branes. SOAP seems to try to be nice & clean, but feels grungy and toxic to me and I've always felt a vague sense of unease when walking past its house on the block. In the middle, XmlRpc seems to be the right balance of dirt and acidity to grow things in.
About XmlRpc, I can speak with some experience now. I first started tinkering with it a few months shy of a year ago, when I first launched this blog. I went to http://www.xmlrpc.com and learned it in less than an afternoon. Cobbled together a few Projects with it, and saw an opportunity to introduce XmlRpc as a new tech at work.
(In case you don't know about my job, I'm the guy who does most of the research & development, tool building, and overall platform architecture at a promotions company called ePrize. I grumble about the place from time to time, but on the whole they let me do an amazing amount of wandering around in finding new things.)
Anyway, we'd been poking around with a few different means of integrating our promotions into clients' websites. You see, we host the web part of all promotions on our servers, and work toward making inclusion of those promotions not much harder than slipping in a banner. But sometimes, there's just no avoiding more complex integration. And sometimes, it's exactly what you want with things like shared sign-up & login, awarding tokens & coupons, and coordinating between vendors' services. So, we had a few different REST-esque URL-based schemes, some FTP-and-PGP-based things, and even managed to convince one client to use a one-off protocol between two perl net daemons. Every time, it was something new, and we couldn't seem to get any two clients happy with the same integration scheme.
Then one day I bring XmlRpc into the office, and weave it into our app platform at the ground floor. Overnight, just about every aspect of every promotion we do can be accessed via XmlRpc, given the proper switches thrown and the right access controls set. Producing an XmlRpc API spec for potential clients took mostly a bit of cut & paste massaging of our internal API docs.
(Your mileage may vary greatly, but our homegrown platform seemed pre-designed to fit with XmlRpc. This point right here may be what makes this anecdote fall on its face for you and your experiences. But, based on the success we have with it on the client end of things, I suspect that we're not that much an outlier.)
The next day, we had a meeting with a giant customer whose name I'm not sure I can mention, but suffice it to say you've heard of them and it's very, very likely that you've used their gadgets. We helped them put together a customer referral and rewards program, using XmlRpc as the glue between an online store vendor, the client's site, and us. It was a great success, and since then we've done another dozen or so cool promotions involving XmlRpc.
What makes this tech so successful here is that it's so simple and decently understood. I get it, my boss gets it, his boss gets it, the sales guys get it, and the marketing guy who's always our first contact at a client gets it. And when we finally drill down to a tech guy on the client's side, we just tell him, "Here's the API for your promotion, and visit http://www.xmlrpc.com for details on how to call it." And that's it - 75% of the time they get it and implement it. The rest of the time, we have to spend some time proving the tech and answering doubts, but it's always a pleasant experience from our end having all the answers.
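That half-hour conversation works partly because the client-side code is this small. Here's a sketch of what "here's the API, go call it" looks like with Python's standard xmlrpc.client module - the endpoint URL and method names are invented for illustration, not an actual ePrize API:

```python
# What "here's the API for your promotion" boils down to on the client side,
# sketched with Python's standard xmlrpc.client. The endpoint URL and method
# names are invented for illustration -- not an actual ePrize API.
import xmlrpc.client

promo = xmlrpc.client.ServerProxy("http://promos.example.com/xmlrpc")

# Register a player and award them a game token after a qualifying purchase.
result = promo.registerPlayer({"email": "player@example.com", "zip": "48226"})
promo.awardToken(result["player_id"], "holiday-referral", 1)
```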
I'll adore REST if/when it gets to this point in helping me get mine and my client counterparts' jobs done.
Anyway, this is sounding like a bad infomercial and I need to get back to work. But I had to toss out this bit of anecdotal dross in favor of XmlRpc.
[ ... 1403 words ... ]
-
In Formats for Blog Browsers, Dave writes: I wanted to add a facility that would automatically back up all your weblog posts... "I bet RSS 2.0 could do this," I said out loud. And now that the code works, the answer is clear. It can. ... Then another lightning bolt hit me. ... What if someone made a browser that worked with this format? Let's call them Blog Browsers, apps specially designed for reading weblogs.
About syndication formats, Anil Dash writes: I have a radical proposal for a ubiquitous content syndication format, applicable for almost any purpose, but extremely well suited for weblogs. ... My new syndication format is called XHTML.
In one corner, we have the syndication format taking over the document format. In the other corner, we have the document format taking over the syndication format! FIGHT!
Yeah, yeah, it's not quite that simple - but the opposed directions are interesting.
[ ... 413 words ... ]
-
About the "RDF tax" in RSS, Jon Hanna says:Ah, but you're missing the key point that a framework for making statements about web resources is of no use to a format that makes statements about web resources. It was obviously forced. :)
Heh, heh. Full point, RDF team. Now I go back to studying.
[ ... 54 words ... ]
-
Want to watch me taking baby steps in coming to terms with RDF? Come take a look at, or point and stare at, my RdfNotes!
[ ... 26 words ... ]
-
Justin Hall of TheFeature, in From Weblog to Moblog, writes: Weblogs reflect our lives at our desks, on our computers, referencing mostly other flat pages, links between blocks of text. But mobile blogs should be far richer, fueled by multimedia, more intimate data and far-flung friends. As we chatter and text away, our phones could record and share the parts we choose: a walking, talking, texting, seeing record of our time around town, corrected and augmented by other mobloggers.
Touches on SmartMobs and collaborative journalism. He calls it ?MoBlogging (mobile-blogging), but I'll still call it BlogWalking.
Haven't had a whole lot of success doing any blogging on the go with my Treo yet, but then I haven't tried very hard at all, due to a lack of copious free time. Have some ideas that should help. Liking the Treo, but after settling into it, I'm kind of wishing I could have gotten a HipTop. I expected the Palm to be mature, but it's just kinda clunky and creaky and elderly. I expect the HipTop to get slicker, and I'd like the bigger keys better - not to mention the lil camera. But, my Sprint contract is up in a little under a year, so around this time next year I'll retire the Treo and survey the landscape again. Hopefully there'll be a dream device out by then.
[ ... 227 words ... ]
-
Okay, so I'm probably the only one who didn't know this, but I've been wondering why it seems that every website owned by someone within a few degrees of separation from TimBL tends to use URLs of the form:
http://www.example.org/2002/11/nifty-thing
Just one of those things I figured kinda made sense, but I was never sure why. Then, today, after a bit of wandering while researching things RDF and SemanticWeb, I found a link from Sean B. Palmer pointing to Hypertext Style: Cool URIs don't change, by TimBL himself. Seems the example of this pattern is laid out right there.
Seems like it would work like a limited sort of concurrent versioning scheme, but it just looked wonky the first time I saw it. I mean - date-based website layout? I'd been raised on the high falutin' directory trees made by very well (overly?) paid Information Architect types. /2000/10/stuff? What about /about-us/corporate/ceo.html?
Of course, this ignores the fact that web servers need not directly tie physical disk layout to URL layout. Or that site architecture is best presented via links in the documents themselves. It's just that plain vanilla Apache, out of the box, maps URL paths 1:1 onto file paths, and that's what most everyone uses.
Hmm.. Might play with it a bit around here.
[ ... 483 words ... ]
-
Remember that Slashdot story from the weekend - Gillette Buys Half a Billion RFID Tags? Boring, yawn, inventory management, who cares? Well, imagine if those things, along with their readers, got so ludicrously cheap and small that mothers would stick 'em to kids' underwear as they went off to camp, and readers came standard in watches and cell phones. Imagine that, somehow, the range was improved to at least 20 feet - your reader sends out a radio ping (MARCO!) and back comes a list of the contents of the room as every object responds (POLO!). Assume as well that your reader can work out the location of each of these objects.
Ignore the big brother fears for now - everyone will have this stuff, not just the MiBs. Also, ignore my ignorance - many of these enhanced tricks are likely impossible or at least very hard for these little gadgets. For now. Until someone does something clever.
But, imagine! Never lose your keys again! Find the cat! Owners manuals to everything you own - should you ever actually have the impulse to read one - are available with a tag ping and a lookup. Imagine the games SmartMobs will find for this stuff! Handle the tag code on objects in the world with a URI scheme - the Semantic Web reaches out to help create the Semantic World! It's CueCat on super steroids! Facial recognition in your PDA to remember my name? Bah - I've got a RFID business card in my pocket. Consider the combination of this with a WearableComputer, and the world becomes just a bit more active as previously inanimate objects can tell you their stories and stories told about them. High tech animism!
Okay, I'm winding down on this. I know I'm going wonky with this idea, but this is along a theme I've been playing with in my head: My favorite sci-fi stories and meditations on the future involve little clever bits of tech that get tweaked and leveraged in powerful ways few could have guessed at. Maybe this tagging tech I'm hyping won't be it, but it's one of the ideas that tweaked me in VernorVinge's Fast Times at Fairmont High.
Trying to stretch my brane more along these lines - squinting at the knee of the curve of TheSingularity. What else is on the verge of emergence, how much of it is crap and how much of it is the real thing? Will things like this and the WearableComputer and ubiquitous internet actually cause major change, or will it just turn into more chuckles for my grandkids when they ask me what it was like in the 90's and double-naughts, when I was living it up at conventions on the company tab, sipping a stout while getting my whites washed at a whacky laundry/bar in San Francisco?
[ ... 560 words ... ]
-
Just read E-mail as a System Console. Part I over at Linux Journal. It's something I've been meaning to implement for awhile, using CPAN:Mail::Audit and other tools. Got a bit of a start on things, with a sort of plugin system for adding commands available via email. Maybe I should hit this again.
Personally, it's been a long, long while since I had a dialup-based system - so I don't have to solve their exact problem. But, it would be nice to fire off an email to myself or some other mailbox I own, formulated just so and cryptographically signed by me that would cause some machine behind a firewall to open up an SSH tunnel to a predetermined location somewhere. Among other things.
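If I ever get that far, the plumbing might look roughly like this - a sketch built on CPAN:Mail::Audit, where verify_gpg_signature() is a hypothetical stand-in for a real GnuPG check and the ssh tunnel command is purely illustrative:

```perl
#!/usr/bin/perl
# Rough sketch of an email-as-console filter with CPAN's Mail::Audit.
# verify_gpg_signature() is a hypothetical placeholder for a real GnuPG
# check, and the ssh tunnel command is purely illustrative.
use strict;
use Mail::Audit;

my $mail = Mail::Audit->new( emergency => "$ENV{HOME}/emergency.mbox" );

if ( $mail->subject =~ /^CMD:\s*open-tunnel/i && verify_gpg_signature($mail) ) {
    # Open a reverse SSH tunnel to a predetermined host (illustration only).
    system( 'ssh', '-f', '-N', '-R', '2222:localhost:22', 'tunnel@example.com' );
    $mail->accept('commands');    # file it away as an audit trail
}
else {
    $mail->accept;                # everything else lands in the usual inbox
}

# Placeholder: a real version might shell out to gpg or use a CPAN module.
sub verify_gpg_signature { return 0 }
```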
[ ... 125 words ... ]
-
Whew, it's been a long stretch of urgency and emergency lately without much time for reflection or writing, let alone any general hackery. Don't want to go through a fully detailed recap - but my girlfriend moved in with me; mostly have survived a near ?DeathMarch project; took care of said girlfriend after surgery; was taken care of by said girlfriend after surgery myself. Managed to squeeze a birthday and dating anniversary celebratory break in there, but - not much time for blog, Doctor Jones.
I really want that to change now. A lot has been going on since I've had to take a step back from large parts of my life, but I think I can step back into things again now. Need to restart the social life, and start poking my head out of the lurker pool on the net. Time to finally regroup, exit the state of urgency and emergency, and get back to moving forward instead of moving in reaction.
[ ... 165 words ... ]
-
Just fixed the download links on these projects: RssDisplay, ShowReferers, GoogleToRSS. Just took me awhile to circle back around to it, ugh. So if you were ever interested in those projects, please try grabbing them again for ridicule. :)
[ ... 40 words ... ]
-
I've gotten some interesting comments and emails with regard to yesterday's semi-rant, Bitterness in the halls of Xanadu, that further expose some itches I felt after writing it. Two main things:
Ideas are not completely worthless, and I have a certain adoration for sprawling cathedrals of vapor.
Sometimes the best implementations from the best ideas go unremarked or unnoticed due to obscurity or a lack of understanding.
My recent and vehement attachment to the meme of ideas are fucking worthless is a reaction to my own time spent working on projects that seem to me, in small ways, to resemble what I've read about Xanadu - in terms of long development arcs and seeing similar (but flawed) systems released before my work is done. This presumes a lot - I've not published anything (besides this weblog), probably influenced less than a handful, and these projects of mine are more than likely nothing in comparison to Nelson's work. They certainly haven't consumed as much of my lifetime as his, by an order of magnitude. But still, it felt like some of the scenery along my road matched his. And I don't want to end up down that road, frustrated or bitter that my visions or work had been misread and appropriated by others. (Again, this presumes I have visions or work worth stealing! :) I can't say that I've demonstrated such, as of yet.)
And then there is the addictive quick fix a hacker like me can get in the blogosphere: with just a quick and clever twist of code, one can improve the neighborhood and take in some brief praise. Spend a bit more time working, maybe a few weeks or months, and introduce a slightly more useful or complex contribution - you might receive kudos for some time running. But, work for too long, plan too far, build too high, and the blogosphere likely passes you by - unless you really have a deep grasp on what's going on, and your projects meet the blogosphere where it later arrives. So I see the techie bazaar of the blogosphere as a kind of fun cauldron of hackers, throwing in ingredients and taking others out, kind of a hivemind without design evolving toward higher connectivity.
So where's there a place for longer-term design?
Okay... more to think about and write, but for now, I'm leaving for surgery. Plenty of time for babble while recovering. Wish me luck!
[ ... 405 words ... ]
-
I'm a little late on this, but I just read a BBC interview with Ted Nelson this weekend. I don't know the man, and am familiar with his work in only the most cursory way - I've read a bit about Xanadu, skimmed some articles on its history, but I've yet to download the code finally released a few years ago and see what's what. Having said that, this interview reads like the bitter mutterings of a guy who wants to slug TimBL and the rest of us web hackers for making him all but obsolete.
From what I've read, the body of work surrounding Xanadu seems to have a lot to say for itself, though some of that - the assorted collection of almost psychedelic jargon invented to describe its convolutions - seems almost self-parodying. The history of Xanadu and Nelson's work with hypertext systems looks to me to be yet another proof that ideas are fucking worthless, and another vote in favor of bazaar- over cathedral-style development.
Maybe it's the fault of the interviewer, but Nelson comes across as a bit self-aggrandizing in trying to puff up how creative and multi-talented and immersed in media he is - I mean, he made his own rock opera, for cripes' sake! So I guess this should set the stage for his authority when he dismisses HTML as "a degenerate form of [hypertext] that has been standardised by people who, I think, do not understand the real problems" and the web as "... trivially simple ... massively successful like karaoke - anybody can do it."
But y'know, for all his criticism of systems here and now, and claiming that the people involved with the web don't know what they're doing - he has a surprising lack of working software out in the world. That's what I consider success - working and well used implementation. Who doesn't understand the real problems? One can only build architectures of vapor for so long up in the ivory tower before one must pit them against the world. I don't suppose he's ever tossed around the idea that maybe, just maybe, Xanadu hasn't stormed the world because it's too big and cumbersome and amazingly convoluted for anyone who hasn't worked with it for 30 years to put into use? There is much to be said for the benefits of karaoke-simple technology.
So yeah, maybe on some Platonic plane of ideal forms, compared to The One True Hypertext, HTML is crap. Okay, maybe right here on my hard drive, HTML is crap. But it's on my hard drive, and I use it. It's been learned from, and attempts are being made to improve upon it. As far as I know, Nelson's ideas of perfection have never seen a pounding from the world of imperfection. That's the crucible in which things are really formed. You get your software in reasonable shape, toss it to the wolves, and see how it fares. Realize that some of your prized pet theories and designs were bullshit, and rework the thing for the next time around.
Okay, that's my rant. Now I'm back to work, and possibly maybe off to check out more things Xanadu and Nelson on my lunch break.
[ ... 1370 words ... ]
-
So, with regards to things SemanticWeb, I think I'm about to eat dogfood in considering a BlueSky project at work. We've been tossing around ideas for a kind of uber-space or organization-wide brain in which to gather all kinds of details about clients, projects, project details and components, lifecycle, etc.
We want this thing to be as flexible as possible, without filling up a wall with printouts of database tables and relations - in fact, we want the thing to provide ad-hoc relations we hadn't thought of at first. We want people (ie. project managers, sales) and robots (ie. metrics engine) to contribute data, and then people (ie. sales and clients) and robots (ie. automated project build) to be able to query and use this data. We want roles and access control. We want scalable centralization of data items. (ie. Why should the start date of a project be maintained in 12 different places?)
I'm certainly not naive or ignorant enough to think that this is virgin territory. There are entire industries devoted to these issues of business data integration - but here, budgets are very slim yet we love to play with new tech. It continues to astound me, but with a little ingenuity (okay - a lot of ingenuity - we have smart people working here), this has led us time and time again to a combination of Open Source software and homegrown code that treats us better than any outside vendor solution. So, I'm hoping to pull another hat trick here and have some fun expanding my brane at the same time.
One of the first notions was to ease more information into our LDAP servers, since it has a very nice hierarchical layout and can accept arbitrary name-value attributes for items. But then the topic of RDF came up, and the discussion really caught on fire as everyone came to understand what RDF is - at least insofar as I understand it, since I seem to have become the local domain expert.
So, first thing is: I hope I really do grok this stuff, at least at a gut mojo level. No one's called me clueless about it yet. But the second thing is: Any practical tool suggestions, case studies, prior art, etc that I should be looking at? I've started with a Google search for RDF and I've been wandering W3C documents - but I need a Busy Developer's Guide. My ultimate hope through all of this, is that even if things are still baking, that there's enough out there with which to make something practical.
The goals are gigantic, but my intuition is that using SemanticWeb tech will let us start out small and simple and then add vocabularies and items as needed without massive tool rebuilding. This is the key thing - the ability to do some initial, fairly easy things that show early results without a heavy, multi-month process to get the thing providing value. My gut tells me it's possible. Am I mad?
[ ... 615 words ... ]
-
My referrer log harvester and stats were down last week, so I missed out on just how much traffic and flow my Semantic Web ramble had gotten. But, I followed the paths back to a riff on the Semantic Web by mrG of ?TeledyN, perhaps more appropriately predicting The Semantic Mycelia. And I think it's on the mark with what I've been thinking lately.
I've yet to really have time to dunk my head into the Semantic Web barrel for the apples - play with the tools, get some results - but I think I get the mojo. And it excites me. But, I've yet to really see any flowers bloom or any heads pop off. I hate to criticize or critique, not having walked the walk yet, but I worry that the present state of things Semantic Web and RDF is walking similar cathedral paths to the ones Xanadu wandered decades ago. I say this in the sense that there seems to be a lot of heavy design up in the clouds, and not enough improvisation down on the ground. And it's the improvisation that kicked the web off to its heights.
On the other hand, it's not like one company has the Semantic Web in a stranglehold. There are groups which, had I the spare time and inclination, I could probably join and pitch in a hand. It seems an open and friendly group, normal birthing struggles and conflicts notwithstanding. So I'd say this thing seems more a bazaar than a cathedral.
But from my occasional stops by the farm, looking over the fence, I see equipment and activity, but no loam or topsoil tilled with things blooming. I don't see the mushrooms popping up in the moist dirt. I see lots of work on the tractors, but I don't see the ground getting into condition for all the little fungi to spontaneously appear. And that's what I see with the web now, and with the blogosphere. The dirt's nice and warm and space aplenty, so things are burbling up all the time. And it feels like I'm getting some neat things done without the need to commit a slab of time to a group or a process other than my own jazz on my own time at the keyboard.
I guess what I'm getting at is that the Semantic Web just doesn't yet seem to me sufficiently dirty for the grassroots to grow. But I'm still unsure about this assertion. Need to play more.
[ ... 459 words ... ]
-
Even though I've been in a whiny, self-pitying mood, there is a reason I got into all of this to begin with:
I have to keep this always in mind. And yes, that is authentic - from first grade, I believe. And there's another like that for second, and third, and...
[ ... 196 words ... ]
-
Mental/public note, things to catch up on as soon as the Death March and surgery are cleared:
Fix all the broken links in wiki to download scripts & projects.
Revamp site design to use CSS, accessibility, and less ugliness.
Get back on IRC and in touch with Morbus and the AmphetaDesk clan. (I missed a release!)
Write a CPAN:Net::Blogger based blog post page for AmphetaDesk
Also write a fairly simple yet generalized CGI wrapper for CPAN:Net::Blogger to use with my Treo (rough sketch below).
Toy with XmlRpc and PalmDevelopment on my new Treo - some lightweight native Palm clients might be nifty. Poke at ?MetaWeblogApi & others.
Play with the Java MIDP for Palm.
Look into more ?MachineLearning things, maybe apply Bayesian mojo to finding interesting RSS items to highlight.
Relax, recover, catch up on the world, enjoy quiet time with the girl.
Update: Copied things over to LesOrchardToDo
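On that Net::Blogger CGI wrapper idea, here's roughly the shape of what I have in mind - a bare-bones sketch with placeholder credentials and a placeholder mt-xmlrpc.cgi endpoint, nothing more:

```perl
#!/usr/bin/perl
# Bare-bones sketch of a CGI wrapper around CPAN's Net::Blogger for
# posting from the Treo's browser. The endpoint, credentials, and blog
# id below are placeholders; error handling is minimal on purpose.
use strict;
use CGI;
use Net::Blogger;

my $q = CGI->new;

if ( my $body = $q->param('postbody') ) {
    my $blogger = Net::Blogger->new( debug => 0 );
    $blogger->Proxy('http://example.com/mt/mt-xmlrpc.cgi');   # placeholder
    $blogger->Username('username-here');                      # placeholder
    $blogger->Password('password-here');                      # placeholder
    $blogger->BlogId(1);                                      # placeholder
    $blogger->newPost( postbody => \$body, publish => 1 )
        or die $blogger->LastError;
    print $q->header, '<p>Posted.</p>';
}
else {
    print $q->header,
        '<form method="post">',
        '<textarea name="postbody" rows="8" cols="40"></textarea><br/>',
        '<input type="submit" value="Post"/></form>';
}
```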
[ ... 154 words ... ]
-
Hmm, stevenf gives Zoe some love. I want to like Zoe, and I think I might try it again. But ever since I first found and later tried Zoe, I've not had much luck with any release. I think at one point I had a single email appear, but I'm just not getting the mojo. I think I sorta figured out what I'm supposed to do, but mostly it just sits there munching CPU quietly. Hrm. Not that my setup is exotic - I use Mail.app to access an IMAP server on my iBook. I see others manage to work it. Trying again...
Update: It appears that my setup is exotic enough - I'm foolishly using UW IMAP as a quick & dirty install. As-is, it points to the root of my home directory for email, and I set the IMAP folder prefix to mail/ in Mail.app. Well apparently, Zoe doesn't support this and instead tries spidering my entire home directory via IMAP. It appears that there may be hope in a future release, but not for now. I suppose I might switch IMAP servers, since UW has always given me the willies. And, I'm assuming that while I'm using IMAP, Mail.app doesn't have any mbox files for Zoe to auto-import.
[ ... 251 words ... ]
-
Looks like the Semantic Web hurts Russell Beattie's branes. Hurts mine too. But, I tried explaining what I think I understand in a comment on his blog and I figure it's worth reposting here for ridicule and correction:
Did you happen to catch Tim Berners-Lee on NPR Science Friday today? Not sure if you get the broadcast there, or listen to the stream. He was expounding on the Semantic Web a bit.
Maybe I'll take a shot at explaining, since I think I understand the idea. Likely I'll fail miserably, but here goes.
First simple thing: Look at your weblog page. What would it take to extract the list of people from your blogroll, just given your URL? What about the titles of all the weblog posts on that page?
You, personally, can extract that information very easily since you, as a learned human, grasp the semantics of the page quite quickly. (The semantics are, basically, what's what and what's where and what does it all describe.)
Imagine a document containing exactly all of the same info your weblog page presents - only the data is completely, easily accessible to a robot in a universal, easily handled format.
Furthermore, imagine that the schema describing the data to be found on your page is in that same format. And then, imagine that the document describing the construction of schema is in that same format. And then imagine that the decomposition continues, all of the way down to base data types and relationships. Eventually, the whole thing rests on the back of a turtle - er I mean a sort of universal client.
Now, what if every single page on the web were available in this manner? No scraping, no regex, no tricks. I could use the entire web as a database and execute queries that draw from data available on a myriad of disparate URLs. My client can figure out what to do with whatever it finds at a URL by chasing descriptions and meta-descriptions until it reaches the level of understanding implemented in the client.
Going out on a limb here, but imagine a practical example: "Hello computer, find me 2 roundtrip tickets for 7 days anytime in the next 10 weeks, for under US$300 each, to a vacation spot where the weather this time of year is usually warm and sunny, the exchange rate is better than 3 to 1 US dollar, and is rated as better than average by Ann Arbor, MI bloggers."
Assume my semantic web client knows some URLs to airlines, to international weather services, to exchange rates, and to vacation spot reviews in weblogs in Ann Arbor, MI. Assume that there are schema available for the things these URLs describe. Assume that my semantic web client can parse my natural language query.
So, it takes my request, goes out and snags the URLs appropriate to the various topics involved. Once it has all it needs to process the data in each URL, it can find me the answer to my query, based on data pulled from all over the place.
Now, get nuttier and bring in some intelligence with robots that can do some inference and reasoning. Say I throw out some facts: Mammals breathe oxygen. Men are mammals. Joe is a man. With the right client, the query "Give me all oxygen breathers," will include Joe in its results.
Whew. There. That's what I think I understand about the Semantic Web.
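For what it's worth, here's a toy sketch of that last inference bit - nothing RDF-specific, just plain Perl over (subject, predicate, object) triples, to show the shape of the idea:

```perl
#!/usr/bin/perl
# Toy forward-chaining over (subject, predicate, object) triples.
# Nothing RDF-specific here -- just the shape of the "oxygen breathers"
# inference, with the facts hard-coded.
use strict;

my @facts = (
    [ 'Joe',    'is-a',     'man'    ],
    [ 'man',    'subclass', 'mammal' ],
    [ 'mammal', 'breathes', 'oxygen' ],
);

# Derive new is-a facts by walking subclass links until nothing changes.
my $added = 1;
while ($added) {
    $added = 0;
    my @new;
    for my $f (@facts) {
        next unless $f->[1] eq 'is-a';
        for my $g (@facts) {
            next unless $g->[1] eq 'subclass' && $g->[0] eq $f->[2];
            my $candidate = [ $f->[0], 'is-a', $g->[2] ];
            push @new, $candidate
                unless grep { "@$_" eq "@$candidate" } @facts, @new;
        }
    }
    if (@new) { push @facts, @new; $added = 1 }
}

# Query: give me all oxygen breathers.
for my $f (@facts) {
    next unless $f->[1] eq 'is-a';
    print "$f->[0] breathes oxygen\n"
        if grep { $_->[0] eq $f->[2] && $_->[1] eq 'breathes' && $_->[2] eq 'oxygen' }
           @facts;
}
```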
[ ... 716 words ... ]
-
Should I be concerned? About me, Les Orchard, Googlism says: "les orchard is going weird again" About the other name variations I've used online (ie. l.m.orchard and Leslie Michael Orchard), it has nothing to say. Though, it does tell me that "decafbad is getting left out of the pipeline".
[ ... 50 words ... ]
-
A bit more meditation along the geek-turned-shepherd theme: I've always joked with friends and co-workers that someday, I'm going to go off the deep-end, quit my job, and do one of three things:
abstain from touching another computing device for the rest of my life and live out my days gently raising sheep for wool.
renounce all personal wealth and possessions and enter grad school on a ?PhD path, intending one day to become a professor
go completely apeshit bonkers, develop a slew of dramatic twitches and tics, and speak exclusively in pronounced punctuation symbols until they gag me.
So I've got that feeling again, but any expressions of frustration and angst I've tried writing so far all strike me as spoiled whining. Professionally, I still have a job in the field I love and I doubt that this is in jeopardy, despite shipwrecks and employment carnage everywhere I look. And I'm not facing financial hardships, just minor budgeting dilemmas. Personally, I only just turned 27 last week. I have a wonderful love and a mostly whole family. And other than some minor surgery approaching, I have my health.
So, in general, life's peachy. But, it only takes one solo death march project and a bit too much personal investment in one's work to lose perspective and make things taste sour. You can see the beginnings of it and my enthusiasm (and perhaps some of my trepidations) back in mid-July. The current project in question is not the web app platform rewrite I'd mentioned then, though, but rather a new major tool that seemed a good candidate as a "rev up for Java before the rewrite" and a break with Perl as the established in-house development language. So, despite a concerned trackback at the time from Brian Donovan of MONOKROMATIK, I've managed to avoid descent into that level of hell thus far. (This may still be naivete - but I'm not yet convinced that starting this over in Java is a bad thing.)
So, my project at present started off as a new venture, with new technologies and techniques, and a planned end product of immense value and demand for the company. But now, it's ending in a vague cloud of missed deadlines and frustrated expectations. A slew of "missing" features have been added far beyond the woefully incomplete original scope, yet features expected from the start have been dropped, leaving a sum impression of a diminished result. Initial deadlines have been blown by months, leading to an abandonment of testing and documentation just to sprint to the next already unrealistic due date.
With whom does the blame lie? With everyone - with no one - but no matter: I'm a perfectionist, so I choose myself. I tell myself that I should have known better than to have agreed to that timeline - I should have better foreseen the unexpected. I should have never let go of my pristine test-driven development for perceived speed - mystery bugs and instabilities are biting me now. And I didn't stick to my guns on feature creep - the "wouldn't it be nice if" genie is out of the bottle now.
All I wanted was to be accommodating - to deliver an affordable, massively flexible product in a perceivably reasonable timeline that satisfies all parties concerned. And as I write that somewhat convoluted sentence, I realize both that that really has been my motivating goal, and that the goal is somewhere between unlikely and impossible. Infinitely more so for just one guy, left mostly to his own devices, to pull everything into order. Funny, at the time, I didn't feel like one of the "hotshot programmers who are convinced that they can run barefoot all the way to the top [of Mount Everest]." (See: Yourdon.)
So what mystifies me now is this: I'm a smart guy, I've been through a few trying times and I read a lot and I try to pay attention. The development pattern I've just described is neither new nor unknown to me - I've been here before and I've read the accounts of others who've been here before. And I know the benefits in sticking to a clean test-driven dev cycle, and keeping a stern check on scope creep. I know these things, and I try to practice these things - I'm normally known as the guy who pees in the happy idea cheerios - so what hit me this time?
I'm carefully trying to steer clear of writing a rant or a whine - I know that my angst comes from a loss of perspective in being nose-to-nose with this project. I know it will get better once this is done, and I know from recent developments that this project can and will be rescued unlike many doomed projects. It will be good and well used and the disappointments will fade. But for the life of this project, the late nights and my absence from life online and off have been real, as have been the doubts and fears.
That said, what the hell happened? And how can I prevent it in the future? Is it just a matter of guts, principles, and discipline? I don't think I'm personally in the wrong line of work - but this sort of thing puts me in an unfortunate mood to question everything. Hmm, I know I've intimated that I'm well read - but I've yet to actually read Yourdon's Death March. I think it's time to dig into a pile of his works.
Anyway, it'll work out in the end. In the future, I need to learn how to steer clear of these rocks - though it seems Yourdon asserts that the rocks are unavoidable. So, even with the knowledge that this is not the end of the world, contemplating doors 1, 2, and 3 seems peculiarly attractive. I'm leaning toward #2 as the more healthy alternative, but it might be fun to pursue some combination of all three.
I'll let everyone know which I choose, when the time comes - but the message may come entirely encoded in non-alphanumeric ASCII.
[ ... 1020 words ... ]
-
Still alive... I think. Have worked about 40 or 50 hours in the past 3 days. First release of work project happens tomorrow, after which I hope to take a 4 day vacation to celebrate my and my girlfriend's birthday (Oct 24, same day).
Some revelations over the last few days:
October is not 31 days long - it is 31 days and one hour long.
Java has a class to handle date manipulations that I forgot to use, for some reason - but now that I know about it, there's some major rearchitecting in store to use it.
I'm tired of bar graphs, web server logs, and explaining how the tea leaves are read.
I like my new Treo Communicator - it is great for playing kMoria (but not NetHack) and quickly SSH'ing into a server from bed to restart a process about whose death I was paged on the same device. (ooh, what if the pager message could contain a button to connect to the server crying wolf?)
I've come to realize that I enjoy playing NetHack and kMoria between compile-and-test cycles at 3AM better than I ever enjoyed Everquest in quieter times.
On the drive home, as the sun was rising, I was trying to think of just what aspects of Everquest (beyond graphics and besides multiplayer) are better than the family of Rogue-like games. And... could someone hack together a Rogue-like UI with which one could play Everquest?
Also on the drive home, I was thinking that I'd like to take a break from being a geek and a software architect altogether, and maybe raise sheep and make sweaters for awhile. What do you think?
Hernias suck, and impending surgery is stress-inducing.
Hopefully I'll be recovered and returned to the world after next week. I've been vaguely poking my head in and around the neighborhood, but with my iBook suffering from seizures and my brain feeling like an alien abduction aftermath, I'll have a lot of catching up to do.
[ ... 1198 words ... ]
-
Found something sick from JWZ on LiveJournal - a CSS-based RSS icon. Believe it or not, it's smaller in bytes than the gif.
[ ... 416 words ... ]
-
Still fighting my way through the following:
iBook hard drive crash (it's still making that noise, though I thought I could cordon off the bad blocks)
subsequent email loss (tried to mail me in the last month? try again, please?)
final sprint to the end of a work project
My girlfriend's moving in with me!
It's my birthday and her birthday (Oct 24th), and our dating anniversary (Oct 27th) (strange alignment, that)
But, the work project's almost done. Going on a vacation this week to celebrate the birthdays and anniversary. And I'm not sure yet what to do with the iBook. But! I plan on torturing myself into prodigious amounts of writing again by having a go at NaNoWriMo.
[ ... 115 words ... ]
-
Ugh. Hard drive in my little iBook is going "brr-tick-tick-tick-tick. brr-tick-tick-tick-tick." while various processes inexplicably freeze up and go AWOL. These are things I've come to recognize as the death of a hard drive. So, since I've been putting everything on my laptop lately, I'm feeling a bit lost today. No email via local IMAP filtered through SpamAssassin and things. No news aggregator. No outlines. Currently running a disk utility that I hope will route around the damage, but I'm not sure.
I know it's gratuitous, but replacing my iBook with a ?TiBook that I don't have to pay off for 18 months sounds mighty nice. But, I promised myself and my girlfriend that the Treo was the last toy for awhile, until I have everything all paid off. That, and I keep hearing rumors of a new ?TiBook in the ether, early next year. Yeah, yeah, wait for the next latest and greatest and I'll be waiting all eternity, but waiting in this case would appear to align with other more prudent goals.
But either way, I'm feeling my lack of computing support today like a sleeping limb. I'm such a dork :)
[ ... 278 words ... ]
-
So I finally broke down and bought a new gadget. My phone was 5 years old and losing its antenna, and my Handspring Visor Deluxe (pre-ordered the day of introduction!) was showing its age. I was thinking that, for the phone, I'd get something small and sleek. Something that would likely fall through the little hole I inevitably have in every pocket of every jacket and overcoat I own. And I was thinking that eventually, I'd procure a Sharp Zaurus to replace my PDA. I would miss the Palm OS platform, it having done alright by me for about 6 years now, but I wanted excitement! and adventure! and modernity!
Well, after long deliberation, I saw that ?CompUSA was having an 18-month same-as-cash promotion, so I finally dragged myself in there and purchased... A Treo Communicator 300.
What can I say? I had a service contract with Sprint yet to expire, and had heard decent things about the device. And it turned out that I just didn't want to give up Palm OS quite yet, and all the other powerful phone-and-PDA combos were so hideously brick-like. (I know, let's take a standard Pocket PC, and Krazy-Glue a speaker and a stubby antenna on top! It'll look brilliant!) And along the BlogWalking theme, among other things, it does wireless internet admirably well and for a price that I'm not sweating too hard to pay. The HipTop might've done me better, but like I said: contract I'm still stuck in, and Palm OS that I know how to hack.
Before I'd gotten it, I obsessively pored over reviews. One of the biggest horror story themes I caught was with getting the thing activated on Sprint's network, back in August. Well, no sweat for me there. I called up the activation number and was walked through the process by a very polite gentleman with a pleasant Indian accent whose phrases and responses were so identically spoken that I thought I was speaking with a machine. (Am I looney, or had I read somewhere that many call centers were being outsourced to India and lands abroad via some nice WAN technologies?) The new Treo was ringing and hitting Google within 10 minutes after I got off the call, despite his cautioning me that it might take upwards of 6 hours to get processed.
Once working, the earbud/mic supplied with the device worked very well. Holding the thing up to my ear is nicer than one might expect when pressing a small slab to the side of one's face, but I'll probably use the earbud more. Now if I only had an elegant way to stow and fetch the earbud on my person while I'm out and about. Unspooling that tangle is no way to quickly answer the phone.
As for applications, I had everything from my old Visor transferred in short order, including my body of data whose history went all the way back to my first blocky Palm Pilot. After that, I went out and snagged a large blob of net apps and synced them up. AIM, SMS, IRC, SSH, VNC, ICQ, email (via IMAP!), and most importantly Google via Blazer all worked great. (Although, you have to watch AIM. It does nasty things occasionally and seems to corrupt its databases, requiring a warm reboot, deletion, and reenabling of all your hacks and extensions.)
So, shortly I'll be looking for a way to combine AmphetaDesk or ?OffNews with ?Plucker to give me a way to package up and slurp down a day's reading like I used to do with ?AvantGo. The funny thing, though, is I don't know whether a single package-and-sync of reading is enough in a day. It used to be - I would slurp down News.com, Wired News, Slashdot, and a smattering of other sites and be set for the day. But now, I check my AmphetaDesk at least 6-10 times a day. Given that, and the fact that I do have a decent allowance for data per month, I may look at putting AmphetaDesk out on my JohnCompanies server and whipping up a Blazer-friendly skin for it using some ideas from AmphetaOutlines to hide redundant items and save me some bytes.
It's such a difference now, using the Sprint data network with this slim and elegant little Treo, versus when I first bought a Novatel Minstrel for my Palm III and used it to vaguely, slowly, gradually poke around on the web with that solid brick of magic stuff. I can't wait until all of this finally converges with affordable, socially unobtrusive wearables.
I also just got done reading another VernorVinge story, Fast Times at Fairmont High. I blame him for this. I want display contact lenses, ubiquitous networking via computing clothes, consensual imagery overlaid across all I see. I want twitch-speed access to searches so that I can pick up on the song or poem someone starts quoting, and complete the line for them. I want to ping objects around me and have them respond with self-identification. I want to live-by-wire. But until then, I have a wireless net connection on my Treo with a keyboard that makes me look only slightly geeky when I type. That, and a desire to get my butt back into school so that maybe I can get the credentials to climb my way into research with tech like this.
[ ... 898 words ... ]
-
This is why I love the web & blogosphere these days. It's getting just a little harder to get away with bullshit every day. :) Threads cross on Dave's site and Metafilter, among other sites, and Microsoft's counter-Switch ad is revealed.
Man. And they even used a clip art image for the "real person". I've not been in the web biz, ad biz, or promotions biz for very long at all, but I've already developed a cynical chuckle for clip art people. I've either searched through sources myself, or been at the shoulder of a creative director while she or he did the searching. It doesn't take very long until you can nail it when you see it.
But clip art in and of itself isn't bad - it's when an attempt is made to pass the clip art over as some kind of candid reality... that's when a company really shows how smart they think you are. :)
(Oh, and P.S.: Can you people please stop using those "photographer standing on a stool over a model looking up" posed images? It really doesn't convey hip, cool, or clever.)
[ ... 482 words ... ]
-
Mark Pilgrim implements something I've been thinking about for awhile: His "Further Reading Upgrades" now harvest what appears to be the paragraph surrounding a link on a referring page. Along with the RSS feed of "Further Reading" items he's made available, he's got a nice game of follow-the-leader set up for the rest of us referrer log watchers. Nice & elegant & makes me ashamed. :)
Update: Mark pulls back the curtain and reveals the secret. Yay!
Yet another demonstration to me that ideas are f'ing worthless and the making is what matters. Lots of things I've been thinking about doing, but never get done. (See also: Blog:000305) Just a few weeks ago, I managed to upgrade my referrer tracking to dig out the titles of referring pages. And I've had an RSS feed of referrers myself for a few months now, but mine's been ugly as sin and so I've kept it to myself. So now I'll need to think in earnest about how to do some extraction of the link-surrounding excerpt in referring pages. Mark's referrer handling really is elegant - it even seems to know how to collapse multiple views on the same referring entry (ie. front page, archive page, individual entry page).
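To make that concrete, here's roughly the kind of thing I have in mind - a quick sketch, emphatically not Mark's actual code, with made-up URLs:

```perl
#!/usr/bin/perl
# Sketch of harvesting a referring page's title and the text around the
# backlink -- just a guess at the technique, not Mark's actual code.
use strict;
use LWP::Simple qw(get);

my $referrer = 'http://example.com/some-blog-post.html';   # from the logs
my $my_url   = 'http://www.decafbad.com/';                 # what they linked to

my $html = get($referrer) or die "Couldn't fetch $referrer\n";

my ($title) = $html =~ m{<title[^>]*>\s*(.*?)\s*</title>}is;

# Grab a rough chunk of text surrounding the first link back to this site.
my $excerpt = '';
if ( $html =~ m{(.{0,300}<a[^>]+href=["']?\Q$my_url\E[^>]*>.*?</a>.{0,300})}is ) {
    ( $excerpt = $1 ) =~ s/<[^>]+>//g;    # crude tag strip
    $excerpt =~ s/\s+/ /g;
}

print "Title:   ", ( $title   || '(none found)' ),       "\n";
print "Excerpt: ...", ( $excerpt || '(no backlink found)' ), "...\n";
```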
More things to play with :)
[ ... 681 words ... ]
-
Funny, I've just been toying around the last few weeks with doing just this, for AmphetaOutlines, to cut down on the wodge of HTML it feeds the browser all at once. Marc Barrot presents Transclusion Breakthrough: The Endless Web Page. The post reads a bit like an advert for Amazing Live Seamonkeys!, but I think the enthusiasm is understandable:This is the in-browser version of what Dave Winer and UserLand created for Radio's outliner.
This is instant rendering, happening on the fly as you browse through the current page. It is totally recursive: try clicking on the 'endless web page' node that appears under my name in the demo page.
Now, I'd like to dissect and figure this out, and add it to my AmphetaOutlines hack so that I can stop loading 5MB (!!!) of HTML every time I reload the page. Beyond that, I can see some very cool applications involving live data navigation and outline rendering and... yeah. This is cool stuff.
[ ... 163 words ... ]
-
Testing Net::Blogger
[ ... 3 words ... ]
-
Hello everyone out there. I've been busy as hell these last weeks, for reasons personal and professional, but I wanted to take a second to say something:
Thank you for writing. You make me cheer, and you make me cry.
This makes me think about being a bit more human in this space, along with being the tech obsessive here. My LiveJournal has caught most of my personal entries and rants, but maybe I'll finally merge the two. Not that I've had much time for writing lately, unfortunately, but it's heading toward colder and longer nights here, and I'm sure the urge will strike.
[ ... 105 words ... ]
-
Brad Choate's new MT Sanitize Plugin appears to do the same job as the MTCleanHTMLPlugin I released a little while ago. I haven't tried it yet, but since I'm using a pile of Brad's plugins and have based all of mine upon his examples, I'm assuming it's good stuff. I'll likely check it out and see if I like mine or his better for my own use. :)
(Oh, and in case anyone wonders, I mean "competitor" in the "there's no competition because his code-fu's likely better than mine here" sense. :) )
[ ... 284 words ... ]
-
Seen at HipTop.com: Hiplogs Online Journals? It's your chance to be a star! You and your trusty T-Mobile Sidekick, that is. Share your deepest thoughts or wildest whims online with a public journal you can update on the go!
See? BlogWalking is a nascent meme! Or something. Yeah!
[ ... 90 words ... ]
-
More on Pingback vs TrackBack on Hixie's Natural Log.
How embarrassing - he points to my referrers as a typical list. :) Mine are crap. Look at how Mark Pilgrim handles referrers. Yesterday I was working at making my referrer tracking harvest titles, clean out false links, and collapse redundant backlinks, but I'm far from perfecting that. But, at the same time, I agree: Referrers are not enough. They're one source, the most noise-ridden but the most effortless on the part of the outside contributor. But you can only do so much with almost nothing. :)
I think, when it comes down to it, my only issue with Pingback is not a Pingback-specific issue at all: How to harvest machine readable metadata from a web resource. This applies to my referrer links, Pingback, and TrackBack alike. TrackBack has a bit of a solution, with embedded RDF, but that's got its own issues. Ian suggests a few things to me in comments, such as harvesting the title from the HTML title tag (a no brainer), and then harvesting further data from DublinCore-based data in meta tags in the page. I've seen this last convention only once before, in the geographical data consumed by Syndic8.com.
Is this a pretty common convention? I've not seen it done much, but I obviously have not seen everything or a large chunk of anything. :) If this is a known convention, it makes me happy and I think it would answer a question I asked back in May.
Update: Duh. Yes, it's a known convention. It's even got an RFC: RFC 2731, Encoding Dublin Core Metadata in HTML. (Simple Google search.) Sometimes I can be so daft. :) Now I just have to start using this more - and I wonder why more people aren't using it? Most likely because there's not been much in it for them.
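A rough sketch of what I mean by harvesting, using the title tag plus any RFC 2731-style DC meta tags - the URL here is just an example:

```perl
#!/usr/bin/perl
# Sketch of pulling a page's <title> plus any RFC 2731-style Dublin Core
# metadata (<meta name="DC.title" content="...">, etc.) for use in
# referrer/TrackBack/Pingback displays. The URL is just an example.
use strict;
use LWP::Simple qw(get);
use HTML::TokeParser;

my $url  = shift || 'http://example.com/';
my $html = get($url) or die "Couldn't fetch $url\n";

my $p = HTML::TokeParser->new(\$html);
my %meta;

while ( my $tag = $p->get_tag('title', 'meta') ) {
    if ( $tag->[0] eq 'title' ) {
        $meta{title} ||= $p->get_trimmed_text('/title');
    }
    elsif ( ( $tag->[1]{name} || '' ) =~ /^DC\.(\w+)/i ) {
        $meta{ 'dc.' . lc $1 } ||= $tag->[1]{content};
    }
}

print "$_: $meta{$_}\n" for sort keys %meta;
```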
[ ... 313 words ... ]
-
Dave writes on "RSS and Namespaces":... there are some XML parsers that don't properly deal with namespace attributes on the top-level element of a source.
Agreed. These parsers are often cheaper to deal with when you know that the format you're expecting doesn't involve namespaces. You trade some flexibility for some ease of development.
Dave continues: For these guys, just introducing an xmlns attribute is enough to make them reject the feed. So while they could handle a 0.92 feed, as soon as we introduced the xmlns attribute, they gave up.
Yes, because they weren't expecting to be fed something with namespaces, since they'd been designed around v0.92 and family, and had been fed v2.0 with the expectation that it was 100% backward-compatible.
Dave again: ... Presumably RSS 1.0 doesn't have the same problem we tripped over yesterday with RSS 2.0. So I looked at a few RSS 1.0 feeds, and guess what, they do the same thing we were doing with the 2.0 feeds. ... I conclude that the same broken parsers that didn't like the 2.0 feeds with the xmlns attributes, must also not like the 1.0 feeds.
And your conclusion would likely be correct - because those parsers weren't expecting to consume namespace-using XML, and they shouldn't be expecting RSS v1.0. If an application is designed with RSS 1.0 in mind, then the author should be using a namespace-aware parser and correctly handling the namespaces, since that's the nature of the beast. To neglect or mishandle namespaces in consuming RSS 1.0 is a mistake.
Admittedly, some applications which apparently consume RSS v1.0 feeds correctly may be broken in this way - this is not unique to RSS v2.0. If they're broken, they need fixing. But that's another story...
So, on to the conclusion: If this is true, we can't design using namespaces until:
All the parsers are fixed, or
Users/content providers expect and accept this kind of breakage (I don't want to be the one delivering that bit of bad news, got burned not only by the users, but by developers too, people generally don't know about this problem, or if they do know are not being responsible with the info).
Anyway it looks to me like there's a big problem in the strategy of formats that intend to organize around namespaces.
Well, of course, end users should not expect breakage. This is obvious to me. No one really wants that.
The big problem I see in the strategy, though, is this: RSS 2.0 claims to be backward-compatible with the 0.9x family, but the addition of namespaces in XML is enough of a fundamental change to break this. I think what Shelly wrote in RSS-DEV is correct: "Namespace support is NOT a trivial change, and will break several technologies, including PHP if namespace support isn't compiled in. This isn't something that can be hacked out."
When I originally read about the emergence of something called RSS 2.0, I said "Go man, go!" But I also said, "What's the catch?" Well, this appears to be a catch. But I think it can be worked through. This is not a fundamental problem with namespaces themselves. This is a versioning problem, and a problem with anticipating all the implications the new version brings to the table. This goes for RSS 2.0, as well as RSS 1.0.
The first thing is to nail a few things down about version numbers and reverse-compatibility. It's been my experience that, when something's major version number gets incremented, reverse-compatibility is not guaranteed. So, I would assume that from a v0.94 to a v2.0, things are sufficiently different that using it would require that, indeed, "All the parsers are fixed" to support the new major version. So for the most part, v2.0 follows the v0.94 tradition faithfully, but on this issue it parts ways - and yes, potential consumers of v2.0 feeds will need to adjust from their v0.94 code. Them's the breaks, I've been told, when it comes to major version upgrades.
So, again, I don't think that this is a fundamental flaw with RSS 1.0, RSS 2.0, or namespaces. This is an issue of versioning, understanding the technology's implications, and reverse-compatibility.
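For the curious, here's a tiny illustration of the difference in question - the same dc:creator element as seen by a namespace-naive parse and a namespace-aware parse, using XML::Parser. The feed snippet is made up, and this is only meant to show the general behavior:

```perl
#!/usr/bin/perl
# A small demonstration of the namespace issue: the same <dc:creator>
# element looks very different to a namespace-aware parse than to one
# that treats element names as opaque strings.
use strict;
use XML::Parser;

my $feed = <<'FEED';
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel><item><dc:creator>l.m.orchard</dc:creator></item></channel>
</rss>
FEED

for my $ns_aware ( 0, 1 ) {
    print $ns_aware ? "Namespace-aware parse:\n" : "Namespace-naive parse:\n";
    my $parser = XML::Parser->new(
        Namespaces => $ns_aware,
        Handlers   => {
            Start => sub {
                my ( $expat, $element ) = @_;
                my $ns = $ns_aware
                    ? ( $expat->namespace($element) || '(no namespace)' )
                    : '(prefix left baked into the name)';
                print "  element '$element' - $ns\n";
            },
        },
    );
    $parser->parse($feed);
}
```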
[ ... 918 words ... ]
-
Mark Pilgrim seems to have implied that chicks dig Mac OS X. Well.. I certainly can't dispute him. :)
[ ... 20 words ... ]
-
Quick update to MTCleanHTMLPlugin: renamed the directory extdir in the tarball to extlib, which is what it should have been for easy drop-in installation. Thanks to John of illuminent.com, whose weblog gets me funny looks at work. :)
[ ... 144 words ... ]
-
Tonight, I borrowed LiveJournal's comment filtering code and made it into a MovableType plugin: MTCleanHTMLPlugin
After all that ramble about having an open system and not having been the victim of an exploit, SamRuby inadvertently revealed one gaping hole for me. Not that he did anything to exploit it - I just realized that a bug he tripped over could be used for more nefarious purposes. So, I closed the hole, and after a bit of quick research went a bit further and made a new MovableType plugin. Borrowing LiveJournal's code yields a filter which strips out most nasty ?JavaScript exploits, and attempts to close tags left lazily open.
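If you're wondering what that filtering amounts to, the core idea looks something like this - emphatically not the plugin's actual code (that borrows LiveJournal's much more thorough cleaner), just a bare-bones illustration with HTML::Parser:

```perl
#!/usr/bin/perl
# Bare-bones illustration of the sanitizing idea -- NOT the plugin's
# actual code (that borrows LiveJournal's much more thorough cleaner).
# This version just drops <script> blocks and strips on* attributes.
use strict;
use HTML::Parser;

sub clean_html {
    my $dirty = shift;
    my ( $out, $in_script ) = ( '', 0 );

    my $p = HTML::Parser->new(
        api_version => 3,
        start_h => [ sub {
            my ( $tag, $attr ) = @_;
            if ( $tag eq 'script' ) { $in_script = 1; return }
            my $attrs = join '',
                map  { qq( $_="$attr->{$_}") }
                grep { !/^on/i && $_ ne '/' } keys %$attr;
            $out .= "<$tag$attrs>";
        }, 'tagname, attr' ],
        end_h  => [ sub {
            my $tag = shift;
            if ( $tag eq 'script' ) { $in_script = 0; return }
            $out .= "</$tag>";
        }, 'tagname' ],
        text_h => [ sub { $out .= shift unless $in_script }, 'text' ],
    );
    $p->parse($dirty);
    $p->eof;
    return $out;
}

print clean_html(q{<p onmouseover="evil()">hi<script>alert('boo')</script></p>}), "\n";
```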
Hope someone finds a use for it.
[ ... 416 words ... ]
-
I completely disagree with Ray Ozzie ("I'm thinking right now that I'd prefer to stick with human talkback rather than automated pingback"), John Robb ("I don't want pingback, trackback, or refererback."), and Sam Gentile ("Amen to that.").
I want as much automated and intervention-free invitation to participation in my blog as I can provide. I want manufactured serendipity to operate here while I'm away or asleep. I want this site to help me discover connections and uncover links, whether by automated agent or by friendly visitor. I want to lower the thresholds to interaction as far as I can. I love it when I find that a few visitors to my site have talked amongst themselves while I was on my drive to work.
Of course, I've never been cracked or assaulted by an exploit of my systems. I don't have unwanted stalkers or abusive anti-fans or malicious kids or babbling spammers after me in this space. Perhaps if I did, my systems might not tend toward such openness. I think this is a statement on many things beyond blogs, but that's a post for other days. Maybe some day I'll have these negative elements facing me, and I'll have to revise my systems and their direction to account for them.
On the other hand, I've got a naive notion that the openness itself can counteract much of the reason to become closed in the first place. Should the need arise, I think I can come up with some measures to deflect inane and juvenile attacks. As for spammers, I tend to think that their days are numbered anyway - but if they do arrive on my weblog I think I can leverage many of the technologies I already use to great effect on my email inbox. But, to defuse real frustration behind attacks, I tend to think that more communication, not less, is what's needed.
I'm not sure at all, though, whether or not the threat of abuse is what motivates Ray and John to leave automated discussion channels closed. It's just one motive I've seen discussed before. I think they want more "human" and personal contact.
With regards to that: The irony in my life is that, with my lack of much free time, automated agents, aggregators, and weblogs have given me more personal contact with human beings than I might have been able to achieve without them. I'm trying to remember the thread a few months ago between DaveWiner, JonUdell, and others concerning humans with the uncanny ability to connect other people together. This very thing was supposed: That aggregators and weblogs could augment one's ability to act as such a superconnector. In that regard, I consider my agents, aggregators, and weblogs as integral to me as the new and improved pair of glasses I picked up last week. Just as I can't see road signs without my glasses, I can't keep track of people without my agents.
ShareAndEnjoy.
Update: And happily, Greg Graham, someone I've not met before, sends me an unexpected TrackBack ping and invites me to another blog I've never visited.
[ ... 781 words ... ]
-
Okay. Enough's enough - the phpwebhosting server's disk filled up again, and my JohnCompanies server has been idle all this time. I've moved everything over, made a cursory set of tests to see if everything's okay, and flipped the DNS switch.
Hopefully, you're seeing this post. Otherwise, you probably saw a test pattern until the DNS wave of mutilation reached your corner of the net. In the meantime, a few random things will likely be broken. I'll be sorting through those in the next week or so. If you feel like letting me know when you find something, I'd much appreciate it.
Thanks!
[ ... 141 words ... ]
-
And of course, Ben of MovableType is not unaware of Pingback: In current implementations of TrackBack, the user sending the ping must take some action: either by selecting the post he wishes to send a TrackBack ping to via a pulldown menu, or by retrieving the ping URL and pasting it into the entry form. And yes, we agree on the point that transparency is the ultimate goal*.
But note the emphasis on "current implementations"--there is nothing inherent in TrackBack that would prevent an implementation from making it completely transparent.
Interesting. Let's see where he goes with this. He does raise a concern with more automation, though: * (I do worry slightly about the impact of content management systems fetching and scanning every external link in an entry to determine if it's ping-able. But that's not really the issue.)
Hmm - I suppose if a site gets heavily referred to, that's a double-Slashdot-effect? And this investigative process has the potential to add more overhead to the publishing process. But... hmm, until I see some convincing ConsideredHarmful arguments, I think the flow-producing qualities of this sort of thing are worth it.
[ ... 260 words ... ]
-
I've got some further thoughts on Pingback, provided that my server humors me.
David Watson says, "uh, no," citing a horrible experience in actually trying to see the spec in the first place, and a lack of working code. Not to mention that my site was having a seizure last night - not good things to recommend that one check out a new technology.
Well, I did manage to see the spec, but haven't tried implementing it yet. (Though it shouldn't be too hard, given a few round tuits.) A few things, in particular that I like about Pingback:
URIs are used to specify the source and target of the ping, no other information is involved in setting up the relation (ie. arbitrary IDs, etc). This makes site-wide integration of Pingback drop-dead simple - everything's already identifiable via URI.
If one implements Pingback HTTP headers, one can allow non-HTML resources to be pinged. (ie. Ping my FOAF file when you add me to yours, and I might add you to mine. That might be pushing the spec a bit, though.)
The XML-RPC server is not tied to any sites it may serve. I could offer one here, and you could point to it from your site, and if I allowed it I could record pings for you as a service.
All-in-all, Pingback just seems like a more direct, intentional form of referrer log.
One thing I don't like about Pingback, though (and the same for referrer logs): It's just about URIs and links between them. It says nothing much about titles or excerpts or comment bodies. The spec suggests that a Pingback server might retrieve "other data required from the content of Alice's new post," but makes no statement on how this is to happen. I like that TrackBack sets down how to provide a bit more information.
I've got a vague idea in my little head, and I think it's something Sam Ruby touched on: ShowReferers, form-submitted comments, TrackBack, and Pingback are all just different on-ramps to inviting open participation in discussion on one's blog. I want to take a shot at implementing Pingback very soon - but I might also try taking a shot at implementing a unified comment system that accepts comments for any URI from any of the aforementioned sources. I'd also like something that scans a blog entry I post for links, then investigates those links for Pingback/TrackBack availability - all to make the system even more automatic. I doubt that it would be very difficult, though I am notoriously naive. On the other hand, I've been on a run of making hard things simple lately. :)
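To sketch the Pingback half of that, as I read the spec - discovery via the X-Pingback header or a link element, then an XML-RPC call to pingback.ping(source, target) - something like this, with placeholder URLs:

```perl
#!/usr/bin/perl
# Rough sketch of the client side of Pingback, as I read the spec:
# discover the target's Pingback server via the X-Pingback header or a
# <link rel="pingback"> element, then call pingback.ping(source, target)
# over XML-RPC. The two URLs here are placeholders.
use strict;
use LWP::UserAgent;
use XMLRPC::Lite;

my $source = 'http://www.decafbad.com/blog/some-new-entry';   # my post
my $target = 'http://example.com/some-entry-i-linked-to';     # their post

my $ua  = LWP::UserAgent->new;
my $res = $ua->get($target);
die "Couldn't fetch $target\n" unless $res->is_success;

# Autodiscovery: HTTP header first, then a <link> element in the HTML.
my $server = $res->header('X-Pingback');
( $server ) = $res->content =~
    m{<link[^>]+rel=["']?pingback["']?[^>]+href=["']([^"']+)["']}i
    unless $server;

die "No Pingback server advertised by $target\n" unless $server;

my $reply = XMLRPC::Lite->proxy($server)->call( 'pingback.ping', $source, $target );

print $reply->fault
    ? 'Ping failed: '   . $reply->faultstring . "\n"
    : 'Ping accepted: ' . $reply->result      . "\n";
```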
But I sense my round tuits slipping away - back to work!
[ ... 1108 words ... ]
-
Amongst his heavily medicated rantings, Mark Pilgrim points at something called Pingback. At a cursory glance, it seems to answer all my initial gripes about TrackBack. So, I think I'm going to take more than a cursory glance, and make an implementation in the next few days if I can't find one ready-made.
Some initial wishes for Pingback: How about making it two way? Ping a URL via its autodetected Pingback server, and also retrieve a list of pings for that URL from that server. Another idea, add a pub/sub method: I supply a URI to monitor and a URI of my own, and the remote Pingback server will ping me at my URI (via my Pingback server) when the monitored URI gets new pings at the remote URI. Require that the subscription be renewed weekly/daily. Make sense? One way to track conversations.
[ ... 144 words ... ]
-
After following the thread on Sam Ruby's blog about Dave's comment tracking feature request, I figured I'd try RSS-izing comments on each of my posts. As things seem to have been going lately, I'd underestimated MovableType, and it turned out so much easier than I'd thought. :) I'd had an RSS feed for comments overall on my site, but now I have individual RSS feeds for each post. (Notice the RSS link in the comments section now.) The RSS feed is also linked in the head as per RSS autodiscovery discussions.
I don't think aggregators are really ready yet for these per-post comment RSS feeds, but the availability of the data gives food for hacking. Being that they're pretty disposable and of interest for a very short time, aggregators will likely need to implement expiry times for feeds, or watch for a period of inactivity before unsubbing. Grouping feeds would be nice too, in case I wanted to round up all my points of weblog discussion participation. I've got a few things of this sort in my AmphetaOutlinesWishList, where I hope to play with further aggregator ideas.
If you use MovableType and you're interested in trying this, check out these two templates: recent_comments_rss.xml.tmpl, for blog-wide comments; and archive_entry.rss.tmpl, for per-post comments. The former template is added as an index template in MovableType, whereas the latter is an archive template. Also, the per-post archive template will need to be added to the list of individual archive templates in the Archiving section of your blog config. You'll want to give it a template for the filename, perhaps something like <$MTEntryID pad="1"$>.rss.
At present, I'm publishing in what I think is vaguely RSS 0.92 format. Whether it complies with the spec, I'm not quite sure because I was lazy. I plan to revisit this soon to make it at least comply with RSS 1.0. ShareAndEnjoy.
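For the curious, the guts of the per-post archive template boil down to something like the sketch below. This is from memory of MovableType's tag names rather than a copy of the actual templates linked above, so treat it as a rough outline and check each tag against your MT install before using it.
<?xml version="1.0" encoding="iso-8859-1"?>
<rss version="0.92">
<channel>
<title>Comments on: <$MTEntryTitle remove_html="1"$></title>
<link><$MTEntryLink$></link>
<description>Comments posted to this entry</description>
<MTComments>
<item>
<title>Comment from <$MTCommentAuthor$></title>
<description><$MTCommentBody encode_xml="1"$></description>
</item>
</MTComments>
</channel>
</rss>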
[ ... 681 words ... ]
-
So I was just reminded by Mike James about this tip on running Classic from a disk image on OS X that I'd previously found via Mark Pilgrim. I think I need to try this the next time I feel like wiping and reinstalling my iBook. I've been meaning to try a different file system under OS X - like, you know, one that's case-sensitive so that something like /usr/bin/HEAD doesn't overwrite /usr/bin/head. That, and I just don't have very much use for Classic anyway, other than for 2 or 3 apps.
[ ... 93 words ... ]
-
For the hell of it, I have a FOAF document now: lmo-foaf.rdf. I don't yet completely understand the spec, but via a referrer left by Tanya Rabourn, I found Leigh Dodds' FOAF-a-matic and gave it a shot. Need to do more research.
[ ... 51 words ... ]
-
Oh, and a quick thing I feel compelled to share: JohnCompanies is the best hosting I've had so far since I started this domain. I have yet to move everything over to it, but I'm so very impressed at the notices I've been getting. There were two brief outages recently, one planned and one not planned, but the important bit is that I received email telling me about them and what happened before I was even aware there was a situation.
I like that.
Update: Shawn Yeager commented that the outage wasn't really all that brief - 9 hours in fact. So... well, that does suck. Personally, I didn't suffer from it, having yet to completely rely on them. I do, still, enjoy having gotten the email. :)
[ ... 291 words ... ]
-
Saw this on Jon Udell's blog via the #RDFIG chump feed, from Sergey Brin: "I'd rather make progress by having computers understand what humans write, than by forcing humans to write in ways computers can understand."
Well, sandro on #rdfig writes "Why am I arguing with a sound-bite?" Why not? :) Here's a counter-sound-bite: Use Newton handwriting recognition, then try Palm's Graffiti and come back and tell me which seemed more worthwhile.
The way I look at it, people have muscle memory and can form habitual patterns and can adapt around interfaces that can become transparent and second nature. That is, if the interface doesn't go too far away from usability. I think Graffiti was a good compromise between machine and human understanding. Let the machine focus with its autistic intensity on the task at hand, and let the human fill in the gaps. This is why I fully expect to see Intelligence Amplification arrive many, many moons before Artificial Intelligence arrives, if ever.
I doubt that machines will ever come up far enough to meet man, but man and machine can meet halfway and still have an astonishing relationship. So, one can spend enormous resources trying to make computers understand people (who barely understand themselves), or we can make understandable interfaces and mostly intelligible systems and fudge the rest.
[ ... 657 words ... ]
-
Today's bundle of little discoveries:
DSBRTuner has been updated to support AppleScript since last I downloaded it.
DSBRTuner has been updated to record to MP3 on disk.
MacOSX has a command called osascript with which you can launch AppleScripts (among other things) from a shell.
Like perl, osascript has an option -e to run a one-liner from the shell.
Given these discoveries, I was able to cobble together a quick pair of scripts with which to schedule recording radio broadcasts to MP3 via cron.
Wow. Another little project that became, all of a sudden, so much easier than I thought. I love Unix and OS X. Before this, I'd been looking high and low for all the parts: a scheduler, a sound recorder, an app controller, etc & so forth. Given the source code to DSBRTuner, I was almost about to hack some solutions into it, but I'd never gotten the time.
Now, I can happily record and listen to my favorite late Sunday night radio show during the week again!
Oh yeah, and the ugly scripts:
dsbr_start_recording:
#!/bin/sh
FREQ=$1
MP3_FN="$2-`date "+%Y%m%dT%H%M%S"`.mp3"
OSA=/usr/bin/osascript
TELL='tell application "DSBRTUNER" to'
open /Applications/DSBRTuner.app
$OSA -e "$TELL set frequency to $FREQ"
$OSA -e "$TELL record to file "$MP3_FN""
dsbr_stop_recording:
#!/bin/sh
OSA=/usr/bin/osascript
TELL='tell application "DSBRTUNER" to'
open /Applications/DSBRTuner.app
$OSA -e "$TELL stop recording"
$OSA -e "$TELL quit"
[ ... 260 words ... ]
-
John Robb writes: "Wouldn't it be interesting to have an RSS variant (new name obviously) for subscribing to personal contact data off of weblogs?"
I read that DJ Adams was just playing with FOAF not too long ago, and at the time it made me want to dig into RDF more. But, work got busy and I promptly got distracted away. If anything, though, I could see something like FOAF being really nice as a start for this purpose. Of course, there's vCard, but I think it wouldn't be very hard to convert to it from FOAF. The universality and connectivity that RDF could bring to this seem terribly nice. Throw in periodic auto-refresh, either literally by scheduled re-query, or by pub/sub notification, and you've got a neat auto-updating address book just for starters.
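To give a flavor of how small that FOAF-to-vCard hop could be, here's a toy sketch - Python and rdflib purely for illustration; foaf:name and foaf:mbox are real FOAF properties, but everything else (including the idea that these two fields are enough for an address book) is an assumption.
import sys
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

def foaf_to_vcards(foaf_url):
    """Yield a bare-bones vCard for each person with a foaf:name in a FOAF file."""
    g = Graph()
    g.parse(foaf_url)  # FOAF is usually serialized as RDF/XML
    for person in g.subjects(predicate=FOAF.name):
        name = g.value(person, FOAF.name)
        mbox = g.value(person, FOAF.mbox)  # usually a mailto: URI
        card = ["BEGIN:VCARD", "VERSION:3.0", "FN:%s" % name]
        if mbox:
            card.append("EMAIL:%s" % str(mbox).replace("mailto:", ""))
        card.append("END:VCARD")
        yield "\n".join(card)

if __name__ == "__main__":
    for card in foaf_to_vcards(sys.argv[1]):
        print(card)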
[ ... 632 words ... ]
-
A very interesting side-effect I hadn't thought of yesterday when I integrated MT-Search into my wiki is that every wiki page is a mini-content index to my weblog. Even the really sparse wiki pages where I've only blurbed a sentence or so about a topic - now they have some decent content in their pointers back to the weblog where I mentioned them.
One idea that immediately strikes me is that I need this at work. I've got a barely attended-to experiment in journalling started there, using a LiveJournal installation. If I could get a similar search hacked into LJ, or scrap LJ and give everyone a MovableType weblog... we could very easily integrate up-to-date topic indexes into our existing company wiki.
For instance, wiki-word-ize a client's name, and create a short wiki topic page for that client. Or, refer to the wiki words belonging to our products. Then, be sure to include those topic strings in any weblog entries you post internally, and those wiki pages will pull in your contributions. The cross-threading of this seems great to me. Show me all mutterings about ?ClientAlpha, and then show me all mutterings about our ?InstantWin product. In some cases, a particular weblog post will appear in both.
Wow. That's getting very close to what I wanted.
[ ... 367 words ... ]
-
One more thing, before I go to bed: An AmphetaOutlines update.
I'm not sure how many of you are still using the thing, but I've been using it hourly since I embarked upon the experiment. And then, very recently, the thing became insatiable with desire for my CPU and memory.
Turns out, in my spiffy new XML channel/item metadata files, I wasn't deleting data associated with old and no longer available items. This resulted in multi-megabyte XML files which AmphetaOutlines happily munched through for each channel to which I'm subscribed. Well, this update now regularly cleanses those files, leaving metadata stored only for those items that appeared in the current update of the channel.
So, if AmphetaOutlines has been becoming a dog for you, you might want to give this a shot. Upon the first run, the new code will wipe old data from the files. If your poor, battered machine can't survive another run in the current circumstances, then wipe the contents of data/channels_meta and start again. (But don't wipe your subscriptions or channel data! Just the channels_meta data.)
Let me know if this does good things for you. In the meantime, I'm thinking about what I could do by applying these ?BayesianAlgorithms (and those not-quite-so-BayesianAlgorithms) people have been tinkering with for use against spam. What if I could have AmphetaDesk initially sort my news items into ordered buckets of interest, according to my past viewing and scoring behavior? I really need to do some machine learning research.
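To make that hand-waving a little more concrete, the crudest version of the idea is just word counting against my own click history - nothing properly Bayesian yet, and every name below is invented:
from collections import defaultdict

class InterestScorer:
    """Toy scorer: count words seen in items I clicked vs. items I skipped,
    then rank new items by the difference. A real version would want a
    proper Bayesian classifier rather than this ratio hack."""
    def __init__(self):
        self.clicked = defaultdict(int)
        self.skipped = defaultdict(int)

    def train(self, text, was_clicked):
        counts = self.clicked if was_clicked else self.skipped
        for word in text.lower().split():
            counts[word] += 1

    def score(self, text):
        score = 0.0
        for word in text.lower().split():
            c, s = self.clicked[word], self.skipped[word]
            if c + s:
                score += (c - s) / float(c + s)
        return score

# Sort a batch of (title, description) items into rough buckets of interest:
# items.sort(key=lambda item: scorer.score(item[0] + " " + item[1]), reverse=True)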
Hell, what if I could go further and have a spider crawl blogrolls, looking for weblogs that seem to match other things I find interesting? Seems promising, though I think I'm still too naive about the subject.
Okay. Time for bed.
[ ... 783 words ... ]
-
I just discovered and integrated MTSearch into DecafbadWiki by using a TWiki include to pull in a search constructed with the current wiki topic's name. It took all of 15 minutes, including the time to login and download MTSearch to my server. This, along with my MTXmlRpcFilterPlugin, completes a simple but effective automatic loop between blog and wiki. I think this pretty much satisfies my original goal of a WeblogWithWiki.
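For anyone wanting to try the same trick, the whole integration is essentially one line of TWiki markup along these lines - the host and CGI path are placeholders, the MT-Search parameter names are worth double-checking against its docs, and TWiki has to be configured to allow URL includes:
%INCLUDE{"http://www.example.com/cgi-bin/mt/mt-search.cgi?IncludeBlogs=1&search=%TOPIC%"}%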
That was so much easier than I'd thought it would be. One of those things I kept thinking "Wouldn't it be nice if?" but kept procrastinating because I thought it'd be so much harder.
I'm still amazed that ItJustWorks.
[ ... 599 words ... ]
-
In happier news, I'm very glad to see that she's back up and running after a string of annoyances and disasters along the way. I still think she should've switched to a Mac though. :)
[ ... 40 words ... ]
-
I don't really want to add noise to the signal and would really rather just see some work get done, but I'm still disgusted by what Dave is writing:
...One of UserLand's competitors Kevin Hemenway, the author of Amphetadesk
Competitor? He was originally a customer of yours. Then, just for the fun of it, he made his own implementation of the news aggregation features of your software, and even acknowledged his source of inspiration when he released AmphetaDesk, calling Radio "a wonderful piece of work". AmphetaDesk isn't for sale - it's free and open source. If that makes him a competitor to your selling product, I think you need to work harder or smarter.
...explains on his weblog how he intends to kill me. Even he says it's too harsh; and it may be a joke, if so, it's not funny. I don't see the humor in my own death, esp at the hands of a person like Hemenway. (He also coined the term Jewgregator,
Morbus is over the top on a frequent basis, and sometimes too far over the top. This is a known fact - his sense of humor is obviously dark and a bit off kilter. He also produces good working software, and writes useful articles. But three obvious things: 1) He didn't state any intention to kill you - it's just that it seems his mere presence would be enough to set you a-boil. 2) He referred to Kevin Burton's account of meeting you, which depicts you as someone very easily set a-boil. 3) You're providing an example in support of the account.
...and calls RSS 2.0 "Hitler" for some reason.)
The "some reason" to which you allude is this bit Morbus said in IRC: "I say 'proposed' rather innocently - its more 'shoved down everyone's throat by nazi dictator'... we should code name rss 2.0 'hitler'". There's frustration in there, and his wasn't the only head nodding in the room. Morbus is over the top and says charged things I'd choose not to, but the frustration is real and genuine, and shared by more than one member of the community out here. Yet, you always seem to "take the high road" by focusing on the over-the-top aspect, no matter the degree, ignoring the genuine gripe.
...Bill Kearney has sent me private email about my deathbed, and what he hopes to teach me there, so I've chosen to filter his mail to a place where I never see it.
Referring to private email is cheap - it's your word and his.
I tried to come up with a word to describe how I feel about these people, this is what I came up with: monster.
What a nasty thing to call potential collaborators and customers. And what a viral, contagious thing, as you later demonstrate with Ben. This doesn't seem very cluetrain-ish.
...Hemenway has crossed that line. What happens next is stuff that will involve the police. I won't stand for these kinds of threats.
What threat was made, and when will you be calling the police? And how seriously will they take you? You said yourself that you knew he wasn't seriously threatening you. What stuff "happens next"?
None of this means that RSS 2.0 will be delayed by even one moment.
Thus, you avoid having to address the concerns all the "monsters" raise.
I thought competition in the software business in the 80s was rough, but this is so much worse. Competition used to require a certain collegiality and professionalism. It's not true today. Anyone who works with Hemenway or Kearney should be aware that these people are nothing less than monsters, who will stoop to any level to get their way. Their perversion may even be the reason they're involved.
But the funny thing about all of this is that most of this isn't business - it's hobby. You've got a business, he's got a hobby, yet somehow he's competing with you. I'm not a businessman by any stretch, but this comparison seems very odd. (Hint: Morbus is not acting as a professional in this context. He can correct me if I've mistakenly assumed this.)
These are people screwing around, trying things, playing with code. And in order for these people to "get their way", they have to be nice to people and convince them to help out. Otherwise, the cats wander off in search of fatter mice. It becomes apparent rather quickly what sort of people they are from just a short bit of interaction with them. And I've seen them "triangulated" as very nice people.
...Mr. Hemenway goes by the name Morbus Iff on his weblog, and writes for O'Reilly Associates, and for Ben Hammersley's syndication weblog. Mr. Hammersley is a reporter for the UK Guardian newspaper.
Postscript: Ben Hammersley threatened to sue me if I don't remove the previous paragraph. But every statement is true...
Specious reasoning, at best. Yes, Mr. Hammersley invited Morbus to write with him. So, you feel free to splash him with the monster paint by association?
The Guardian requests an apology. For what? They ran a tainted review.
Oh, now we see the reason: He didn't plug your product in his review. Though, he did say in the article, "Did you notice how all those programs are free to use?" Perhaps he should have made that more a focus of the article, but he was writing about free programs. I'm neither in his head, nor in the head of any Guardian editor, but maybe they didn't want the article to become free advertising for a commercial product? Who knows. He didn't mention you. So that makes him a monster?
...Hammersley is a participant in the debate over the future of syndication technology, yet he wrote a review for the Guardian where that was not disclosed.
This is obvious: Many people who write about technology are involved with technology, even helping shape its direction. It's what makes them most qualified to write about it. This argument is starting to sound like politics - from whom did he get his funding? I don't see you complaining when a "participant in the debate" does mention your product in an article.
...Now, either Hammersley didn't tell them, or they don't care, or British newspapers run ads without saying they're ads.
Or maybe they didn't want to run unpaid ads?
Okay. I'm done. This has distracted me from work for long enough today.
[ ... 2403 words ... ]
-
Dave writes:
Discourse in the RSS community has reached new lows.
Yes, yes it has, and I feel ill. There's more I'd like to say, but I've got to get to work now.
[ ... 32 words ... ]
-
From Kevin Burton:
Are these guys serious? Blogwalking?
Yup. I'm vaguely serious. At the same time, I think it's funny as hell. Can't speak for anyone else.
Do they actually own a Zaurus? Ha!
Nope, I said I don't, yet. Duh. :) But, he does.
The keyboard is totally unusable. I don't even want to type 'ifconfig' and I couldn't even imagine writing this blog entry on the Zaurus!
Hmm. Well, I can't speak to that. A friend of mine is pretty happy with it and hacks perl on his. If anything, that endeavor exercises the keys.
Then again, I'm very tolerant of bad interfaces in early stages of an experiment. It's the combo of Linux and Java on a PDA that I'm more interested in. Hell, if the keyboard pisses me off after awhile, I'll implement a Dasher-like UI (and walk into telephone poles), or a dictionary-completing UI, or make it interpret a personal code of taps on the space bar. I used to put up with graffiti, and later FitalyStamp, to post entries to my LiveJournal account.
So my first interest, once I have a Zaurus, is to see how difficult it is to get it into the publishing loop of a blog. Then, I want to play with the UI. Actually blogging and walking at the same time, with the present UI, would obviously be comical at best and stupidly tragic at worst. But if it could somehow become streamlined, demand little attention, and become as easy as talking to oneself...
I'm reaching here, but I think it would be neat. I also think digital watches are neat. (See: NeatLikeDigitalWatches)
[ ... 268 words ... ]
-
Ahh, the joys of upgrading Fink for OS 10.2. I started it Monday morning on both my iBook and the dual G4 450 I have at work, and they're both still going at it.
[ ... 94 words ... ]
-
Getting religion, Dave says:
Nathan Torkington is a humble servant of our lord, Murphy.
I wonder if Dave, or Nathan, or anyone would mind if I used CafePress to make available a set of merchandise based on phrases such as "Praise Murphy", "Murphy Willing", "What Would Murphy Do?", "Murphy Saves", "Have you accepted Murphy as your personal savior?" I might type them all up in a sufficiently imperious gothic font and set up shop.
See, although they might get popular, I wouldn't want to do it for the money. In fact, I'd give it away to charity. I just want the merchandise :)
[ ... 138 words ... ]
-
Neat! I love the blogosphere. Bryce is attempting to implement what I'd babbled about last week: BlogWalking with a Zaurus. I'm still saving my pennies, and I've yet to acquire a Zaurus of my own. So, I'll be watching this experiment eagerly.
I don't expect it to be perfect or necessarily go smoothly, but it's a first step. Someone had mentioned that a PDA is inappropriate to host something like MovableType, since it's usually off or easy to lose. Personally, I want to head toward having an easily wearable or pocketable device that contains (or at least has seamless access to) all my personal data, so a PDA seems ideal to me. However, maybe a large server at home behind my cable modem would work better as a personal data sink, with the PDA being more like a personal data buffer. This was suggested in comments on my previous entry as well, I believe.
So, MovableType itself on a PDA and paired with rsync may or may not be nifty in the end. I'd like to try it, and then maybe think about doing something like a BloggerAPI / metaWeblogAPI client that can buffer up entries and fire off the XML-RPC calls at a given sync time.
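The buffering client itself could be almost trivial: queue entries to a file while offline, then walk the queue firing metaWeblog.newPost calls at sync time. A rough Python sketch, where the endpoint, blog ID, credentials, and file path are all placeholders:
import pickle
import xmlrpc.client

QUEUE_FILE = "/home/zaurus/blog-queue.pkl"  # hypothetical path

def queue_entry(title, body):
    """Append an entry to the on-disk buffer while offline."""
    try:
        with open(QUEUE_FILE, "rb") as f:
            queue = pickle.load(f)
    except (IOError, EOFError):
        queue = []
    queue.append({"title": title, "description": body})
    with open(QUEUE_FILE, "wb") as f:
        pickle.dump(queue, f)

def flush_queue(api_url, blog_id, user, password):
    """At sync time, post everything in the buffer via metaWeblog.newPost."""
    try:
        with open(QUEUE_FILE, "rb") as f:
            queue = pickle.load(f)
    except (IOError, EOFError):
        return
    server = xmlrpc.client.ServerProxy(api_url)
    for entry in queue:
        server.metaWeblog.newPost(blog_id, user, password, entry, True)
    with open(QUEUE_FILE, "wb") as f:
        pickle.dump([], f)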
Hmm... more to think about.
[ ... 214 words ... ]
-
About switching to Mac, Torrez says (among other things):I haven't loved a brand in a while. The last computer brand I had the hots for was the Commodore Amiga, and that was over 12 years ago. It's nice...but weird.
My thoughts exactly. My first home computer was a Commodore 64 (if you're curious, you can see a picture of me getting it for Christmas). While I was learning to program on the Apple ][e and the Atari 800, my C=64 was home turf after school. Man, I miss 6502 assembly and screwing with a kernal whose complexities I could mostly encompass in one brain.
And then, when the day came that I could afford a new computer.. I saved my pennies and bought an Amiga 1200 (sorry, no picture). That lasted me all through college as friends bought and upgraded (and upgraded) PCs. It wasn't until I was a year or two past graduation, when my poor A1200 was really straining to keep up, that I finally broke down and built a PC.
But now, I feel like I'm back full circle, and the PC's days are numbered in my home. The Mac is my new Amiga, and Apple my neo-Commodore. Now I just hope that they don't munge the whole company like Commodore did - I was there on [#amiga](/tag/amiga) on EFNet on the day when they announced the first of many buyouts.
I'm not too worried though - Commodore didn't have anyone like Steve Jobs.
[ ... 322 words ... ]
-
A strange little idea I had on the way home today: Movable Type on a Sharp Zaurus equipped with wireless ethernet? Or maybe Blosxom if/when it has static publishing? Just use rsync to publish whenever the thing finds itself on a network, wireless or otherwise. Maybe that happens while you're out Warwalking - better yet, maybe that wireless network detector you cobbled together autoblogs what it finds while in your pocket.
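The publish step itself would be nothing fancier than a one-liner along these lines - paths and host invented here - fired by whatever script notices the network come up:
rsync -avz --delete ~/blog/htdocs/ me@example.com:/www/blog/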
But, beyond that, I wonder what else having your blog in your pocket might give you? Toss in a GPS unit somehow, maybe some other things like a thermometer device? A compass? Thinking about ways to automatically capture metadata about your present environment. Why? Why not, I'm sure if I thought longer, that stuff would seem useful.
And then there's the non-automatic writing you might do: jot down thoughts occurring on the spur of the moment; capture the scene as you sit in the park; report on the scene of an accident - or a disaster? If you have a digital camera, and if both the PDA and camera had bluetooth, integrate the two so that you can easily combine the picture and 1000 words while they're fresh in your mind.
But what about the other end of things - aggregation and reading? Install AmphetaDesk along with, maybe, a web cache and spider that proactively slurps down new news items when it's near a firehose net connection. If you're in a town with frequent dips into the bandwidth pool as you wander around, maybe you'll catch another BlogWalker in your referrers, linking to what you just posted. Meet up and have some coffee. Hell, become a smart mob with a few other BlogWalkers.
Eh, I think I'm starting to ramble and get carried away, but in between reading VernorVinge and RayKurzweil books lately, I'm in a mood to immanentize the eschaton and tinker my way on into the Singularity.
(And, oh yeah, I'm in a pretentiously linky mood. (And could that be a valid mood in these days?))
[ ... 931 words ... ]
-
John Robb says:
Damn. I have 95% of my PC's processer available at any given moment. In a year that will probably be 98%, in three years it will be 99%. This model of the Internet is so messed up. The fact that over 90% of the computing horsepower on the Internet sits idle at any given moment is insane (in fact, 98% of my DSL connection is dead too). It is going to change. It has to change....
Exactly. This is one of the main reasons I don't think I want to run a "LiveJournal done right, according to me" site. I'd rather help build a decentralized mutant spawn of LJ, Radio, Gnutella, JXTA, and other things I've yet to realize I should be looking at. I really need to get some time this Winter to research, think, write, and tinker.
And the thing John says about everyone converting to notebooks is dead on for me. I haven't touched my desktop in ages. My iBook is becoming more and more my primary computing device. When I first got it, I thought it would be a satellite. Instead, all my other computers have become peripherals for it - extra storage, little daemon processes, all serving me via my laptop. Now I just need an excuse to go get myself a ?TiBook :)
[ ... 411 words ... ]
-
Dave says about adding namespaces to RSS v0.94: Could peace possibly be that simple? Could RSS 0.94 be the format everyone agrees to go forward on? If not, how long would a 0.95 take to get in place?
I say: Go, man, go! And then, time permitting, weave some nice hooks into Radio's aggregator to let us make Tools that register to handle the interpretation/display of a namespace's tags.
I'd like to play with some more RDF eventually, but I don't know that RSS is the place. The things that I really like are the namespaces and the possibility of throwing plugins into aggregators to handle alien elements.
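For the record, the mechanics are about this lightweight: declare the extra vocabulary on the root element and drop its elements into items. Dublin Core is just the obvious example here, the URLs are invented, and the 0.94 version string is of course hypothetical:
<rss version="0.94" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
<title>Example feed</title>
<link>http://www.example.com/</link>
<item>
<title>A namespaced item</title>
<link>http://www.example.com/archives/000123.html</link>
<dc:subject>syndication</dc:subject>
</item>
</channel>
</rss>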
[ ... 108 words ... ]
-
On the hypothetical RSS 2.0, Mark Pilgrim says:
A basic RSS 2.0 document is no more complicated to learn (or type by hand) than a basic RSS 0.9x document, and a complex RSS 2.0 document can be just as metadata-rich as a complex RSS 1.0 document.
Great - I love it - let's go! If Rael already mocked this up many moons ago, why hasn't it been adopted?
What's the catch? I just snatched Mark's RSS 2.0 draft template for MT and tossed it into my config. Try out my 0xDECAFBAD feed in the RSS 2.0 draft format and tell me what part of it burns down your house or frightens the children.
[ ... 138 words ... ]
-
Got an email today from David F. Gallagher with regards to my pondering why LiveJournal seems largely ignored. He pointed me to his new article about LiveJournal in the NY Times: "A Site to Pour Out Emotions, and Just About Anything Else"
All in all, it seems a good article for which the right amount of homework had been done. Good exposure for LJ, yay! It also again answers my question in the same way a lot of you who responded to my first post did: It's the culture, stupid.
I also just noticed a referrer from over at Radio Free Blogistan that echoes much of what I've been thinking:
What's interesting is that feature-by-feature, LJ's functionality is comparable to or better than that of most other tools. The difference seems to come more from how the tool tends to be used than from its inherent capabilties. I wonder if having the word "journal" in the name (see also diaryland) tends to promote the more diaristic uses of application?
See, I think my problem is this: In a lot of ways, LiveJournal is my old neighborhood. My first successful attempt at semi-sustained online narrative happened there, so much of what I consider a part of the experience comes from LJ. Now, 0xDECAFBAD is my attempt to get a foot into the bigger neighborhood out here. But ever since I stepped foot out of LiveJournal, I've been trying to figure out ways to bring things I miss from in there to out here.
In one of my quickies from yesterday, I vaguely mentioned maybe launching a LiveJournal-based site whose explicit goal is to be more outward-facing to the blogosphere, and to be more blogish than journal-like. I think a site like this would be a good idea, maybe.
But... here are my problems with being the guy to launch that site:
I like making and breaking toys, not taking care of and feeding them.
Unless you pay me a lot and then don't bother me at all, I don't want to host your junk. :)
I've been wanting to see journals & blogs more decentralized, to avoid the growing pains that LiveJournal has.
In short, I've seen what trouble the LiveJournal team have gone through, and I'm not all that interested. Besides, I think that a decentralized solution could all but erase the maintenance side of things, if everyone's responsible for their own personal servers. Maybe a pipe dream, but it's the only one that I think will eventually work.
Hmm.. have to think some more, but must get back to work now.
[ ... 428 words ... ]
-
Et tu Dave?
I'm mostly in catch-up mode on the hubbub surrounding RSS, so I can't say much other than that I like both flavors though I prefer the RDF approach best. But... it's strange reading about UserLand's abandoned trademark application for RSS.
Chimera rocks my socks:
Mark Pilgrim likes Chimera. Personally, I've been using it as my primary browser for about 5 weeks now, and update to the latest nightly every few days. I had a glitch or two, but it's come light years from when I first started playing with it and it already seems to leave Mozilla-proper in the dust.
Caffeinated scraping:
I've been cobbling together an (X)HTML-to-RSS scraper using what I've learned of Java, XSLT, and XPath lately. I've been tempted to slap together an aggregator of my own, too, but no: AmphetaDesk has not made me itchy enough to do it. The scraper might be of some use to someone though.
Strange connectivity urges:
I've been having these strange urges lately to start playing with P2P-ish things again and build a collection of rendezvous that piggyback on a number of existing infrastructures (ie. IRC, IM, NNTP, email, etc). I want to get back on the path of investigating complete decentralization, or at least some robust thing which lies in between. At the very least, though, I want to start doing some sort of IM-RPCish thing between behind-firewall PCs. And this Jabber server I just installed on my new JohnCompanies system should be nice. (It's at jabber.decafbad.com)
Soaking in LiveJournal:
Blessed be, I need help: I've convinced them at work to let me pilot a weblog/journal system on our intranet - and I've started by installing the LiveJournal source.
I've also installed LJ here on my new server from JohnCompanies, but I'm not quite sure what I want to do with it beyond tweaking and personal hacking. I've been musing at possibly enhancing some bits - particularly with regards to RSS syndication and aggregation, maybe some backlink tracking.
Maybe I'll polish the thing up a bit and offer it up as a sister to LJ where the 15-year-olds will not reign and reciprocal connections with the outside world are encouraged and facilitated. Would any of you pay for something like that?
This seems pretty ambitious, and it's likely I'll never do it, but hey.
And on other fronts:
Still in the underworld with Java, trying to get this project dragged past the finish line. Jaguar rocks and I took my girlfriend to the Apple Store opening in Novi, MI; we didn't buy anything, though it was close. There's a second Apple Store opening in my area in Troy, MI; the danger has not passed. And finally, I have succumbed lately to playing Grand Theft Auto 3 and it has affected my driving and given me pedestrian-smacking instincts to subdue while walking around town.
That is all. For now.
[ ... 591 words ... ]
-
Hmm, just read that Charles Nadeau released an XML-RPC to NNTP gateway. I still think it would be neat to have an NNTP to XML-RPC gateway to use as a wonky, distributedish message queue.
[ ... 298 words ... ]
-
So I took the plunge and snagged a FreeBSD "colocation" account with JohnCompanies, to address my desire for more experimental freedom on a server hosted Somewhere Out There. I may eventually hook up with a few fellow hackers to spread the monthly rent, and I may even consider floating some trial balloon for-pay services - assuming I hack together something I'm presumptious enough to think is worth money. :) But for now, the cost is very affordable for what I get.
So, I haven't dumped my current webhost yet, but I'm slowly going about installing services and software up there, including but not limited to: Apache, mod_perl, PHP4, Tomcat, Jabber, INNd, IRCd and whatever else seems like a good idea (whether or not it actually is a good idea). I might even throw a LiveJournal installation up there.
And, once I come to my senses, I may pare this list down and disable things until I get around to actually doing real things with them. More soon, I hope.
[ ... 489 words ... ]
-
Caught this snippet on tweeney.com via JOHO: "...LiveJournal.com (which most weblog news stories overlook for some reason) boasts more than 650,000 [users]..."
Why does everyone seem to ignore LiveJournal? It's very, very, very rare that I see a LiveJournaller's posts linked into the Blogosphere at large. Granted, I know that the median user, when last I checked, was a 15-year-old female who complains about Mom and her boyfriend. But, as the adage goes: 90% of anything is crap. There are, nevertheless, a good number of worthwhile streams of narrative in that space.
On the other hand, I don't see many of the people behind LJ stepping out and making noise in the Outer Blogosphere either. I think many of them are just plain busy keeping the site afloat, or having lives, and LJ is world enough for them.
But 650,000 users... that's a lot. More than Radio and rivalling Blogger.com. Is there a real qualitative difference in writing between the groups? I would still imagine there's a lot of crap to be found via Blogger.com. I'm not sure about Radio, though, since I get the impression that the 15 year olds have yet to flood into the userbase and its following seems more tilted toward professionals.
But as for the software & service itself... As far as I can see, LJ is one of the easiest paths out there to starting a weblog/journal online. And it was one of the first sites I ever encountered that had a desktop-based client app for posting to it. And, though not prominently placed, they have RSS feeds for every single journal on the site. They're even working RSS aggregation features into the place by gatewaying external RSS feeds in as special LJ users to be added like any other LJ "friend".
So, to me, LJ sounds like a top competitor to every other blog/news aggregation product or service out there - yet I rarely hear about it. Hrm.
Anyone have a theory why?
[ ... 1126 words ... ]
-
Even more gas is flowing - from andersja's blog: Movable Type notifications to Instant Messenger?
I really want to see this, and I want to see news aggregators exploit this - why poll a feed once an hour when you can just have the feed tell you to come 'n get it? Basic publish/subscribe model.
But the advantage Radio has over MT in this regard is this: Radio is a persistently running app/server/daemon thing. MT is a collection of scripts that does nothing until asked to run. Radio can connect to IM and stay connected. Something like MT would need to login to an IM service each and every time it wanted to ping. Maybe that's not such a big deal, really. I also have a hunch that there would be some difficulties with web hosting sites who don't really want customers emitting IMs from their CGIs. Maybe not a big deal either.
Just seems like an impedance mismatch, though.
What I'd like to see is something like this: An XML-RPC/SOAP <-> IM gateway. And then, eventually, I'd like to see a decentralized P2P network with XML-RPC/SOAP entry points that can smoothly replace centralized resources that have XML-RPC/SOAP entry points, maybe using IM networks as one possible rendezvous point. (Just remembered this project: JXTA Bridge. Mental note - play with JXTA again and poke at SOAP some more.)
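Purely as a thought experiment on the pub/sub half of that, the subscriber end could be as small as an XML-RPC listener that the publisher (or an IM gateway) pings when a feed changes. The method name, port, and the whole "protocol" below are invented for the sketch, not any existing spec:
from xmlrpc.server import SimpleXMLRPCServer

def feed_updated(feed_url):
    """Hypothetical notification: called by a publisher instead of us polling hourly."""
    print("Time to fetch:", feed_url)
    # ...hand feed_url off to the aggregator's fetch queue here...
    return True

server = SimpleXMLRPCServer(("0.0.0.0", 9999))
server.register_function(feed_updated, "feedUpdated")
server.serve_forever()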
[ ... 483 words ... ]
-
Dave writes, "I have my instant outliner going again."
Kick. Ass. Now we're cooking with gas. I think I need to get a fresh install of Radio going on this iBook again, especially since it seems renice works now and can tame the CPU hungry beast to manageable levels. I was kinda waiting to see if/when the people hacking with IM in Radio would close this loop. Now I want to see what this can do to news aggregation, pub/sub, change notification, etc and more.
[ ... 86 words ... ]
-
Sam Ruby responds to Dave's question:
I am not a stakeholder in this naming issue, nor can I claim based on personal experience that any of the above references are authoritative, but based on these references alone, it seems to me that one could ask the converse to both of these questions: i.e, why is RSS 0.91 called RSS, and why did the RSS 0.91 branch use <rss> instead of <rdf:RDF>?
This is basically what I'd started writing as a response the other night, but I lost it in a browser crash.
I browsed around a bit, revisited some old links I'd followed when first I started hacking with RSS and frustratedly discovered the format fork. As Dave tells it, his adoption of RSS started in cooperation with Netscape, and that later resulted in his continued support and development after Netscape had wandered away from it and shed the developers responsible. So from his seat I would think that it looks like RSS was abandoned in his lap and thus he felt free to continue on shaping it toward his own ends, namely those of simplicity and real dirt-under-the-nails use versus further design and planning.
Then, later RSS 1.0 comes into the picture. About this point in history I'm fuzzy. Most accounts and articles I see do not mention how v1.0 came about and who birthed it. Most of what I've read just basically says, "And then there was v1.0." But who said so and when and why?
That 1.0 has more in common with the v0.9 roots than any of Dave's v0.9x series is clear, so maybe in this respect one could say that this naturally lends v1.0 a "natural" claim on the name by virtue of bloodline and original intent.
On the other hand, one could also say that Dave's v0.9x series has a claim on the name by virtue of having actually been in active development directly after the original v0.9. Call this a claim by virtue of squatting rights? By virtue of principles of Do-ocracy, as Sam wrote about Apache?
But then, on another set of hands belonging to another body altogether, why did Dave keep the RSS name if he was so radically changing the nature of the format by ditching RDF? (But the history confuses me - I seem to remember that change started when it was still in the hands of Netscape.)
So where does that leave things? Ugh. Seems like Dave's RSS should be so because he kept the torch burning in the Dark Times. But it seems like the Other RSS should be so because it's the heir to the original throne.
Then again, I could have the whole history mixed up.
I say, everybody get up! I'll start the music, you all run around, and pick new format names before I hit stop on the CD player.
[ ... 547 words ... ]
-
...that's what's got me so bothered about people musing in their weblogs about projects they'd like to do. Stop talking about it and just build it. Don't make it too complicated. Don't spend so much time planning on events that will never happen. Programmers, good programmers, are known for over-engineering to save time later down the road. The problem is that you can over-engineer yourself out of wanting to do the site... [Andre Torrez, Even You Can Do It]
Just found this post today, via Danny O'Brien. This is why I threw this site together, and why I have a link to ReleaseEarlyReleaseOften on the front page.
Before this, I would spend years working on something in silence, only to have it fall over on top of me and end with me never wanting to touch it again. For almost 2 years, I was working on a Zope replacement in Perl called Iaido before I finally created a ?SourceForge project for it and invited some people in to play. By then, I was already disgusted with my ugly code, wanted to scrap it and restart, but wasn't nearly enthusiastic enough to do that. And by then, there was just too much code - and yet too little documentation - for anyone I invited in to really dive in and muck about.
And this project was to be the core of a community site for coders, web designers, and general all around mad scientists. It would be named ?NinjaCode, at http://www.ninjacode.com. Well, you can see, the community never got off the ground, and I don't even have a hold on the domain anymore. It coulda been a contenda.
But you can see now, on my Projects page, that I've been gradually working up and spinning out little hacks and widgets. Eventually, they combine into bigger widgets, like MTXmlRpcFilterPlugin. I'm thinking that this is the way to go. And, even if/when I do get a grandiose idea, I need to start off releasing the widgets early and show the build up process here in this weblog. Then, there's some documentation from the start and maybe even some enthusiastic co-conspirators from the start.
[ ... 425 words ... ]
-
D'oh. Just realized that the download link for the fixed AmphetaOutlines was at the bottom of the page, so I was puzzled that I kept getting emails about it doing the same broken things. I seem to be missing many things lately. :)
Anyway, the correct download is here:
AmphetaOutlines-20020822.tar.gz:
This should include the corrected "URL=" redirect.
[ ... 76 words ... ]
-
Well, that sucked. I managed to get an all but complete dump of my comments.db file with db_dump on my server, but then nothing would parse or load that file after that. Banged around with it for awhile until I finally realized how to parse the dump file myself with a perl one-liner and rebuilt the comments DBM file that way.
Immediately after that, I migrated everything to MySQL. Not a silver bullet, but it seems a better idea than relying on those DBM files. They seemed neat & clever at first - but now that I have a bit invested here, they're an annoyance.
I think I managed to recover the last few comments I got while things went wonky, but if I happened to miss one of yours, I'm sorry. I'm glad I recovered the comments, though, because I tend to think that comments left here are often more valuable than whatever dross I may have spewed in the first place.
[ ... 164 words ... ]
-
Grr again. As a few of you have told me in email, and as I noticed toward the end of yesterday, my comments feature here is dead. Seems that as the disk filled up on this server, someone tried leaving a comment, and as Murphy came into play, the comments DB file got corrupted. The odd thing, and fortunate thing for me anyway, is that I'm still getting the comments emailed to me. So if/when I get this DB file recovered, or if I wipe it and start over (not happy about that option), I might try re-posting the comments.
Got some good pointers on some hosting options and may be checking out one or another of them soon. Also looks like there are still issues with my AmphetaOutlines, even after a re-release.
More soon.
[ ... 136 words ... ]
-
A few words of warning, which should be pretty obvious, with regard to MTXmlRpcFilterPlugin use: Every appearance of the tag makes a separate web service call. This could be painful if you stick it in your individual archive item template, and then rebuild your site. (ie. at least one hit on the filter service for each item in your weblog) :)
[ ... 62 words ... ]
-
Ack! I just realized, even though it took two emails to make the light go on, that my tarball of AmphetaOutlines for download in the wiki has still had the broken click-count page bundled for the last month or so. For some reason I thought that I'd fixed that and re-uploaded the tarball, but no!
So: It's there now, hopefully this is a fix for any of you who tried it and saw the insane refresh loop happen when you clicked on a news item. Hop on over to AmphetaOutlines and grab yourself a new copy.
Next... I screw around with Bayesian filters as applied to incoming news items, categories of interest, and alert levels. Maybe.
[ ... 288 words ... ]
-
Grr, shame on me for getting cheap hosting again. For the most part, PHPWebHosting has been just fine - they give me SSH access to a shell, about 120MB of space, a few MySQL databases, and generally leave me alone for US$10 per month. But at this point, the leaving me alone bit isn't working out so well, since the server I'm on over there has had its main disk fill up regularly, thus bringing down all the sites hosted on it (including mine).
I'd joked a little while ago about getting myself an XServe and finding some affordable colocation. Now, though, I'm not laughing so much. I should probably just find another webhost, but it's tempting to have my own server.
In the past, I had a few friends at an ISP, so they let me stash a little linux server on their network for occasional favors. I miss that. Didn't cost me anything, and my impact on their network was negligible. Meanwhile, I had a stable, semi-reliable box on the net at which I could point a domain. Services could catch fire and other things could tank, and no one got hurt.
The problem now, though, is that shelling out for a server and colocation costs just to play around is just a smidgen outside my budget. As I've pondered before, maybe I could actually host a money-making service on it with some of these hacks I'm percolating through.
On the other hand, a co-worker put the idea in my head that it might not be unreasonable to think that I could invite some tenants onto my server to share some of the cost. I figure I'd take the burden of buying & owning the server, but I'd like to spread the service cost out. I wouldn't want to make a profit, I'd just like to make a kind of nerd commune. It'd be nice if I could get the cost for everyone under US$10-15, though I imagine my cost would be higher given a server payment. If I could get mine below $40-50 a month, I think I'd be happy, though this seems like a pipe dream.
The devil being in the details, however, I don't really know how one would go about such a thing, considering service agreements and contracts and taxes and blah blah blah. I'm a financial moron, and I'm also certainly not into being a 24/7 available customer service rep, or to perform tech help. Basically, I'd like to run a server to host mad scientist experiments performed by a relatively small cadre of mad scientists who can mostly clean up after themselves. I'd like to run an IRC server, maybe a Jabber server, maybe a few random half-baked server ideas.
On the other other hand, is anyone out there doing this already and want a new tenant? :)
[ ... 821 words ... ]
-
Yikes. I think that was the longest entry I've posted here - I usually reserve my rants, opinion posts, and various longer prose for my LiveJournal. Maybe I'll actually start showing a little more personality around these parts too. :)
[ ... 41 words ... ]
-
Scanning news today, I see in my referrers a link from the new aggregators discussion group to my outliner AmphetaDesk skin. Curious, I go check it out and see what looks like a bit of grumbling between Dave Winer and Morbus Iff. I make my way back to near the head of the thread to see what's what, where I find a link to Morbus' post of opinionated and somewhat ranty disagreement toward Dave's definition of the modern News Aggregator. What disturbs me is that the grumbling quickly ends with:
Bill -- I'm going to unsubscribe from this list now. The neener boys are in charge. Take care. Dave
Now first of all, I too disagree with Dave's definition, and have posted as much. Although I did so in a less rant-mode manner, both Morbus and I pointed to the fact that the news aggregator definition served the goals of Radio 8.0 marketing, being that the definition very specifically identified the features of his own product. Rant or not, the point is made.
What disappoints me about Dave's response on-list, though, is that instead of addressing the content of the disagreement, he goes after the ranting and hits back with a smidgen of ad-hominem of his own while calling for the "high road". Then, once he's tagged on the ad-hominem he unsubscribes and dismisses the list altogether, never actually acknowledging the original point made.
Now, though I don't have a link, I've read several times where Dave has disparaged discussion groups and offered linked weblogs as the superior alternative. I've read his essay on Stop Energy. I'm aware of his position toward the grousings and grumblings and flamings that go on within a discussion group. And I've nodded my head almost every time he's written about these things - freeform discussions on the net can sometimes - but not always - be a clusterfuck of morons and Stop Energy.
Maybe it's just because Dave is so intolerably fed up with any hint of a clusterfuck that he dropped off the list. I wouldn't put up with much shit either, especially after heart surgery. But to me it looked like pretty defensive behavior right from the get-go. Morbus even gave somewhat of an apology (for him :) ) and moved on to politely hit the issues.
I guess this bugs me a bit, along with other UserLand-related thoughts I've had lately, for this reason: Here's a discussion group devoted to the very nature and future of one of the headline features of RadioUserLand. And the people there are some of the most prevalent names I've seen appear time and time again in relation to the subject. I would think that this would be a dream group for him, in terms of driving and benefiting from innovation. Instead, it takes only a small amount of rant to drive him away.
See, I've never met him, but I think I'd like Dave personally. Yes, I know his writings often have one foot or both feet in marketing for his products. But, I can't recall his denying this, and I see it as the natural behavior of someone who really thinks he's got something hot. He gives favor to his customers in his posts, and he has his own opinions. But I know these things, and I still like his software, and I still buy it.
On the other hand, it's clear that sometimes he doesn't give equal time to points which may be contradictory or unfavorable to his side of things, and sometimes he gets things wrong. But he's not a journalist - he's a blogger who writes from the hip and sees what flies. Whether he lies outright is another story I haven't the first inclination to follow up on, but the very nature of his product stands as a check and balance on him there. He can't get away with too much with his own customers calling him on things.
Anyway, I'm starting to ramble. What really bothers me is that the quick dismissal of the list seems like a "See?! This is what I mean!" demonstration on-list, and I almost expect a post later again promoting weblogs over DG's altogether. But I didn't even see him give the list a chance, and I think that's sad given the brain power in that roster. Granted, it's his prerogative to have slim thresholds for annoyance, but I'd think this would be of value for his company and product. As I said, I like his software, but he's refusing free help and consultation on the future of his product. I see those involved in groups like this taking their products beyond his, even those that were originally direct clones of his. Maybe this ends up tying into his grousing on Open Source versus commercial software eventually.
On the other hand, maybe he just thinks he doesn't need any help of this sort and has the sheer magnitude of innovational mojo to leave us all in the dust. If so, rock on. :) As for me, I just joined the list.
[ ... 988 words ... ]
-
Well, imagine that! Someone doing something in the spirit of my site! Neat! :)
[ ... 29 words ... ]
-
Quick mental notes:
Ditch the blogroll or make better efforts at updating it.
Publish my news aggregator subscription list in addition to, or instead of the blogroll.
Examine referrer records for readers of my RSS feed to find readers of whom I'm unaware.
Visit all the sites listed under my stats at the blogging ecosystem site.
Write more, both here, and over at LiveJournal. Possibly revive The Spirit of Radio and my RadioUserLand hacking efforts. (ie. Make Radio run blog entries through my Wiki filter)
Reskin this place, simplify and make accessible and less ugly. Reduce number of toys, or at least give them all off switches.
Get back to work.
[ ... 112 words ... ]
-
Well, it looks like Josh Cooper has gotten the MTXmlRpcFilterPlugin and XmlRpcToWiki working on his blog. Yay hooray!
Now, I just have to see about improving this thing if I can, and to make this place look better. This sudden easy abundance of links makes me think that they draw too much attention from the surrounding text. This is desirable, but not to the degree that they do now.
[ ... 70 words ... ]
-
This ISP has the right idea: treat the RIAA hacking threat like any other hacking threat. Let's see how long before this is made illegal. In the meantime, I'm trying to come up with a way to convince myself that it's worth buying an XServe and colocating with them. But first, I've got to come up with a business model to make some money to afford the toys. :) This will likely never happen.
[ ... 75 words ... ]
-
Last week, after reading what Mark Pilgrim had to say about macros in MovableType, I made a mental note to finally circle back around to hacking together my WeblogWithWiki now that MovableType has plugin features.
Turns out it was so much easier than I thought it would be. MovableType's plugin scheme is dead simple, which hopefully means that plugins will flourish like mad.
First, I hacked together MTWikiFormatPlugin. This plugin simply implements a new container tag, MTWikiFormat, which runs the contents of the tag through CPAN:Text::WikiFormat. This doesn't actually integrate with any existing wiki, but it is very simple to install and does bring some wiki-ness to blog entries, including some limited formatting and Wiki:WikiWords. This doesn't provide everything Mark had posted in a lil wishlist comment to me, but it's a start. Maybe I'll look into tearing the formatting guts out of some wiki to make a Text::WikiFormat replacement, or maybe I'll submit patches to the original module.
The second plugin though, MTXmlRpcFilterPlugin, is what I'm really happy about.
Whereas MTWikiFormatPlugin filters content through one perl module, MTXmlRpcFilterPlugin can filter content through one or more XmlRpcFilteringPipe interfaces. I have a handful of these filters available on my site right now, and in a little while I will catalog them in the wiki. For now, I'm just filtering this entry through DecafbadWiki. In the future, I may get more adventurous with my content filtering pipeline.
One drawback to using MTXmlRpcFilterPlugin for the purposes of a WeblogWithWiki is that I've only got support for TWiki so far in my XmlRpcToWiki project. Other wikis still need some hacking before they can provide filters. Some assembly required, fellow AlphaGeeks.
So, ShareAndEnjoy. Time for bed.
[ ... 1317 words ... ]
-
Please excuse the noise and dust - I'm working on a few new MT plugins to support Wiki formatting and XmlRpcFilteringPipe. Are these wiki words working? PipeFilters, MovableType, ShareAndEnjoy, etc.
How about unordered lists?
* One
* Two
* Three
How about ordered lists?
1. One
2. Two
3. Three
And ''how'' '''about''' '''''this'''''?
This is another test. PipeFilters, ShareAndEnjoy.
One
One a
two
three
four
two a
How about a table?
this is a
test table format
[ ... 166 words ... ]
-
Just read this on Scripting News:
Heads-up, some time in the next few hours (Murphy-willing) we're going to release tcp.im, which allows Radio and Frontier to be an instant messaging client or server (either can be either).
Okay, this is insanely great, or at least has gigantic potential to be great. I just hope ActiveBuddy doesn't swoop in and claim to have invented the whole kit and kaboodle. Barring that, I can see a whole suite of new P2P apps with desktop servers, kicked off by the UserLand crowd.
This makes me want to find a decent machine for Radio to run on and give it a fair amount of attention again. I wish it weren't so demanding of my poor iBook. But I do need excuses to convince myself to invest in a new ?TiBook...
[ ... 135 words ... ]
-
Do not stand in our way - we will walk around you.
Spam is in our way.
We will use the inhuman aspects inherent to your sales pitch to walk around you.
Cluetrain on statistic-driven autopilot. :)
[ ... 186 words ... ]
-
Further self-directed dot connection via Sam Ruby: "I talked Sanjiva Weerawarana into creating a Radio weblog. He's one of the driving forces behind many important Web services standards including WSDL and BPEL4WS."
Welcome to the Blogosphere, Sanjiva! :)
[ ... 37 words ... ]
-
I just read about ChoiceMail, a whitelist-based email service that "turns your inbox into the equivalent of your front door" where people have to knock and identify themselves as human via a quick web form (hello, Cluetrain) before their message is allowed in. This is exactly the idea I've been tossing around in my head and reading about for some time now.
I would pay for this service in an instant, but unfortunately it apparently requires Windows. I just got SpamAssassin and Razor working on my home IMAP server, though, so I think I might just poke at finally implementing a service like ?ChoiceMail on my own. Hell, maybe I'll even have it send me Jabber messages for permission confirmation and accept messages for whitelist additions, but keeping everything in the context of email seems better.
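If I do, the rough shape of the thing in my head looks something like this sketch. The paths, the addresses, and the procmail wiring are all hand-waved placeholders:

```perl
#!/usr/bin/perl -w
# Back-of-the-envelope sketch of the whitelist idea: called from a procmail
# recipe with the message on STDIN; unknown senders get a polite challenge
# and their mail gets parked in a holding folder. Paths and addresses here
# are placeholders, and the procmail side is hand-waved.
use strict;

my $whitelist = "$ENV{HOME}/.mail-whitelist";   # one address per line
my $me        = 'me@example.com';

# Slurp the message and pull out the sender address (crudely).
my $msg = do { local $/; <STDIN> };
my ($from) = $msg =~ /^From:\s.*?([\w.+-]+\@[\w.-]+)/mi;

exit 0 unless $from;    # no parseable sender: let the recipe decide

# Known correspondent? Exit 0 and let the message fall through to the inbox.
open my $wl, '<', $whitelist or die "can't read $whitelist: $!";
while (<$wl>) {
    chomp;
    exit 0 if lc($_) eq lc($from);
}

# Stranger: send a challenge asking for a human reply, park the original.
open my $mail, '|-', '/usr/sbin/sendmail -t' or die "no sendmail: $!";
print $mail <<"CHALLENGE";
To: $from
From: $me
Subject: Please confirm you're a human

Your message to $me is waiting in a holding pen. Reply to this note
(or visit the confirmation URL it points to) and you'll be whitelisted.
CHALLENGE
close $mail;

exit 1;    # non-zero exit tells the procmail recipe to file this one away
```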
[ ... 344 words ... ]
-
Another reason why I need to poke around at Java Web Start development, found via the rebelutionary: jlGui, a 100% Java music player that looks great, seems to work nicely, and launched right from my Mac's JWS panel.
[ ... 67 words ... ]
-
Sam Ruby helped in connecting the dots between me and someone responsible for my recent Bean Markup Language obsession. Thanks Sam! :)
[ ... 23 words ... ]
-
Better living through regular expressions from Mark Pilgrim. This has got me thinking of two things:
* I need to hack together some MT macros or a plugin that finds & links wiki-words in blog entries to my wiki (something like the sketch below), and I need to start writing in the wiki again.
* I really dig his blog's skin, and I need to finally read through all of his accessibility tips and re-skin this site. This place is butt-ugly.
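The wiki-word linker really is just a regex at heart. A quick sketch - the wiki URL is a guess, and this will happily munge WikiWords inside existing markup too:

```perl
# Quick-and-dirty wiki-word linker: find CamelCase words in an entry and
# point them at the wiki. The base URL is just my guess at a layout, and the
# regex is naive (it will also munge WikiWords inside existing tags).
use strict;

my $wiki_base = 'http://www.decafbad.com/wiki/';

sub link_wiki_words {
    my ($html) = @_;
    # Two or more capitalized chunks jammed together count as a WikiWord.
    $html =~ s{\b((?:[A-Z][a-z0-9]+){2,})\b}
              {<a href="$wiki_base$1">$1</a>}g;
    return $html;
}

print link_wiki_words("Thinking about PipeFilters and my WeblogWithWiki again.");
```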
[ ... 288 words ... ]
-
Wow, that was fast. I got a direct response from one of the two original creators of BML, Sanjiva Weerawarana: "...I still believe BML has a useful role in life. We still have some of our original trusty users who periodically email us asking for status / updates, etc.
What's the right thing to do? I am quite certain that we can get it open-sourced (I mean from an IBM process point of view), but I haven't yet been convinced that there'd be a willing community for it yet."
This tells me that there's still some life left in the project, so I certainly won't be running off to release my unauthorized decompilation. Besides respect of ownership and authorship, it'd be nice to see a sanctioned release of the real code, comments and all.
Sanjiva asks what to do and who wants it. I say: Gimme, and I want it! But of course there's more, in terms of effort and consideration, to an Open Source project than that. So, where's the interest (besides me) to make it worth doing?
I'm not sure. Seems very nifty to me, but again, I see barely a mention on Google. But, I look at the Thinlet buzz that recently bubbled through my news scan, with raves about the XML wiring for GUIs and the quickness of assembling apps. It's very cool. And then I wonder how cool it might be to combine the two, or use BML instead of Thinlet.
Is the lack of interest a product of a lack of word getting out? Or, again, have I missed something in my return to the world of Java?
[ ... 418 words ... ]
-
Hmm... The plot thickens a bit for me with regard to the Bean Markup Language project. On a whim, I Google'd for a Java decompiler, having remembered that I'd gotten some use and enlightenment back in the day from Mocha. So I found, downloaded, and tried Mocha on the BML class files. It choked a bit, produced some source files, but the collection as a whole was not trivially easy to recompile. Pretty much what I expected, given that Mocha is a relic these days.
But then, I noticed JODE, a few search results down the page. Google's description said, "Java decompiler with full source code. Supports inner/anonymous classes..." Well, inner/anonymous classes are new to me since I was last very active with Java. (Yes, I know, that's been awhile.) So I figured I'd check JODE out. Besides, the last release of JODE looked newer than the last release of BML.
Much to my surprise, JODE consumed the BML jar file directly and gave me a directory full of source without a complaint. For the hell of it, I compiled everything there, and made a new jar file from the results. I cleaned my project out, replaced the original BML jar with my new decompiled-recompiled archive and... everything worked just fine.
Skimming through the various source files JODE gave me, things look a lot less ugly than I'd expected. Of course comments are nowhere to be seen, and variable names are nothing like the original would have been, but the source is still readable and I can follow it. So this means I have the source code to BML now, after a fashion.
So my question now is... I read the IBM ALPHAWORKS COMMERCIAL LICENSE AGREEMENT which came with the original download, and I see these paragraphs:
Permission to reproduce and create derivative works from the Software ("Software Derivative Works") is hereby granted to you under the copyrights of International Business Machines Corporation ("IBM"). IBM also grants you the right to distribute the Software and Software Derivative Works.
You grant IBM a world-wide, royalty-free right to use, copy, distribute, sublicense and prepare derivative works based upon any feedback, including materials, error corrections, Software Derivatives, enhancements, suggestions and the like that you provide to IBM relating to the Software.
Does this mean, basically, that it's okay for me to distribute this source as a derivative work, with various potential enhancements, &etc, as long as IBM is still able to grab a copy of it back from me with no strings attached? If so, that's great.
My only other question remaining, though, is whether or not my naivete toward Java means I've yet to find what the community at large considers the Right Way to work with ?JavaBeans. Because, this BML thing seems great to me, but it seems to have gotten next to zero attention. This usually tells me that I'm missing something, but the thing itself works nicely.
Any thoughts out there in the blogosphere?
[ ... 494 words ... ]
-
Ack. Okay, so I started tinkering with Bean Markup Language as the configuration glue for my Java app. So far it seems to be working very nicely and doing everything I want it to do. The problem, though, is that as I look around for examples and things I notice that it seems as if this project is dead. I look further, and I realize... I can't find any source code to this thing. And here I am working it into the core of my app.
Well, I'd hate to start over and reinvent this wheel, since it seems to be such a nice wheel. Between the clean definition of beans and their properties, and the ability to toss little bits of script (using the BSF) in there to further tweak things, it seems perfect. But the apparent abandonment of the project and my lack of source gives me the willies.
So... Does anyone out there know anything about what's happened with the Bean Markup Language project? Could anyone get me the source? Or, could someone point me to an equivalent alternative? Because, if I can't find source or an equally powerful alternative, I'm tossing around the notion of re-implementing BML with a clone project if the license doesn't prohibit it.
Or am I completely crazy and barking up the wrong tree to begin with? Is there a better, more elegant way to create and connect Java components together than this? And is it free?
[ ... 945 words ... ]
-
So I've been in the underworld these past weeks, head down in battle with this project in my return to Java, as I'd written last month. It's been great fun, and I've not missed perl at all along the way. (Although, I did miss CPAN.) I've raided the stocks of Jakarta and other shores, and I've flagged a few new people to watch on my daily scan.
These past 4 weeks or so, it's amazing how many new or long-neglected (to me) technologies I've thrown myself at and managed to use productively. Among them are XML, Xerces, XSLT, XPath, Xalan, SVG, Batik, Xindice, JUnit, Log4J, Ant, BML, BSF, BeanShell, NetBeans, and a slew of others my poor sore brain won't produce at the moment. I've plucked each of these from the ether and carefully tied them all together into a loosely confederated beast that, surprisingly, works as I'd intended thus far.
My biggest regret during this voyage is that this beast will likely never roam beyond the fences of my employer's realm. But, I still have the new experience of building the beast. And besides, you'd probably all laugh at the poor thing anyway since it's the product of wobbling new steps. Though, actually, that would be a good thing for my company and myself since some of you might tell me why you're laughing.
But, no matter - that's a political fight I'm not yet prepared to enter here at work, so maybe I can think of other things with which to tinker, out here in the open.
The Java support in OS X makes me very happy, as does having Java Web Start come installed out of the box. And so, today I was also happy when I saw ThinRSS and managed to launch it with a click. I'd been musing about maybe doing a news aggregator in Java, but AmphetaDesk works fine and I've got some hack time invested in the fun outliner skin. But this really makes me want to do something with Thinlet. Possibilities abound.
I've also started toying with an XPath-to-RSS scraper to replace my ?RssDistiller usage. I know that there's a Python-based one out there, xpath2rss, but I've yet to quite figure it out and I feel like tinkering on my own anyway. So far I've pulled in JTidy, Xalan, Xerces, and JUnit to help. Amazingly, to me, everything seems to be working pretty well.
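My version is Java (JTidy + Xalan + Xerces), but the core loop of the scraper boils down to something like this Perl sketch. The URL and the XPath expression are per-site guesses - that's exactly the part you'd configure for each scrape:

```perl
# The core of an XPath-to-RSS scraper, sketched in Perl rather than the Java
# stack mentioned above. The URL and the XPath expression are per-site
# guesses; that's the part that gets configured for each scrape.
use strict;
use LWP::Simple qw(get);
use HTML::TreeBuilder::XPath;
use XML::RSS;

my $url  = 'http://www.example.com/news/';
my $html = get($url) or die "couldn't fetch $url";

my $tree = HTML::TreeBuilder::XPath->new_from_content($html);

my $rss = XML::RSS->new( version => '1.0' );
$rss->channel( title       => 'Scraped: example.com',
               link        => $url,
               description => 'XPath-scraped headlines' );

# One node per headline link; adjust the expression until it fits the page.
for my $node ( $tree->findnodes('//div[@class="headline"]/a') ) {
    $rss->add_item(
        title => $node->as_text,
        link  => $node->attr('href'),
    );
}

print $rss->as_string;
```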
Anyway, back to the day's work, but I figured I'd ramble a bit and share my recent fun since I've been largely quiet around here. Hopefully I'll be back soon with updates to the AmphetaDesk outline skin, and with a new toy scraper project.
[ ... 451 words ... ]
-
When I see links in my referrers like this from Niutopia, it makes me wish I had translator microbes instead of the Babelfish. But, at least I have the Fish, and for free! (Thanks ?AltaVista!) The translation is sketchy and random at best, but it's decent enough for me to get the gist of things. Now if only I could respond to things in a foreign language and not accidentally start a war or insult someone's relations, or at best sound like a moron. It is nice, though, to be reminded that English is not the only language of the web.
[ ... 269 words ... ]
-
So are any of you guys getting itchy? I've had a great Summer all around, but without much hacking or progress on little projects. And I've noticed that there's been relative quiet and little hacking amongst my reading list's authors. Could be that I'm just not looking in the right places - I did lose a few subscriptions in my bounce from Radio to Ampheta for daily use. But I just don't see as many daily innovations as I did toward the end of Winter and beginning of Spring.
As for me, as the nights get longer and things get colder, I'm starting to feel an itch to start tinkering again. Maybe it has to do with not-yet-dead back-to-school reflexes - summer vacation's over, and it's time to get back to learning and experimenting. I'm starting to look again at my neglected wiki, and of course I've started tinkering with my AmphetaDesk outliner skin again. I've also been thinking about my other little projects: referrer links, whitelist-based spam filtering, XML-RPC pipelines, and other things I've not touched in months.
The fact that I'd not felt like playing with these toys for so long worried me a bit, so the return of the itch is reassuring. So, I should get back to playing soon. And, now that my financial situation has greatly improved, I also need to figure out where to start and to whom I should talk on really heading back to school.
So Summer has been just a nice, long nap in a hammock. Time to start making the coffee. (Not decaf, of course.)
[ ... 316 words ... ]
-
At last: I had a good solid hacking session with the beast under WinXP, and I think I've flushed out the showstopper bugs. So, here's yet another release (hopefully zaro boogs) of the thing for you to grab and try: AmphetaOutlines-20020806-2.tar.gz
Thanks for bearing with me out there and not beating me about the head and shoulders. :) I mean, yes, it's free and it's experimental... but it's still damn frustrating.
Basically, the trouble started when I tried to use "use lib". Seems like a reasonable enough request, lib.pm being very much a core part of perl. But, Morbus hadn't used it anywhere in AmphetaDesk, so Perl2EXE cheerfully left it out. Well, I had to hack by other means, since I wanted this thing to be as drop-in compatible with the current release of AmphetaDesk as possible.
And then, there were other bits I'd left out. And also there were the bits that I'd written at 3:30 am the other night and wasn't precisely sure what made it possible for them to actually work anywhere, let alone on my iBook. But, I think now I've got things under control.
It makes me appreciate the OS X environment so much more. OS X is like driving a nice big, fully packed RV that handles like a station wagon down the hacking highway. On the other hand, Windows is a Ford Festiva with a hole in the floorboards and a nagging suspicion that I forgot something back at the house. Okay, so that was a bit contrived. I just wanted to say something smarmy after all this grumbling. :)
Is this thing working for anyone yet?!
Update: In case you were wondering, this release should be usable via the original instructions back here. Which are, basically:
* Replace templates/default with this archive's contents.
* Create a directory named channels_meta in your data directory.
* Share and Enjoy
[ ... 1231 words ... ]
-
That'll teach me not to test on other platforms before I release :) It appears that Windows users are missing 'lib.pm' in their bundle of AmphetaDesk joy, which really perplexed me at first. So, if you've already downloaded my tarball from yesterday, download the following template and replace templates/default/index.html: ampheta-index-20020806.html
If you've yet to grab my initial release, grab this instead: AmphetaOutlines-20020806.tar.gz
This all seems to have worked on my machine at home, but that Windows machine also has a full Perl installation. I'm going to try everything out on my Virtual PC WinXP install here at work, but I seem to be having trouble downloading a copy of AmphetaDesk from ?SourceForge.
Let me know if you feel the love yet. :)
[ ... 262 words ... ]
-
Okay, after several comments, emails, IMs, and other encouragement, I stopped adding features and cleaned up a few things in my new AmphetaDesk template and have uploaded a tarball of it: AmphetaOutlines-20020806-2.tar.gz
Again, some highlights:
* Leaner template code on the server side, leaner HTML on the browser side.
* Ampheta remembers things about items now, such as # of clicks, age, and # of times seen.
* Ampheta can act on this memory, sorting and hiding items.
* Old, stale things tend to go away.
* Often used & visited things tend to come first.
To install this thing:
* Rename/backup the directory default in the templates directory of your AmphetaDesk installation.
* Replace that directory with the contents of the above-linked tarball.
* Create a directory named channels_meta in the data directory of your AmphetaDesk installation.
* Start AmphetaDesk, and let simmer for a few days to see things start working.
The thing's still a bit rough around the corners. To change thresholds and other settings, you'll need to edit default/index.html directly for now. I plan to add some fields to the settings page to manage these settings, eventually. Also, you really want to have the Digest::MD5 perl module installed, but I have included Digest::Perl::MD5 in the template. This means that things will work either way, but they'll be much faster with the real Digest::MD5. As for the outlines, give thanks to Marc Barrot for his activeRenderer package from which I borrowed nearly all of my outline code. (The only ?JavaScript that's mine is the expand/collapse all logic.)
Okay, I'm off to lunch. Let me know how you like it!
Update: It appears that some people are having problems with their AmphetaDesk failing to find the custom modules I include with the template. If you have trouble running the template, try copying the contents of templates/default/lib into the lib/ directory of your AmphetaDesk install.
Update 2: I think I've fixed the showstoppers, as I will write here in the future. So, I've taken the liberty of going back in time and replacing the tarball here with the fixed one in case anyone has linked to this story. Enjoy!
[ ... 490 words ... ]
-
So, I was happy to discover I'd gotten a link from Doc. But then, I was not so happy to discover it was because I'm contributing to the downfall of his website due to popularity. Gah! The link, she is a double-edged sword, and of course I only meant to point, not to prick or stab.
This makes me recall blog posts to the Open Content Network I'd seen awhile back. Sounds like a little bit of the HTTP Extensions for a Content-Addressable Web and Content Mirror Advertisement Specification voodoo magic would hit the spot here.
[ ... 176 words ... ]
-
Everyone else is linking to it, but this is beautiful:
Infrastructure: Why geeks build it, why Hollywood doesn't understand it, how business can take advantage of it. by Doc Searls, Co-author of The Cluetrain Manifesto
[ ... 36 words ... ]
-
So this past weekend, I wrote (thanks for the link, Jenny!) about continuing down the road of my experiments with news aggregation and the tweaks I've been doing to AmphetaDesk's interface. Well, I'm at it again. I'm debating whether to post what I have so far for download yet, since I'm still refining some things, but I will very soon. I've noticed that my first attempt gathered some fans, so you all might like what I'm adding now:
* Template code seems easier on memory usage
* Much, much more sparing with tables; items are displayed using only DIVs and margin widths for indentation. (Seems to have saved a lot of memory.)
* Outliner javascript now uses the browser DOM exclusively. (Seems to be slimmer and faster.)
* Per-channel metadata annotation & storage - a feature already planned for Ampheta, but I was impatient and hacked a quick scheme together. (The future AmphetaDesk implementation will replace this.)
* A new interstitial redirect page via which links are visited, to count clicks
In the per-channel metadata, I track:
* Unique MD5 signatures for items
* Date-first-seen for items
* Number of times an item has been shown
* Number of times an item's link has been clicked
* Items clicked for the channel as a whole
Using the newly recorded metrics on channels and items, I do a few nifty things now:
* Item hiding thresholds can be set on # of appearances, # of clicks, and item age. Hidden items disappear behind an expandable outline handle.
* Channels and items can be sorted on # of clicks, age/last update, number of items shown.
* Channels with no shown items can be automatically collapsed.
The gist of all this:
* Channels and items I like and visit regularly tend to bubble toward the top
* Stale channels and items tend to disappear and demand less attention during my skim-and-scan.
* Hidden things are out of the way, but are still available within a click or two
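For the curious, the bookkeeping behind all that boils down to something like the following. The data layout and the thresholds here are invented for the sketch, not Ampheta's actual internals:

```perl
# Roughly the kind of bookkeeping described above, boiled down: identify
# items by an MD5 of link+title, remember when they were first seen and how
# often they've been shown/clicked, then decide whether to hide them. The
# hash layout and the thresholds are invented for this sketch.
use strict;
use Digest::MD5 qw(md5_hex);

my %meta;   # per-channel: { $item_id => { first_seen, shown, clicked } }

sub note_item {
    my ($item) = @_;
    my $id = md5_hex( ($item->{link} || '') . ($item->{title} || '') );
    my $m  = $meta{$id} ||= { first_seen => time, shown => 0, clicked => 0 };
    $m->{shown}++;
    return $id;
}

sub should_hide {
    my ($id) = @_;
    my $m   = $meta{$id} or return 0;
    my $age = (time - $m->{first_seen}) / 86400;   # days since first seen
    # Hide stale things nobody clicks; tweak the numbers to taste.
    return 1 if $age > 3          && $m->{clicked} == 0;
    return 1 if $m->{shown} > 20  && $m->{clicked} == 0;
    return 0;
}

# Usage: for each item in a feed...
my $id = note_item({ link => 'http://example.com/post', title => 'A post' });
print should_hide($id) ? "hide it\n" : "show it\n";
```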
After letting this run for a few days (since all items are new at the first run), I've noticed that my news scanning has gotten much faster, even with occasional checks to be sure I'm not really missing anything with my thresholds.
The reasons I haven't immediately uploaded it for all of you: no preferences for thresholds - I'm editing constants at the header of my template for now; dependency on Perl modules that don't come with Ampheta - only a few, but I'd like to wean away from them. Oh, and it's also nasty code that I want to refactor into a bunch of subroutines, some of which will be factored out of the template and eventually replaced with calls to core AmphetaDesk code. (Too much logic in my template.)
I also want to add some "karma" features so that, beyond metrics of age, visits, and appearances, you can add your own manual rating and opinion to the process of sorting and show/hide. And then there's the idea of categories/playlists I want to steal from iTunes as well. But, I might just clean up what I have by this weekend and do the early release thing so you can all cheer or jeer it at will. I also need to drop back into the AmphetaDesk dev crowd. I miss those guys...
[ ... 614 words ... ]
-
So just a couple of weeks ago, I was laughing at Monday: (Their colon, not mine. They wouldn't like my colon.) The rebranding struck me as the last spastic hyper-hypo twitch of the 90s' super-high-energy-please-God-make-them-like-me theme. The Register, bless their souls, don't like Monday:'s either.
It reminds me of when one of my old employers tried rebranding themselves from a very lawyerly name (Wunderman Cato Johnson) to a suffix: ".ology" It would have been great, I heard. We would have been studies of everything: marketing.ology, analytics.ology, urinal-cake.ology.
Instead, at the last-minute demand of an assumedly more rational high-level muckety-muck, they dropped that and picked "Impiric (with a funny all-encompassing bracket)". They also tossed out months of work and research into positioning, presentations, and common corp speak. This new name, however, was to imply empirical knowledge of all our subjects - i.e. experience. A fun, hip way to spin the fact that we were all working for a practically ancient company in the computer age. I felt really bad for the team who had to jackknife their whole process, throw out their baby, and throw together a shitty last-minute collection of branding consisting of a Fatboy Slim song ("Right Here, Right Now") and a vaguely topical epileptic flashing stream of clip art images.
But then, after about two years of this crap, they decide they (kinda) liked their original name better, and wandered back to "Wunderman". Of course, it's worth noting that the "they" are probably the old guard who never wanted it changed in the first place and who are happy all the morons who thought "Impiric (funny all-encompassing bracket)" was a good name are out on the street now.
And, today, The Register tells me that IBM has put Monday: out of our misery by purchasing them. Monday: has now returned to the more respectable "PricewaterhouseCoopers Consulting", soon to become just "IBM". Thankfully, Monday:'s demise took about a month. Any longer, and I'd've been cringing at the day when we would have started pitching to them as a client.
Thing is, I loved the 90's. There were a lot of good things, even if many of them were all a "Who Shot J.R.?" dream. I'd like to see the genuine ingenuity and innovation survive. I'd also like to see all the fake raver-boy-wannabe marketing execs lined up and shot. And, it'd be neat to see us all try again at a "new economy", only this time let's start with real things that do real stuff for real people and make real money.
Hmm, which reminds me: I wish my company would rebrand, and drop the 'e'. It really doesn't do us justice since we survived the dot-com days with a real business model and solid products.
[ ... 465 words ... ]
-
From The Shifted Librarian: "I am a media company's dream. I get the whole digital cable package, I love music and own hundreds of CDs, I frequently go to movie theaters, my study is lined with books, I like DVDs because of the extras and the quality of the picture, I attend plays, I like going to concerts, I take my media with me, and I've always wanted my MTV."
Ditto, on everything but the plays. (I really need to start getting out to the great performances in my area.) Here I am looking at ways to more effectively consume just the media I like, even to the point of building my own somewhat pricey entertainment-center PC to take care of DVR and radio recording functions. I really would pay someone to help me with this, and not screw with me after. And I really would rather pay for TV and radio, in return for zero-advertising (including product placement).
/me jumps and waves at the satellites. Hi up there. I'm one of your best customers. I have money. /me waves money. But you're not getting it unless you do what I want. Hmm.
Between Hollywood and my lovely government... were this back in the days of eyepatches and parrots, they'd've had the Seven Seas drained to catch the scurvy bastards and then wondered where all the spices went at dinner time.
[ ... 293 words ... ]
-
Oh, and one more thing: I assume you've seen the new iTunes. I want to steal the new features for my news aggregator. I'd been thinking of these things long before the new version came out, but of course these are not unique ideas. And how well they seem to work in iTunes now (for me at least) seems to be a good thing to point at and say "that's what I mean".
So... I want a count of how many times I "play" a channel (ie. visit a link or story from the RSS feed.) I want to rate my channels. I want to make "playlists" (ie. categories), and "smart playlists" (ie. active, persistent searches). I'd also like to synch my subscriptions and things with my website and.. hmm, well, no one's made an iPod analogue for news aggregation/reading yet. ...or have they?
So far, I've been sticking with AmphetaDesk and still using my crappy old bloated hog of an Ampheta outline template. (And I see that someone else has just mentioned it today. I really need to revisit it and make a bunch of sorely-needed improvements, space savings, and updates.)
I started playing with NetNewsWire a bit, and I think it's pretty keen. But, I don't have the source to tinker with, so I'm less interested. (No slight on the app, or the developer. You might love it. I'm just more D.I.Y.) So I was starting to think of coding up my own work-a-like in Java, but then I start thinking further: I don't think I can really learn to love a native-GUI web news aggregator, period. I think it's got to be web-based in some way. It's about context switching, in a way I think. If RSS channels had all of the content from the sites they describe, then maybe I could just twiddle around in the UI of the app... but I always end up in the browser. So... Why ever leave it?
So, back to Ampheta I go, I think. I just need to start hanging out on IRC more again. I keep doing vanishing acts.
[ ... 352 words ... ]
-
Yes, I'm still out here. Busy and occupied and A.W.O.L. from most of the communities in which I was squatting this Spring, but I want to wander back in. Maybe I can consider this a Summer break of sorts, assuming that I rejoin with feeling when the weather gets colder. Is this the experience for anyone else around here? That your play tech projects suffer in nicer seasons and weather, and your blogging frequency drops way down? I think this may be my cycle.
[ ... 85 words ... ]
-
Okay, I do this in what seems like an annual cycle: Juggle hardware and operating systems around on my home PCs. The tool I always seem to lose and need again is something to safely allow me to resize and move hard drive partitions, whether they be Linux or Windows. Has anyone finally made a reliable Open Source tool for this yet?
Generally, I reach for Partition Magic for the job. It's never failed for me. But, generally, I need to buy a new copy of it every time I get around to using it again. I mean, who really needs to repartition a drive more than a few times in a year? (Okay, you can put your hand down. You need to get out more.)
Well, this time it looks like I need it again. I have v6.0, which apparently makes WinXP choke and drown in its own vomit. I don't really want to buy Partition Magic again. Could I just rent your copy for about $10-20 or so? I've considered wiping WinXP and re-installing Win98 just because things seemed easier then. It just seems like this is such a common task when you're screwing with Linux that this would be Done.
Hmm, maybe I'll just pay for the fine product again. :) It does work flawlessly for me, even if I only do use it once.
[ ... 433 words ... ]
-
Okay, so I've seen my first spam via my referrers. Checked my referrers RSS feed this morning to see the following pages in the list:
http://www.voodoomachine.com/sexenhancer.html
http://www.voodoomachine.com/linkoftheweek.html
http://www.voodoomachine.com/druglinks.html
http://www.voodoomachine.com/awards.html
Not linking directly to these pages, out of some vague sense of "Don't encourage 'em." The pages don't really link to me either, of course, they're all just pointers to an ad-ware site for this AMAZING DEVICE that does everything from making sex feel better, helping you study, aiding in sleep... it even makes DRUGS better! Wow, this site is so amazingly derivative of the classic snake oil pitch, with a touch of the modern and dot-com-days (ie. Works with DRUGS! Click our FAQ link!). But I can still hear the wheels on the wagon squeaking as it trundles through town.
I'm just trying to imagine by what criteria they picked my site to spam - oh, that's right: there are no criteria. They don't even know who I am.
On one hand, I'm annoyed that whenever I get back around to working on referrers, I'll need to add a blacklist feature (shared / collaborative blacklists?). On the other hand, I'm annoyed at the owners of that site, and even more annoyed at the 0.01% or so of their targets who'll BUY NOW. And then, on another hand still (I have many hands in the blogosphere), I'm amused by the whole thing: it's almost self-parodying in these post-Cluetrain days.
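That blacklist feature, when I do get around to it, could start as simple as this sketch (the filename is made up):

```perl
# First cut at a referrer blacklist: drop referrers matching any pattern in
# a blacklist file before they ever reach the database. The filename is a
# made-up placeholder; entries are treated as plain substrings.
use strict;

my @blacklist = do {
    open my $fh, '<', 'referrer-blacklist.txt' or die "no blacklist: $!";
    map { chomp; qr/\Q$_\E/i } grep { /\S/ } <$fh>;
};

sub referrer_ok {
    my ($url) = @_;
    for my $pat (@blacklist) {
        return 0 if $url =~ $pat;
    }
    return 1;
}

# e.g. with 'voodoomachine.com' in the file, those four referrers above
# would never make it into the recent-referrers feed.
```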
Oh well, back to work...
[ ... 445 words ... ]
-
So we've managed to get the go ahead at work on what seems like it should be a nightmare project: We're going to re-examine, re-design, and re-implement our in-house web app platform, the Toybox. Not only that, but we're going to switch languages, from Perl to Java. It's got me immensely excited, though this may be naive of me.
It's been years since I was last knee deep in Java, and years that I've spent up to my ears in Perl. I'd almost forgotten how much of Java I'd reinvented to make my ideas work in Perl. (This may not be so much an indictment of Perl as of the way I think.) And the last time I worked seriously in Java, there were no IDEs yet, so starting to work with NetBeans or even Project Builder under OS X for Java work is a dream. I love using NetBeans in particular, occasional hiccups aside. Besides all the obvious integration of build, test, run, debug, etc, I love being able to highlight a keyword or class name and pop up the docs in an internal HTML browser. I love that it makes suggested changes to classes when I change an interface they implement.
Yeah, none of this is news to most of the world, but I've been steadfastly sticking to shells and bare emacs or pico for my editing. Maybe a web browser handy for docs. I haven't worked very much with IDEs these past years, since a lot of them just got in my way. Or at least, with hubris, I thought that they did. Then again, I don't see very many equivalent tools for a language as free-form and multiple choice as Perl.
And, though I miss CPAN, I'm loving resources like the Jakarta project over at Apache. Again, not news, but new to me. I feel like a Rip Van Winkle of Java over here, since my last real work in Java was when the API was in its 1.0 days, Servlets were this neat thing for the Java Web Server (now at the end of its life), and the dot-com boom was just starting to stretch its wings.
Now, I haven't been completely oblivious to Java over this time. I've poked at it, and played with a few things from time to time to at least stay somewhat current, and I've tried to vaguely keep up with things a bit. I have an overall sense of what does what and where to find what, but really getting it under my fingernails again now is a different experience.
[ ... 921 words ... ]
-
Whew, so where have I been this month? I didn't realize it, but here I am in the third week of July having only made 3 posts to this site. Well, I've been busy at work and busy at life, and without much time for the free time hacking I've wanted this site to be about. Maybe it's Summer - I'm sure once Fall and Winter hit, I'll be back here jabbering your virtual ears off.
Anyway, I've got a few things I can jabber on about today, so I think I'll try to compose a few posts.
[ ... 99 words ... ]
-
Ack! So I was trying to switch my PairNIC-hosted domain over to DynDNS's Custom DNS service last week. I thought all was well, since I followed the directions exactly. As it was, it really only consisted of "Set your domain's nameservers to ns[1-5].mydyndns.org". So I did that, then left for a few days. Come back and I see that it failed miserably, and took 36 hours or so to switch back to ZoneEdit. Grr. The only thing I noticed was that ?PairNIC's control panel appeared to randomize the order of the nameserver addresses I entered, and ?DynDNS had a note in the docs stating that I needed to have only ?DynDNS nameservers listed in my record and no mixing with other DNS hosts. Could it be that ?DynDNS is picky about the order in which my record lists nameservers?
Grr. Well, I'm back on the air. I just hope not too many of you out there with news aggregators and auto-unsub-on-error have had me slide off your lists.
[ ... 269 words ... ]
-
So last month, the crew at LiveJournal finally fixed RSS feeds on all journals. This month, LiveJournal becomes a centralized news aggregator. Plenty of paths in and out of the LJ "gated community" now to wire everyone up into blogspace at large. This kind of tweak-by-tweak improvement is one reason why I stick around LiveJournal. (I just hope that they're polite about it and periodically poll RSS feeds sparsely for the entire userbase. :) I assume they're smart enough to figure that out.)
Need to get back to that LJ for K-Logs project...
[ ... 207 words ... ]
-
A friend of mine on LiveJournal draws some interesting parallels between the intertextual relationships connecting pieces of Heian literature and the links connecting web pages: "Large amounts of Heian poetry have survived, and scholars generally know what's what when it comes to references from one poem to another. These days, given that the average person (probably even in Japan) is no longer familiar with the whole of the poetic tradition Heian poets were writing in, the poems can still be read. It just requires lots of footnotes that cite the poems that the original author was referring to, and the patience to actually read the footnotes."
Some of what we've put on the internet will almost certainly survive us. But will it still be readable when all the links are long dead?
[ ... 131 words ... ]
-
Okay, I've only met the people in this movie once, maybe twice, but it made me wet myself. Run, do not walk, to see:
YAPC 2002: THE MOVIE
Damn. One of these years, I have to make it to a YAPC... (like I've been saying on IRC for the past 4 years)
[ ... 53 words ... ]
-
Okay, so NetSol's interface for changing the DNS servers responsible for decafbad.com has been coming up with an error page for me for 3 days now. I just donated to DynDns.org so I could get some dynamic DNS love going on with some decafbad.com subdomains, but no dice on switching the domain to point at their servers when my registrar's site is broken.
So, I'm going to VerisignOff. I hopped over to PairNIC and initiated a transfer. Let's see how much hassle this brings me. :)
[ ... 146 words ... ]
-
Though I'm far, far from being an "artiste" of a photographer, I want to do neat things with my camera. I want to capture some moments, some feelings, try some odd things. It's like hacking with light. I see the kinds of things done with the Lomo cameras, warm and in-the-moment images. Sometimes they're blurry, but that just lends them a rough handmade voice. I read the 10 Golden Rules of Lomography and think, yeah, that's what I want to do with this new gadget. But I hope that the lens and the digital don't kill too much of the warmth. There's a certain style that the Lomo gives to things that I doubt my gadget will get, but I want to capture the moments now that I have something small enough to be omnipresent with me.
Hopefully this doesn't just degrade into 101 pictures of my cats. :) We'll see.
[ ... 152 words ... ]
-
Cool, looks like TrackBack is working and starting to catch on this morning. I caught a nice compliment about my mutant outline skin for AmphetaDesk. Must get back around to the next iteration of that thing, add some polish and some new features.
As for the MT upgrade from last night... looks like it nuked my post categories. Grr. That kind of blows chunks. Might have to see what that's all about, since I sagely backed everything up first. Another thing is that I'd like to look more into this TrackBack feature and the concept of TrackBack URLs and pings. Namely: It's a pain in the ass to cut & paste the TB URL, and the bookmarklet hasn't worked for me yet.
I notice that there's some RDF embedded in my posts now, describing the TrackBack link. Unfortunately, I don't see where the RDF explains that it is, in fact, about a TrackBack link. Though I haven't dug into the workings yet, I assume that this is how TrackBack-enabled pages discover info about pages from which they've been pinged.
What would be nice is if the RDF also identifies itself as TrackBack info. This way, I could hack my Movable Type to pluck URLs out of a post I'm publishing, go take a visit to each URL and try to harvest the RDF, and then automatically discover the ping URL and ping it. All I'd need to do to activate the process is include a link to a page elsewhere that contains RDF hints on how/where to ping.
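Sketched out, the harvest-and-ping idea looks something like this. The parameter set is just my reading of how MT's pings seem to work, so treat it as approximate:

```perl
# A sketch of the "harvest the RDF and ping it" idea: fetch each URL
# mentioned in a post, look for a trackback:ping attribute in the embedded
# RDF, and POST a ping to it. The parameter names are my reading of how MT's
# TrackBack behaves, so treat them as approximate.
use strict;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 15 );

sub ping_if_trackbackable {
    my (%args) = @_;            # target, my_url, title, excerpt, blog_name
    my $res = $ua->get($args{target});
    return unless $res->is_success;

    # Dig the ping URL out of the embedded RDF, if any.
    my ($ping_url) = $res->content =~ /trackback:ping="([^"]+)"/;
    return unless $ping_url;

    return $ua->post($ping_url, {
        url       => $args{my_url},
        title     => $args{title},
        excerpt   => $args{excerpt},
        blog_name => $args{blog_name},
    });
}

# For every link found in a just-published entry:
ping_if_trackbackable(
    target    => 'http://www.example.com/some/post.html',
    my_url    => 'http://www.decafbad.com/blog/some-entry',
    title     => 'My entry title',
    excerpt   => 'The first few sentences of my entry...',
    blog_name => '0xDECAFBAD',
);
```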
Always looking to get more for less effort. :)
[ ... 420 words ... ]
-
Checking my RSS feeds, I see MovableType 2.2 has been released, and it comes with support for a feature called TrackBack. Too sleepy to fully investigate and play right now, but it looks like another very cool push toward interconnectivity between blogs.
[ ... 43 words ... ]
-
So I broke down finally, and bought a digital camera. The excuse ends up being that I'll be needing it on my trip to Toronto this weekend. I got a Canon ?PowerShot S200, a 128MB CF card, the belt case, and two extra batteries. That should be enough to cover a rampant nerd photography spree.
I love this little thing, and I've been going insane with it. After some twiddling and organizing in iPhoto, I uploaded a metric ton of photos and grabbed Gallery from ?SourceForge to show them off in the new 0xDECAFBAD Gallery.
Then, I stuck a lil PHP block that rotates random thumbnail images from the gallery into my busy and crowded right-side bar. I can hear the woefully overburdened design of this site creaking and about to fall over. So, next project in a week or two is to tear this place apart and make it a bit easier on the eyes.
That is, after I take a few hundred more pictures. Been thinking of joining one of those online DP clubs to play around in a photo scavenger hunt.
More soon.
[ ... 187 words ... ]
-
Not much out of me yet, busy with many things. But, you can look for the warchalk glyphs out in front of my apartment later today.
[ ... 27 words ... ]
-
From the recently-returned Dave: "I am now an ex-smoker. I want to say that in public. Of course I still really want to smoke."
Me too, since January 2002. As Dave explains, a cigarette had a very deep, integral place in my problem-solving process, too. I also had a spot for a cigarette in my avoiding-violence-against-coworkers process. But, many panic attacks and insanity-verge moments later, I think I'm doing quite well. I have started strangulating more co-workers, but my problem-solving skills have become more productive.
[ ... 122 words ... ]
-
Here's a crazy idea I'm considering: Deploy the LiveJournal source as our company-wide k-logging service. Definitely make some cosmetic tweaks, possibly make some backend tweaks, but it might just be crazy enough to work. This just struck me today, remembering back when I'd last thought about this.
LiveJournal can support a journal for each employee. It has communities, which I can use for projects or special topics, or other forms of narrative pooling. It has the concept of friends and friends pages, or in other words: subscriptions and aggregators. There are friends groups, which we could use as aggregator feed groups. And the laziness threshold is demonstrably low, seeing the ease and frequency with which 14-year-olds spew trash around their journals. :)
And I'm thinking that anything missing from the LJ source that I might want will be easier to add in than building the whole thing up from scratch. And it's completely Open Source and free, though I might be able to convince some donated funds out of this place if the experiment is successful. Hell, if not, I'd consider grabbing a permanent account the next time they offer them.
Hmm. I still, someday, would like to see a decentralized, server-less desktop answer to LJ, but this might work for now. Radio + Frontier might get there someday, but the idea of K-Logs has such tenuous support in my organization that any cost whatsoever would kill it.
Must play with this idea. Must insinuate it like I did with the wiki. It was accepted and made policy almost before anyone knew what hit them. That was fun. :)
[ ... 270 words ... ]
-
In case anyone's been wondering where I've been, here's the story: Clawing my way back to the land of the living
Briefly: Been bedridden with something nasty. Too wiped out to do much besides read & stare. Working on getting healthy and productive again.
Be back soon.
[ ... 48 words ... ]
-
Now that I have this recent RSS readers list in RSS (src), I can start to see who's just pulled me into their neighborhood. (It even catches the userWeblog parameter from Radio subscribers!)
And I think I see that I've just had Punkey, Sean Porter, and Nick Nichols join the audience. Oh, and I see you've moved a bit, Mr. Ruby.
[ ... 81 words ... ]
-
Guess what? AmphetaDesk v0.93 is unleashed upon the world - so sing and rejoice, fortune is smiling upon you. I'm proud to say that I built the OS X faceplate for it, and hopefully the toxic chemicals I used in its production don't cause any lasting effects on you, your children, or your pets. So, go get it you infovore freaks!
[ ... 62 words ... ]
-
More on aggregators and extending the concept further into the desktop: Ziv writes about aggregation as a tool for the enterprise. This is starting to cross over into the digital dashboard buzzword area, but I'm thinking that what we're all converging on will be the real deal.
I had to giggle though, because the Sideshow project from Microsoft Research looks like my OS X dock on steroids. Even down to the vertical arrangement, mail indicator, weather reports, IM buddy display (no faces, though), and a few other information readouts. Of course, my dock doesn't have the neato pop-out in-context mini-browser windows that connect to the icons, but that could be a keen OS X project. Have to think about that.
[ ... 121 words ... ]
-
Joe Gregorio: "...I didn't really see a need for CSS when I first learned HTML, but as I maintained some web sites over time and bumped my head enough times on having mark-up embedded in my content which made it difficult, if not impossible, to deploy a new look and feel to site without having to edit every page, I suddenly had a real need for CSS. I still have a lot to learn, still more head bumping to do, and I want to have RDF, SVG, SMIL, and all the rest in my pocket when those days come."
This, along with his reference to Joel's piece, "Can't Understand It? Don't Worry", strikes a familiar chord with me. So does what Dave wrote on XML and practicians vs theoreticians.
The attitude I try to lead with when playing with all these new toys is that of a beginner mind. Or, at least, that's my attempted tack lately. On the one hand, some things might look hopelessly convoluted and needlessly complicated to me. On the other hand, there are people who are both more expert than me and who have worked longer than me on those things. More than once I've decided I could do something better than someone else, only to discover that I was just following in that someone's footsteps down the line and rediscovering all the pitfalls. But sometimes, I do demonstrate that I'm smart too, and figure out something new.
So, I try to reserve judgement until I've grokked the thing. Until I've soaked in it, played with it, scanned through the squabbles surrounding it, caught up with the Story So Far. Sometimes I learn what's what. Sometimes I don't. And sometimes, I come to the conclusion that the thing I'm looking at really is a mess, and I stay away from it.
My particular angle at the moment involves XSLT and SVG. We want to put together a reporting and metrics package in-house at work, and I'm thinking that XML/XSLT/SVG may be a good combination for charting. I understand SVG - or, rather, I get it to the extent that learning it appears obviously useful. On the other hand, XSLT still has me scratching my head a bit.
An evangelist for XSLT at work here was trying to convince me that we could have all the HTML guys switch over from our current template technology, which is basically text or HTML with a small procedural scripting language for presentation work. At present, the HTML guys are used to their process. They've been doing it for years. And as far as I know, they hardly ever touch the functional elements of the templates like the table foreach loops and such. I have a sense that learning both XSLT and LISP will feel wholesome to me, but I can't see the HTML guys doing it. I mean... recursion?! What? I just want to make a table and have the rows fill in with data!
So maybe the problem with this is that this is not precisely the perfect application of XSLT, though it seems to be. With our current template toolkit, the logic of the app passes a perl data structure (a hash) to the engine, which then processes a template which refers to the keys of that structure. I could easily represent that data structure in XML, with structure and lists and branches and all. And I could see conceptually where an XSLT stylesheet could replace our templates.
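To make that concrete, here's a toy version in Perl: the sort of data we'd normally hand to the template engine, serialized as XML and run through a small stylesheet. The module choices are just what I'd reach for in Perl; at work this would be the Java/Xalan side of things:

```perl
# Toy version of the idea above: take the kind of data we'd normally pass to
# the template engine, serialize it as XML, and let a small XSLT stylesheet
# render the table. XML::LibXML/XML::LibXSLT are just my Perl-side choices.
use strict;
use XML::LibXML;
use XML::LibXSLT;

my $xml = <<'XML';
<report>
  <row><name>widgets</name><hits>42</hits></row>
  <row><name>sprockets</name><hits>17</hits></row>
</report>
XML

my $xsl = <<'XSL';
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/report">
    <table>
      <xsl:for-each select="row">
        <tr>
          <td><xsl:value-of select="name"/></td>
          <td><xsl:value-of select="hits"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
XSL

my $parser     = XML::LibXML->new;
my $stylesheet = XML::LibXSLT->new->parse_stylesheet( $parser->parse_string($xsl) );
my $result     = $stylesheet->transform( $parser->parse_string($xml) );
print $stylesheet->output_string($result);
```

For a plain table like that, it's not obviously better than the template kit I already know; the interesting part is whether the same stylesheet machinery pays off once the output gets more structured.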
But forget about our HTML guys for the minute. What about me? Like I said, I think it will be good for me to learn XSLT. But I keep catching myself thinking: Why not just use a full scripting language? Why not just use the template kit I know? I'm thinking I don't grok XSL well enough yet, though a part of me is grumbling skeptically at it.
I vaguely see some magic in it, though. Like, XSLT contains instructions for transforming XML, yet it itself is XML. XSLT uses XPath, which is vaguely starting to take shape in my head as something very powerful to replace many loops and frustrations in my scripting language XML processing. And I keep seeing suggestions that XSLT can be seen as the SQL of XML, and I can imagine those uses. But then, I see an article on Two-stage recursive algorithms in XSLT, and I think, "All this, just to write a loop to calculate a sum?!"
But I'm thinking part of this, too, is me sorting out "Daily Intended Use" versus "Freakish Abuse of All that is Good and Holy". Maybe when it comes down to sums, I'll just do that in Perl.
Hmm. Back to drawing barcharts...
[ ... 1006 words ... ]
-
Oh, and after downloading Mozilla 1.1a last night, I was pleased to see that the navigation toolbar was back. And then, just now, I was even more pleased to see that it picked up the LINK to my RSS feed on my front page.
[ ... 45 words ... ]
-
Ziv Caspi of Y. B. Normal thinks that "Aggregators should bring more, not less, information", and I'm inclined to agree. Also, Adam wants to take the aggregator even further: "...I think the entire computer interface should be overhauled to be more aggregator like, events streaming by."
Along these lines, has anyone played with Scopeware, built from the research of David Gelernter? I think it used to be called "Lifestreams" before it was made into a product. I first heard of this work back when I read Machine Beauty: Elegance and the Heart of Technology, and was very interested but haven't seen anything other than screenshots and mock-ups. From what I can tell, this interface is literally "events [and documents] streaming by".
The little I've picked up from Gelernter's work, along with a few other influences (ie. David Brin's Earth), has really stuck the picture in my head of how I want my future info-sieve to work.
Along with all of this, something else I was vaguely considering adding to my AmphetaDesk outliner skin was an IFRAME attached to RSS items under a collapsed node. When the node gets expanded, the IFRAME is loaded up with the item's link. The notion is that you do all your browsing from your RSS feeds in a single window and within the same context. Neat idea, I think, giving a hook to pull more information into the aggregator upon request. But of course if I do it, it'll likely be messy as hell for a zillion obvious reasons and catch my iBook on fire.
Probably have more to ramble on about this later, but it's time for the first meeting of the day...
[ ... 281 words ... ]
-
Taking a stab at learning XSLT. TestXSLT from Marc Liyanage is making me happy and helping, for many of the same reasons I enjoy learning Python, and have been somewhat enjoying learning Common LISP again.
[ ... 36 words ... ]
-
Okay, I've lost count of how many times I've seen a referer like this: http://google.yahoo.com/bin/query?p=Tiffany+Playboy+Pictures&hc=0&hs=0 How the hell did I become a destination for the web's nekkid Tiffany needs? They're not here. Seriously. Otherwise I'd be charging, at least $14.95/month.
From what I can see by chasing the Google referers, it's my old stats page that got indexed, and that's gone now! I can't remember ever having mentioned the name "Tiffany" on this site (until now), and doing a complete text grep through every file on my server yields no hits.
So, somehow the rumors of Tiffany pics over here got started, and then Google must've initially gotten the notion to index my stats pages when I mentioned them back in April. Indexing my stats page further fed Google from Google's own keyword referers already present in the stats page. Gah. But where's the initial Tiffany-link event?
This is all highly confusing, yet amusing as hell. I wonder if I can work this somehow to further gaslight Google, making me a definitive source for other things? Say, "wil wheaton monkey sex" :) Naw, I have better things to do.
Okay, back to work.
[ ... 195 words ... ]
-
Just wasted a good 20-30 minutes having been sucked into the IRC Quotes Database that Adam mentioned. It made me spray coffee out my nose. So, rather than visit it everyday, I made RSS feeds from it, scraped by RssDistiller's enclosePat (need to implement that in Perl sometime). You can find a feed for the top 25, 25 latest, or just get a random grab bag of fun.
update: Ack. Looks like my scraping isn't pulling in the actual quotes. Back to the drawing board. :(
update 2: I think they should all be fixed now :) I missed a para tag in the source.
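If I ever do redo that enclosePat idea in Perl, it's basically a per-site regex with one capture group per field. A sketch, with a stand-in pattern rather than the real quote-database markup:

```perl
# Sketch of the enclosure-pattern idea in Perl: a per-site regex with one
# capture group per field. The URL and pattern below are stand-ins, not the
# real quote-database markup.
use strict;
use LWP::Simple qw(get);

my $url  = 'http://www.example.com/quotes/top25.html';
my $page = get($url) or die "couldn't fetch $url";

# One match per quote: capture (id, body). Tune per site, re-tune when it breaks.
my $enclose_pat = qr{<a href="\?(\d+)">.*?</a>.*?<p>(.*?)</p>}s;

while ( $page =~ /$enclose_pat/g ) {
    my ($id, $body) = ($1, $2);
    print "item $id: $body\n";    # ...or feed these into RSS items
}
```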
[ ... 222 words ... ]
-
Another thing to think about again: Whitelist-based spam filtering. The spam is just getting heavier, raunchier, and less coherent these days. I'd mused about it awhile ago, along with looking at an application of RDF to share whitelists between trusted people.
Then I see this article up at osOpinion.com on the subject (calling it "reverse filtering"). I hadn't thought of it before, but I could probably set up a quick auto-responder for unknown addresses, asking politely for a response requiring human thought, or a click by a person at a URL somewhere, to get them auto-added to my whitelist. If the laziness/rudeness factor toward friends and acquaintances, current and potential, doesn't outweigh the benefits, this might be a very good solution. I wouldn't throw mail away outright, but it would be put in a rotating bin that I might look at every day or so, and wipe every week or so.
That osOpinion.com article wraps up with a little bit of a carrot / challenge for a business plan around a whitelist / reverse-filtering scheme for mail. I wonder how much people would pay for it, and whether people would be put off by an auto-response asking for proof of humanity? If it would go over well, I'd love to see pobox.com implement something like this. (Oh, have I ever mentioned that I love pobox.com, and think you should all sign up there?)
In the end, this seems pretty amusing to me. Humans sending robots to mail humans. Humans setting up robots to intercept robots. Humans sending mail to humans again. Oh I'm sure the spammers will try harder with their robots, but eventually the bastards just have to give up. Maybe not. But at least I can ignore them.
[ ... 356 words ... ]
-
Reading Dave's in-progress What is a News Aggregator?: A news aggregator is a piece of software that periodically reads a set of news sources, in one of several XML-based formats, finds the new bits, and displays them in reverse-chronological order on a single page.
...and the rest leads from there. First thing: I think this is an unfortunately limited definition. It specifies the particular interface by which Radio presents information. And, as a comment on one of my first experiments in tweaking AmphetaDesk's interface attests (no author link): "I've been wanting to use an aggregator since I first heard of them (RU7) but have always felt that because poor information architecture/presentation they tended to make tracking a large number of sites harder rather than easier."
I didn't expect my first attempt to be called "Brilliant! Lovely! Perfect!", but I did think that I was on to something. Simply displaying the new bits in reverse-chronological order is too limiting - it's only one attempted solution at the problem of aggregation. My solution isn't an ultimate one either.
Aggregators desperately need to grow toward more flexibility and scan-ability. A few things I'd really like to see addressed:
* De-emphasis of seen & older items from sites, but not complete hiding. Context between entries on weblogs is important.
* Optional grouping of items from the same or similar weblogs. Context between entries, and between blogs, is important.
* Emphasis of newer items, tracking the time line and signalling attention to changes. Radio does this, but mostly to the exclusion of other concerns.
* Preventing sites with few updates from getting lost in a wash of frequently updating sites. Some of the best sites may update once every few days with something worth reading, but simple reverse-chronological order pushes the quiet sites out in the maelstrom.
There's more I've been musing about, but I can't remember more at the moment. I've tried to do a few of these things with my tweak to Ampheta: varied (hopefully not obtrusive) font size & weight, dependent on item age; maintaining grouping and order of items within RSS feeds; showing enough information for a visual scan, hiding further details, but making details available within a click or two (I love outlines). I wanted to hit the page-down button less, but it's more than that. I want my eyes to move slow on the first few items of a channel, and then slide down off the rest, unless I intentionally want to be caught.
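The age-based emphasis, for example, reduces to one little function. The breakpoints below are just the kind of numbers I fiddle with in the template, not anything canonical:

```perl
# The age-to-emphasis trick from the outline skin, reduced to one function:
# newer items get bigger, bolder text. The breakpoints are arbitrary choices
# for this sketch, not the template's actual values.
use strict;

sub style_for_age {
    my ($age_in_hours) = @_;
    return 'font-size: 1.2em; font-weight: bold'   if $age_in_hours < 12;
    return 'font-size: 1.0em; font-weight: bold'   if $age_in_hours < 48;
    return 'font-size: 0.9em; font-weight: normal' if $age_in_hours < 96;
    return 'font-size: 0.8em; font-weight: normal; color: #888';
}

# e.g. wrap each item title in a span styled by style_for_age($hours_old)
```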
So, while Dave's working on defining the News Aggregator, I think it's a good time to redefine it a bit while he's at it.
[ ... 783 words ... ]
-
Okay, starting to poke at my referrer scripts again and produced two new things: Recent Referrers in RSS (src) and RSS Feed Readers (src). I simplified my database a bit, collapsing multiple rows into single rows with a hit count column. Also squashed a few stupid bugs that had cropped up in the Great Site Move a month or so ago. Realized that the counts were insanely wrong, sometimes showing a count as high as 15 for a post that might have only 3 back links.
Likely the referers in RSS will be of interest only to me as I obsessively watch my site (though I do subscribe to Disenchanted's Recent Inbound, since it gives me a stream of new sites to visit). But, the RSS feed readers list is a few steps closer to the friends-of list I want to move from LiveJournal out into the blogosphere proper.
One of the next things I want to do is start cobbling together a fairly modular & general URL investigator - that is, give it a URL, and have it try to track down title, author, contents page, RSS feed, location, and any other metadata that comes up. I could then use this to flesh out all the links everywhere, from backlinks to the friends-of list. Basically what many other people have been doing for different semantic aspects, but all rolled into one agent.
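A first, dumb cut at that investigator could just be an LWP fetch and a couple of regexes. A sketch (it punts on encodings, relative URLs, and every other hard part):

```perl
#!/usr/bin/perl -w
use strict;
use LWP::Simple qw(get);

# Given a URL, try to dig up a page title and an RSS feed via the
# LINK-element autodiscovery convention. Crude, but a start.
sub investigate_url {
    my $url  = shift;
    my $html = get($url) or return;

    my %meta = (url => $url);
    ($meta{title}) = $html =~ m{<title[^>]*>\s*(.*?)\s*</title>}is;
    ($meta{rss})   = $html =~
        m{<link[^>]+type=["']application/rss\+xml["'][^>]*href=["']([^"']+)["']}is;

    return \%meta;
}

if (my $info = investigate_url('http://www.decafbad.com/')) {
    print "$_: " . (defined $info->{$_} ? $info->{$_} : '(not found)') . "\n"
        for sort keys %$info;
}
```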
Along with that, I want to implement some manual annotation of sites known by my site. Thinking that, between per-post backlinks and RSS reader links, I could eventually build a decent database of metadata here. With that, all kinds of nifty things could happen...
Okay, time for bed. Starting to ramble, or at least be tempted to do so.
[ ... 367 words ... ]
-
Awhile back, I was musing about linksback, robots, and the web semantic - and to my dumb idea about scraping metadata from HTML comments, Bill Seitz recommended I put my metadata into meta tags in the HTML header. Duh. So today, I pick up this post by Bill Kearney on "Real Neighborhoods" via Jenny Levine. (Man do I enjoy being fed by her site! She gives me a whole new respect for librarians.)
So it's likely not news to you, and there's probably been talk about it already, but I just caught on to Syndic8.com's metadata support. Brilliant. I need to dig into the Dublin Core elements more, as well as other attributes to put in meta tags, but I think this would be another great addition to weblog software out-of-the-box.
Lower the laziness threshold for managing this meta data, increase the ubiquity and coverage, and spread another layer of tasty goodness on the blogosphere.
What's next?
[ ... 158 words ... ]
-
Now, after quickly hacking outlines into AmphetaDesk, I see Adam Wendt playing with RSS and XSLT. Seems like a much cleaner way to do it, and really makes me want to play more with XSLT. The AmphetaDesk template hack I did is still horribly inelegant, doesn't leverage the DOM, and could use some tidying. Would be neat to see XSLT in Ampheta, but that might be a bit much right now.
[ ... 73 words ... ]
-
This past week, I've been playing with the new AmphetaDesk checked out from CVS. Morbus Iff gets closer to a release, and I threw together a Cocoa wrapper and outline skin for Ampheta.
I've been bouncing between Radio, blagg, and AmphetaDesk for my reading this week, and I'm leaning more and more toward Ampheta. Radio's a powerhouse giant, blagg is a tiny gem, but AmphetaDesk is starting to look just right.
Especially after I hacked together an outline-style template for it today. You can grab a copy of it over here: amphy-outline-skin.tar.gz
Back up your AmphetaDesk/templates/default directory and replace it with the one in my tarball. It's still nasty, probably horribly buggy and inelegant, and seems to vaguely work under Mozilla 1.0 and IE 6.0. But amongst the skin's features are these things:
- All channels, items of channels, and descriptions of items are arranged in a tree of expandable/collapsible branches.
- Links are provided to expand/collapse all channels, and all items in a given channel.
- If more than 10 items appear in a channel, the rest are hidden below a collapsed branch, but still available.
- From newest item to oldest, the font changes from large and bold to small and normal.
Check it out, let me know what you think. It's based on Marc Barrot's activeRenderer code, and inspired by DJ Adams' application to blagg of the outline presentation.
[ ... 403 words ... ]
-
Science Toys You Can Make With Your Kids: Whenever the end of the world comes, and we're all reduced to extras in a real-life Mad Max movie, I want to make sure I have the entire contents of that site printed out and stuffed into the front of the high school chemistry textbook I kept. I'll offer my services as "wizard" to the warlord of the local city block. (Found via Backup Brain)
[ ... 73 words ... ]
-
Okay, so now we've got a bit further toward our News Aggregators leaving vapor trails. Thanks, Morbus. Thanks, Dave. There's still some more work that needs to be done, though. In specific, how do I get to you via your trail? Or, to be even more specific: how do my agents get to you?
Sure, I can look in my referers now, and filter on my RSS feed to see footprints. And for Radio users, I can pluck out the userWeblog=... param added to Radio aggregator referers. For others, like the page I set up, or Adam set up, or Jeff set up, I can try to assume that the URL leads... somewhere.
But, in thinking about this further, none of this actually fulfills my wish. I originally wanted to see these referers lead directly to a reader's blog. They don't. But, that's fine: I kind of like the idea of the thank you / hello / I'm reading your page. However, I still want to find you.
Or, rather, as I alluded to in the first paragraph, I want my agents to find you. And my agents are only semi-intelligent. So how about we set a standard by which bots can autodiscover your weblog, home page, whatever:
- If you don't care to put up a special page to point to as your referer when aggregating news, then point a URL somewhere, anywhere, but include a URL to your weblog in a query parameter named "userWeblog". This covers an already de facto standard set by Radio.
- If you do care to set up an acknowledgement page for your readers, include a LINK tag in the page's HEAD pointing to the home page of your site as the table of contents.
In this way, rock bangers cobbling together their own blog spiders can tell them where to go and for what to look. Get this meme spread, and we'll be seeing more community crawling bots very soon now. (A rough sketch of how a bot might handle both cases follows below.)
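Here's that sketch: a bot handed an aggregator referrer URL tries the userWeblog parameter first, then falls back to fetching the page and looking for the LINK element. (The rel="toc" value is just a stand-in until we settle on something.)

```perl
#!/usr/bin/perl -w
use strict;
use LWP::Simple qw(get);
use URI;

# Given an aggregator referrer URL, try to find the reader's weblog.
# Case 1: a userWeblog=... query parameter (the Radio convention).
# Case 2: fetch the page and look for a LINK element pointing home.
# (The rel="toc" value here is made up for the sake of the sketch.)
sub find_weblog {
    my $referer = shift;
    my %query   = URI->new($referer)->query_form;
    return $query{userWeblog} if $query{userWeblog};

    my $html = get($referer) or return;
    my ($home) = $html =~
        m{<link[^>]+rel=["']toc["'][^>]*href=["']([^"']+)["']}is;
    return $home;
}

print find_weblog('http://example.com/ack.html?userWeblog=http://w.example.org/'), "\n";
```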
In fact, I like this LINK tag now. I think it should be put on all of your weblog pages. Especially as I start thinking more about revisiting referers: A LINK back to your weblog home as the table of contents would allow me to maybe enrich my backlinks, especially knowing where your site root is. (If my eyes do not deceive, it looks like backlinks on Mark Pilgrim's blog do some RSS autodiscovery right now.)
Anyway, let me know what you think.
[ ... 575 words ... ]
-
With regard to some new things in Aggie, Chris Heschong writes: "If only Microsoft (or Mono) would get a .NET runtime out for MacOSX." I say, "Ditto." I'd like to play with Aggie, and .NET for that matter (evil empire notwithstanding), but I don't really use Windows on a daily basis anymore (evil empire withstanding). I have a box at home running WinXP, but I only really use it for games (less often these days), and for recording my shows. The rest of my daily use machines are either Linux or OS X. Hmph.
[ ... 214 words ... ]
-
Noticed Nicholas Riley's entry about switching IRC clients, and the screenshot reminded me that I needed to download Duality for Mac OS X again. I love that stripe-less SilverFox theme. Feels like a cool cloth on my eyes and even makes me think that my iBook is faster.
[ ... 112 words ... ]
-
Whoa. Rock on, Dave. My wish for blog URLs as referrers in Radio while gathering RSS channels is now the default setting: "A tiny change in Radio's aggregator makes referer logs more interesting. Please read this if you provide an RSS source for Radio users, and you watch your referer logs."
So what's next? :)
[ ... 130 words ... ]
-
Holy shit, it's done: Mozilla 1.0 Release Notes. No asymptotic versions (ie. 0.9, 0.99, 0.9998, 0.99999314), the fork has been stuck and the fat lady will dance, along with the skinny ones and others of all sizes.
[ ... 38 words ... ]
-
I've (not so) secretly replaced my aggregatorData.prefs.appSignatureUrl in Radio UserLand with this URL. Let's see if anyone notices. Watch your referrer logs for visits from my invisible (s)elves.
/me pops in his Vapor Trails CD, whistling...
[ ... 109 words ... ]
-
From Adam Wendt, in my referrers today: "Hey there! I'm reading your RSS feed!" This is precisely what I was wishing for, back when I said "I wish Radio sent me blog URLs as referers on news aggregator hits." Now, I just have to tweak the 3 news aggregators I hop between to supply something similar. Anyone want to jump on this bandwagon and help push it as fast as the recent RSS autodetection meme? :)
Catching up, I see that Adam wrote about what Jeff Cheney wrote about changing aggregatorData.prefs.appSignatureUrl in Radio UserLand to point to a custom page. Hey, Radio crew, how about making this a default, eh? Point that signature at my blog, or a canned custom page? Yay!
[ ... 124 words ... ]
-
I love it when LiveJournal gets some props, and via Dave no less: LiveJournal to support RSS and discovery. RSS was gone for awhile from journals, but now it's back and with autodiscovery link tags.
Aggregate me:
Autodiscover me: http://deus-x.livejournal.com
Radio subscribe me:
[ ... 44 words ... ]
-
So I see this referrer on one of my postings today: http://referers/are/easy/to/fake/
First reaction: giggle. Second reaction: "Well, duh." No one's ever claimed that they weren't easy to fake, break, subvert, or otherwise derail. It's just that the default, unmolested behavior of many browsers is, happily, to report from where they came to find your page. This is fun data to play with.
Taking this "statement" seriously for a second: One thing I intend on doing with my next iteration of referrer tracking, is to chase referrers and attempt to verify their existence while trying to harvest some metadata from the page and surrounding site. That would filter out some fakes and provide some context. This is still frustratable, though, because you could provide me with metadata stating "Metadata is easy to fake."
But, in my mind, the goal isn't precise, panopticonical monitoring of visitors' activity. The goal is to provide easy cow paths for cooperation in building a semantic web, planning later to bring in the paving crews.
So, sure, you can fake the data I'm trying to lift from your activities. That's your right. You can even choose not to give it to me. No skin off my back. The nature of the endeavor is cooperation, so it's as up to you as it is up to me. It's the wiki way, it's the way of civilization.
You can kick my house over, walk by, or help me lay bricks. In any case, the house will still get built as long as I still like building it.
[ ... 269 words ... ]
-
Ack. Must remember to close my blogchat window when I leave for the day, so that I don't come back to find people having been looking for me, seeing me in the room, and wondering why I'm so rude. :)
[ ... 41 words ... ]
-
Oh yeah, and I just updated the LINK element on my homepage, as per Mark's announced revision.
Cheers!
[ ... 19 words ... ]
-
I've talked briefly about the attempt I've made at segmenting my writing between here and my LiveJournal account, assuming that there would be overlapping yet different audiences for both. To be exact, I assume any readers who find me here are looking for my techie nerdery, while my LiveJournal readers are looking for something a bit more "human".
But, it's all me, and it's all of human endeavor. Should I segment, or try to blur the bounds more? For example, here's a bit of what I wrote in my LiveJournal today: First thing is on memories: they do lose their meaning. Sometimes the lesson you should have learned from them changes or is forgotten. Sometimes you remember them differently, maybe more glowingly. Everyone has his or her own Myth of the Golden Age, particularly with respect to the past.
At dinner this weekend with missadroit, I was slipping into some bemoanment toward my "lost college years" and my seeming inability to "recover" some of the wondrous things and times I had then. The friends, the illumination, the learning, the hope, the excitement. Problem is, though, all that's a myth I've created. While there were good things back in college, there were also shitty things. My college days were not a Golden Age. Just as she reminded me how pathetic it is to say, "High school was the best time of my life," treating my college days as my Golden Age is just as pathetic.
The thing I've not been facing is exactly what grlathena writes: "The future is very do-it-yourself."
The world didn't run out of wonder or friends or opportunities for fulfillment after I graduated, it just got a bit more miserly in just tossing them at me. See, life in school is still very scripted. There are many choices made in advance for you, based on only a handful from you. Out here in the "real world", things are much less scripted, and many fewer things are just given. It's very do-it-yourself.
I know all my fellow rock bangers are human beings, too. I just hesitate to waste their time with too much off the topic of rock banging and more on the topic of human beings. Should I overlap more? You tell me.
[ ... 375 words ... ]
-
I'm thinking of getting a digital camera. Price is a concern, but mostly my concern is small size and pocket-ability. I want this to sit in my jeans pocket, maybe my coat pocket, and take somewhat decent images. I'm not quite as concerned about super high-res and printable images as I am about capturing 50-100 decently bloggable images (~640x480 and smaller?). I want a device unobtrusive enough to keep with me at all times, so I can pull it out in the moment, and capture what I usually miss. And preferably without needing a Batman-like utility belt. I don't really like that style anymore.
Basically I want to start a bit of a personal photo journal blog. That's photo album enough for me.
So how about this camera to start?
Kodak EasyShare LS420
I've held this one in my hands, and it feels good and small, yet seems to more than satisfy the performance I want out of it. I've also looked at the Spyz, Cubik, Eyeplate, and one or two other tiny cameras, but they seem to be too underpowered for what I want, in terms of memory. And they also seem cheap. Then again, I haven't held any of those in my hands. And then again, again, cheap might be good if it's banging around in my pocket.
I'm sure I'm not the only person thinking of doing this, or already doing this. Anyone have some pointers for me?
[ ... 337 words ... ]
-
Sam Ruby wants to go Beyond Backlinks, and I'm right there with him. He writes about the various means we've tried so far to discover connections (ie. referrers and linksback and Jon's analysis of blogroll connections), and muses further. I love the idea of further automation in discovering surprising connections and automatically exploring other feeds, based on discovered connections.
A plug for LiveJournal: I love their user info pages. I've been idly musing for a while now on how one might decentralize this and extend it web-wide throughout blogspace. I love seeing the friends and friends-of lists, analogous to blogrolls and inverse-blogrolls. And, I really love the interests lists, since just by entering a catalog of phrases, you can see unexpected links to other people interested in the same things. Not quite correlations or deep analysis, but it helps.
But it's the decentralization that rubs. I could probably start a service that provides user info page workalikes to bloggers. I could offer it for free, but then I might get popular and have to pay more for my altruism than I can afford. (Sometimes I worry about BlogRolling.com.) I could offer it for a small fee, but then the service would probably never see widespread use. Were it decentralized, I could write some software, and others could pay their own way in server resources. More to think about here.
Also, if I can get time this weekend, there are a lot of parts of David Brin's novel, Earth, that I'd like to babble about. Reading it right now, and seeing that he wrote it just around 1990, I'm amazed at how fresh it still is. Sci-fi and speculative fiction rarely stand the test of years and unexpected advances, but a lot of the stuff in this book - particularly about the way in which people deal with information, how they discuss and create and manage it - seems to be happening right now.
Anyway, more soon.
[ ... 736 words ... ]
-
Found these hot little things via Phil Ringnalda and via Matt Griffith: Mark Pilgrim's Amphetadesk Auto-subscribe bookmarklet and Radio auto-subscribe bookmarklet. So, now when you visit the site of someone who's joined the RSS autodiscovery via HTML LINK element bandwagon, you can snag their RSS feed into your aggregator.
This makes me really want to get back to studying some in-browser scripting and DOM manipulation. It's been awhile since I played with that, and I see more cool things done with it all the time.
Tasty. Now I just have to wrap a few more things up, and I'll hopefully be contributing an updated Cocoa-based OS X faceplate/installer for AmphetaDesk to Morbus before the weekend's out.
[ ... 117 words ... ]
-
Matt Griffith suggests using an HTML link element as a way to provide robots and news aggregators with means to find a site's RSS feed. Mark Pilgrim chimes in with a few thoughts and an improvement. And then, I see the buzz coming from Jenny Levine too.
So, well, it's easy enough. I just joined the bandwagon too.
[ ... 152 words ... ]
-
Ouch. Remember that no matter how expert you may be on some things, if you start from false premises, you're doomed from the start. Just caught this article over on Linux Journal entitled "Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance".
In the article, the author expounds at length on the nature of microkernels and their performance hits, and makes a very clear and unabashed claim that "the microkernel project has failed". Well, I don't know a great deal about microkernels, other than vague skimmings here and there. But what I do know is that you don't say "Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance" and then let slip something like "I'm not sure how Darwin's drivers work...", as well as never providing any metrics, analysis, code, or proof to back up the headline's claim other than to beat up a theoretical microkernel strawman.
The interesting thing to me, though, is that rather than read the article first, I read the comments. And the thing I saw was an enormous number of comments all pointing to Apple's Mach Overview, so almost instantly I was informed about the credibility of the article itself. When you have a hundred geeks telling you to RTFM after a lengthy article, it says something.
In particular, the Apple document says: Mach 3.0 was originally conceived as a simple, extensible, communications microkernel. It is capable of running as a standalone kernel, with other traditional operating-system services such as I/O, file systems, and networking stacks running as user-mode servers.
However, in Mac OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do RPCs between separate tasks. This modular structure results in a more robust and extensible system than a monolithic kernel would allow, without the performance penalty of a pure microkernel.
So, from what I gathered, this little blurb counters much of what the author actually claimed would slow OS X down. I may be mistaken on this, but it seems to be what everyone else was saying as well.
So, whereas in another circumstance, this article might've been taken as Linuxite Anti-Mac-OS-X FUD (what's the world coming to? :) ), in this circumstance the article is just taken as an obvious demonstration of a lack of research. Other than a bit of a sensation when I saw the headline in my aggregator, the steam got let out of it almost instantly.
Lather, rinse, repeat, and apply to all other utterances and publications.
[ ... 442 words ... ]
-
Does anyone out there know of a workable, preferably free app for WinXP with which I can launch and quit a program on a scheduled basis? Starting to work my mad Google skillz to find this, but having a hard time coming up with search terms that give me good results. See, Radio UserLand demands too much CPU from any of the machines I currently own. The desktop where I want to park it also serves as my PVR, and when it starts up to record a show, Radio keeps stealing CPU from the video encoding, and I end up with choppy and sometimes dead video. I even tried screwing with priorities, and that only seems to help moderately.
So, I want to start up Radio, let it churn free. Then just before a show comes on, I want to quit Radio. I was doing this manually via a VNC connection from work, but that's just stupid. The other alternatives I've tried are running it under Wine on my Debian Linux box, which doesn't quite seem to work happily, or to run Radio again on my OS X iBook, which seems to crush the poor thing.
I suppose my desktop PC is due for an upgrade, containing only a 600MHz Athlon (though 512MB of RAM), but I've been waiting on that. Funny, the last time I upgraded, it was to run a few more games. This time, it's to make Radio happier. :)
(Addendum: Hey, wait, Radio's an intelligent, reasonable platform... I wonder if I couldn't just get it to shut itself down at a given time, and then have Windows launch it again. I think the shutdown is the main issue, since Windows seems to come with a scheduler already for launching programs.)
[ ... 590 words ... ]
-
Rock on. Kuro5hin now has a companion wiki named Ko4ting. I'll be very interested to see where it goes. (Thanks to nf0 for the link!)
[ ... 26 words ... ]
-
Still busy busy, but had to drop in for a minute to try out the Metalinker code here. Two wishlist items: 1) Maybe use a micro-icon for the metalink, and 2) some indication of the number of links to the link on Blogdex would be hot, maybe even a green-through-red series of micro-icons for the link. Could maybe count the links on the blogdex page via some sort of scraping, but that would require some server-side stuff since I doubt the client-side can do it. (Or I just don't know enough modern browser scripting lately.)
Because of the obvious connections, this makes me want to get back to my new-and-improved linkback implementation very soon. That'll be next after I tie up this Cocoa AmphetaDesk wrapper. (Actually got it 2/3 done last night, yay!)
Back to work.
[ ... 214 words ... ]
-
Oh yeah, and I am still alive. Just heading toward the light at the end of the tunnel of a long project at work over many late nights. Also have been living life a bit lately. But I've also been re-reading David Brin's Earth, Vernor Vinge's A Deepness in the Sky, and have a few musings about them. Also have been doing some intermittent Lisp and Python hacking. Oh, and I also will be trying a bit of Perl/Cocoa hacking for AmphetaDesk in the very, very near future. Hope to get back to the news aggregator and weblog controls soon. Anyone miss me? :)
[ ... 138 words ... ]
-
Hmm. I think I need to snatch up some of Marc Barrot's outline rendering code and apply it to my Movable Type weblog. Wrapping many of my front-page elements in outline wedges would be very nice. I suppose I could swing back toward the Radio side of things and use it more, but I really need to find a machine to be its permanent home. Eats too much CPU on the iBook, disrupts PVR functions on my Win2K box, and seems to work half-heartedly via WINE on my Linux box.
[ ... 91 words ... ]
-
Just saw Gordon Weakliem mention, via Roland Tanglao's Weblog, this article over at New Architect: Orbitz Reaches New Heights. This snags my eye for two reasons: 1) Orbitz is a client of my employer - we do a ton of web promotions for them; and 2) LISP is one of those things I keep meaning to get back into, kinda like I keep meaning to read things from that gigantic Shakespeare collection I asked for and received for Christmas from me mum.
The quote that Roland pulls out from the article is: The high-level algorithms are almost entirely in Lisp, one of the oldest programming languages. You're excused for chuckling, or saying "Why Lisp?" Although the language can be inefficient if used without extreme caution, it has a reputation for compactness. One line of Lisp can replace 20 lines of C. ITA's programmers, who learned the language inside and out while at MIT, note that LISP is highly effective if you ditch the prefabricated data structures. "We're something of a poster child for LISP these days," says Wertheimer. "Lisp vendors love us."
Funny, if you did an s/Lisp/Perl/g and an s/LISP/Perl/g on that text, you'd almost have a quote from me. I've also heard Perl often compared to LISP, amongst the upper ranks of Perl wizards. Oldest language-- hmm, no, but it's been around the block. Inefficient without caution-- check, hand-holding is at a minimum. Compactness-- check, many bash it for obfuscation facilitation. Effective after ditching prefab structs-- check, if you enjoy slinging hashes all over, like I have until recently. And so far, we're a poster child for Perl here.
What is it that I have in Perl? Well, I've named it the Toybox. It's a web application platform. We do everything with it. Reusable software components composed into applications with a web-based construction tool. The components contain machine and human readable descriptions of properties and methods, to enable inspection by the tool and documentation generation. Also, the components and the framework are designed to provide predefined collaboration patterns so that one component can supplement or modify the behavior of another without that other component needing to be modified or altered. I've also just recently added a persistent object layer for all those pesky little parameterizing trinkets we started needing to throw around. (I really wish I could Open Source this thing. :) )
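I obviously can't paste the Toybox here, but to give a flavor of the self-describing component idea, here's a toy sketch. Every name below is invented for illustration; it's not the real thing:

```perl
#!/usr/bin/perl -w
use strict;

# The flavor of the idea, not the real Toybox: a component carries
# machine- and human-readable descriptions of its properties and
# methods, so a construction tool can inspect it and generate docs.
package Toybox::Component::Counter;   # illustrative name only

our %DESCRIPTION = (
    name       => 'Counter',
    summary    => 'Counts hits and hands the total to other components.',
    properties => {
        start => { type => 'int', default => 0, doc => 'Initial count' },
        step  => { type => 'int', default => 1, doc => 'Increment per bump' },
    },
    methods => {
        bump  => { doc => 'Advance the counter by one step' },
        total => { doc => 'Return the current count' },
    },
);

sub new      { my ($class, %p) = @_; bless { count => $p{start} || 0, step => $p{step} || 1 }, $class }
sub bump     { my $self = shift; $self->{count} += $self->{step}; return $self->{count} }
sub total    { return $_[0]->{count} }
sub describe { return \%DESCRIPTION }

# A construction tool (or doc generator) could then poke at the component:
package main;

my $desc = Toybox::Component::Counter->describe;
print "$desc->{name}: $desc->{summary}\n";
print "  property $_: $desc->{properties}{$_}{doc}\n" for sort keys %{ $desc->{properties} };
```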
So there's a continual grumble here to switch to Java, sometimes from me, and sometimes from another developer or two. In some ways, a switch and reimplementation is a no-brainer, considering tool support and labor pool. But, is this overhyped? Wherever I've gone, I've just picked up whatever was used there. My basic computer science background lets me switch technologies pretty easily. Is this rare?
But as for the languages themselves... From the trenches, doing object-oriented stuff in perl is painful and dirty. In a lot of other ways, it feels nice because you can jump out of the OO mindset and do some naughty things when you think you need to. And if you're careful. But when you do, you don't get any help from the language, which I think is one of the major selling points of Java.
And then occasionally, a client demands to know the specifics of our platform. Such as, what vendor's app server are we using? What database server? And when we say Perl and a platform almost completely developed in-house, noses crinkle and doubts arise. But they rarely have a complaint about the end result and the speed at which we execute it.
I guess what I'm getting at is this: Having a hard time untangling politics, job market, and the right tool choice. LISP seems to have done alright by Orbitz, and Perl's done alright by us. So far, that is. I always vaguely worry about the "non-standard technology" I use here, though. Is that such a valid worry, or am I just swallowing vendors' marketing pills like our clients? Because the "standard" just happens to be the buzzwordy things that a number of companies sunk money into and work hard to justify.
But, hell, someone had to have started creating something alien and new to come up with what's "standard" today. I seem to see stories from time to time about companies whose "secret weapon" is just such a "non-standard technology". They avoid many of the pitfalls that the herd faces by taking their own path, but trade them for new ones. There's risk in that, but then again there's risk in everything with a potential for gain.
Then again, there's the argument against wheel reinvention. I seem to like reinventing wheels sometimes for the hell of it, simply because I want to know how wheels work. Sometimes I even feel arrogant enough to assert that I can make a better wheel. But there is a point where just buying the thing and moving on is the smart option.
Oh well... I've come to no conclusion. But I'm still writing Perl code in my xemacs window today, and about to get back to it when I hit the "Post" button. And things seem pretty good here-- I mean this company managed to claw through the wreckage of the dot-com collapse and still edge toward a profit. We lost more clients due to their bankruptcy than through customer dissatisfaction.
I suppose I can at least say my choice of Perl, if not a secret weapon, didn't break the bank. :)
[ ... 1003 words ... ]
-
Jotting down some wishlist ideas for a next iteration of a linkback implementation and/or service.
This reminds me: I want to borrow Marc Barrot's activeRenderer code, combine it with maybe a nice JavaScript UI or at least Wiki markup to produce a simple web-based outliner. I'm sure this has been done before.
[ ... 53 words ... ]
-
From Chris Heschong: "...While I don't believe it's 100% done, I've put up the code to my new pet RSS aggregator here for the moment. More to come shortly."
A nice, simple RSS aggregator in PHP that seems to run nicely on my iBook. Planning on poking at it some more, so I hope the alligator is of the non-bitey variety.
As Rael's little blosxom has hinted at, OS X has the potential to be the perfect end-user desktop website platform. I even had Movable Type running on it without much fuss. If only Apple had shipped with mod_perl and PHP modules active in Apache, it would be that much more amazing. I suppose that's easy enough to rectify.
Makes me feel strange running Radio UserLand on my OS X iBook, besides the CPU consumption. So much duplication of what's already there. Of course, there are the plusses of cross-platform goodness, integrated environment goodness, and things that just feel generally slick about it. Eh, but I don't have to feel strange about that anymore, since I moved Radio to my Windows box. Now Radio fights with my PVR program for CPU. Grr.
I've been thinking about this lately, too: Cocoa GUI talking via XML-RPC or SOAP to a web app locally hosted. It's been mused about before, though I forget where. I seem to remember that Dave had mentioned a company working on something like this. Could be interesting.
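To make that concrete: assuming the XMLRPC::Lite bits of the SOAP::Lite distribution are installed, a tiny local endpoint for a Cocoa front end to poke at might look something like this. The Aggregator package and its methods are stand-ins, not a real API:

```perl
#!/usr/bin/perl -w
use strict;
use XMLRPC::Transport::HTTP;   # ships with the SOAP::Lite distribution

# A hypothetical backend the GUI would call over XML-RPC.
package Aggregator;

sub channels  { return [ 'http://www.decafbad.com/index.rdf' ] }   # stub
sub new_items { my ($class, $channel) = @_; return [] }            # stub

package main;

# Serve XML-RPC on localhost; a Cocoa app would call Aggregator.channels, etc.
XMLRPC::Transport::HTTP::Daemon
    ->new(LocalAddr => 'localhost', LocalPort => 8088)
    ->dispatch_to('Aggregator')
    ->handle;
```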
Seems to me that the potential and power of OS X for these sorts of apps (ie. desktop websites, networked info filters, etc...) has barely been tapped yet.
Unix on the desktop. Really. Who woulda thunk it?
[ ... 310 words ... ]
-
We discover all kinds of harm done to ourselves by environmental pollutants, decades or centuries after the fact. What if someday we discover that mass media and consumer culture, as we know it, is literally detrimental to one's health?
From Pravda.RU: NEUROLINGUISTIC DEPROGRAMMING (NLDP). NOUS-VIRUSES - A NEW THREAT FOR HUMANITY: ... There is ... danger [of the deliberate using NLDP for harm]. ... NLDP-effect arises ... when a person is plunged into the intensive field of influence received by the optic, acoustic, kinesthetic perception ducts. ... often called TV programs, listening to the music, moving in the space of different texture, contacting with technical devices, etc. ... In some industrial countries such aphasia disorders as dyslexia and agraphia ... are unaccountably widespread. This "aphasia epidemic" can be easily explained by NLDP-effects. ... in the communication informational field certain informational-semantic blocks circulate. I call these blocks NOUS-VIRUSES. They get into the brain of a child or an adult, and, if his "anti-virus" defense does not work, the result of the destruction is a psychological disorder, which is not accompanied by the organic affection of the brain. ...
In case the awkward translation threw you for a loop, what this author is basically saying is that there are certain "idea viruses" circulating in our surroundings which make it past certain mental barriers ("anti-virus" defense) to cause mental disorders such as dyslexia, aphasia, and agraphia.
Sounds very Snow-Crash-like. Later on in the article, the author suggests establishing a new branch of science ("NOUSEOLOGY") to deal with these things. Maybe the translation missed it, but I don't suppose this author has heard of memetics...
Anyway, no research is mentioned to prove the claims, and there's nothing else to convince me that this is anything other than a wild rant... but the idea is interesting. Another Cluetrain tie in for me, at least in my head: What if some day, communicating to humans with a human voice (whether literally speaking, or in other channels) is determined to be the only medically safe way to communicate? :)
I'd like to think our minds aren't so fragile, though.
[ ... 351 words ... ]
-
Still working on getting all the bits of this site working again on the new host. One thing in particular that's broken is my beloved referer scripts.
But, I'm working on replacements in PHP. Noticed that my linkback scripts are linked to and described on IAwiki. Also notice some decent wishlist ideas for linkback improvements-- such as first/last referral date and link karma. And of course there are the improvements I've been meaning to make-- such as metadata harvesting from referrer pages and some improvements on filtering out some bogus/unwanted links (ie. Radio UserLand aggregator links). Might also be nice to allow someone to submit URL patterns they'd like excluded-- that is, if you link to me and don't want me to publish the linkback, you can submit a URL pattern for your site.
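The URL-pattern exclusion bit, at least, is trivial. A sketch, with example patterns I just made up:

```perl
#!/usr/bin/perl -w
use strict;

# Filter out referrers people have asked me not to publish, plus the
# usual aggregator noise. These patterns are examples, not a real list.
my @excluded = (
    qr{^http://127\.0\.0\.1}i,
    qr{aggregatorData}i,                # Radio UserLand aggregator hits
    qr{^http://private\.example\.com}i, # someone who opted out
);

sub publishable {
    my $url = shift;
    return !grep { $url =~ $_ } @excluded;
}

print publishable('http://somewhere.example.org/blog/') ? "show it\n" : "skip it\n";
```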
Have also been thinking of throwing it together as an open service, like yaywastaken.com's Link Feedback and Stephen Downes' referral JavaScript includes. I can offer the service, and the code. The only drawback is, well, what it might cost me to offer others a free lunch if the service actually happens to be good. :)
Need to keep thinking about colocation.
[ ... 225 words ... ]
-
From Aaron Swartz: "Here's an annotated version of the schedule from the Emerging Technologies 2002 conference. Under each session are links to the blog entries about that session. If I didn't include yours, send me an email..."
You know what would rock for something like this? Provide a conference schedule, with each event in the schedule as a URL-addressable page or anchor within a page. Tell bloggers to link to those URLs when blogging about a particular event. Grab referrers and display links back to blog entries on those pages and on a summary page like Aaron provides.
Automatic conference annotation, and you don't even have to worry whether Aaron included your blog entry or not.
[ ... 115 words ... ]
-
Had to do a bit of manual transmutation of DB files for Movable Type to work. Let's see if I was successful...
[ ... 34 words ... ]
-
Oh... I'm looking around for some sample code & docs for this, but maybe someone out there could drop me a few quick pointers:
I'm using Mac OS X. I have a Griffin iMic and a D-Link USB Radio. I can control the radio with DSBRTuner (with source code!) I thought that that would be the hard part. Now, what I want is a simple way to record sound to disk in AIFF format, preferably from the command line or at least with source code. I've tried a few other sound recording apps, but they're all mostly inadequate and overkill at the same time. Most of them won't record to disk, so capturing 4 hours of audio into memory hurts a lot.
I want to, in part, record radio shows with cron jobs. This shouldn't be rocket science. So, anyone know how to write a simple audio-to-disk app for OS X?
[ ... 152 words ... ]
-
So, I feel really out of touch. Seems like the majority of the authors on my news aggregator scan were all blogging from the O'Reilly Emerging Technology Conference this week. For some reason, I hadn't even heard of it until last week. This mystifies me. I can't believe I hadn't checked this conference out long ago and found a way, by hook or by crook, to get my butt over there. Seems like they covered an alarming number of the topics about which I'm fascinated. But, beyond the content of the conference itself, I would have loved to witness firsthand the phenomena of an audience full of bloggers tick-tacking away in realtime. And I so would have been running EtherPEG on my iBook like Rob Flickenger was.
It's been forever since I managed to get out to a technical conference. Not that I managed to globe-trot with any great frequency before the Internet bubble burst, when money was flush and expense reports were easy, but I was starting to feel like I was making inroads into a community. Now I feel cloistered over here at my company, and I don't know if my employer necessarily sees a value in me attending things like this. Marketing and sales make heroic efforts to streak around from coast to coast-- man, I don't envy them-- but I always stay here. Sometimes I wonder if it's because they're afraid I might let some of the mojo slip. But that's being pretty presumptuous of the quality of the mojo I make here. :) (For what it's worth, there are nervous chuckles when I talk about Open Sourcing bits of my work.)
This is starting to sound like spoiled geek whining. It isn't that I want the company sugar-daddy to fly me to Milan every month-- I just start feeling a bit isolated and claustrophobic working with the same few dozen people, only a tiny handful of whom are actually technically-minded, week in and week out. So it's nice to feel a part of a wider community of like- or at least relatedly-minded people who are passionate about this stuff.
I'm an obsessive nerd. This is what I tell people who ask me, "Don't you get tired doing this all the time," or, "Why do you have more than one computer?" When talking to myself (as I often do) or to other geeks, I call it passion. I appreciate passion for its own sake, and I love talking to other impassioned people.
I always try to be tentative about my pretensions, but here goes another one: All the way back to when people were dingy, loin-cloth-clothed, stereotypical and mythical cavemen sitting around comfy fires, there was always some oddball or scrawny thing who insisted on playing around at the edge of the firelight and further. Or banging around with an odd pair of rocks, making sharp things.
I want to hang out more with my fellow rock bangers.
[ ... 494 words ... ]
-
Whew. Been swamped this week, with work and with life in general. So, sadly, 0xDECAFBAD got a bit neglected.
When last I was working on things, I was in the middle of some decent-sized reworks of various parts to go easier on the servers and do things in a smarter way. Amongst all that, I notice that since my suspension the admin of my web host has been poking in from time to time and still tinkering with my site. This week they've been doing things like turning off execution permissions site-wide (and therefore disabling all scripts here) and shutting off various cron jobs and calling it abuse-- eg. a nightly tarball of my site, to be scp'ed down by a home machine's cron job later in the night. Supposedly they have backups, but I don't.
On the one hand, I understand an admin wanting to maintain the performance of a server he owns. On the other hand, I'm a tenant. Does your landlord continually come into your apartment when you're not home, without notification? Does he or she wander around, unplugging your clocks and appliances, shutting off your heat while you're gone? I could understand if the apartment was on fire, then some immediate action is required. But you know, I'd just like to have a little notification.
Moving on...
[ ... 222 words ... ]
-
Starting to think about this, too: What's better to use? An IFRAME, or a JavaScript-powered include? I can see elegance and hackishness in both.
[ ... 503 words ... ]
-
Hmm, looks like I was completely wrong when I wrote that the post-hash bit of referring URLs wasn't showing up in my referral logs. Duh. I see them right in my "Today's Referrers" block on my site's front page, and a quick SQL query shows me tons of them. It'll be easier to linkback-enable Radio UserLand blogs than I thought.
[ ... 68 words ... ]
-
Dang it. Jon's blog lists a link to my MovableType management console. Glad I deleted "Melody" :)
[ ... 18 words ... ]
-
Just noticed that Stephen Downes emailed me about his implementation of a Javascript-based referrer linkback system. I'd been planning to get around to making one inspired from our blogchat, but he's got it first. Cool. :) Looks like Jon's picked up on it, too.
Jon muses: I haven't yet looked into what it will take to make the reporting item- rather than page-specific. It's doable, I'm sure. Thanks, Stephen! A JavaScript-oriented solution like this will appeal to a lot of Radio users.
The biggest issue I see for Radio, obviously, is that the style there is to have many blog entries on one page for the day. Permalinks are anchors within those pages. However, I haven't seen browsers reporting the post-anchor-hash info for referrals. At first, though, one would need to have Radio spew blog entries into individual files to make this JS linkback scheme work at per-entry granularity. Otherwise, I'd love to see this work for Radio. I'd also love to see it woven with community server services.
A few brief musings on the referrers script itself:
Hmm, haven't seen cgi-lib.pl used since my Perl 4 and pre-CGI.pm days. I'd forgotten how serviceable the thing still is.
I like get_page_title(). It's a start on what I was musing about over here.
I want to steal this, maybe transliterate it into PHP (since my webhost frowns on my CGIs now), and hook it up to a MySQL database. Stephen doesn't want to host the world's referers, but I wonder how mad my host will get if I open mine up. :) Would make for some neat data to draw maps with, but probably shouldn't do things to make myself an even more annoying band practicing neighbor.
[ ... 286 words ... ]
-
Had a busy weekend living life, entertaining the girl, and cleaning my cave. A few quick things, if only to remind myself to remember to think about them:
Inspired by a suggestion from Eric Scheid to move from my server-included blocks to client-included blocks via JavaScript, I did a little exploration into the idea and whipped up a quick, generalized script to mangle the contents of any given URL into a document.writeln(). Not sure how robust this thing is.
Also, not sure how widely supported the JS-include hack is. On the other hand, Stephen Downes had made a good point in my blogchat the other day concerning JS-based hacks: my visitors can turn them off by disabling Javascript. Having been employed in the field of internet promotions these past 6 years, this seems like a nightmare. But, having been reading the Cluetrain Manifesto and following smart blogs, I start to think this is a Good Thing.
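The general shape of that URL-to-document.writeln() bridge, by the way, is something like this. Not my actual script, just a sketch assuming LWP::Simple and CGI are handy:

```perl
#!/usr/bin/perl -w
# js-include.cgi -- a sketch of the URL-to-document.writeln() trick.
use strict;
use CGI qw(param);
use LWP::Simple qw(get);

my $url     = param('url') || '';
my $content = ($url =~ m{^http://}) ? (get($url) || '') : '';

print "Content-type: text/javascript\n\n";
for my $line (split /\n/, $content) {
    $line =~ s/\r//g;          # strip carriage returns
    $line =~ s/\\/\\\\/g;      # escape backslashes
    $line =~ s/"/\\"/g;        # escape quotes
    $line =~ s{</}{<\\/}g;     # don't let </script> break the include
    print qq{document.writeln("$line");\n};
}
```

A static page would then pull the block in with a script tag pointing at something like js-include.cgi?url=... and get the content written into place.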
I see that JanneJalkanen and crew are musing about an update to the XML-RPC wiki interface. Having worked on my own implementations of this interface, I need to keep an eye on this, even if I can't quite be as active as I'd like.
HTTP Extensions for a Content-Addressable Web seems hot as hell, especially for the future decentralized publishing world I'm dreaming of.
I'm updating my home linux box, Memoria, with Debian, defecting from Mandrake Linux. Wish me luck. Oh, and the HD in the machine has a few cranky parts from having been dropped. Wish it luck.
Running a telnet BBS at telnet://deus-x.dyndns.org:2323. The domain may change to bbs.decafbad.com soon. I miss the BBS days. I may bemoan the loss of local community gateways onto the 'net someday soon. I'm using Synchronet on a poor overworked Pentium 70Mhz PC running Win98, on which I also inflicted Radio UserLand for the time being. No one really calls on my BBS. I've been thinking of hosting the UNIX version on Memoria.
Thinking of trying out Radio UserLand on Memoria under Wine. I've seen mutterings which claim that this is possible. Anyone?
I want to wax pretentious with a few musings on the Singularity, birth control, anti-biotics, glasses, and self-modifying code. I might not get around to it, though.
Wasabi-coated peas are at once wonderful and terrifying.
[ ... 383 words ... ]
-
Speaking of software I want to get deployed at work (ie. time tracking), another thing I want to take the plunge with is k-logging. Basically, I want some software to give every person here an internal blog or journal. Immediately, I want this to capture internal narrative. A future subversive idea I have for it is to eventually pipe some of these internal entries out to our public company website. (Yes, I'm reading The Cluetrain Manifesto again.)
I've gotten almost everyone here on the wiki bandwagon, and we're using it regularly as a knowledge capture and collaboration tool. So, they're used to me throwing odd new tech at them. Unfortunately, the wiki isn't precisely suitable to what I want for the k-logs. These are my requirements so far:
Must be dead simple to use in all aspects, so that it sits far below the laziness threshold it takes to record an idea or narrative as it occurs.
Rich set of categories and metadata by which an entry can be tagged. (ie. On what project were you working? On what project task? With what products? How much time did you spend?)
Arbitrary views on weblog entries, querying on category and metadata, maybe even on full-text search. I want to be able to weave together, on the fly, the narrative of any person, project, product, or any other topic.
I'm looking, hopefully, for something free. At the moment, I'd have a hard time campaigning for the purchase of a fleet of Radio UserLand subscriptions for all of us, unfortunately. Someday, perhaps. (I could just imagine the insane possibilities of a Radio on every employee's desktop.) But, is there anything out there like this now? It's simple enough that I could probably roll my own in a weekend or less, but it'd be nice to jump on the bandwagon of an established k-log tool.
Also really looking at more ways to lower the laziness threshold. We just converted the entire company over to using Jabber as our official instant messaging platform, so I thought it'd be pretty keen to have the k-log system establish a presence to receive IM'ed journal entries. Along the lines of the wiki adoption, I'd probably have to get everyone to embed a certain style of keywords or some other convention to get the k-log system to pick up categories.
Or, to make it even lazier, could I get the k-log system to automatically discover categories by the mention of keywords? Hmm, this could be fun.
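Auto-discovering categories could start out as dumb as a keyword-to-category map. A sketch, with made-up categories and patterns:

```perl
#!/usr/bin/perl -w
use strict;

# Lazy category auto-discovery: map categories to keyword patterns and
# tag an entry with whatever it happens to mention. All examples invented.
my %categories = (
    'Project: Toybox' => qr/\btoybox\b/i,
    'Client: Orbitz'  => qr/\borbitz\b/i,
    'Tech: Perl'      => qr/\bperl\b/i,
    'Tech: Jabber'    => qr/\bjabber\b|\binstant messag/i,
);

sub discover_categories {
    my $entry = shift;
    return grep { $entry =~ $categories{$_} } sort keys %categories;
}

my $entry = "Spent the morning wiring Jabber notifications into the Toybox.";
print join(', ', discover_categories($entry)), "\n";
```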
Anyone out there working at a k-logged company?
[ ... 1018 words ... ]
-
Oh, and I've swung away from keeping track of it, but I need to get back to looking at masukomi's about-time time tracking software.
[ ... 25 words ... ]
-
Noticed two things from Mike James this morning:
An interesting melding of weblog and wiki over at www.tesugen.com. The combination of Blosxom, which elegantly composes the weblog from a simple pile of files, and UseModWiki, which can cache its pages in a simple pile of files, makes me think about the combination myself... (Oh, and TWiki does this, too, not in caching but in the storage of page content in the raw.) I could make Blosxom search for files by regex (ie. Blog*) and post to the weblog in UseModWiki with a naming convention of pages. Seems neatly elegant.
And the second thing are the MovableWorksOfArt that Mike cites. Again, simple and neat elegance. I'm with Mike: I've got a lot to learn, and a lot to strip down from the design of this site. I also really need to dig into some proper CSS. Mark Pilgrim's CSS magic shames me.
[ ... 218 words ... ]
-
Yay! It appears that the new Multi-Author Weblog Tool does almost exactly what I'd mused about doing as a GroupsWeblogWithRadioUserLand, were I to ever get around to it. Been drifting away from Radio lately, but I need to get it working on a decent machine. It ate my iBook CPU time like dingoes eat babies, and on my home desktop Windows machine it would eat enough CPU to cripple my PVR functions. I'd like to see about either moving that PVR function to a Linux box, or trying to run Radio under Wine.
[ ... 94 words ... ]
-
0xDECAFBAD was out cold today for some hours, due to the suspension of my webhosting account.
Seems the SSI-and-CGI combination I was using around here had turned into a monster, and the sysadmins decided things had gone rogue on my site. So, I get suspended, to be later restored with a lecture on the nature of CGI. My scripts were called "fucking insane" by the admin who finally gave me back my keys. And, on top of it, I got a lesson in UNIX file permissions while he was at it.
Well, the thing is... of course I understand CGI, and the expense of launching external processes. And I gathered that the SSI-and-CGI combination was braindead at the end of the day. And I understand the necessity for restrictive file permissions. But still, even with all that, I let things get sloppy.
This is vaguely embarrassing.
So, today I hastily reimplemented everything using that SSI-and-CGI scheme in PHP. I'd started leisurely on the PHP replacements this weekend, but this was a kick in the ass to finish it. Almost every *.shtml page is a *.phtml page now. I rewrote my referral link listing code, as well as my RSS feed display code, in PHP functions. There are still some places where things are broken (most notably the referrals in the wiki), but I'll get around to fixing them. Not too bad for starting from zero PHP exposure (somehow) until this weekend.
I'd like to think that this won't happen again, but I suspect it might.
The problem is that this site is my mad science laboratory. I mix things together, set things on fire, and occasionally have something explode. I get enthusiastic about seeing what I can do, without a whole lot of regard toward safety or keeping the wires out of sight. I figure that I'll tighten up the bolts and polish up the shells of the interesting experiments, and just archive the ones that turn out boring.
Occasionally, this leads to me playing loose with security and resource conservation. I really need to watch this more, since the sysadmin reminded me, "Remember, your account is on a multi-user, time-sharing UNIX operating system." My first reaction to this was, "Well, duh," but then my second reaction was... "Oops." It's not that I don't know these things. It's just that I get quite a bit ahead of them in tinkering.
I have to try to find a balance between boiling beakers and safety goggles.
And, I wonder if this webhost is the right place for this site. They certainly don't have the world's best customer service. It's low touch, high grumble BOFH service. It appears that the people running the show are experts (I see the name of one of the admins all over various Open Source projects' patch submissions), but don't have a whole lot of time or patience for bullshit. But, I pretty much knew that going in. It makes things cheap, but it's also a bozo filter.
And with some of the things I'll be doing, I'm likely to be a continual bozo.
The best thing would be, as DJ suggested earlier today in blogchat, to find a cheap-cheap colocation somewhere. It's not as if I don't have machines to spare-- I just need a safe, constant full peer and static IP net connection. I'd love to have something I could run persistent servers on, play with a few different app servers, a couple generations of Apache, etc. The things I want to do can be done safely, given that I pay attention, but I doubt that they will make for a quiet neighborhood. On any server I play, I'll be the noisy bastard down the street blaring the metal and practicing with his band every night.
Hmm.. have to think about that co-lo thing.
[ ... 726 words ... ]
-
Oh, and Ken MacLeod was another visitor to my blogchat today. Along with humoring some of my RESTian musings (which I think I understand much better now, thanks), he'd observed the multiple links back to the same blog entries awhile back. We chatted a bit about the linkback thing and the scalability of BlogVersations. Talked a little about the robot link detective I just babbled about.
Also, he pointed me to Purple, which appears to be a decent system to use for arbitrary mid-document linking in a blogspace lacking universal XHTML and XLink adoption. This means something, too.
Time to go home.
[ ... 103 words ... ]
-
I just noticed Ghosts of Xanadu published on Disenchanted, where they make an analysis of the linkback meme and its historic roots. They cover pretty much all the big ideas I've been poking at in my head, and give props to Xanadu. Heck, they even mention Godel, Escher, & Bach (which my girlfriend & I have started reading again) and ant scent trails.
So along with the JavaScript-powered linkback thing, something else I've been thinking about is a little semantic sugar to add to the mix. I keep forgetting to mention it, but what makes Disenchanted's linkback system very good is that Disenchanted "personally visits all pages that point to us and may write a short note that will accompany the returning link." They manually visit and annotate their links back, whereas my site just trundles along publishing blind links.
I'd like to change that with my site. The first thing I'll probably do is set up some triggers to track new referring links to my pages, and maybe give me an interface to queue them up, browse them, visit them, and annotate them.
But the second thing is something that would require a little group participation from out there in blogspace. It might not work. Then again, it might catch on like crazy. I want to investigate links back automatically, and generate annotations. I'm lazy and don't want to visit everyone linking to me, which sounds rude, but I think that the best improvements to blogspace come with automation. (In reality, I do tend to obsessively explore the links that show up in my referral log, but bear with me.)
I can respect the manual effort Disenchanted goes through, but I don't wanna. So, I want a robot to travel back up referring links. What will it find there? Well, at present, probably just an HTML page. Likely a weblog entry, maybe a wiki page. What can I reasonably expect to derive from that page? Maybe a title, possibly an author if I inform the robot a bit about what to look for. (ie. some simple scraping rules for blogs I know regularly link to me.)
What else can I scrape?
Well, if bloggers (or blog software authors) out there help me a bit, I might be able to scrape a whole lot. I just stuck a Wander-Lust button on my weblog, and I read about their blog syndication service. You can throw in specially constructed HTML comments that their robot can scrape to automatically slurp and syndicate some of your content. Not a new idea, but it reminds me.
So bloggers could have their software leave some semantic milk & cookies out for my robot when it wanders back up their referring links. Maybe it could be in a crude HTML comment format.
Or maybe it could be in a bit of embedded RDF. Hmm. Anyone?
What would be useful to go in there? I might like to know a unique URL for the page I'm looking at, versus having many links back to the same blog entry (on the front page, in archives, as an individual page with comments, etc.) I might also like to know who you are, where you're coming from, and maybe (just maybe) a little blurb about why you just linked to me. I'd like to publish all these things along with my link back to you, in order to describe the nature of the link and record the structure we're forming.
This seems like another idea blogs could push along, semantic web tech as applied to two-way links.
Of course, the important thing here is laziness. I'm lazy and want to investigate your link to me with a robot. But you're lazy too. There's no way that you'll want to do more work that I do to provide me with the data for my robot to harvest. So... how to make this as easy as making a link to me is now-- or better yet, can we make it easier to make a richly described link? That would really set some fires.
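To make that a little less hand-wavy, here's one crude shape the milk & cookies could take, and the robot's end of the bargain. The comment format is pure invention, just to show how little it would take:

```perl
#!/usr/bin/perl -w
use strict;
use LWP::Simple qw(get);

# One crude shape the "milk & cookies" could take: an HTML comment the
# blog software drops into each page, which my robot scrapes when it
# wanders back up a referring link. This format is entirely made up:
#
#   <!-- linkback
#   permalink: http://example.org/blog/archives/000123.html
#   author: Some Blogger
#   blurb: Riffing on that linkback babble
#   -->

sub scrape_cookies {
    my $url  = shift;
    my $html = get($url) or return;
    my ($block) = $html =~ m{<!--\s*linkback\s*(.*?)-->}is or return;

    my %meta;
    while ($block =~ m{^\s*(\w+):\s*(.+?)\s*$}mg) {
        $meta{lc $1} = $2;
    }
    return \%meta;
}

my $cookies = scrape_cookies('http://example.org/blog/');
print map { "$_: $cookies->{$_}\n" } sort keys %$cookies if $cookies;
```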
[ ... 796 words ... ]
-
I meant to post a quick thank you to Jon Udell for the link in his recent O'Reilly Network article, Blogspace Under the Microscope. But beyond the fact that he mentions me, I really like the biological metaphor for blogspace he explores there.
In the short time I've had this blog out here, I've tossed in a few handfuls of small hacks that have incrementally connected it to others and made discovery of some connections automatic. What I'm doing is mostly inspired by what I see other bloggers doing. Something about all this feels very different from what's happened on the web thus far. I don't have, nor likely will ever have, one of the more popular blogs on the block-- but for the short time I've had it, this thing has gotten more connected than anything else I've ever done on the web. It's certainly by no genius of my own that this has happened. There's something big to this.
Pardon the pretension, but it seems that there's this "Reverse-Entropy" going on here: these incremental tweaks and the construction of these small, elegant connecting channels through the walls are what will very shortly raise the level of blogspace to.. what, a singularity? Not sure, but it's seeming more and more like David Brin's Earth. (I've got to read that again and pull some quotes.)
So (back to practical matters), Stephen Downes dropped into my blogchat for a visit and we chatted briefly about the linkback meme. One thing we'd touched on was a fully JavaScript-driven referrer service one could use on a site where one could not host dynamic content like SSI, PHP, etc. Jon also touches on pretty much the same thing in musing about a backlink display in Radio.
More centralized services bug me-- I really want to see a completely decentralized blogspace. But, it's baby steps that need to be taken. Since there's no P2P killer app plumbing for blogspace yet, we need full peers to get hosted.
Some, like mine, are hosted where dynamic content is available and I'm able (and willing) to hack at the dynamic bits. Others are hosted where content must be static, and others have owners who either can't or don't want to bother with hacking on the plumbing. So some central services are still needed to prop up blogs. Baby steps. Get the tech working, observe the flying sparks, and get bored tinkering with what doesn't work.
But it would be brilliant if someday soon, something like Radio can become 100% decentralized, with installations collaborating to form the space via firewall-piercing instant messaging conduits, pub/sub connections, content store-and-forward, distributed indexing, and the whole utopian bunch of Napster dreamy stuff.
Okay, back to work.
[ ... 518 words ... ]
-
DJ makes a convincing argument for using Blosxom. I tend to agree, having called it a kind of code haiku awhile back.
[ ... 23 words ... ]
-
I have 3 blogs right now, including this one. The other two are 1) on LiveJournal and 2) managed by Radio UserLand.
My Radio blog has been pretty dormant of late, since I never quite got completely pulled into it or migrated my LJ or MT blog to it. Instead, LJ seems not itchy enough to abandon (and I have friends there), and MT seems comfy enough for now. So, Radio remains a place of bunsen burners and exploding beakers for me for now. (This is not a complaint, this is a cause for gleeful and evil rubbing of hands together.)
My LiveJournal, however, was my first successful blog. And by successful, I mean that I managed to keep writing there, usually trying to babble more at length than just copying a few headlines. My writing there is pretty random, by intention. I'd originally started it to supplement my daily writing in my paper journal, as some other outlet for words to keep me writing and maintain my ability to string more than two words together coherently.
On LiveJournal, I'm more likely to rant about things like religion and touchy issues. I'm also more likely to talk about my girlfriend and other events in life more interesting to my closer circle of friends.
I consider 0xDECAFBAD to be my nerd-blog, or my "professional" blog. It's where I'm more likely to ramble about things in my life as a geekish craftsperson. I could draw a Venn diagram, but suffice it to say that I think there are different, but overlapping, audiences for my high-nerdity versus my more personal ramblings.
Does anyone else do this? I'm sure I'm not alone in cordoning my blog faces off from each other. But should I feel the need to separate things like this? Although I can't find a link to it, I seem to remember Shelley Powers writing about tearing down her cordons between her nerd-core and normal-person blog sides. Of course, I try to thin the barriers at least by displaying my LiveJournal RSS feed over on the side as a "sibling" blog.
On the one hand, it's a strange form of self-classification. On the other, though, it seems to work for me. Visit 0xDECAFBAD to see my vaguely professional high-nerdity. But if you want to get closer to me as a human being, come visit my LiveJournal and see me and my place in the community there. If you really want to know me... well, email me or maybe IM me. And maybe, just maybe, if you happen to be in town, let's head down to the pub.
[ ... 592 words ... ]
-
Okay, so it's a given that I'm not giving up my PVR soon. So, which ones work best? And why?
So far, all of mine have been homebrew, thanks mostly to ATI video cap / tuner cards. I've never owned a TiVo, although I've lusted over them. But my current setup seems serviceable. I managed to record the entire run of the third season of Buffy in rerun onto VCDs with the ATI Multimedia Center software. (No cracks on the Buffster. If you don't like it, try s/Buffy/Babylon 5/ or maybe s/Buffy/Nova/ in your head.)
Now, I'm looking to replace the Windows this runs on with a Linux install. I already record radio shows under Linux with a PCI radio tuner, and under OS X sometimes with a USB FM tuner. So now I see this VCR HOWTO which claims: This is a guide to setting up your GNU/Linux workstation as a digital VCR using the video4linux driver and a supported tuner card. A section has been added for creating VCD's that can be played in any DVD/VCD player as well.
Sounds precisely like what I'm doing right now. And I think that my ATI All-in-Wonder Radeon card can be supported under Linux. If not, I have a backup BT848-based Zoltrix TV tuner that works for sure, but that one only seems to have mono audio, unfortunately.
Has anyone put together a working Linux setup as described in the HOWTO? If so, how do you like it?
On the other hand... Should I still think of getting a TiVo? What's so special about it, other than dead simple ease of use? I'd want to immediately crack it open and start hacking more HD space into it, as well as adding a TiVo AirNET Ethernet Adapter Board. But I think my Linux box will do this, and more. Though, I'm not sure if Linux supports the TV-out for my ATI card. I've also heard that TiVo captures closed captioning. Eh, neat, but I don't need it. Not at the moment, anyway. Then I hear about things like SonicBlue being ordered to spy on PVR users, and I feel much safer having a home-cobbled PVR.
What d'you think?
[ ... 711 words ... ]
-
Earlier last week, I'd ranted in my LiveJournal about the head of Turner calling me a thief. Now, I'm reading what Mike James has to say about what Brad Templeton has to say about the TiVo, PVRs, and what they are gonna do.
Brad offers up a few very good suggestions-- ways to offer more options with regards to TV funding, ways to mix up and better target ads, and ways to buy out the ad time. Some of the "enter the keyword from the ad" suggestions strike me as eerily similar to the way warez and porn trading servers work on the Hotline network. Go visit 3 sites, usually casino sites and other banners, spend about 30 seconds exposed to each site searching for a certain keyword, come back to the warez/pr0n server with your keywords in hand.
Again, the porn leads the way. :) Well, sort of. The Hotline servers are pirating the porn industry, but porn tends to always be one of the drivers of tech. I wonder if purveyors of porn might not also be the first to hit on the new business model that rakes in the cash from what's coming?
I can't help but think of this in terms of golden geese. TV studios have had a golden goose in advertisers, and she's laid wonderful eggs for decades. But now the old gal's tired and the eggs are getting smaller and less shiny. Must be something in the water or the goose feed that's changed. Seems that there just might be another goose out there that'll lay even bigger eggs. Not only that, but I hear that this new goose might even deliver the eggs right to them, and maybe even thank them for it. But, rather than questing and finding this new kind of goose, they're trying to sow the goose feed fields with salt and poisons. And they're looking for thicker chains for the current goose's ankles, and reinforcing the bars of the cage.
I really must be missing something. I can't help but think that consumers wouldn't mind being treated like customers, finally, and not as assumed thieves. I have to think that some very smart people work in the entertainment industry. At least a few of them must have read the Cluetrain Manifesto and come away with something. Is this just naive thinking? Seems vaguely incomprehensible to me that fear has them working this hard to maintain an aging business model, where the rewards would be staggering if they put that much effort into exploiting a new one. And they'd be praised instead of pissing everyone off.
Wish I could remember the quote (was it Heinlein?) about companies and their making money, which basically gets summed up in Mike's quoting of Spidey, "I think I missed the part where this became my problem." It's already been heavily quoted, but it's worth repeating.
[ ... 483 words ... ]
-
Boy, do I dislike Verisign.
[ ... 6 words ... ]
-
I just grabbed Micropoll to play with from technoerotica.net
Let's see if this works:
Kinda fits with the other little SSI widgets I've been cramming in here. Now if only I could figure out which one is delaying the load of my main page. Aw hell, I was going to throw it all into PHP anyway.
[ ... 57 words ... ]
-
Tinkering around with CSS again for this site. Getting a bit busy and crowded around here, and Liorean had mentioned to me in a visit to my blogchat that things were a bit hard to read and wanting for white space. I think I agree on this. Though, I don't want to explode everything out with huge fonts and spacing... Hmm.
Anyone have some hints, suggestions, or complaints about how things look now?
[ ... 75 words ... ]
-
I've been trying out WinXP on my token Wintel box at home, and it really doesn't impress me. I was running Win2K, and Win98 before that, and I can't really see the big deal.
To be fair, I can say that I don't make fun of Windows for crashing as much anymore, although XP did just crash this week with a big fat bluescreen which took out the entire body of preferences in my user account including all my digital VCR schedules. So that was annoying. But rare now, anyway. And most times it can be blamed on drivers or non-Windows software... but gah.
But as for the rest of XP... I can't stand Luna. I turn off all the bells and whistles until it starts looking like Win98. I reduce my Windows Explorer down to about what it looked like in '96, just a folder tree on the left and detail-view on the right. Most of the other "helpful" features of WinXP just annoy the crap out of me. Am I missing the benefits of XP? It just really seems like hairs are being split and sugar's being poured in since about 1998 with Windows. Maybe the thing is that no one's come up with any dramatic, paradigm-shattering new things, so incremental perfection is all that's left for Windows.
So here's what I'm getting to: I'm thinking of downgrading my home PC to either WinME or Win98, and I might just stop the upgrade cycle there. And I'll stop it for good unless I see some dire need to upgrade my Microsoft OS. I don't seem to have any software which requires WinXP. Rarely, something requires Win2K or NT, but most of that stuff I replace with a Unix app. At work I've been running Win98 in a Virtual PC instance on my OS X machine, and all my daily-use software runs fine on it. And it really doesn't crash all that much.
And this isn't just an anti-Microsoft thing. Well, to be honest, in part it is. And in part, I don't want my Windows habit to suck me into recurring Microsoft payments, should they perfect the licensing enforcement and stop letting me buy the thing once and make me sign up for a monthly fee. Unless they can start showing me something like the Radio UserLand radio.root changes RSS feed, I don't see a benefit to me to pay on a subscription basis. I suppose their list of patches and things fits that bill, but it's not the same to me. With Radio, I see a stream of improvements and new features. With Microsoft, I see largely a stream of fixes and replacements for things they've already sold me. But, I suppose it's apples and oranges. Radio can afford to bootstrap and occasionally break, whereas Windows must strive to be solid as a rock.
This makes me feel vaguely luddite.
[ ... 698 words ... ]
-
My webhost had a bit of an outage, and the machine on which this site is hosted suffered a nasty hard drive crash. Things were down for about a day or so, and when they came back up most everything was broken. Seems that the sysadmin of this server added a few security improvements, such as disallowing execution of CGI scripts with dangerous permissions, which revealed my sloppy and "atrocious" (the sysadmin's word) use of them.
*gulp* Bad me. Shows that it's been a long while since I had a website on a multi-user machine-- not that that's a very valid excuse, but it seems less urgent to tighten up permissions when you own the machine, have a small and trusted team working on it, and it sits behind two firewalls.
Hmm. I need to get schooled in some security mojo with the quickness. Loving the Wiki Way is one thing, but bending over like the goatse.cx guy (no, I'm not linking to it) is another thing altogether.
[ ... 169 words ... ]
-
I &heart; the Internet and weblogs.
What happens when one of my favorite bands' main man, Mike Doughty, and one of my favorite ex-Star-Trek-survived-the-80's actors, Wil Wheaton, collide in blogspace and discussion groups?
Well, first no one believes it's really Wil Wheaton posting to the DG. But then, when everyone realizes that yes, in fact, it's him, Mike himself posts the moral of the story: "so the flip side of the don't believe what people tell you on the internet lesson is--you know, people on the internet might actually be who they say they are. how bout them apples?"
Not to have a Jerry Springer moment here, but: Yes, how about them apples? It's the new internet, where some people really are who they say they are, even if possessed of some vague degree of celebrity. And not only that, but your ears turn red and ring when someone's talking about you.
Now, maybe if I say Wil Wheaton's name three times in a Beetlejuician manner, he'll show up over here too. :)
[ ... 172 words ... ]
-
Okay, I'm done thinking and writing about REST for the time being. I think I understand it, but it doesn't seem to light any fires for me yet. I'll just stuff it away into my utility belt and keep it in mind in case the need for a REST-headed screwdriver comes up, or until I discover that my RPC usage causes cancer.
What's much more exciting to me at the moment is whitelist-based spam filtering, and even more so is using RDF to share whitelists. I really need to look more into this FOAF thing over at rdfweb.org.
One project I've been mulling over is how to replace something like my user info page at LiveJournal with something web-wide and decentralized. RDF seems like a lead on that for a shared data format for personal metadata. I want the friends and friends-of, and especially the interest links that let you pivot on interest phrases to view a list of others who list the same interest phrase.
[ ... 228 words ... ]
-
Here I am, a Busy Developer trying to work his way up to being a real Computer Scientist. In doing this, I subject myself to things like RPC vs REST. When I see apparently intelligent people vehemently disagreeing about something, I figure there's something to it. This may be naive. I hope that by hurting my brain with this noise, I might take some of the hunch out of my back that gets me just one more caveman level up the evolutionary chart. I hope you all don't mind hearing my grunts and unks.
Dave linked to me with an Oy. Was that from my quoting of a quoter quoting? Or is that an Oy on this REST thing getting all outta hand? :)
Well, here it goes again: I found Gordon Weakliem linking to me from my referrers (yay for referrers!), where he quotes someone on a discussion forum as having said "REST, I'm afraid, is unlikely to get anywhere until it is presented in a more utilitarian fashion. It feels much too much like a philosophy or religion or something." And I agree with this, too. I got introduced to XML-RPC via simple toolkits and real problems solved with satisfaction, and I'm learning SOAP slowly the same way. Gordon also captures some of my other sentiments.
I think I get REST now. Maybe. Between my last post on REST and finally finding and reading Paul Prescod's "REST and the Real World", I feel somewhat illuminated.
I also feel very, very behind. This has all probably been covered ad nauseam on mailing lists I haven't read yet, but this is what I thought on reading Paul's article:
Using someone else's [RESTful] web service requires you to understand their data structures (XML vocabulary and links between documents). . . . RPC APIs merely hide the problem behind an extra layer of non-standardization. First you must figure out the method names available. Then you must still figure out the data structures that may be used as parameters. And then behind those data structures is the implicit data model of the web service.
Hmm. Yes, in an RPC API I have to supply my customer with my method definitions-- including their names, parameters, and return values. This seems directly comparable to supplying my customer with all the URIs exposed by my app's resources, as well as the data structures used by them.
How are these cases so very different?
There is no free lunch. The biggest problem most will have with REST is that it requires you to rethink your problem in terms of manipulations of addressable resources instead of method calls to a component.
I don't want to rethink my problem, at least not without a clear payback. When I first learned to rethink my problems in object oriented programming, I got immense payback. I haven't seen the clear payback case for REST yet. So far, it looks like an interesting abstraction with a lot of vagueness, name calling, and intimations of harm and danger.
Of course you may actually implement it on the server side however you want. But the API you communicate to your clients should be in terms of HTTP manipulations on XML documents addressed by URIs, not in terms of method calls with parameters.
My customer doesn't want to mess with this, and doesn't want to learn about it.
Your customers may well prefer a component-based interface to a REST interface.
Yup, they do. My chances of altering their behavior are slim to none-- at least not without a clear case of payback to present.
Programmers are more used to APIs and APIs are better integrated into existing programming languages. For client-side programmers, REST is somewhat of a departure although for server-side programmers it is not much different than what they have been doing for the last several years, building web sites.
The server-side programmers I work with have been building CGIs and various server-side things which treat URIs as mere gateways onto their applications. REST is pretty different than what they've been doing for the last several years.
REST is about imposing a programming discipline of many URIs and few methods. RPC allows you to structure your application however it feels best to you. Let it all hang out! If a particular problem can be solved with RPC, and future extensibility and security are not going to be issues for you, you should certainly use the looser approach.
I still don't see the payoff of few methods with many resources versus many methods with few resources. Either way, should I change my application in the future I'll need to supply my customer with new data descriptions, or with new method descriptions. How does REST make me magically extensible? Am I not seeing something?
The security angle I can sort of see, given the already built-in ability of web servers and firewalls to control HTTP methods and URI accesses. But, most times I want my application to manage its security and access control, not the web server or firewall. Maybe this is a flaw in my application.
Anyway, the simplicity of REST feels like a possible elegance to me, which I like. Elegance feels warm and fuzzy, not itchy. XML-RPC is obviously a bit of a workable hack. Being able to RESTfully reduce my app down to database-like access via a web of URI resources seems neat. But, the whole REST thing feels like an inside-out turn of many things. This is not useful to me unless I can find some abstraction or toolkit to help me wire my apps up to things in the REST way. XML-RPC was quick for me to get hooked on, since it was a mostly drop-in solution with me writing only a few wrapper methods.
Now that I understand REST better, I see that I could possibly do most of what I do with SOAP and XML-RPC in a RESTful manner with a bit of brain bending. It reminds me of threaded versus async / event driven programming. But I don't know why I should bother: I'm still looking for the payback of REST and I'm still looking for the danger inherent in the RPC model.
[ ... 1046 words ... ]
-
Chris Heschong writes: Ken MacLeod notes that, in regards to REST, "the only thing holding us back is a marshalling standard." I'd be a lot happier with REST implementations if this were the case.
(Whew, I think I need a convention for quoting quoters. Maybe a new language. I seem to remember hearing on the Todd Mundt Show that the Turkish language has a facility for specifying whether something you're saying originates with you, or whether you heard it from someone else...)
Anyway, since Ken MacLeod had taken the time to respond in some detail to a post I made asking about REST not too long ago, I thought I should come back around to it.
So I think my first confusion was with marshalling. This is why I like XML-RPC: I don't worry about much. As a client, I give it a URL and a method name, and then throw a pile of parameters for the method at it. As a server-side method, the server gives me parameters when it calls me, and I throw back a return value. The server takes care of turning the XML into my arguments, and my return value to XML.
In all the languages I've worked with it in (ie. Perl, Python, UserTalk, and AppleScript), this works conveniently well. I never actually pay much attention to the XML in XML-RPC. So, I was very confused in reading a few things about REST and not seeing much mention of this, other than along the lines of "Oh, well, you could go ahead and use XML-RPC libraries to build messages if you wanted to." Which raised the question for me: Why not just go the whole hog and use XML-RPC? (Or SOAP, for that matter, but that's another holy war I'm avoiding for the present context.)
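For the record, this is about all the XML-RPC plumbing I ever see as a client in Perl. The endpoint URL and method name below are made up; the point is that the marshalling never shows:

```perl
#!/usr/bin/perl -w
# All the XML-RPC plumbing I ever see as a client, more or less. The
# endpoint URL and method name are made up; the point is that I hand
# the library Perl data and get Perl data back, never touching the XML.
use strict;
use Frontier::Client;

my $client = Frontier::Client->new(url => 'http://example.com/RPC2');

# Scalars, arrays, and hashes get marshalled into XML-RPC types for me...
my $result = $client->call(
    'some.exampleMethod',
    'a string',
    42,
    { nested => [ 'data', 'structures' ] },
);

# ...and whatever comes back is already a Perl data structure again.
print "Got: $result\n";
```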
Okay, so REST isn't about marshalling parameters. Then what is it about? Well, I think a bit more reading and Ken's response to me have helped illuminate me a bit.
The REST point seems to me to be that all operations being attempted by Web Services can be distilled into a few actions: retrieve, create, update, and delete. REST says that these fundamentals are already defined as GET, POST, PUT, and DELETE, respectively. I think. Is this right?
So, I apply these verbs to URI nouns. To concretize the concept: I recently wrote & exposed an XML-RPC API to a scorekeeping component on one of our promotions. Some of the methods of this API were along the lines of int points.get_points(string email), points.award_points(string email, int points), and points.create_points_account(string email). To make myself a new account I'd call points.create_points_account("deus_x@pobox.com") and then points.award_points("deus_x@pobox.com", 100) to drop myself some points in the scoreboard. Then, I would do points.get_points("deus_x@pobox.com") to check my score.
I'm afraid this example is too simple. Jon Udell wrote that he wanted to see the "stock quote example" retired for being too simplistic to stress the technology in mental experiments. Hmm. Oh well, let's see where it goes.
So, if I were to RESTify the above example, would the sequence of things be like a POST, PUT, and GET, all to a URL that looks something like:
http://myhost/promotion/players/deus_x@pobox.com/points
Where POST does the account creation, PUT updates the points, and GET of course grabs the points total?
Okay, maybe POST needs to post to .../players/deus_x@pobox.com with a request body specifying "points", to create the new URI? The request body of the PUT should contain a points value -- positive for award, negative for debit? And if the thing needed more complex data, I could use something like XML-RPC to encode a data structure as arguments, or as Chris Heschong wrote, use WDDX?
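To make my guess concrete, here's roughly what that sequence might look like from a Perl client using LWP. The URI scheme and the meaning of the request bodies are just my reading of the above, not anything specified anywhere:

```perl
#!/usr/bin/perl -w
# A rough sketch of the RESTified points example using LWP. The URI
# scheme, the meaning of the PUT body, and the POST-to-create step are
# my guesses from the discussion here, not a published interface.
use strict;
use LWP::UserAgent;
use HTTP::Request;

my $ua  = LWP::UserAgent->new;
my $uri = 'http://myhost/promotion/players/deus_x@pobox.com/points';

# Create the points account/resource.
$ua->request(HTTP::Request->new(POST => $uri));

# Award 100 points: PUT a value (or a delta?) to the resource.
my $put = HTTP::Request->new(PUT => $uri);
$put->content('100');
$ua->request($put);

# Check the score.
my $res = $ua->request(HTTP::Request->new(GET => $uri));
print $res->content, "\n" if $res->is_success;
```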
Do I get it? Hmm... okay, have to run to a meeting, but I wanted to post this and see if anyone could give me feedback on my understanding. I think I see how, if all resources are manipulatable in this manner, one could envision a more abstracted and uniform interface on web resources than a pile of published web services APIs. But... can it really be that abstracted?
Hmm.
[ ... 935 words ... ]
-
When I talk about Radio and my love/hate with it, this is one of the things I absolutely positively adore, fawn over, and plan to mimic in as many of my projects as I can where I can. Back when the Mozilla source code was first released, and I happened to be at a Q&A with some Netscape guys, this was the [#1](/tag/1) feature I pestered them to spend time looking into.
John Robb of UserLand points it out, in relation to .NET:
[Allchin said,] "The hot patching technology will not find its way into the upcoming .Net Server family, but we have made progress on reducing reboots."
Radio already does this.
Yes, yes it does, and this is dead sexy. I love that the system is put together in such a way that this is possible. My monitoring of the Radio.root Updates RSS feed reminds me daily of where my $40/year is going.
Now, can we have Instant Outlines do this? :) Heehee.
[ ... 295 words ... ]
-
I just had to link to this: A coder's guide to coffee @ kuro5hin
I think I'm going to buy myself a french press tonight.
Oh, and just for fun, visit The Caffeine Archive.
[ ... 73 words ... ]
-
One more thing, directed at the Radio UserLand crew: With my recent discovery of and mania for referers, I've seen that Radio sends me people claiming to have come from http://radio.outliners.com/instantOutliner and http://frontier.userland.com/xmlAggregator, for I/O and news aggregation respectively.
Here's a wishlist idea: Make Radio send the URL to the user's blog instead of URLs to UserLand documents. While the current referer URLs tell me why I'm getting the hit, I'd like to have a better handle on from whom the hit is coming.
[ ... 85 words ... ]
-
Playing more with PHP, writing a replacement for ShowReferers. You can view the source of my first attempt ever at a PHP page, or view the results. I just replaced the front page sidebar referers box with an include to this page. Soon, I'll replace the entire front page template with one written in PHP, and I might add the sidebar stuff to all my blog entries. This is fun. :)
[ ... 72 words ... ]
-
Hmm. Okay, so Zoe looks promising, but.. umm.. I can't figure it out.
I added an IMAP account. Stuff seems to happen, which ends up in a lot of FolderNotFoundException and OutOfMemoryError exceptions to my terminal. No mail appears on the front page. I tried changing my SMTP server to use Zoe, and then was going to forward email to import it as the FAQ suggests, but umm.. to what address do I forward it?
Urk. This makes me feel dumb.
[ ... 82 words ... ]
-
Oh, and I bit the bullet and turned off the authentication requirement on the wiki. I've decided that ultimately I agree that LoginsAreEvil. I didn't really want to put up fences, or raise the laziness threshold. I mostly wanted to identify people, but I was mistaken. Now, I'd rather AvoidIllusion.
There still are a few fences, however. My reasoning is that my use of this wiki slides into the realm of cheap content management, and I don't prefer public input on certain things. A few pages are access controlled, such as the wiki's front page and my home page. For this purpose, there's an alternate password-protected set of wiki commands. The non-authenticated are under /twiki/bin whereas the authenticated set are at /twiki/sbin. Just as easy as putting an 's' in the URL.
Access to the authenticated commands doesn't necessarily mean you get to edit everything though. :) That's access control for you.
Eventually, maybe I'll drop the access controls too. I was also thinking of moving out of TWiki for another wiki implementation like MoinMoin, too. Still tinkering.
[ ... 1396 words ... ]
-
Via Prof Avery, I just found ZOE:
The goal here is to do for email (starting with your personal mailbox) what Google did for the web... The Google principle: It doesn't matter where information is because I can get to it with a keystroke.
So what is Zoe? Think about it as a sort of librarian, tirelessly, continuously, processing, slicing, indexing, organizing, your messages. The end result is this intertwingled web of information. Messages put in context. Your very own knowledge base accessible at your fingertip. No more "attending to" your messages. The messages organization is done automatically for you so as to not have the need to "manage" your email. Because once information is available at a keystroke, it doesn't matter in which folder you happened to file it two years ago. There is no folder. The information is always there. Accessible when you need it. In context.
Rock on! I'm either terribly unoriginal or my mind is being read or there's just a common Alpha Geek wavelength I'm tuning into. By the description, this is precisely what I wanted to do with a PersonalMailServer. I'm getting tired of making folders and filtering rules, and MailToRSS needs a bit more work and tweaking to be really useful.
From the FAQ:
Q: On which platform does Zoë run?
A: Zoë has been known to run on the following "platforms":
...
MacOSX 10.1.4, jre 1.3.1, Mozilla 0.9.9
...
Rock on.
Q: How much does Zoë cost?
A: Zoë is free of charge for personal usage. Keep in mind, that you are getting what you are paying for... ;-)
Rock on.
Q: Is Zoë open source?
A: No.
Awwww. That's no fun! I want to play! I don't see it within 5 minutes of installation, but I think this thing really really needs IMAP. IMAP would rock for both message import and external mail client access.
But, from the author's terse response to this question, I'm imagining there are many who've asked it and he's tired of answering why he's not being trendy :) I guess I'll play and see what happens.
[ ... 649 words ... ]
-
Playing with a range of news aggregators once more, since Radio UserLand is making me itchy again. Pretty much the only ones I really like are Radio, AmphetaDesk, and sometimes Peerkat.
For the last week or so I've been mostly alternating between Radio and AmphetaDesk. The difference in aggregation styles is interesting: While Radio slices and dices and aggregates feeds by items and serves them up to me in an interwoven chronological order, AmphetaDesk serves my feeds up to me whole and in order of feed modification. I'm not sure which I like more now.
I like Radio's style, because I see what's new and only what's new. Usually. But, I like AmphetaDesk's style because I see everything, and have realized that I miss things with the rush of new items from all my feeds. For instance, if someone posts something once per day, I'll likely miss it with Radio unless I check every hour. But, with AmphetaDesk, I get to see what's new with every person or source whenever I check, and I only miss items if that feed has scrolled them off.
Shelley Powers of burningbird.net wrote a bit about the context-destroying effect that RSS and aggregators have on weblogs. I agree with her somewhat, in that pulling the words out of the surrounding context of the blog and its presentation and community has an altering effect, but I wouldn't say that it destroys the weblog.
For me, when I see something on my news aggregator, it's as if I'm overhearing it from another room. I don't get the whole context or see who else is listening or responding, but I hear the gist of something. And, when subscribed to 100+ RSS feeds, it's like I'm floating in this Nth dimensional space where I can overhear voices from hundreds of rooms without being overwhelmed. When something triggers some of my mental filters and watchwords, I click the link and delve deeper.
There's no way, without an aggregator, that I'd be able to track 100+ people and news sources in a day. But, because I can, I've been able to learn and discover things and hear voices I never would have before.
But, I think the way AmphetaDesk merges these sources might be a nicer alternative. By not chopping up the feeds, some intra-item context is maintained at least, so I can see developing trains of thought.
Okay, must go back to work now.
[ ... 1182 words ... ]
-
Is it just me, or did my last entry make me sound like an obsessive, compulsive nerd? :) Funny, I don't think I am. If I am, it's fun anyway. And not all that expensive. And I actually do get out and do things and have a bit of a social life. Really, I do. Sometimes.
Oh, and I just noticed my wiki was broken for editing. Gah, that was unfun. Fixed now.
[ ... 74 words ... ]
-
John Robb writes about the next generation of PCs and such:
Here is how I think the battle will evolve in the next five to ten years:...
1) A home server. This PC is always on and lives in a closet. It serves multiple users that connect to it using mobile wireless screens and keyboards. ...
2) An extremely mobile PC ala OQO. This PC will be attached to a single individual. ...
This sums up much of what I've been anticipating and have found myself building.
In the case of the home server, I have two of them actually.
One is a headless Linux box behind a cable modem that has accumulated all sorts of autonomous functions: it gathers my mail from various accounts into an IMAP server that I access from everywhere; before I was using Radio UserLand, it used to host all of my news aggregation hacks; it controls and monitors all the X10 devices in my apartment (though these are dwindling away); and until my radio reception got bad when I moved, it used to record radio shows for me using a D-Link USB FM radio. I actually have a Mac at work that does that for me, and dumps the sound files to my server at home periodically.
The other "home server" is a PC running Win2K with an ATI Radeon All-in-Wonder card. This machine is my PVR, recording to VCD the few TV shows I actually want to keep up with. That ends up being about 12 VCDs a week, counting all the episodes of Buffy that I capture. (Guilty pleasures.) I keep dumping more and more hard drive space into this machine, and use it as a general apartment-wide file server, as well as a dumping ground for stuff from remote when I'm out yet have net access. Occasionally I play Windows-based cames on the PC, but it mostly just sits there and does things.
I would like to combine these boxes into one big Linux box connected wirelessly to my cable modem, sitting in my closet in my apartment, or if I had a house, across the basement from the water heater. I want this home server to be in the same class of appliances as the furnace, washing machine, and water heater. The only thing keeping me from dumping the Win2K PC is the PVR functionality I haven't bothered to try under Linux yet.
In the case of my extremely mobile PC...
Well, I'm still in search of this, and the OQO looks very very attractive. Lately, my mobile PC has been my iBook. In most of my usual haunts (home, work, coffee shop), my iBook is present and tends to have net access. I have scripts which auto launch some SSH tunnels back to my Linux server and mount shares on my Win2K box when I switch networks. It usually works. I have all my current development projects on the thing, and I do news aggregation with Radio UserLand. Occasionally at home, the iBook is what I have with me in the living room, so with the A/V cable I use the iBook to play internet radio on my living room stereo, or I stream movies over my LAN from the Win2K PC in the other room.
But, what I really want is the Global from Earth: Final Conflict. I can't find a link or direct info on the thing as used in the series, but it's an amazing fictional device. Palm-sized with a pull-out screen, clips onto the belt. Has world-wide satellite video phone, global positioning, and seemingly endless PDA capabilities. I think I've even seen someone drop it into a cradle and pull up their work on a desktop PC. I've at least seen some general scenes like: "Hey, can you drop me a couple gig of that data onto my global so I can look at it later." Seems to have ubiquitous net access, even in low earth orbit. :)
But until someone makes that, I'm going to eye that OQO up for awhile and see what people think. $1000 seems like a sweet price if I can run *BSD or Linux on it. Funny thing is, so far, I haven't spent very much money at all on all these things I have. They've all been acquired used or on sale or for free. Most all of it was cobbled together from scraps.
Living in the future is fun. :)
[ ... 743 words ... ]
-
So I'm starting to play with PHP and working on rewriting my Movable Type templates as *.phtml.
Having never really paid much attention to PHP, I'm amazed at how close it is to Perl (obviously on purpose) yet how much effort has been made to sand off the rough bits. Not sure how much I like it yet, but at least it's a familiar tune they're playing. The mildly annoying thing is that it's familiar, but there are just a few things I would habitually reach for in Perl that I haven't sussed out yet in PHP. Like autovivifying data structures. I abuse those constantly. I really need to wean myself away from that, methinks.
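For anyone who hasn't met the term, autovivification is the Perl trick where merely referencing a deep path in a data structure springs the intermediate hashes and arrays into existence. A tiny example:

```perl
#!/usr/bin/perl -w
# What I mean by autovivification: referencing a deep path in a structure
# springs all the intermediate hashes and arrays into existence for you.
use strict;

my %stats;    # starts out completely empty

$stats{'2002'}{'07'}{'hits'}++;    # the nested hashes appear automatically
push @{ $stats{'2002'}{'07'}{'referers'} }, 'http://example.com/';

print "hits: $stats{'2002'}{'07'}{'hits'}\n";
```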
One thing that I was pleasantly surprised to find is PEAR, "a framework and distribution system for reusable PHP components". Hello, CPAN, my old friend. :) Finding all kinds of things that are immediately useful, like a Cache I can use to more intelligently and easily do the output caching voodoo I do in the perl CGI widgets right now.
You know, a lack of a centralized CPAN-like system is what has kept me from leaving Perl for many other technologies. I really wish Java (CJAN?) and Python (CPyAN?) had one supported by their respective communities. It's just so nice to do a perl -MCPAN -e"install Date::Parse" and get what I need. Maintaining CPAN bundles for my perl software is tasty, too. Single-command installation of all my app's requirements, and sometimes I can roll it right into the app's installation itself. Mmm.
Anyway, it's nice (to say it again) to have a running personal site to tinker with, now that I've gotten off my butt and done it. This laboratory is letting me manufacture reasons to play with tech I hadn't bothered with before.
I mean, I've used ASP and JSP, and for most of the things I've done, I've grown a severe dislike for them both. I left the "Hey, you've got HTML in my code!" paradigm behind, wandered through that "Hey, you've got code in my HTML" model, and eventually settled on my standard pattern now:
A central app logic controller takes in GET/POST data and dispatches to a method which processes the form data. That method then constructs data structures, which are in turn passed through a template engine to be rendered by a pile of templates independent from the controller.
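A bare-bones sketch of that pattern, with placeholder action names and a stand-in render() where a real template engine (HTML::Template, Template Toolkit, whatever) would go:

```perl
#!/usr/bin/perl -w
# Bare-bones sketch of the controller / dispatch / template pattern
# described above. The action names, handlers, and render() routine are
# placeholders; a real version would hand the data structures off to a
# template engine such as HTML::Template or Template Toolkit.
use strict;
use CGI;

my %dispatch = (
    view => \&do_view,
    post => \&do_post,
);

my $q       = CGI->new;
my $action  = $q->param('action') || 'view';
my $handler = $dispatch{$action}  || \&do_view;

# The handler chews on the form data and returns plain data structures...
my $data = $handler->($q);

# ...which get rendered by templates that know nothing about the controller.
print $q->header, render('page.tmpl', $data);

sub do_view { return { title => 'Viewing things' } }
sub do_post { return { title => 'Posted a thing' } }

sub render {
    my ($template, $data) = @_;
    # Stand-in for a real template engine call.
    return "<html><body><h1>$data->{title}</h1></body></html>\n";
}
```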
This, along with some very special self-assembling component-based automation sauce, is the core of what my employer's offerings run on. But, this has crystallized as a habit for me, and I've not even considered other possibilities for a long time. This of course has made everything look like a nail for this hammer I have.
For example, while PHP is not quite the right tool for the things we're doing at my day job, it seems like a perfect option to quickly and easily replace SSI pages on my site with something meatier that's still simple to maintain and doesn't stink like ASP or JSP. I've also been looking at Cocoon, which if I can ever quite get in a groove with it, looks like a highly refined instance of my standard hammer.
And then there's Radio UserLand. I love it and hate it. The hate mostly comes from the slower iBook on which I run it, I think. The bootstrappiness of it makes me itch sometimes, but other times that just makes it endearing. The whole self-contained development biodome it represents is pretty sexy, too. Speaking of autovivifying data structures... I just have to love a system which has a live, manually tinkerable giant outline/hashtree for a persistence mechanism.
Next, I really want to swing back around to playing with Flash. Last time I did something major with it, I was making a game for my employer which really wanted to use web services but I hadn't known it yet. The game worked pretty well, but I want to see what it can do since last we met. First thing in mind that seems mildly nifty might be a slick, live updating lil "Recent Visitors" app for my front page.
I'm really feeling what Jon Udell means when he writes about thinking by analogy. It's also something one of my favorite Comp Sci professors harped on, with regards to what makes a Computer Programmer versus what makes a Computer Scientist. A small part of his speech always pointed to the notion that a programmer is almost always pragmatic, memorizing the patterns and greasy innards of whatever tool he or she uses daily. On the other hand, the scientist is an explorer and finds joy in confusing him or herself by finding the universals and generalities across a range of tools. In the end, the programmer becomes specialized in a limited domain, while the scientist can pick up just about anything that comes along. And sometimes, many times, the scientist makes new tools for programmers to specialize on. I want to be and am working toward being a scientist.
More soon.
[ ... 991 words ... ]
-
Also, I just realized that the title of my weblog here alphabetically trumps most other weblogs.
[ ... 17 words ... ]
-
Prof Avery writes:
Bad enough that the IM blog idea isn't new, the 0xDECAFBAD guy beat me to "404 Correction" in a Personal HTTP Proxy via Google's Cache.
Oh well, guess there's only one thing for me to do: quit whining and write the code...
So I'm the 0xDECAFBAD guy now, eh? Hee-hee.
Well, I haven't gotten around to writing the proxy yet, so you can beat me to that still. :) I've had a metric ton of good and sometimes new ideas throughout my relatively short life, but I have a habit of not getting very many of them done. That's where it counts, not necessarily in the novelty of the idea.
I mean... just look at Microsoft.
[ ... 248 words ... ]
-
This having a website to tinker with thing is kinda neat. Right now, I'm growing a nasty beasty using SSI and perl CGIs, just because it was quick and seemed like a good idea at the time. Obviously, now that I'm starting to get more than 2 visitors per hour, the SSI/CGI mix starts to slow things down. I do try a few smart-ish things, like making the CGIs do heavy lifting only once or twice per hour (at most) while spitting out cached results the rest of the time. But, there's still the overhead of external perl process launch.
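The caching trick itself is nothing fancy. A stripped-down sketch of the once-an-hour approach, with the cache file path and the expensive build_sidebar() routine as stand-ins rather than my actual widget code:

```perl
#!/usr/bin/perl -w
# Stripped-down sketch of the once-or-twice-an-hour caching the CGI
# widgets do. The cache file path and build_sidebar() are stand-ins,
# not my actual widget code.
use strict;

my $cache_file = '/tmp/sidebar.cache';
my $max_age    = 60 * 60;    # regenerate at most once an hour

print "Content-type: text/html\n\n";

if (-e $cache_file && (time() - (stat($cache_file))[9]) < $max_age) {
    # Cheap case: just spit out the cached results.
    open my $fh, '<', $cache_file or die "can't read cache: $!";
    print while <$fh>;
    close $fh;
} else {
    # Expensive case: do the heavy lifting, then cache it for next time.
    my $html = build_sidebar();
    open my $fh, '>', $cache_file or die "can't write cache: $!";
    print {$fh} $html;
    close $fh;
    print $html;
}

sub build_sidebar { return "<ul><li>freshly generated sidebar</li></ul>\n" }
```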
So, I'm toying around with the idea of using Mason, whose design and purpose I seem to finally get, or learning PHP, whose design and purpose I dislike as much as ASP and JSP but whose simple utility I get now. Nice thing is, my web host has both PHP and mod_perl installed for me, and I can toy with either or both. There are other things I might play with learning as well. These are all things that pretty much everyone has already gone through, but it's fun to grow a site by tinkering and see things pop up.
I've been making and optimizing the same sorts of sites and applications for so long that I've started falling out of the loop on technologies not quite applicable to what I've been devoted to. This site of mine is proving to be a fun laboratory, complete with bunsen burners, Jacob's ladders, and those little curly tubes running between beakers.
[ ... 260 words ... ]
-
Mark Pilgrim writes: IO is more controlled [than linkbacks] -- you have to subscribe to other people's feeds to read their responses -- and is therefore better suited for intentional collaboration. Auto-linkbacks are more about exploration and manufacturing serendipity. Must explore this further.
This is why I went bananas with the referral links everywhere. Manufactured serendipity. The referrals do something that a weblog's comment feature just doesn't do.
Dave & the Userland crew often assert that weblogs are better than discussion groups, for various good reasons. Radio UserLand shipped without a comment feature (although it has one now), which I assumed was because the UserLand opinion was that the proper way to comment on and respond to someone's weblog was to do it in your own weblog.
The big problem I see, though, is that you're in your own bubble. If you have something to say about something I wrote, and you're not already in my RSS subscriptions, I'll never read you. If I don't know you already, chances are that I may not come to be introduced to you. The same goes for Instant Outlining. While I appreciate the intentional nature of this tech, and its strengths in avoiding spam, I want to meet you half way. I want to be surprised and have my ears get warm and turn red when you say something about me.
Referral-driven linkbacks on all pages on my site do this. If you post to your weblog and include a link to me, then I hear about it the first time someone traverses that link. This, to me, is even better than the comment feature. And, as Mark Pilgrim observes, this is better than a single referers page because these linkbacks appear in context. The conversation is built up from links in place and on topic and where the action is. To me, this is the two-way web really in action.
[ ... 458 words ... ]
-
Bill Seitz says I've gone bananas with this referers thing. And, well... yup I guess sticking it everywhere on my site qualifies. :) But, it was quick to write, and even quicker to rework since I just broke it away from using DBM files and switched to a MySQL table. (Wow, DBM files. One of those ideas that seemed cool at the time. A full-on Berkeley DB would have been better.)
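The MySQL version is about this simple; the table name, columns, and connection details below are invented for illustration rather than my actual schema:

```perl
#!/usr/bin/perl -w
# Sketch of logging referrers to MySQL instead of DBM files. The table
# name, columns, and connection details are invented for illustration,
# not my actual schema.
use strict;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=weblog;host=localhost',
                       'username', 'password', { RaiseError => 1 });

my $referer = $ENV{HTTP_REFERER} || '';
my $page    = $ENV{REQUEST_URI}  || '';

# Record the hit, if it came from somewhere.
if ($referer =~ m!^http://!) {
    $dbh->do('INSERT INTO referers (page, referer, hit_time) VALUES (?, ?, NOW())',
             undef, $page, $referer);
}

# ShowReferers-style display for the current page.
my $rows = $dbh->selectall_arrayref(
    'SELECT referer, COUNT(*) AS hits FROM referers
      WHERE page = ? GROUP BY referer ORDER BY hits DESC LIMIT 10',
    undef, $page
);
printf "%s (%d)\n", @$_ for @$rows;

$dbh->disconnect;
```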
I'm pretty much going bananas all over the place lately with all these fun things going on around the net. I'm like a kid in a candy shop, or is that a bull in a china shop? Well, I'm far too skinny to be a bull. Sooner or later I'll settle down, but it's fun having a working website and mad scientist laboratory to play with after the past few years of being too much of a perfectionist.
[ ... 212 words ... ]
-
Oh, and I have to say: This Mach-O build of Mozilla for Mac OS X rocks. It actually makes my little iBook feel zippy. Makes me wonder why they even bother with the other builds. Turn on the Quartz font rendering, and this will be the world's best, prettiest browser.
Thanks to Mike James for the pointer! Someone else I wouldn't've found without referrals.
[ ... 65 words ... ]
-
Well, I bit the bullet and pitched in the extra bucks to upgrade my hosting which, among other things, finally gave me an access_log. The first thing I did was install Webalizer over here. Should my access log reports not be public? Hmm. Well, they are for now; if that becomes a problem, I'll put a password on them.
The second thing I did was attempt to copy the Disenchanted linkback act and make a wiki and SSI includable referers widget, which I'm calling ShowReferers. Mine's not quite so slick as their implementation, since with theirs you can construct your links to refer to paragraph numbers in a page so that a link back to you is injected right there on their referred-to page.
The neat thing here, though, is that I stuck it into the view template for DecafbadWiki so that every wiki page will show referral links, if there are any. I also dropped it into the pages for each story on the weblog. (Which reminds me that I need to make the story pages nicer, since I hardly ever look at them but the referrals tell me that other people see them more.)
I didn't quite get the value of referral links before, but I do now. :)
[ ... 212 words ... ]
-
One brief thing: when I switched over to Mac OS X on my laptop and on my desktop at work, I bought the WindowShade X haxie, and then installed PWM as my window manager on every X11-running machine I use (including the OS X boxen).
I've come to this conclusion today: Windowshading is the outliner of window management.
TY HTH HAND PDT
[ ... 64 words ... ]
-
OTLML makes me think that I need to play more deeply with XML. I need to learn XSL, how to use XPointers, I really have to spend more time with RDF, and I need to work with SOAP more.
I think I see the difference between things like RSS 0.92 and RSS 1.0, and between OPML and OTLML, now. Not quite sure if I can explain it precisely enough yet, but I have a slight grokking of it. The upshot of it is that I really need to throw myself toward the side of the latter in both cases.
Back to work.
[ ... 102 words ... ]
-
I've noticed that Dave's been getting some submissions of further HexOddities since he linked to me a few days ago. So, I've started collecting them. Feel free to come and contribute to the catalog. :)
[ ... 36 words ... ]
-
Now this is pretty cool (via Bill Seitz): Disenchanted has referral-based automatic backlinks. Funny thing is, since I started using the little SiteMeter web bug, I've been having fun refreshing the referrals. But, constructing the referral report right in the entry is another sort of WeblogWithWiki-esque feature I hadn't even thought about. Now if only I had access to my access_log here. Maybe I'll be checking out another host soon. There's got to be a way to make this work with Movable Type and / or Radio UserLand.
[ ... 88 words ... ]
-
Jon Udell wrote a few columns and weblog entries about pipelining the web, and the power of the URL-line as akin to a UNIX command line with pipes. His examples did nifty things with a publicly available XSLT processor to use an XSL stylesheet at one URL and an XML document at another to produce a new document.
So, this is what I've been playing with a bit this week, especially with GoogleToRSS and RssDisplay. But, this is what the URL looks like when I string the two together (line wrapping forced):
http://www.decafbad.com/web-services/url-based/rss_display.cgi?xml_img=htt
p://www.decafbad.com/images/tinyXML.jpg&src=http%3A%2F%2Fwww.decafbad.com%
2Fweb-services%2Furl-based%2Fgoogle_rss.cgi%3Fquery%3Dlink%3AXGnxCbayl9UC%
3Awww.decafbad.com%2F%26title%3DLinks+to+0xDECAFBAD%26description%3DTop%25
2010%2520Links%2520to%25200xDECAFBAD%26
What a pain this was to build. I had to make a little form in a throwaway page to trick my browser into doing the hard work. I suppose I could make a lil utility script to do the meta character escaping more easily. But, man, if people are already making fun of the punctuation and obfuscation possible in Perl, imagine what they'll say about scripts on the URL-line. (Assuming I'm not missing reams of existing ridicule already. :) )
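If I do write that little utility, it might look something like this. The pipe_url() helper is hypothetical, but the service URLs are the ones from above:

```perl
#!/usr/bin/perl -w
# The "lil utility script" idea: build the pipelined URL-line in code so
# the nested escaping happens automatically. pipe_url() is hypothetical;
# the service URLs are the ones mentioned above.
use strict;
use URI::Escape qw(uri_escape);

sub pipe_url {
    my ($base, %params) = @_;
    my $query = join '&',
        map { $_ . '=' . uri_escape($params{$_}) } sort keys %params;
    return "$base?$query";
}

# Inner stage: Google search results as RSS.
my $google_rss = pipe_url(
    'http://www.decafbad.com/web-services/url-based/google_rss.cgi',
    query       => 'link:www.decafbad.com/',
    title       => 'Links to 0xDECAFBAD',
    description => 'Top 10 Links to 0xDECAFBAD',
);

# Outer stage: render that RSS as HTML. The whole inner URL gets escaped
# again here, which is exactly the pain being complained about.
my $display = pipe_url(
    'http://www.decafbad.com/web-services/url-based/rss_display.cgi',
    xml_img => 'http://www.decafbad.com/images/tinyXML.jpg',
    src     => $google_rss,
);

print "$display\n";
```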
Jon does make a note of this little problem, but I'm thinking it's going to be what makes me wrap up my URL-as-command-line experiments. What would this URL look like if it had 1 or 2 more levels of pipeline?
I suppose I could, as he'd also mentioned, employ a few tricks like reducing script names and parameter names down to single characters, but then I'm sacrificing one of the virtues he'd mentioned: the human readable, self-documenting nature of URL-based services. Well, that gets scrapped at the first layer of pipelined URL indirection with the escaping of URL meta characters. Hmm...
Still poking away at things, anyway.
[ ... 285 words ... ]
-
In doing some poking around about REST, I'm trying out a topic-specific Google search in the wiki via GoogleToRSS and RSSDisplay. (Thanks for the pointers, by the way, Sam!)
I called it a "permasearch", just because it's kind of a permanent search-in-residence. Basically a Google Box, only I didn't use RadioUserLand to make it. (That wouldn't've been as much fun, since Radio's already got verbs to handle it! :) Sometimes reinvention is fun.) I think I need to do some more homework on it to make it worth of a new name, like do searches when visitors aren't visiting it. (Currently, it updates at most once per hour, at least once per visitor.) Maybe do some time-series search differences... hmm, but what can one do with just the top 10 results?
[ ... 132 words ... ]
-
I don't get REST, specifically in the context of it being the RightThing to do web services. I see many vague claims about how it's "more scalable" and more "right" and better in theory and there's a big dissertation on it and everything. Eventually I will get down to reading it. On the surface, it seems like a big dud to me. But, it looks like a lot of smart people are into it, so I assume there's something to it since I don't know much yet.
This is why I love Busy Developer Guides, by the way. They're for busy developers. Like me.
I don't suppose anyone could point me to something that lays it out for me? Like... why is XmlRpc considered harmful by REST fans? And what's an example app I could use REST for that will just so obviously convince me that I need to drop my XML-RPC ways?
[ ... 518 words ... ]
-
Ack. Just when I was having fun playing with it, it looks like my GoogleToRSS toy is broken. Did I run out of plays for the month? Damn. I promise that it actually works, though :)
[ ... 37 words ... ]
-
Personally, I think this is the funniest thing Google has reminded me of today: Readers shun browser-OS integration - Tech News - CNET.com. How I came up with a comparison between flushing toilets and Microsoft Internet Explorer, I'll never know.
*whistle*
[ ... 42 words ... ]
-
Peter made a command-line tool called Google2RSS. Then, he mentioned that someone was thinking of making a Perl or PHP version of his tool.
Well, I saw that Aaron made Net::Google, and I already had XML::RSS, so I figured I could make a Google to RSS widget in about a half hour. That was about right.
The code is here: google_rss_cgi.txt, but I'm planning on making a semi-proper writeup for it in the wiki in a lil bit.
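For the impatient, the rough shape of the thing looks like this. It's not the exact code in that file; the Google API key is a placeholder and the parameter defaults are just guesses:

```perl
#!/usr/bin/perl -w
# Not the exact code linked above, just the rough shape of gluing
# Net::Google to XML::RSS in a CGI. The Google API key is a placeholder,
# and the parameter defaults are guesses.
use strict;
use CGI;
use Net::Google;
use XML::RSS;

my $q     = CGI->new;
my $query = $q->param('query') || 'link:www.decafbad.com/';

my $google = Net::Google->new(key => 'YOUR-GOOGLE-API-KEY');
my $search = $google->search();
$search->query($query);
$search->max_results(10);

my $rss = XML::RSS->new(version => '0.91');
$rss->channel(
    title       => $q->param('title')       || "Google: $query",
    link        => 'http://www.google.com/',
    description => $q->param('description') || "Top 10 results for $query",
);

foreach my $result (@{ $search->results() }) {
    $rss->add_item(
        title => $result->title(),
        link  => $result->URL(),
    );
}

print $q->header('text/xml'), $rss->as_string;
```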
Let's see if this vain Google to RSS feed works:
[ ... 89 words ... ]
-
This looks like the kind of bestsellers list I would like to pay attention to: Weblog BookWatch. I should also start noting what's in the stream of dead paper flashing before my eyes. Let's see if any of these pop up on the bookwatch...
Currently reading: Fool on the Hill; The Cluetrain Manifesto
Waiting on the shelf: The Selfish Gene; Small Pieces Loosely Joined; The Tipping Point
On the shopping list: Flatterland: Like Flatland, Only More So; Godel, Escher, Bach
Some of these, like The Selfish Gene and Godel, Escher, Bach, I've read before but have not yet actually owned. Time to read them again and actually have them on my shelf :)
[ ... 107 words ... ]
-
Heh, nothing like a link from Dave Winer along with some very nice compliments to make my traffic spike up ten-fold. :)
Thanks, Dave, for the kind words and the link. Oh, and I'm very glad you got the joke!
[ ... 41 words ... ]
-
How's that for realtime? Just popped in over at blogchat.com, signed up for the beta, and now it's working here. Take a look over in the "Gadgets" section and click the Blogchat link to spawn a chat window. Or, click here:
Now I have to think about how to make it more prominent and inviting.
[ ... 55 words ... ]
-
Hmm, this reminds me that I need to get Blogchat up and running again. Which I'm working on right now. :)
[ ... 22 words ... ]
-
It's a small bloggy world: I just noticed a few referral hits from Peter Drayton, whose book I just bought this weekend. Hi there! :)
Now if I just had some .NET and C# love on my iBook. This seems like a vaguely heretical thought to me.
[ ... 48 words ... ]
-
Bill Seitz' thoughtspace/wiki says about me: Probably the only person using RadioUserland, TWiki, and LiveJournal all at the same time.
Hmm... really? Weird. I've just been kinda sprawling out across as many technologies as I can, because it's fun to play. Hell, I have 7 species of wiki installed on my laptop (though not all in regular use of course).
Wheee!
[ ... 112 words ... ]
-
In a column on Zope Lessons Learned, Jon Udell writes:
It seems silly to recreate, in a scripting platform, services that already exist in the environment. On the other hand, since those services aren't guaranteed to exist in the same way in every environment, you can argue that the scripting platform should be self sufficient even if that means reinventing wheels...
This is something else I've been thinking about with regards to the PersonalServer / DesktopWebAppServer I want to put together. Thing is, between the slice of a full peer I have at decafbad.com, and the full UNIX environments I have on my 2 linux boxen and one iBook, I have 90% of the environment I want already.
I have databases, I have web and mail servers, I have WebDAV, and I have schedulers. Should I just say to hell with it and get on with writing the top layer? That is, the actual apps I want to run on this nifty personal server framework? I have been so far, calling it a "prototype" and telling myself that these little apps are just "playing around" until I build a "real" desktop environment in which to host them.
So why reinvent all the wheels to which Jon refers? Because they "aren't guaranteed to exist in the same way in every environment", and I would like to distribute and share my work with people who don't have a full peer, 2 linux boxen, and an iBook. So, I'd like this stuff to be a simple little wrapped-up package that's easy to drop in on a Win32 box. Or a Linux box, or a Mac OS X box. So, in order to make a cross-platform PersonalServer, I have to reinvent the wheels and create a run-time environment that itself runs on all platforms so that I don't have to modify the upper layers of the app.
Hmm... Or, I could just get to work within the excellent environments I already have, screw reinventing wheels, and actually create some apps that would be worth making cross-platform on some later date. :) Because the longer I work on reinventing wheels, the fewer things I have that are really any fun to play with in the end.
Besides, who would I be kidding if I didn't admit that the stuff I'm playing with right now is for early adopters and AlphaGeeks? So why waste a lot of time making a pretty box with a bow now?
MacOsX exists, and RadioUserLand exists, so I might as well stand on their shoulders. I wish I could find the permalink, but Dave Winer was talking about Open Source developers banding together on a mailing list to "crush" a commercial product. He works toward the conclusion that this is stupid and useless, and that cooperation is more in everyone's interest. Not to be a dittohead in the Cult of UserLand, but they seem to be bearing this out. RadioUserLand is so open and inviting for tinkering that I think most of what I'd want from an Open Source clone is already there.
Hell, I even prefer MacOsX over Linux now. Am I selling my hacker soul? I don't think so. :)
Okay, okay, enough babbling. I'll forget about writing my OS-within-an-OS for now and write the apps on top of it.
[ ... 552 words ... ]
-
Looking at the LiveJournal XML-RPC Protocol Reference again today. I need to make a metaWeblog API gateway to LiveJournal, now that I can supply arbitrary metadata and include mood and music. Next stop would be to make a client that can exploit metaWeblog that feels like the LJ client.
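Roughly what I have in mind, as a Python sketch. I'm assuming the LJ endpoint URL, the LJ.XMLRPC.postevent method and its parameter names from my reading of the protocol reference, and that mood/music ride along as extra keys in the metaWeblog struct; none of that is verified here, and the real call also needs date fields and proper challenge/response auth, which this skips:

```python
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

LJ_ENDPOINT = "http://www.livejournal.com/interface/xmlrpc"  # assumption: LJ's XML-RPC URL

def new_post(blogid, username, password, struct, publish):
    """metaWeblog.newPost handler that forwards the post to LiveJournal."""
    lj = xmlrpc.client.ServerProxy(LJ_ENDPOINT)
    # Map metaWeblog fields onto what (I think) LJ.XMLRPC.postevent expects.
    # NOTE: real postevent calls also want year/mon/day/hour/min and hashed or
    # challenge-based auth; omitted to keep the sketch short.
    args = {
        "username": username,
        "password": password,
        "event": struct.get("description", ""),
        "subject": struct.get("title", ""),
        "props": {
            "current_mood": struct.get("mood", ""),    # assumed extra metadata keys
            "current_music": struct.get("music", ""),
        },
    }
    result = lj.LJ.XMLRPC.postevent(args)
    return str(result.get("itemid", ""))

server = SimpleXMLRPCServer(("localhost", 8080))
server.register_function(new_post, "metaWeblog.newPost")
server.serve_forever()
```

Point any metaWeblog-speaking client at http://localhost:8080/ and, in theory, posts land on LiveJournal with mood and music intact.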
[ ... 69 words ... ]
-
Okay, after the vote of confidence from Sam Ruby about my thinking out loud about a "404 correction" proxy server, I've been thinking more about writing a Radio-like desktop app server. I want to do more than make a DesktopWebsite, though. I want to make a full-blown PersonalServer app, capable of hosting things like a PersonalMailServer and a slew of other little local web services & etc. I may end up giving up and working more within Radio, but as I noted before I have some issues with Radio's performance and stability. My appreciation of the system's elegance balances those out somewhat, but it's gradually being outweighed by my fears of lock-in and by my pre-existing experience with other technologies.
Then again, this thing probably won't replace RadioUserLand for me. I use it daily, I bought it, and it's not as exciting to reimplement what I already have. Unless it is exciting. Make any sense?
So, speaking of technologies... which ones should I use to start working on a personal server? My main goals are mumbled over here.
I've got a large amount of experience with Perl, and have written desktop apps with it for Mac OS X and Win32. I'm having more fun with Python, however, and though I haven't written the same apps I imagine that it's on par with Perl.
The main thing I'm trying to decide right now is: multi-threaded vs async/event-driven.
See, I need some sort of concurrency to handle multiple network server hits, multiple agents running, a scheduler, and whatever else wants to take up residence inside the PersonalServer. RadioUserLand, of course, has all of this.
I've worked a lot with POE in Perl to make some event-driven multitasking apps, a few servers (HTTP, FTP, NNTP, etc) and a few things to replace a fleet of forked scripts. I've also started looking at Twisted in Python which I gather to be the analogous thing in their camp. Not the same, but they both are using the same basic idea of event-driven programs.
The problem is that, to take code that you would have written for a forking or multi-threaded program, and make it play nice within the event-driven environment, there's a bit of re-think you need to do. Loops need to be broken up into procedures, turned into self-running events, etc.
Hmm... trying to think of more re-think examples, but the main one I can think of is that long-running loops and processes need to be sliced and diced. I seem to remember more pain than that.
Anyway, I'd rather use threads. In threads, there needs to be a bit of re-think too, in terms of protecting resources from concurrency, but at least the main logic of my code can remain straightforward. Perl doesn't have threads that I want to touch yet. Python has threads, but I'm not sure how kosher they are. Of course, there's always Java, but I want to avoid Java I think.
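To make the contrast concrete for myself, here's the same trivial echo server both ways. This is a Python sketch using the standard library rather than POE or Twisted (so it's a stand-in, not what I'd actually ship), but it shows the shape of the re-think: the threaded handler is a plain top-to-bottom loop, while the event-driven one gets chopped into callbacks.

```python
import asyncio
import socketserver
import threading

# Threaded style: the handler is a straightforward loop that's allowed to block;
# concurrency comes from giving each connection its own thread.
class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:       # plain loop, main logic stays linear
            self.wfile.write(line)

def run_threaded(port=9999):
    with socketserver.ThreadingTCPServer(("localhost", port), EchoHandler) as srv:
        srv.serve_forever()

# Event-driven style: no loop of my own; the event loop calls me back whenever
# data arrives, so long-running work has to be sliced into these callbacks.
class EchoProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)

async def run_event_driven(port=9998):
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoProtocol, "localhost", port)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=run_threaded, daemon=True).start()
    asyncio.run(run_event_driven())
```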
Anyone tuned in out there with any thoughts? Mostly thinking out loud right now.
[ ... 641 words ... ]
-
I'm running decafbad.com on some pretty cheap hosting that gives me most of what I need, but it's missing one annoying thing: access_log. I'm playing around with Site Meter, but I do have my own CGI hosting (obviously), so I'd like to find something that can closely emulate an Apache-style access_log with web bug images. The Apache format would be nice because then I could use any of a number of standard log analysis packages. Referrers would be a problem, of course, but I think some Javascript could hack around that. Maybe I'll just end up writing it meself.
Hmm.. Looking...
(Or I suppose, if I wanted the access_log badly enough, I could upgrade my hosting.)
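If I do end up writing it meself, the core is pretty small. Here's a rough Python CGI sketch of the idea; the log path, the query parameter names, and the JavaScript hook are all made up for illustration, and a real version would want file locking and some sanity checks on the logged values:

```python
#!/usr/bin/env python3
# Web-bug logger. The page would embed something like:
#   <script>document.write('<img src="/cgi-bin/bug.cgi?ref=' +
#       escape(document.referrer) + '" width="1" height="1">')</script>
# and this CGI appends an Apache combined-log-style line, then returns a 1x1 GIF.
import os, sys, time, urllib.parse

LOGFILE = "/home/me/logs/fake_access_log"   # made-up path

qs = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
referrer = qs.get("ref", ["-"])[0] or "-"
page = qs.get("page", [os.environ.get("HTTP_REFERER", "-")])[0] or "-"

# Apache combined log format: host ident user [time] "request" status bytes "referer" "agent"
line = '%s - - [%s] "GET %s HTTP/1.0" 200 43 "%s" "%s"\n' % (
    os.environ.get("REMOTE_ADDR", "-"),
    time.strftime("%d/%b/%Y:%H:%M:%S %z"),
    page,
    referrer,
    os.environ.get("HTTP_USER_AGENT", "-"),
)
with open(LOGFILE, "a") as f:
    f.write(line)

# Standard 43-byte transparent 1x1 GIF so the web bug renders invisibly.
GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
       b"\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00"
       b"\x01\x00\x00\x02\x02\x44\x01\x00\x3b")
sys.stdout.write("Content-Type: image/gif\r\n\r\n")
sys.stdout.flush()
sys.stdout.buffer.write(GIF)
```

Then any of the standard log analysis packages should be able to chew on the fake access_log as if Apache had written it.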
[ ... 141 words ... ]
-
I need to turn my subscriptions list into a blogroll. Oh, and make sure that the subscriptions OPML doesn't contain any of my password-protected MailToRSS feeds. Also pondering doing some cute things, like maybe displaying a random subset of my reading list (since my blog+news RSS list is > 100 items), maybe using RSSDisplay to pull in the headlines from a random RSS channel I subscribe to, and calling it "Featured".
Hmm. blogrolling.com would be hot if it accepted a URL to an OPML file.
Maybe that's what I should do with my subscriptions-to-blogroll thing, kinda like I did with RSSDisplay. Yeah, I know RadioUserLand does or can do all of this, but I'm kind of in a mood to make a pile of small pieces to loosely join out here on decafbad.com.
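The subscriptions-to-blogroll bit seems like a small one. A rough Python sketch of how I might wire it up; the OPML filename, the "password in the feed URL" check, and the random-subset size are all just assumptions on my part:

```python
import random
import xml.etree.ElementTree as ET

def blogroll_from_opml(opml_path="mySubscriptions.opml", sample=15):
    """Turn a subscriptions OPML file into an HTML <ul> blogroll, skipping
    anything that looks like a password-protected feed URL."""
    tree = ET.parse(opml_path)
    links = []
    for outline in tree.iter("outline"):
        title = outline.get("title") or outline.get("text") or ""
        html_url = outline.get("htmlUrl") or outline.get("xmlUrl") or ""
        feed_url = outline.get("xmlUrl") or ""
        if "@" in feed_url:        # crude check for user:pass@host style feeds
            continue
        if title and html_url:
            links.append((title, html_url))
    picks = random.sample(links, min(sample, len(links)))   # random subset of a big list
    items = ['<li><a href="%s">%s</a></li>' % (url, title) for title, url in picks]
    return "<ul>\n%s\n</ul>" % "\n".join(items)

if __name__ == "__main__":
    print(blogroll_from_opml())
```

Run that from cron, dump the output into a file, and SSI it into the sidebar; refresh on whatever schedule feels right.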
[ ... 289 words ... ]
-
Hmm... now that I finally stopped babbling and read the docs, I just noticed that the Google APIs have methods to access their cache.
Sounds like I need to write a personal HTTP proxy that includes "404 Correction" by consulting Google's cache whenever one encounters a 404. Could be a new project, too, since someone I was talking to wanted searchable personal web browsing history and I think a personal HTTP proxy could help with that.
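As a first pass, the proxy doesn't even need the SOAP API. Something as dumb as this Python sketch gets the idea across; note it just bounces 404s to a "cache:" search query rather than calling the real cached-page API method, and it only handles GETs, so it's a doodle, not the project:

```python
import urllib.request
import urllib.error
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

class FourOhFourFixingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # When the browser is configured to use this as its proxy,
        # self.path is the full URL being requested.
        try:
            with urllib.request.urlopen(self.path) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type", resp.headers.get("Content-Type", "text/html"))
                self.end_headers()
                self.wfile.write(body)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                # "404 correction": punt to Google's cached copy of the page.
                cached = "http://www.google.com/search?q=" + urllib.parse.quote("cache:" + self.path)
                self.send_response(302)
                self.send_header("Location", cached)
                self.end_headers()
            else:
                self.send_error(err.code)

HTTPServer(("localhost", 8000), FourOhFourFixingProxy).serve_forever()
```

The same hook point is where the searchable personal browsing history would live: log every URL and body that passes through before handing it back to the browser.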
[ ... 157 words ... ]
-
I'm glad to see that I'm not the only one who doesn't quite yet get an immediate eureka about the new Google APIs-- searching in particular. Of course there are the non-web crossovers, like searching in AIM via googlematic, but this mostly makes me yawn. Yes, it's fun and geeky, but yawn. This is not to say that the Google search API itself makes me yawn.
What makes me yawn is anything that's just an alternative direct user interface on the service. Search from my IDE while I program? Eh, that's okay, but I could do that by just spawning a browser with a cooked URL and not have to re-engineer a UI to display the results. Display some results of a canned search in my weblog? Eh, that's cute, but I could do that with some simple HTML scraping and SSI, if I really really wanted to. Yeah, I know the web service makes that so much easier, but the thing it makes easier isn't something I was really interested in in the first place. Maybe I'm not interested because I don't get it yet, or maybe it really is just a novel triviality.
No, what will make my overhead lightbulb spark up are applications which involve indirection. That is, some application which makes searches to answer some other question of mine. Search results used to spawn further churning. Or, search results as the result of churning. Google's suggestions are intriguing: Auto-monitor the web for new information on a subject; Glean market research insights and trends over time; Invent a catchy online game. But, these sound disappointingly close to a corp-speak shrug.
Not that this is unexpected or a bad thing or a statement of derision. Their Alpha Geeks made the service available, and now it's up to the world's Alpha Geeks to turn it into magic. I'm just waiting and thinking though... the AGs are churning out all permutations of language bindings, alternative interfaces, and weavings of the service into other apps. This is the first stage of play. I don't know that I'll feel like playing much yet. So I'll watch, and maybe tinker a bit, but mostly be thinking about what the next stage of play will become.
[ ... 518 words ... ]
-
Dave says: Novell now has a white-on-orange XML button on its Cool Solutions home page.
They've got more than that. In case you haven't seen it, they've got an entire fleet of Novell newsfeeds in RSS:
http://www.novell.com/newsfeeds/
[ ... 37 words ... ]
-
The new Google Web API is a mind bomb, but I feel a bit slow because my head hasn't raced out to find a bunch of nifty uses yet... it'll come to me though. And I'm sure I'll be seeing the other smart people on my aggregator start doing some amazing things. I haven't quite caught on fire with Dave's Google Boxes yet, but I feel a slow burn going. Long running searches. Changing results. Makes me itch.
I guess a mind bomb wouldn't be a mind bomb if it didn't take a few to build up power.
[ ... 99 words ... ]
-
From Mark Pilgrim, again: Not that you'll notice any difference, but I'm now using server-side includes to serve up several semi-static pieces of each page of this site, including the logo, the copyright notice, the footer, the CSS declaration, and most importantly, the blogroll.
Funny... Is there something in the air? Not that I've talked much at all (if ever) with Mark Pilgrim, but these are all the same kinds of things I've been playing around with here. Maybe I should drop him a line :)
[ ... 85 words ... ]
-
From Mark Pilgrim: CBDTPA looks destined to die a quiet death in committee. But if anything like this bill ever actually passes, our entire society will instantly turn into Cuba after the embargo, where everybody holds on to their pre-2002 technology, fixing it up year after year, decade after decade, rather than pay for new crippled technology. (In Cuba I believe it was cars in particular; for us it would be computers.)
Funny, I have been considering exactly this. If all this DRM and "Secure" media gets legislated and forced upon the market, I'll likely not buy another electronic gadget for a long while, unless maybe it's a used pre-2002 device. Those, however, will probably top the $1000+ range on eBay, unless resale is outlawed as well.
Forcing copy protection and "secure" media and "digital rights management" onto the market is idiocy. We don't want it. Really. It doesn't make anything nicer for us, no matter how much you use fuzzy happy words like "secure" (who's secure? me? nope. my investment in books, music, and movies goes down the drain).
And the stupid horror of it is... this will kill the market for technology toys dead. The current offering of electronic gadgetry is pretty nifty already. PCs are pretty damn fast now. I doubt the majority of people have outstripped the capabilities of the things they own now, if they have bought a PC, mp3 player, CD/DVD burner, PVR, or digital camera recently. If it turns out that all new stuff past a certain point has inconveniences and copy protection and requires more money to run and so on... well, then I think people might just settle for what they have a while longer. I really think that technolust and thirst for the next bigger and better thing will chill because it'd be more trouble than it's worth.
My faith in the intelligence of my fellow Americans, especially my elected officials, is teetering now. I've been pretty optimistic. I thought they'd be smart enough to realize these things by now; I never thought they'd get this far. But seeing that the RIAA, MPAA, and all the other money-bloated fuckers are actually being taken seriously and not laughed off Capitol Hill has me seriously worried. The government got stolen a long time ago, and I never wanted to believe it. Grr.
I just hope it hasn't slid over the cliff yet and all these bastards finally hang themselves with all the rope they've played out. I still hope that there's a sleeping giant of reason out amongst the sheep.
[ ... 426 words ... ]
-
Crossposted from my LiveJournal:
Checking out the info for the automaticwriter community again, and I'm thinking I might have an idea to try out. Not sure if it'd fit in the community, but I think it might.
I need to find the algorithm, but I remember playing with some things that would analyze a body of my writing, looking for word and punctuation correlation and frequency. It could then do a cut-up across that body of my writing, use the correlations, and throw a new bit of text together which sounded very surreal but still sounded uncannily like me.
So that's one part of the idea. The other part is this: I monitor about 120 news sources on the web through Radio UserLand. What if I took a random sampling of content out of my daily stream and applied my writing style analysis to it to produce new content? That is, take the content of random writers in my news stream, but do a cut up and automatic re-assembly based on an analysis of my writing style.
I'm sure it'd produce a lot of crap, and I might want to apply a little manual wrangling to things, but it might just produce some interesting results.
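The analysis part is basically a word-pair frequency table, a Markov-chain-ish thing. A toy Python version of what I remember, just to pin the idea down; the real thing would also track punctuation and pull its input from the aggregator stream, which this skips, and the corpus filename is made up:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the words that follow it; repeats act as weights."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def cut_up(chain, length=40):
    """Walk the chain to spew out new text that echoes the source's word habits."""
    word = random.choice(list(chain.keys()))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        word = random.choice(followers) if followers else random.choice(list(chain.keys()))
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    corpus = open("my_writing.txt").read()   # made-up filename for the writing sample
    print(cut_up(build_chain(corpus)))
```

Swap the corpus for a blend of my writing and the day's news stream and that's the experiment, more or less.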
[ ... 207 words ... ]
-
Some grumbling before I hit the sack for the night. Since January, when I bought RadioUserLand, I've been getting sucked into the platform. I resisted at first... I mean, why learn another language, especially one bound to a single, commercial platform? Well, the more I played with it, the more elegant the platform looked, and the $40 I spent on it was chump change for what it can do. I can see the foot-in-the-door when I start thinking about whether there's a way we can use Frontier at work.
So, I'm tinkering and playing... And I'm putting up with flakey things happening, which confuses me. Inexplicable delays, screen flashes, stuttery speed-ups and slowdowns of text entry. The PipeFilters app I'm working on is killing Radio on my iBook. Sometimes RadioUserLand crashes. I just can't see what's so exotic about what I'm doing in PipeFilters that it should hobble Radio this regularly.
Sometimes, when testing PipeFilters, RadioUserLand somehow manages to bring my iBook to such a grinding, HD thrashing halt, that the CPU Monitor no longer scrolls and I can't even get response from the Dock to kill Radio. My only hope in this situation is to have a Terminal ready with a kill command typed out, launch the script in Radio that only sometimes offends, and then hit enter if things wig out, hoping that within the next 30-90 seconds the Terminal will be able to get a slice of time to eke out the kill command.
Part of it might be the iBook itself. This thing, though pretty and nice, is just not meant to run OS X. It runs it, but I beat the crap out of it. It feels like a 486 laptop. I just switched to a dual 800 G4 Mac at work, and OS X is a dream there. Though... Radio still crashes from time to time.
The more these things happen, the more it starts to make me think maybe I should cut loose soon and take what I've liked about Radio and do some wheel reinvention and cloning in the Python-based things I was thinking about.
But I so want Radio to work well. It's got so many nice ideas in it. I'm just worried that there's too much bootstrap in there.
[ ... 557 words ... ]
-
I feel like I'm discovering SSI again, as if for the first time. I'm using it around here to piece together a few pages with dynamic elements, among other things. It seems like it's all I need most of the time for this, as opposed to a more general web app framework or CMS. I'm sure eventually I'll get tired of it after having run into all the problems everyone else in the world has, and roll my own CMS again. Maybe this time I'll stop before I get to that point and adopt someone else's before I get down to wheel re-invention work. :) But it's so fun.
Anyway, my latest tinkering with SSI can be seen on the right side of 0xDECAFBAD's front page: Sibling blogs.
So far, I've got my LiveJournal blog over there, along with my RadioUserLand weblog. Maybe in listing a few headlines over there, I can entice a few readers to my other spaces. LiveJournal is where I do most of my general blogging, link posting, and general grousing. (I think that's my new favorite word) The RadioUserLand weblog is just an experiment at this point, but it may eventually consume the whole site.
Basically, RadioUserLand is competing with server-side CGIs and SSIs at this point. Eventually, they will cooperate. After that, RadioUserLand may take over.
Anyway, enjoy RssDisplay if you like-- you can either download it, or PipelineTheWeb and use it straight from my site.
[ ... 239 words ... ]
-
Via Eric Freeman's Radio Weblog:
Tim O'Reilly: So often, signs of the future are all around us, but it isn't until much later that most of the world realizes their significance. Meanwhile, the innovators who are busy inventing that future live in a world of their own. ... these are the folks I affectionately call "the alpha geeks," the hackers who have such mastery of their tools that they "roll their own" when existing products don't give them what they need.
That's what I want to be when I grow up: an alpha geek.
Well, I already am an alpha geek, only just in fairly obscure circles. Wherever I've worked, I've become the ToolBuilder. I'm the guy who takes the stuff we have that never quite all works together, and I weld it together into some freakish kind of A-Team nightmare that lets the team crash through the brick walls.
And here at my current job, I've worked at seeing how far I can take the ThirdTimeAutomate rule. Where it's led me (and this company) is to a component-based web application framework with automation support in building up the apps.
I've gotten the system to the point that a Design Tech (HTML-guy) can crank out a dozen promotions with the system in a day, with a large degree of customization. Occasionally, a Software Engineer may need to toss in an hour or two to write a custom component subclass or new component.
The components are built to be self-describing and, within certain circumstances, automatically collaborate. We can mix & match promotion aspects and they'll work to integrate themselves. The efficiency it's given us has allowed this company to survive the dot-com bust with a tiny number of employees and expense. Now that business is actually picking up, productivity is still so high that we don't need many more people yet. And it's kept me in a good job all through these rough times.
It's really good stuff, and I'm very proud of it. In a way, it's the culmination of my last 8 years or so of work on the web. The problem is... This technology will likely never leave this company. I've spent my past two years refining it, and it will probably never be seen outside the 2 dozen or so employees of this company, only 3-5 of whom really know what it's about.
Which brings me to things like this: Getting Noticed? from Eric Olsen (via Steve Ivy, et al.). "As the volume of blogs has ballooned well into six-figures, the need for links from 'star' blogs has become an absolute requirement to be noticed."
But I think this is how things go in the world in general. It's a big, big place. To be noticed in it takes some work.
So here I am, an alpha geek and a ToolBuilder spinning in my own circles, hoping someday to have my name up in lights.
http://www.decafbad.com/twiki/bin/view/Main/ReleaseEarlyReleaseOften
[ ... 490 words ... ]
-
Psst. Another version bump. Playing around with breaking filters out into their own definitions, to be referred to by the pipelines instead of embedded in them. This way, once I get around to doing the web UI to manage everything, I can have filter creation, acquisition, and trading all done separately from pipeline management.
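The real tool lives in UserTalk inside Radio, so this isn't the actual code, but the shape of "filters defined once, pipelines referring to them by name" is roughly this Python doodle (filter and pipeline names made up):

```python
# Filter definitions live on their own, so they can be created, traded, and
# swapped independently of any pipeline that uses them.
filters = {
    "upperCase": lambda text: text.upper(),
    "shortcuts": lambda text: text.replace(":radio:", "RadioUserLand"),  # toy stand-in
}

# Pipelines just name the filters they want, in order.
pipelines = {
    "shoutingGlossary": ["shortcuts", "upperCase"],
}

def send_data_through_pipeline(name, data):
    # Look up each referenced filter by name and run the data through in order.
    for filter_name in pipelines[name]:
        data = filters[filter_name](data)
    return data

print(send_data_through_pipeline("shoutingGlossary", "I posted this from :radio: today."))
```

Since pipelines only hold names, a web UI can manage filter creation, acquisition, and trading separately from pipeline management, which is the whole point of the split.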
Haven't heard much feedback from anyone using the tool, if anyone's using it. So for now, it's a fun exercise in how to get a Radio Tool put together from A-Z. Still learning the idiom. And I like what I see thus far.
Now if only I had a faster Mac and Radio didn't die on me as much.
[ ... 114 words ... ]
-
Awhile back, Aaron was playing with a Memepool-izing web service I whipped up from his code.
Now I'm testing it out with my RadioUserLand PipeFilters tool:
Four score and seven episodes ago our slayer brought forth on this continent a bunch of dead vampires, conceived in the hellmouth, and dedicated to the proposition that all undead are destroyed equal.
[ ... 60 words ... ]
-
Just published a new version (v0.3.5) of my RadioUserLand Tool, PipeFilters:
- Added a 'shortcuts' filter to use the new Shortcuts variant of the glossary.
- Virgin data contains pipelines using the new shortcuts filter (you may wish to copy some of them to your pipeFiltersData)
- Added a to-do list to the pipeFiltersInfo
- Cleaned up a few installation bugs
- Cleaned up a few bugs in pipeFiltersSuite.sendDataThroughPipeline()
Unfortunately, no web interface to manage pipelines yet.
[ ... 67 words ... ]
-
4/6/02; 6:51:52 PM by LMO -- Outline diffs and Jabber?
Is Jeremy Bowers doing outline diffs as a Jabber conference?
Screenshot (should open in browser)
- If so... WOW.