Archive for the ‘web’ Category.

Don’t try to make me spam my contacts

High-quality social network sites grow because contacts are real, and site-mediated communication is welcome. For example, LinkedIn from the beginning treated contact information very carefully, never generating any email except by explicit request of a user. That made it feel safe to import my contacts, since I wasn’t exposing my colleagues to unexpected spam. (LinkedIn has loosened up a bit. Originally you could not even try to connect to someone unless you already knew their email address; later they made it easier to connect to people found by search alone, and you can pay extra to send messages to strangers. Nonetheless, in my experience every contact is still user-initiated.)

Low-quality social network sites grow by finding ways to extract contacts from people so the system can spam them, or by tricking users into acting as individual spam drones. (A worst-case example is those worm-like provocative wall postings that, once clicked, cause your friends to appear to post them as well. Just up from that on the low rungs are the game sites that post frequent progress updates to all your friends.)

I’m a joiner and early adopter, but I rarely invite people to use a service they’re not already using. That’s my way of treating my contacts respectfully, and protecting my own reputation as a source of wanted communication, not piles of unsolicited invitations.

Google Plus has recently taken a step toward lower quality by changing its ‘Find People’ feature. Previously it identified and suggested existing Google Plus users separately (good). Now it suggests everyone in your contact list and beyond, without indicating whether they already use Google Plus. In effect, Google is nudging me toward being an invite machine for them.

As a result, Google Plus will get less high-quality social-network building (among people who respect their contacts and take care with their communication), and more low-quality social-network building (piles of invites from people I barely know). If it goes too far downhill, Google will endanger the willingness of high-quality users to let Google know anything about their contacts or touch their email.

Diigo takes over Furl

For a few years I have used Furl as my personal bookmarking tool.
Del.icio.us had a better user interface and published much more pleasant RSS and HTML, but it lacked one feature: cached copies of web content.

Now, Diigo is taking over Furl. The takeover was announced a week ago; Furl is no longer accepting new bookmarks, and my old data is now migrating into Diigo (probably without the cached content, but we’ll see).

I’m hopeful about its personal usefulness: Diigo does support cached pages, and it seems pretty flexible in its other connections to the world. There’s also yet another superfluous social-networking database, which I’ll be ignoring.

blog backup online – out of beta

I’ve been using the blogbackuponline beta since last April.

It just works.

Now it’s out of beta. I recommend it. (I’d recommend it even if Techrigy didn’t offer a small incentive to share the experience.)

blog backup

I participated in the public beta of BlogBackupOnline.com; since then the service has gone live and, for now, remains free. Signing up is relatively effortless, and now I have an extra, up-to-date copy of my blog content without any administrative effort on my part.

They don’t back up image content yet, but they’re working on it. I haven’t tried using their restore feature to migrate from one platform to another, but it looks like that would be a lot easier than my previous export/import from Radio UserLand to Movable Type to WordPress.

Cross Site Cooking

Michal Zalewski identifies a new class of attacks that he dubs Cross Site Cooking:

There are three fairly interesting flaws in how HTTP cookies were designed and later implemented in various browsers; these shortcomings make it possible (and alarmingly easy) for malicious sites to plant spoofed cookies that will be relayed by unsuspecting visitors to legitimate, third-party servers.

While a well-coded web application should be designed to resist attacks from hostile HTTP clients, these new attacks turn every browser into a hostile HTTP client, and it’s a good bet that many web applications are hanging on a pretty thin thread of “this can’t happen” assumptions, soon to be violated. Expect a large number of embarrassing vulnerability reports to ensue.
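One way an application can resist cookies it never issued is to sign each cookie value with an HMAC and reject anything that fails verification. A minimal sketch, assuming Node.js; the SECRET constant and function names are placeholders of my own, not anything from Zalewski’s advisory:

const crypto = require('crypto');

const SECRET = 'replace-with-a-real-server-side-secret';  // placeholder

// Issue a cookie value as "payload.signature".
function signCookie(value) {
  const mac = crypto.createHmac('sha256', SECRET).update(value).digest('hex');
  return value + '.' + mac;
}

// Verify an incoming cookie; returns the payload, or null if the
// cookie was planted or altered (e.g. via cross-site cooking).
function verifyCookie(signed) {
  const dot = signed.lastIndexOf('.');
  if (dot < 0) return null;
  const value = signed.slice(0, dot);
  const mac = Buffer.from(signed.slice(dot + 1));
  const expected = Buffer.from(
    crypto.createHmac('sha256', SECRET).update(value).digest('hex'));
  if (mac.length !== expected.length) return null;
  // Constant-time comparison avoids leaking the signature via timing.
  return crypto.timingSafeEqual(mac, expected) ? value : null;
}

console.log(verifyCookie(signCookie('session=alice')));  // 'session=alice'
console.log(verifyCookie('session=mallory.deadbeef'));   // null

A planted cookie still arrives at the server, but it fails verification, so the “this can’t happen” assumption becomes something the code actually enforces.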

[via http://del.icio.us/emergentchaos/new.attack.class%3F]

The right way to create pop-up windows

Aaron Boodman:

Forget everything you know about creating pop-up windows. Most importantly, forget you ever knew that the javascript: pseudo-protocol ever existed. Do you hear me??

Never, ever, ever use the javascript: pseudo-protocol for anything, ever ever ever ever again. Please. Pretty please. The next time I click on a hyperlink, only to have it cause an error in my browser, I am going to hunt down the author and pound them into holy oblivion.

and the correct code is as follows:


<a
href="http://google.com/"
onclick="window.open(this.href, 'popupwindow',
'width=400,height=300,scrollbars,resizable');
return false;"
>
Click me any way you desire, now or later, bookmarked or not.
I will not attempt to control you, nor punish you, for I am a
simple hyperlink; eager to do your bidding, while remaining ever
helpful. I anticipate desires, but never trample possibilities.
This is the way of the Link.
</a>

See also the comments for subtleties, such as pop-up forms.
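The same pattern can also be attached from script rather than inline, so the markup stays a plain hyperlink. A sketch, assuming a popup class name chosen here purely for illustration (it is not part of Boodman’s example):

<a href="http://google.com/" class="popup">a plain, bookmarkable link</a>

<script>
// Run from the end of <body> so the links already exist. With
// scripting disabled, the link degrades to ordinary navigation.
var links = document.getElementsByTagName('a');
for (var i = 0; i < links.length; i++) {
  if (links[i].className !== 'popup') continue;
  links[i].onclick = function () {
    window.open(this.href, 'popupwindow',
                'width=400,height=300,scrollbars,resizable');
    return false;  // suppress normal navigation only on a plain click
  };
}
</script>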

[Via Jim O’Halloran]

unescaped, escaped, double-escaped

Tim Bray explores the mess related to escaping HTML/XML information:

The policy ideally should be, I think, that all data in the Your Code block has to be known to be escaped or known to be unescaped. That is to say, you always do escaping on the data at the pointy end of the input arrows, or you never do it.

I think always-unescaped is a little better, since some of those output arrows might not be XML or HTML, but probably they all are; so always-escaped is certainly viable.

and then it gets worse, as treatment of HTML in RSS aggregators varies.

The same problem presents itself in cross-site scripting and code injection attacks.
It’s the bane of macro language beginners too, whether it’s shell or troff.
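A small sketch of the always-unescaped-inside policy in JavaScript; escapeHtml is a hypothetical helper of my own, not from any particular library:

// Data is stored raw; escaping happens exactly once, at the output
// boundary ("the pointy end of the output arrows").
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;');
}

var title = 'Fish & Chips <cheap>';         // raw inside Your Code

console.log('<h1>' + escapeHtml(title) + '</h1>');
// <h1>Fish &amp; Chips &lt;cheap&gt;</h1>  -- escaped exactly once

console.log(escapeHtml(escapeHtml(title)));
// Fish &amp;amp; Chips &amp;lt;cheap&amp;gt;  -- the double-escaping mess

Escape twice and readers see literal &amp; in the page; escape zero times and you have an injection hole. The point of the policy is to make “exactly once” something you can check.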

Ten XForms Engines

Micah Dubinko, author of XForms Essentials, lists his Ten Favorite XForms Engines on XML.com:

It turned out that progress on XForms technology was happening so rapidly anything in print would have been quickly outdated. An online approach seemed more sensible.

[via Slashdot]

BitTorrent for RSS content distribution

Steve Gillmor: BitTorrent and RSS Create Disruptive Revolution.

My first reaction: a good idea.

On second thought, it’s all a question of balance and tradeoffs.

  • Most RSS publishers are low volume, and the cost of supporting a small number of RSS pollers is insignificant, especially when pollers use HTTP conditional GET (see the sketch after this list).
  • Since BitTorrent’s intended application is content distribution of large files, for small sites the cost of supporting BitTorrent downloads of tiny RSS files may exceed the cost of HTTP polling.
  • At some point in the subscription curve, the multitude-of-pollers model becomes too costly and the publisher wishes they had figured out a content distribution mechanism instead.
  • Sites transitioning from low-traffic to high-traffic HTTP slam their foreheads in just the same way. So it’s not a new issue.
  • The solution for HTTP has been to wait until you need it, then build or buy high-end content distribution. Replicate. Akamize. This works, except when it doesn’t. (Most web servers are small and are subject to the Slashdot effect.)
  • There is currently no trivial smooth transition from small to large.
  • A low-overhead automatic ad-hoc content distribution network would be great for both RSS and HTML distribution. Maybe BitTorrent fits that bill, maybe something else. Further research is called for.
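For comparison, the polling model that BitTorrent would replace is already fairly cheap when clients use HTTP conditional GET. A hedged sketch, assuming Node.js; FEED_URL and the 30-minute interval are placeholders:

const https = require('https');

const FEED_URL = 'https://example.com/feed.xml';  // placeholder
let etag = null;
let lastModified = null;

function poll() {
  const headers = {};
  if (etag) headers['If-None-Match'] = etag;
  if (lastModified) headers['If-Modified-Since'] = lastModified;

  https.get(FEED_URL, { headers }, (res) => {
    if (res.statusCode === 304) {
      res.resume();  // unchanged: the server sent headers only, no feed body
      return;
    }
    etag = res.headers['etag'] || etag;
    lastModified = res.headers['last-modified'] || lastModified;
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => console.log('feed changed:', body.length, 'bytes'));
  });
}

poll();
setInterval(poll, 30 * 60 * 1000);  // re-poll every 30 minutes

Each unchanged poll costs the publisher one 304 response of a few hundred bytes, which is why the pain only arrives far up the subscription curve.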

How to link without PageRank