13 Aug 2004 MP hits back over spoof weblog
Proxy Blogs for MPs hits pay dirt. Compare the proxy here with David Lepper's official site here. Even a blind bat could not mistake one for the other. I particularly liked these comments from David and others.

Mr Lepper said: "It's highly objectionable that someone is posting information on the internet claiming it has been written by me. I'm particularly concerned because there is an email address given which purports to be my own. I would hate anyone to think the responses from this address came from me." The MP believes he was contacted by the author of the web site, who complained that his real web site has not been updated.

Dr Nick Palmer, secretary of the all-party Parliamentary internet group, said: "Blackmailing MPs into creating blogs is not the best way. If they want the MP to blog, they should get in touch and ask. David Lepper is a serious and dedicated MP and does not deserve to be ripped off in this way."

I wonder which bit of the proxy's disclaimer they find hard to understand: "Almost David Lepper, MP. I'm NOT David Lepper, the MP for Brighton Pavilion. Brighton Pavilion has to be one of the most wired places on the planet. I believe in democracy and I believe in the media networking revolution. I believe MPs should blog. If David Lepper, MP, did blog, it might look something like this."

I guess David prefers to keep his head down and out of the blog spotlight. Welcome to the Internet, David! Incidentally, I discovered all this from a news aggregator of UK Political blogs that I recently launched.

[Edited to add] Compare this with Alan Milburn. The proxy author went to an MP's surgery and has convinced Alan to allocate one of his constituency workers to start posting items to the blog.
[from: JB Ecademy] [ 13-Aug-04 1:10pm ] 10 Aug 2004 Oh Lazyweb, give me a Firefox BitTorrent extension built into the download manager, so that downloading things like Windows XP Service Pack 2 becomes completely transparent and more efficient, and I can act as a seed whenever Firefox is running.
[ 10-Aug-04 7:36am ] 09 Aug 2004 Late last year and this year have seen a huge increase in the use of the internet by US political parties and action groups. From the Dean campaign, to the (somewhat fake) weblogs from the presidential election candidates, to Kerry's donation drive, to bloggers at the Democratic Convention, to local activist websites, it feels like they've finally embraced the Internet as a means to do politics.
By contrast, the UK's political use of the net is primitive and stuck in a time warp from 5-8 years ago. There are some bright spots in sites like They Work For You, but that's about it. Quite a lot of MPs have websites, but they are generally static, built with Frontpage and rarely updated. The main party sites are formulaic, with no community, no route to talk back, no attempt to engage. Amazingly, the Conservatives have an RSS feed of news, but they are alone. There are surprisingly few weblogs that focus on UK politics. Generally, the Internet side of UK politics looks about as apathetic and uninterested as the offline side. So what I'm looking for is people who can help change this. Specifically:

- An email writing campaign aimed at anyone involved, to encourage them to build interactive community sites and to get the current sites to generate RSS from their existing news pages. This includes the major media. The BBC, Independent and Scotsman have RSS feeds. That's all. The Telegraph and Guardian have some feeds but nothing specifically on politics.
- People who can build and host community sites for local political chapters and activist groups.
- People prepared to lobby their MP or councillors to start a blog. And if they won't, to start a blog on their behalf.
- People who can try to get themselves accredited to the upcoming party conferences as "Blogging Journalists" and then provide an alternative view to the traditional big media TV clip of "Auld Lang Syne".
- New and interesting hacks that leverage the existing sites in the style of FaxYourMP or Public Whip.

I'm sure there are plenty of other ideas. The aim is to try to drag UK politics into the 21st century. If enough (any!) people come forward, we can at least start a club here on Ecademy and then whatever else it takes. As a start, I threw together a site over the weekend that provides an aggregated view of all the news and blog sites I could find in UK politics.
This was inspired by a US site that did the same thing for the Democratic party convention. [from: JB Ecademy] [ 09-Aug-04 2:40pm ] 08 Aug 2004 Late last night I flicked the switch on a public aggregator of UK Political Feeds.
http://www.voidstar.com/ukpoliblog/ If people find it useful, I'll give it its own domain. It could do with a graphics makeover as well. In response to "Can we blog the Labour Conference" and with a tip of the hat to http://www.conventionbloggers.com/ As for media RSS in this area, I've found the Scotsman and The Independent. While searching for feeds, I discovered some things:

- Quite a lot of UK politicians have websites. Almost without exception they are Frontpage driven from a packaged template. Consequently, they rarely get updated and have no RSS.
- There's a surprisingly large number of blogger/blogspot driven blogs that have no feed, and where people have jumped through frames to put them on real domains.
- There's a movement gathering momentum to create proxy blogs for prominent politicians: if they won't run one, we'll run one for them.
- A LibDem group has started a Meetup. So someone's watching the USA.

But generally, UK political use of the internet seems to be primitive. [ 08-Aug-04 8:13am ] 07 Aug 2004 Now up at http://www.voidstar.com/ukpoliblog
[ 07-Aug-04 7:10pm ] 06 Aug 2004 MS have a beta of a Web version of MSN Messenger.
Unfortunately you can't be in two places at once, so if you sign in to this it will knock you off the real MSN Messenger client somewhere else. [from: JB Ecademy] [ 06-Aug-04 1:40pm ] I'm looking for UK political blogs, commentators and newsfeeds. Can anyone recommend some good lists or URLs? [from: JB Ecademy]
[ 06-Aug-04 1:40pm ] 04 Aug 2004 Blunkett's latest initiative (don't get me started) to get tough on crime (and the causes of crime) is a drive to banish the scourge of graffiti from our inner cities.
I don't have a problem with graffiti. I have a problem with how bad most of it is. If all graffiti artists were as good as Banksy, I wouldn't mind a bit. But the vast majority consists of little more than a hasty and unreadable tag layered on top of hundreds of other hasty tags. Rather than an on-the-spot fine or temporary incarceration in a police cell, maybe what we should actually be doing is sending them to art school. [from: JB Ecademy] Following yesterday's news about record oil prices: OPEC is pumping to capacity. Iraq is effectively off-stream. China is now the second largest oil importer after the USA and ahead of Japan. China's oil imports are growing at 20% a year.
Something's got to give. This wasn't supposed to happen for another 10-20 years. Looks like the curse of "interesting times" is upon us. So how do we get out of this one? Detailed analysis here. [from: JB Ecademy] [ 04-Aug-04 9:10am ] 02 Aug 2004 I've got photographs that are 50 years old. I've got vinyl records that are 35 years old. I've got letters that my grandmother sent and received that go back to between the wars. Some of my books are more than a hundred years old, although a big slice of the ones I bought are 30-year-old paperbacks on cheap paper.
In these days of digital cameras, MP3s, email and websites, how would you preserve any of the content you are collecting now for similar periods of time? CDs degrade, disk drives go bad, tape loses its magnetism, and so on. And the formats may not even be around at some stage in your lifetime. Maybe it's time to pay attention to The Long Now Foundation. [from: JB Ecademy] [ 02-Aug-04 3:10pm ] The last few days have got me thinking about single sign on again. I'd appreciate some feedback about all this. Here's an overview.
Summary: A simple scheme to provide distributed login and profile management. A typical use case might be a user logging in to a weblog or discussion site to leave comments. The user does this by referring the new site to their own home site, where their login and profile are maintained.

Explanation: idp.com is an Identity Provider. It maintains a master copy of a profile, provides authentication to service providers, manages global login, and can distribute profile changes to service providers. sp.com is a Service Provider. It requests login authentication and profiles from idp.com, receives profile updates and logout messages from idp.com, and can tell idp.com to do a global logout. Any participating site should be able to take either role.

Entry and function points

idp.com:
- Remote login for sp.com. sp.com redirects to idp.com for a login page and authentication. On success the user is redirected back to sp.com.
- Check login. sp.com asks idp.com if user #23 is still logged in. No UI.
- Get FOAF with UI. sp.com asks idp.com for a copy of a user's FOAF profile. The user gets some UI to authorise this.
- Get FOAF auto. sp.com asks for a copy of user #23's FOAF profile to update a local account.
- Logout globally. The user at idp.com, or sp.com tells idp.com, that user #23 has requested a global logout. idp.com informs all SPs that have used remote login for #23 that the user is logged out.

sp.com:
- Global logout. idp.com tells sp.com that #23 should be logged out.
- Account update. idp.com tells sp.com to fetch a new copy of a FOAF profile for #23 to refresh its local details.

Federation on the fly: In some scenarios, all this is set up on the fly. idp.com knows nothing about sp.com until the first time the user requests a login or profile. The upside is that no prior approval is needed. The downside is that Mike Malware will game the system. We're beginning to see things like trackback spam, so this will happen.
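To make a couple of those entry points concrete, here's a minimal Python sketch of the sp.com side of remote login and the no-UI check-login call. The endpoint paths (/login, /check_login) and parameter names are my own illustrative assumptions; the scheme doesn't fix them.

```python
# Sketch of two idp.com entry points from the scheme above.
# Paths and parameter names ("return", "user") are assumptions,
# not part of any agreed spec.
from urllib.parse import urlencode

IDP = "http://idp.com"

def remote_login_url(return_url):
    """Remote login: sp.com redirects the user here; on success
    idp.com redirects them back to return_url."""
    return IDP + "/login?" + urlencode({"return": return_url})

def check_login_url(user_id):
    """Check login, no UI: sp.com asks whether user #user_id is
    still logged in at idp.com."""
    return IDP + "/check_login?" + urlencode({"user": user_id})

url = remote_login_url("http://sp.com/comment.php")
```

A global logout would then just be idp.com replaying a similar GET against every sp.com that has used remote login for that user.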
In theory, idp.com can maintain a blacklist of SP sites. Equally, sp.com can keep a blacklist of IDP sites. But this is auditing after the fact.

Prior approval: idp.com and sp.com need to do a handshake before anything works. This has to have human involvement and approval. It's expected that big sites would form themselves into rings of trust like this. This in turn needs some admin function to maintain the ring. This is not black and white. There are blends where some functions are allowed on the fly, while others require prior approval.

Discovery: One of the problems I've hit with this as an API is working out the locations of the entry points. Almost all of it can be done on the fly with a very few named parameters. We can use things like auto-discovery from home pages for some of it, or ask the user on the fly. But some of it looks like it needs to be pre-defined. Previously Josh had suggested there are only a few options:

1/ convention -- "everyone does this specifically at 'http://source.com/login.php'"
2/ registry -- "go to http://registry.com/foaf-login?site=source.com"
3/ discovery -- "go to http://source.com/meta?type=whereToLogin" to get the URL of where the user should login [which is just convention + a level of indirection]
4/ use the user. :)

The tricky entry points are:
- IDP.com: Check Login Status
- IDP.com: Get FOAF Auto
- IDP.com: Logout Globally
- SP.com: Global Logout
- SP.com: Profile Update

There's something interesting happening around sxip.com here. I don't yet know what. 01 Aug 2004 Online Business Networks Blog » Scott Stratten's Ryze success story
This guy has discovered a whole set of networking principles. The story is about using Ryze, but they apply equally to Ecademy. Here's a few; read the whole post for the others.

- Join some networks of interest and write well thought out, interesting and informative responses to other people's questions.
- Never post something in a network or guestbook where the only person that stands to gain is yourself.
- Before you post something, think "What would happen if everyone did this?"
- Understand that being on Ryze is not your right, but a privilege, and think of ways you can enhance things.

And crucially: "My goal was simple: Just give. Give good information, create an environment where people could exchange quality information with each other and also build a network of local business owners (Toronto) that could become closer colleagues." [from: JB Ecademy] [ 01-Aug-04 3:10pm ] 31 Jul 2004 This is now on the FOAFnet wiki.
[ 31-Jul-04 7:55pm ] If the proposal for remote authentication for FOAF collection works then we could add some auto-discovery to it.
How about this in the home page html:- <link rel="meta" type="text/html" title="FOAFnet" href="url_for_foafnet_api" /> Then the user on target.com just needs to say "You can get my FOAF from source.com". The application would retrieve the source.com home page, get the url_for_foafnet_api, construct a url like url_for_foafnet_api?return=my_url and redirect the user to it. DanBri has also suggested we look at creating some foaf tags for this so that aggregators could collect together lists of participating sites. I don't think this would be used on the fly to discover the API URL but I can see benefit in publishing the locations in machine readable form. This would mean inserting some triples in people's FOAF that said "This is some cut down FOAF. Full FOAF can be obtained with my permission by using the FOAFnet API that is located here." [ 31-Jul-04 9:19am ] Following on the previous posts, I've now got an implementation at Ecademy.
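As a sketch of the auto-discovery step just described, here's what the application on target.com might do with the retrieved home page: find the FOAFnet <link> tag and construct the redirect URL. The tag attributes follow the proposal above; the class and function names are my own, and fetching the page itself is left out.

```python
# Parse a home page for the FOAFnet <link rel="meta"> tag proposed
# above, then build the url_for_foafnet_api?return=my_url redirect.
# Names here are illustrative assumptions, not a published API.
from html.parser import HTMLParser
from urllib.parse import urlencode

class FoafnetLinkFinder(HTMLParser):
    """Collect the href of <link rel="meta" title="FOAFnet">."""
    def __init__(self):
        super().__init__()
        self.api_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "meta" and a.get("title") == "FOAFnet":
            self.api_url = a.get("href")

def discover_redirect(home_page_html, return_url):
    """Return the URL to redirect the user to, or None if the
    home page doesn't advertise a FOAFnet API."""
    finder = FoafnetLinkFinder()
    finder.feed(home_page_html)
    if finder.api_url is None:
        return None
    return finder.api_url + "?" + urlencode({"return": return_url})

sample = ('<html><head><link rel="meta" type="text/html" '
          'title="FOAFnet" href="http://source.com/foafnet.php" />'
          '</head></html>')
```

If the tag is missing, the application falls back to asking the user for the API URL directly.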
The API URL is http://www.ecademy.com/module.php?mod=foafnet This is what the user will need to paste in or choose from a drop down. It takes one extra parameter, "return". This is the URL where you want the user to come back to. Don't forget to urlencode this. So the requesting application needs to redirect to http://www.ecademy.com/module.php?mod=foafnet&return=return_url for example:- http://www.ecademy.com/module.php?mod=foafnet&return=http://www.voidstar.com

The URL above takes the user to a login form. If they're already logged in to Ecademy they just get an Approve button. On hitting the Approve button or supplying a valid ID+Password, they are redirected back to the return URL with "foaf=url_to_get_your_foaf_from_Ecademy" appended on the end. The foaf variable is the one-time URL to collect the FOAF. It's escaped, so you'll need to urldecode it before using it. It's typically something like http://ecademy.com/module.php?mod=foafnet&op=foaf&hash=a_hash where a_hash is the first 16 chars of an MD5. The URL will work for 5 minutes and will have checks for validity and that the domain requesting the FOAF is the same one that was in the return URL. The hash will only work one time. For the moment, all but the 5 minute check are commented out. If any of the checks fail you'll get an empty http page. This could be something like a 404.

The FOAF returned includes all the contact and private info I have, including all the stuff I normally keep out of the public FOAF like mbox, street address, post/zipcode and so on. The receiving application at the return URL needs to pick up the FOAF URL from the foaf CGI variable, use curl or something like it to collect the FOAF, parse it, and then do something useful with it before displaying some UI. Assuming you've got an Ecademy account, you can test all this in a browser with a bit of cut and paste. Behind the scenes at Ecademy, I've got a table of valid hashes.
This has the requesting domain, a timestamp, the Ecademy ID# of the user providing the approval, and the hash. When the FOAF is requested, the hash is looked up in the table, the timestamp and domain checked, the hash regenerated and compared. If everything checks out, the FOAF is returned and the record deleted.

This is all very similar to work done by myUID for remote authentication. I'm going to work on seeing if I can extend it to provide an open implementation of single sign on. Something I've been wanting to do for a year now.

Incidentally, a couple of days ago, I changed the Ecademy FOAF so that if you're logged in and you request your FOAF, it bypasses the privacy controls and gives you a FOAF file with all your contact data in it. The implementation above gives you a way of telling a third party to get the same FOAF without giving them your Ecademy ID and Password.

[ 31-Jul-04 9:12am ] The underlying problem here is to create a mechanism where Alice can tell target.com to get her authenticated and approved FOAF from source.com without giving her source.com ID+Password to target.com.
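For what it's worth, the table of valid hashes described above could be sketched like this in Python. An in-memory dict stands in for the real database table, and the secret, the TTL and all the names are my own assumptions, not the Ecademy code.

```python
# Sketch of the one-time hash table: issue a hash tied to a
# requesting domain and user, then redeem it once within 5 minutes.
# The lookup regenerates the hash and compares, checks the timestamp
# and domain, and deletes the record so the hash only works one time.
import hashlib
import time

SECRET = "server-side-secret"   # assumption: some per-site secret
TTL = 300                       # the 5 minute window
_table = {}                     # hash -> (domain, timestamp, user_id)

def issue_hash(domain, user_id):
    """Create a one-time hash: first 16 chars of an MD5."""
    ts = time.time()
    raw = f"{SECRET}:{domain}:{user_id}:{ts}"
    h = hashlib.md5(raw.encode()).hexdigest()[:16]
    _table[h] = (domain, ts, user_id)
    return h

def redeem_hash(h, requesting_domain, now=None):
    """Return the user_id if everything checks out, else None."""
    now = now or time.time()
    entry = _table.pop(h, None)          # delete the record either way
    if entry is None:
        return None
    domain, ts, user_id = entry
    expected = hashlib.md5(
        f"{SECRET}:{domain}:{user_id}:{ts}".encode()).hexdigest()[:16]
    if h != expected or now - ts > TTL or domain != requesting_domain:
        return None
    return user_id
```

Because the record is popped before the checks, a failed or repeated redemption consumes the hash too, which is the conservative choice.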
0) User is on target.com and chooses a link to "create account using FOAFnet".
1) User chooses source.com from a drop down, or copies in a URL. The URL might be http://source.com/login.php
2) target.com redirects to the source.com login page with a parameter which is the URL to return to at target.com, eg http://source.com/login.php?return=http://target.com/accountcreate.php (suitably escaped)
3) source.com displays a login page. User logs in.
4) If successful, user is redirected back to target.com with a URL to collect the FOAF as a parameter. This URL stays valid for (say) 5 minutes, eg http://target.com/accountcreate.php?foaf=http://source.com/foaf.php?hash=e42b34b637b3c06d872e5
5) target.com collects the FOAF from the URL.
6) source.com verifies that the hash is valid and hasn't timed out. It might also check that the domain requesting it is the same as the redirect URL when it was created. If it all checks out, it deletes the record so the hash can't be used again.
7) source.com returns the FOAF.
8) target.com processes the FOAF, creates the record and thanks the user.

As far as the user is concerned, all they had to do was identify source.com (via drop down choice or URL) and then sign in. If some federation is required, then source.com can check the referer field at steps 3 and 6 to confirm that target.com is known to it. There's probably some MD5 trickery and additional timestamp parameters to avoid having to store the hash. But I'd take the stupid route and just store it on the user's record along with the timestamp.

So all we've used here are http redirects and GET calls. We've got two named parameters, "return" and "foaf". And we've got a simple process. This shouldn't be hard to implement in any web-aware language.

[ 31-Jul-04 8:53am ] Here's a post from the FOAFnet mailing list. I'm going to copy a collection of the critical ones here.
FOAFnet Aims: We want to get to the point where Alice can create a new account at target.com using the account information at source.com, with FOAF as the transport mechanism. In addition, target.com should populate the friends list with people Alice knows at source.com who are already members.

Export: No site is going to export contact information for Alice without Alice's approval. Publicly accessible FOAF for Alice should not include contact info, because Alice doesn't get the chance to approve the export. Equally, even if Alice gives approval, Alice's friend Bob hasn't, so we should never put Bob's contact info into Alice's FOAF. But we can put in mbox_sha1sum or other IFPs like homepage URL so that existing members can be matched. We don't have a mechanism for an application at target.com to pass Alice's approval to source.com. So for testing, Alice will have to use a browser to save the source.com FOAF file locally. That way we can use existing login processes to authenticate. But then Alice will have to manually upload the FOAF to target.com, either by a file browse control or by pasting it into a textarea. This is not a long term solution, but it will let us do a proof of concept and write all the import routines.

Import: We still have to write the first import routine, even using the stone age manual methods described. We've now got some Java and PHP routines that can help.

Federation: In a final system, we can imagine groups of large sites that agree to import from each other. The numbers are likely to be relatively small, so a drop down list could be provided to the user. The user will still have to provide some ID (like an ID#, nick or email address) and some authentication like a password. We'll need some backend admin to define, for each participating site, how to collect the FOAF and how to pass the ID and password.
We can also imagine large numbers of smaller sites that would like to participate, and any one target cannot maintain a list of all of them. So if the target will accept any of them, we'll have to provide a URL text field and the API to use, as well as the ID+Password. It may be possible to combine these into fewer fields the way Drupal has done with their external login. Even if the sites are distributed, they may use some standard or be based on the same software. So we might be able to solve this generically for any Drupal, Typepad, Movable Type, Wordpress, Jabber or LDAP site and so on.

Authentication: Anywhere ID+Password is used there's a potential security risk. So ideally the source.com credentials should never be given to target.com; instead we should use a single signon and temporary key method. The use cases and programming patterns for this have been well documented by the Liberty group. This whole authentication area is not really part of FOAFnet, but it's unavoidable because we're talking about information that users rightly consider private. Unfortunately there's no big market leader here with a protocol that has usable implementations on all likely platforms. We have to count out Passport and Liberty, mainly due to platform issues. So this is an opportunity for a new player to appear, whether commercial/proprietary or free/open source.

FOAFnet Road Map: So. Hit the Road, Map.
1) Build FOAF Export using existing authentication
1a) Get existing FOAF export up to scratch
2) Build FOAF Import from saved files
3) Solve remote authentication
4) Build UI to let the user choose a source.com for their FOAF and get the FOAF at run time

[ 31-Jul-04 8:43am ] 28 Jul 2004 A few days ago we were talking on IRC about how much RDF and XML there is on the web. We stuck a finger in the air and got 15 million FOAF and RSS files of structured, machine readable data right now. And it's growing at the same rate as the number of weblogs, with spikes as each new major provider joins in.
This prompted a question to which we didn't really have an answer: "What should Google do with the RDF/XML/RSS/Atom it finds?" Then today along comes this mind-boggling essay that looks at one possible scenario: August 2009: How Google beat Amazon and Ebay to the Semantic Web. Truly, a mind bomb. BTW, it's now 2 years since Google introduced their SOAP API. It still doesn't support anything except basic search. There's still no RSS/Atom feed from search, news search, Images, Froogle etc. [from: JB Ecademy] [ 28-Jul-04 1:40pm ] 27 Jul 2004 [ 27-Jul-04 4:40pm ] You asked for it. You got it. (You see, I do read the wishlist.)
In member search, either full text or on the advanced page, you can now specify "My Network only". Let me know if you see anything odd (on Ecademy, not in your life; well, actually that too). [from: JB Ecademy] [ 27-Jul-04 1:10pm ]


