one more post about ApacheCon 2003

Rod Chavez
Dec. 02, 2003 04:20 PM

i got the opportunity to attend ApacheCon 2003 in Las Vegas (Vegas baby! <g>) two weeks ago. i thought i'd blog my notes so that you could get a feel for what was presented and how it was received. given BEA's growing commitment to open-source and Apache, i was looking forward to an interesting conference (and i wasn't disappointed). oh, there's also an official conference wiki you can check out too

Stefano Mazzocchi, How the Apache Software Foundation Works - plenary

this was a (quite good) overview of how the ASF (Apache Software Foundation) works for those people who aren't already members. topics covered include:

  • lazy consensus
    in order to streamline decision making and to prevent gridlock and bureaucracy, the ASF has a method of voting on decisions that ignores all abstaining members. then, if someone votes no ("-1"), they can't just use that as a veto. they must supply a reason for their objection as well as help to resolve it. the goal is to provide a measure of oversight and input, while at the same time keeping things vital and dynamic. i think some other standards groups could learn a lot from this model
  • project types
    when people think about Apache projects, they tend to think about the software projects (Apache, Ant, XmlBeans, etc), but there are "auxiliary" projects as well: the web-site, wikis, CVS servers, email archives, bug-tracking, and the distribution and download sites. each one of these is critical to the smooth functioning of the ASF, and without them the ASF wouldn't be nearly as capable
  • the Incubator
    created a year ago, this is where new projects being proposed for inclusion into Apache go while the members of that project get "well integrated" into the ASF. there are no hard and fast rules about how long a project remains there, but the steps involved are outlined here. there are several projects currently in the Incubator, and one that's made it all the way through and into Apache
  • a new license
    Apache is working on a new license. under development for more than 2 years, they hope it will be ready soon. there are a number of goals for the new license, like dealing better with patent issues, trademarks, and other concerns. if you're going to participate in Apache or use Apache software, you should be aware of what's coming
  • security
    mentioned by Stefano, this was a recurring issue raised at several sessions throughout Monday. like spam, security is being viewed as a critical issue within the ASF. the perceived security of a given project is seen as a litmus test of its viability

John Fowler, Looking Ahead: Challenges for Open Software - keynote

i was a bit worried when this talk started that it was going to be a pure marketing pitch for Sun because one of the first slides was a list of Sun's "strategic initiatives". but despite that slide, it was a pretty good deck

  • trust
    huge problem for everyone. internet scams, intrusions, viruses (some mandatory Sun digs at Microsoft's expense) are costing billions a year worldwide (USD). he predicts total losses per year to exceed 10 billion USD very soon
  • connectivity
    cost of connecting is dropping, overall network bandwidth is going up. this is leading to a world where everything will be connected, all of the time
  • PRC announcement
    Sun just announced at COMDEX (going on here in Vegas at the same time) that the People's Republic of China will be standardizing on the Sun desktop
  • AMD announcement
    Sun also announced at COMDEX a deal with AMD where they will be making a line of low-cost hardware using AMD chips
  • myth busting
    perimeter defenses aren't the answer. there are a number of reasons for this:
    1. the enemy is probably inside the firewall (he works for/with you)
    2. misconfigured hardware
    3. misconfigured software
    4. poorly defined roles and process
  • identity management
    relying on password-based authentication alone is bad. it's way too easy to guess someone's password (for example, he uses the name of his dog as his password for online banking. but he's got two dogs, so it's twice as hard as it might be <g>). the answer is to use multi-factor authentication, like using both a password and a "token" of some kind (security card, etc.)
  • application security
    it's not just friendly apps vs. hostile apps. it's safe apps vs. unsafe apps, and an unsafe app isn't just a hostile app, but a flawed app. there's no one answer here, but a host of strategies need to be applied, like sandboxing, strict access control, etc
  • data security
    encompassing everything from access control to disaster recovery, data security is critical to the modern enterprise. typically, it's process failure and misconfigured (or hard to configure) software that leads to breaches in data security
  • other stuff
    compatibility, intellectual property and security in general were among the other topics discussed

Howard M. Lewis Ship, Beginning Tapestry: Java Web Components - session

the goal of Tapestry is to create an O-O framework for building web-sites. having not played with Tapestry myself, i don't know how well it succeeds on its goals (of which there are many), but it seems pretty interesting

one thing i liked about what i saw was the fact that Tapestry enforces a separation between layout (HTML) and code (Java). this allows you to use whichever HTML design tool you prefer to edit the UI template, unlike JSP pages where WYSIWYG tools must play many, many tricks to deliver a similar experience
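
to make that layout/code split concrete, here's a rough sketch of what a Tapestry-style page might look like. i'm reconstructing this from the presentation rather than from the Tapestry docs, so treat the package name, the jwcid/ognl syntax and the accessor names below as assumptions, not gospel

    // a hypothetical Tapestry-style page: the HTML template lives in its own
    // file and can be edited with any HTML design tool, for example:
    //
    //   <!-- Home.html (the designer owns this file) -->
    //   <html><body>
    //     Hello, <span jwcid="@Insert" value="ognl:userName">Jane Doe</span>!
    //   </body></html>
    //
    // while the behavior lives in a plain Java class (package/class names here
    // are my guesses, not code shipped with Tapestry):
    import org.apache.tapestry.html.BasePage;

    public abstract class Home extends BasePage {
        // Tapestry supplies the concrete property implementation at runtime;
        // the template refers to it as "userName"
        public abstract String getUserName();
        public abstract void setUserName(String userName);
    }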

here's a list of goals and attributes of Tapestry based on the presentation

  • O-O centric
  • 100% component based
  • best practices rolled into framework
  • team friendly
  • simple, consistent and efficient

one interesting question was about a painfully slow development experience one of the developers was having. it turns out that Tapestry has a cache which can take a long time to heat up. in production, this is fine, but if you're developing content, the time taken to reheat the cache after each edit can be pretty painful. there's supposedly a way to disable this documented in the FAQ. i looked but didn't see it, though i probably just missed it...

Onno Kluyt, Java Community Process - session (plus a sandwich)

Onno Kluyt is Sun's Director of the Java Community Process. in addition to providing lunch for everyone, he gave a presentation on the Java Community Process. he talked about the membership of the JCP, like the fact that there are now more individual members than corporations, and about how the JCP evolves, like the fact that JSR 215 is in final approval (it should be approved within the next week or so)

it was interesting to hear about the upcoming mods to the JCP process brought on by JSR 215. one change has to do with transparency. from now on, JSRs will be made public during community-review, instead of just at public-review. the reason for this is that the feedback being produced was excellent, but it was coming too late to be used. by public-review, the spec is pretty much baked. now, feedback will come in time to have real impact

there are some other changes coming as well. you can read about the whole thing online. according to Onno, these changes will take effect in the Jan/Feb timeframe

and then the battery on his PowerBook died, and since he didn't have his power supply, his presentation became "old school", where he had to make points by simply speaking. very retro <g>

one rather interesting question that came at the end had to do with the use of NDAs (Non-Disclosure Agreements) within the JCP. several people in the audience objected to their use, and pointed out that the ASF does not use them. Onno replied that NDAs would always be used in the JCP, the reason being that corporate participants would be unwilling to disclose their reasons for seeking changes to a JSR if they knew that any competitor would have access to their comments. if you think about it, this issue illustrates one of the key differences between a standards process at Apache (if you can call Apache a standards body) and at the JCP. interesting...

Bruce Snyder, The State of Apache Geronimo - session

this was a "must see" presentation for me, what with working for BEA and all. and it seems like i wasn't the only one who felt that way, as this presentation was packed

first thing discussed was "why another Open Source Java App Server?". there were several reasons given for this: no current open-source JAS is provided via a BSD-derived license, there are already several pieces of the puzzle being provided by Apache projects, and no open-source JAS is currently J2EE certified

next up was a review of status of the various pieces. i hope i didn't miss a piece while taking notes (unfortunately the presentation given wasn't exactly the same as the one on the conference disk, so i'm doing this all from my notes). here goes:

  • EJB support - stateless session beans and stateful session beans are working, entity beans are almost working (they got them working later in the conference) and message driven beans are coming
  • Tomcat - they support the latest version of Tomcat, and one of their key goals in this area is to be able to drop at least part of the bean container directly into a running Tomcat installation to allow developers to play with EJBs without a huge upfront setup cost. then, if you want the whole thing, you can install the whole thing
  • JCA - they've got JCA (J2EE Connector Architecture) working outbound, and inbound is in progress
  • deployment - it's working, but it's not yet pluggable. the plan is to support deployment via JSR 88, which is the JSR defining the J2EE Application Deployment APIs
  • management - this is working via JMX, and the plan is to extend it to support JSR 77 too, which is the JSR defining a vendor-neutral standard for J2EE Management. they are also planning all sorts of additional features, like "hot system" management and an abstraction that makes it very easy to build and deploy POJOs (Plain Old Java Objects). for a feel of what JMX-based management looks like, see the sketch after this list
  • web services - the answer here is Axis. they are in the process of integrating Axis so that it will be the basis for all web services support in Geronimo
  • security - they have a pluggable model for both authN and authZ (authentication and authorization) based on JACC (Java Authorization Contract for Containers). the plan is to allow all security vendors to plug in. one scenario that they are keen on supporting is single sign-on within an enterprise
  • transaction management - they've got a simple, heuristic TM working now, but they know this isn't production ready. the plan is to support JTA/JTS by integrating with JOTM, but they haven't started this yet
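
since management is done via JMX, here's a minimal sketch of what registering a managed component with an MBeanServer looks like in general. this is plain javax.management code, not Geronimo's actual kernel; the Pool/PoolMBean names are made up for illustration

    import javax.management.Attribute;
    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    public class JmxSketch {
        // a made-up management interface; by JMX convention it is named
        // after the implementation class plus "MBean"
        public interface PoolMBean {
            int getActiveConnections();
            int getMaxSize();
            void setMaxSize(int max);
        }

        // a made-up managed resource, standing in for a real container service
        public static class Pool implements PoolMBean {
            private int max = 10;
            public int getActiveConnections() { return 0; }
            public int getMaxSize() { return max; }
            public void setMaxSize(int max) { this.max = max; }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer server = MBeanServerFactory.createMBeanServer();
            // the domain and key names here are arbitrary examples
            ObjectName name = new ObjectName("example:type=Pool,name=DefaultPool");
            server.registerMBean(new Pool(), name);

            // once registered, a JMX console (or the server itself) can read
            // attributes and change settings by name
            System.out.println(server.getAttribute(name, "ActiveConnections"));
            server.setAttribute(name, new Attribute("MaxSize", Integer.valueOf(25)));
        }
    }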

other tidbits: they are currently in the Apache Incubator (or "probation for newbies" as they called it <g>), their target for release of their first version is one year from when they started (Aug 6th), and they invite people to get involved

and of course, they got a question about the current dust-up between them and JBoss. their reply was "no comment", but for those of you interested in learning about what's going on, this is the letter that the JBoss Group's lawyers sent to the ASF. it's interesting reading, and (IMO) shows that our entire IP rights system is, without a doubt, totally and completely fucked-up

i should also mention that during the presentation, for all the individual area status reports, a different person stood up to deliver the status, said person being the owner/driver for that area. it was really quite impressive. and wandering around the resort, where you saw one of them, you usually saw a whole group of them, talking, hacking, laughing. David Bau and i spoke with them over some beers Monday evening, and you can tell that they are all very proud of what they've accomplished so far, and hungry to do a lot more. this is a project to watch for sure

David Bau, Inside Apache XMLBeans - session

before you read my notes on this presentation, i need to proffer a disclaimer. not like i'm a real journalist or anything, but still. ok, here goes: David Bau is a very good friend of mine, has worked for me off and on over the last 8 years (wow, has it really been that long David?), and was working for me all during the development of XMLBeans, which was done here at BEA where he (and i) continue to work. i think that XMLBeans is one of the coolest things i've ever had the chance to be involved with (not like i wrote any of the code or anything, i'm just a PHM) and i'm sure this colors my judgment. ok, end of disclaimer

so what is XMLBeans? it's a system for allowing Java developers complete object support for XML instances whose type is defined by XML Schema. in other words, if someone has defined an XML type-system using XML Schema, and you want to read or write XML types within that system, XMLBeans is the answer. and unlike many XML-to-Java systems out there, it supports 100% of XML Schema and 100% of the XML infoset (that's all the information that can be represented by a given XML instance). let me say that again, 100%. period. end of story. stick a fork in it <g>

early on in the design of XMLBeans, David decided that in order to really make the power of XML Schema available to the Java developer, you really needed a 100% solution. anything less led to this horrible system where developers would need to inspect the schema of an instance they wanted to read/write, and then pick the system that allowed them full access to that type. the world would be so much better a place if the developer needed to learn just one thing, and could use that whenever needed. so that's what he and the guys built

ok, enough of the high-level, what is XMLBeans? well, it's really 3 things:

  • schema compiler - this is used to turn XML Schema type definitions into early-bound type system objects. so for example, if someone has defined a purchase order in XML Schema, the XMLBeans schema compiler will produce a set of Java interfaces that correspond to the schema type system along with some internal data structures that allow it to support those Java interfaces natively
  • early-bound runtime - this is the backing implementation for the Java interfaces that the schema compiler generates. it understands the XML type system defined by the compiled XML Schema, and in turn uses the core runtime as the final backing store
  • core (late-bound) runtime - this is the XML store within XMLBeans. this is the only part that can be used separately, if desired, to allow a cursor style programming model directly against the loaded/generated XML. so if you wanted to build a generic XML processing engine, the XMLBeans core would be a solid foundation to build upon (a small usage sketch of both the early-bound and cursor styles follows this list)
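
to give a feel for the programming model, here's a small sketch of what using schema-compiled types and the cursor API might look like. the po.* classes and the getShipTo/getName accessors are hypothetical output of the schema compiler for an imagined purchase-order schema; only the org.apache.xmlbeans classes and the Factory/newCursor calls are the real XMLBeans API

    import org.apache.xmlbeans.XmlCursor;

    // hypothetical classes generated by the XMLBeans schema compiler from an
    // imagined purchase-order schema
    import po.PurchaseOrderDocument;
    import po.PurchaseOrderDocument.PurchaseOrder;

    public class XmlBeansSketch {
        public static void main(String[] args) throws Exception {
            // early-bound: parse an instance against the compiled schema types
            PurchaseOrderDocument doc =
                PurchaseOrderDocument.Factory.parse(new java.io.File("po.xml"));
            PurchaseOrder po = doc.getPurchaseOrder();
            System.out.println(po.getShipTo().getName());

            // late-bound: the same underlying store can be walked with a cursor,
            // which sees the full XML infoset
            XmlCursor cursor = doc.newCursor();
            while (cursor.hasNextToken()) {
                cursor.toNextToken();
            }
            cursor.dispose();
        }
    }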

XMLBeans is currently in the Apache Incubator, and the team hopes to get out of incubation soon. and they are looking for help, so if you're interested, get involved! v1 is complete and usable today, and v2 work is just starting

Steven Noels, Introducing Apache Cocoon - session

i have to admit, this was the last presentation i attended on Monday, and i was getting pretty tired. so my note taking really suffered at this point. you'll see...

Cocoon is a web-publishing framework for portals. and it's really, really big. it has a pipeline architecture, and runs inside Java web-servers as a servlet

and at that point, my brain froze for the day, and it was time to close the laptop and crack a cold one. sorry for the short-shrift on this presentation. i spoke with Steven a couple of times during the day, and he's a really smart guy

observations and extra-curricular activities

wireless power
that's right, what we need is wireless power. otherwise a laptop just can't make it through the day. since we'll probably be waiting a long time for this little innovation to show up, it'd be great if conference organizers would add power outlets to the list of "things geeks need to be happy". it's pretty much de rigueur that conferences provide 802.11b, but they must think that all attendees are lugging a knapsack full of batteries because they sure weren't providing power-taps. so at the beginning of each session, you could see people unscrewing those brass floor plates to get at power outlets

PowerBooks are everywhere
the number of PowerBooks present was stunning. i'd say at least 1/3 of the laptops present were Macs, maybe more. the 15" was the most common, with a strong showing of 17" PBs as well. i'm sure i must be the zillionth person to say this, but the PowerBook is becoming the laptop of choice for the alpha-geek. people keep talking about Linux on the desktop as the trend that Microsoft needs to worry about. forget it. Linux is the powerhouse on the middle-tier and back-end. on the desktop (and laptop) the trend that should be keeping Microsoft up at night is the Mac. we'll see...

and the winner is...
this year, WebLogic Workshop 8.1 (yes, the product i worked on) won PC Magazine's 20th Annual Tech Excellence Award in the Development Tools category. and it (along with the other Tech Excellence Awards) was presented Monday night at a party held at the Venetian. i had a few cocktails before the awards were announced, and more than a few after winning <g>. it was really great. David Bau and i got to go up on stage to accept for the team. did i mention it was really great?

ok, back to ApacheCon 2003...

Ceki Gülcü & Mark Womack, What is new in log4j version 1.3? - session

this presentation was very well attended. it started out with a description of what log4j is. for those who've never used a structured logging facility, it's a system (represented by an API) that allows developers to add calls to log4j throughout a body of code, and then control the information that flows from these calls. it can be sent to files, syslog, SMTP, the console, you name it. as a matter of fact, we use it here at BEA throughout WebLogic Workshop. i can't tell you how many times during a development cycle i've received a bug report containing both a stack-trace and a log file, without which i wouldn't have been able to diagnose the failure. logging is a good thing
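
for anyone who hasn't used it, here's a minimal sketch of the log4j 1.x usage pattern described above. the class and messages are mine, but Logger, BasicConfigurator and the error(String, Throwable) call are the standard API

    import org.apache.log4j.BasicConfigurator;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class Log4jSketch {
        // loggers are usually named after the class they live in, giving you a
        // hierarchy you can control from configuration
        private static final Logger log = Logger.getLogger(Log4jSketch.class);

        public static void main(String[] args) {
            // in a real app this comes from a properties/xml config file that
            // routes output to files, syslog, SMTP, the console, you name it
            BasicConfigurator.configure();
            Logger.getRootLogger().setLevel(Level.DEBUG);

            log.info("starting up");
            try {
                throw new IllegalStateException("something broke");
            } catch (IllegalStateException e) {
                // the stack trace lands in the log, which is exactly the kind
                // of thing you want attached to a bug report
                log.error("operation failed", e);
            }
        }
    }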

coming in version 1.3 are domains (a new way of organizing logs), more sophisticated log rollover, improvements in speed and memory size, a plug-in model, an external receiver model (for generating events into the log4j world), watchdogs, an interop foundation for integrating with .NET, C++ and Perl, and a whole new Chainsaw. any way you slice it, there's a ton coming in the new version. however, there is no date currently set for delivery. it'll be ready when it's ready <g>

one request that came during the Q&A session was for "TRACE" to be supported. at this point there was a chorus of agreement from the audience. it turns out this is in the process of being voted on, and so may be present in future versions. stay tuned...

Chris Pirillo, The Death of Email Marketing - keynote

you know how when you wake up in the middle of the night, there are no lights on, and yet you can still see? i don't mean you can read a book, but you can see the bed, the door, the desk, and can navigate without killing yourself. and then suddenly, without warning, your spouse/significant other turns the light on and even though it's just a 100 watt bulb, or a couple of 60 watt bulbs, you suddenly find that you can't look directly at anything anymore, much less the light itself? it's not the intensity but the contrast that's so jarring

well, that pretty much sums up my initial reaction to the first 5 minutes of Chris' presentation. don't get me wrong, i really liked his presentation, his delivery style, etc. this is a guy who loves to be the center of attention (and it takes one to know one <g>). but after 2 days of solid geek-style sessions delivered in passionate and yet muted tones, Chris' delivery was a bit of a shock, albeit a pleasant one

ok, enough of that, what did he talk about? well, that the use of email as a tool for marketing is over, and will be/should be replaced by RSS. restated, that opt-in/opt-out distribution list messaging via SMTP has too many problems, and should be replaced by RSS via HTTP. among the problems with opt-in/opt-out messaging over SMTP:

  • spam (problem for users)
  • spam filters (problem for marketer)
  • defensive use of email aliases (nobody wants to give theirs out)
  • opt-in/opt-out systems are all mostly unique, and used by spammers to detect live aliases

all of these problems are overcome by using RSS. since the user decides which feeds to subscribe to, the user is in complete control. if the signal-to-noise ratio on a given feed gets too low, you just stop monitoring. thus the marketer's job focuses on two things, a) making users aware of their existence and b) keeping their feed relevant to the users that are monitoring it

overall, i agree with Chris and think this is the direction that things will be moving in the very near future. however, there is one point where i disagree with Chris, and this is an assertion that RSS is by its very nature unspammable. or i should more properly say, i agree with Chris that RSS is unspammable in the case where all feed info goes one way, from marketer to user. however, many RSS-based news systems are adding discussion groups and trackback capabilities, and support RSS on those news sources as well. well, now the spammers have a place where they can jump in and wreak havoc

for example, let's say i decide to monitor an RSS feed from Apple about product announcements. and let's also say that Apple allows people to comment on and rate those products. i'd want to subscribe to a feed that contains both streams of info, appropriately threaded. so that if Apple came out with a new iPod and i was thinking of buying one for my wife, i could learn from the experience of people who'd bought one. and this all sounds great right up until some user posts a comment titled "defect in my brand new iPod" and i start to read the body of the post only to find it's a pitch for removing unwanted body hair. now of course, Apple would remove this post fairly quickly, but my RSS reader might have already downloaded it. you see where this leads

but while this is an important problem to deal with, i agree with Chris that RSS is a significant step forward that should be taken with alacrity. i had wanted to ask Chris about this issue in his talk, but he ran out of time and wound up telling people to email him with any questions. so i'm going to do that and see if he's thought through this wrinkle

Novell Vendor Showcase - session (plus a sandwich)

Novell sponsored lunch on Tuesday to get the word out on its participation in and commitment to open source and the ASF. Novell is involved in the development of Apache, Tomcat, Perl and PHP, as well as MySQL and Ximian. in fact, Novell has purchased Ximian. they are also a big believer in and supporter of AMP (Apache, MySQL and Perl/PHP)

they are also in the process of purchasing SuSE Linux. overall they are trying to do what IBM is doing with Linux, but where IBM is doing it on the middle-tier and backend, Novell is trying to do it on the desktop. they know that in order to really make inroads onto the desktop, someone needs to produce an integrated, easy-to-use experience. many have tried this of course, but Novell thinks it has an edge because it can control and direct an entire "Linux stack". it's a gutsy move, but if someone is going to pick something other than Windows for the desktop, it's hard to see another choice besides Mac OS X. but i'm glad they're stepping up. this is what free-market types call "animal spirits" <g>

they did a demo of some admin functions on Ximian to show the inroads they've made in ease-of-use. it was good work, but as Mark Igra likes to say, "if you want someone to switch, you can't just be 10% better, or even 100% better, you really need to be 10x better". Mark, i apologize if i ruined your quote <g>. but regardless, you get the idea

another demo they did was of them wrapping lots of "conf" file editing (more admin tasks) with a browser-based app scheme. it turns the whole thing into forms, etc. it's a really nice idea, esp. for remote admin. if they bake this architecture throughout Linux, it could give them an interesting leg up on the competition (Microsoft and Apple, IMO)

they wrapped things up with a discussion and demo of Mono (that's moe-no, not mon-o, as they point out). it's an implementation of .NET that runs on *nix, as well as NT/XP and Win9x. they were immediately questioned about Microsoft's response to this product, either legal or strategic, but none of the speakers from Novell could answer this question. it must be the first time any of these presenters had demoed Mono, for i can't imagine this question not coming up at every demo. on the other hand one of their presenters felt the need to explain to the audience that GC stood for "garbage collection" on a block diagram of the VM architecture <g>. so it may be that they'd never presented to an alpha-geek crowd before

but their demo was quite impressive. they showed C#, VB and ASPX files all running on their VM. i wish they'd had someone really technical from the Mono project present, as there are a bunch of questions that i'd have loved to ask about. one fairly wacky idea i had was to focus Mono on being a Java cross-compiler, so that instead of building a VM, you built a compiler and RTL that mapped to the Java VM and RTL. i'm sure there are good reasons not to do it this way (i can think of several myself). but it would have been cool to hear about it from the perspective of someone faced with the job

all in all, Novell seems to be setting some extremely tough goals for itself, but if even one of them succeeds, the rewards would appear to be great. it'll be interesting to watch this one unfold

Sander Temme, Apache and Zeroconf Networking - session

Zeroconf is cool. this presentation was about adding support for Zeroconf to Apache, but began with a brief overview of Zeroconf and its uses

the one sentence description of Zeroconf can be found on the org's website, "The charter of the Zeroconf Working Group is to enable Zero Configuration IP Networking". what a simple and powerful goal. why? well, consider some interesting scenarios, like you've got two computers that want to talk to one another to play a game, swap some music or just trade some files. you'd like to just plug a crossover cable into each one and just have things work. your computer would get an IP address, as would the other, they would both find one another and communication would ensue

but wait a second, how do they each get IP addresses? neither is running DHCP. and how do they find each other? neither is running DNS. the answer is Zeroconf. and more and more systems are taking advantage of it. one popular example of this is iTunes music sharing. if you enable music sharing in iTunes, you'll see a list of all the other iTunes users who have also enabled music sharing. there was no coordinator involved making this happen, it was Zeroconf allowing them to discover one another. in fact, Apple switched from AppleTalk to Zeroconf (they call it Rendezvous) in the Jaguar version of Mac OS X

here's (roughly) how it works. first off, Zeroconf coexists with traditional IP services like DHCP and DNS, so this isn't some either-or decision. but assuming that DHCP is not available when the device wants to communicate, it will first pick a random IP address from the link-local address range (169.254.*.*), then use ARP (Address Resolution Protocol) messages so that a host already using that address has a chance to defend it

next, the device uses mDNS (Multicast DNS) to discover the addresses of other services. for example, a game can look for other games, etc. it runs on a different port than DNS, and on every host. in this model, machines name themselves. but there's one thing missing, and that's dns-sd (DNS Service Discovery). this provides for actual network service browsing via either DNS or mDNS. this is how the list of iTunes shares is populated
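
to make the mDNS/dns-sd part concrete, here's a rough sketch in Java using the JmDNS library (a pure-Java Zeroconf implementation). i'm writing this from memory, so treat the exact class and method names as assumptions; the service name and port are made up

    import java.net.InetAddress;
    import javax.jmdns.JmDNS;
    import javax.jmdns.ServiceEvent;
    import javax.jmdns.ServiceInfo;
    import javax.jmdns.ServiceListener;

    public class ZeroconfSketch {
        public static void main(String[] args) throws Exception {
            JmDNS jmdns = JmDNS.create(InetAddress.getLocalHost());

            // advertise an HTTP service ("_http._tcp") on port 8080, roughly
            // what registering an httpd virtual host with the mDNS responder
            // would amount to
            ServiceInfo info = ServiceInfo.create("_http._tcp.local.",
                    "my-site", 8080, "path=/index.html");
            jmdns.registerService(info);

            // browse for other HTTP services on the link; this is the dns-sd
            // piece, the same mechanism that populates the iTunes shares list
            jmdns.addServiceListener("_http._tcp.local.", new ServiceListener() {
                public void serviceAdded(ServiceEvent event) {
                    // ask for the full record (address, port, TXT data)
                    event.getDNS().requestServiceInfo(event.getType(), event.getName());
                }
                public void serviceRemoved(ServiceEvent event) {
                    System.out.println("gone: " + event.getName());
                }
                public void serviceResolved(ServiceEvent event) {
                    System.out.println("found: " + event.getInfo());
                }
            });

            Thread.sleep(30000);  // let discovery run for a bit
            jmdns.close();
        }
    }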

if you want to play around with Zeroconf on your platform, check out Swampwolf Technologies, providers of a cross-platform implementation of Zeroconf named Howl

the plan is to enable Zeroconf in Apache httpd 2.0 so that virtual hosts are registered with the mDNS responder. this will allow other Zeroconf services to browse and connect to services proffered by any instance of httpd

wrap-up

that pretty much did it for my first ApacheCon. all in all, it was a really good trip and i learned a lot about what's going on in Apache. there was some very good energy at the conference. i'm looking forward to next year, and i expect it to be even bigger

Rod Chavez has been building systems software for more than 15 years. He spent the first half of the 90's working for Borland, and the last half for Microsoft, before leaving gainful employment to start Crossgain (co-founded with Adam Bosworth) in 2000, which was then purchased by BEA Systems in the summer of 2001.