Wednesday 27 June 2012

#TEE12 Day 1 roundup: SharePoint BI & dev and awesome Agile estimation

Today I attended the following:
  • Keynote
  • Creating Self-Service BI Solutions with SharePoint 2010 (ooh on blog and everything!)
  • Developing and Managing SharePoint solutions with Microsoft Visual Studio
  • Making Agile Estimation Work
Below are the major bits remembered off the top of my head (the transformer is recharging!). To take them one at a time:

Keynote


The crux of the keynote was the epicness that appears to be Windows Server 2012 and the new Hyper-V. I'm not really a server kind of guy; IOPS and supported levels of RAM only bother me if I hit a limitation on the platform I'm working within. However, there were a few fairly impressive things, especially the connection between WinServ 2012 and Azure, which appears to have finally come of age as a serious infrastructure cloud provider.

Previously, I think it's fair to say, Azure had been an odd sort of product, more or less existing as a rather specialist development/deployment environment that forced devs to learn a lot of equally specialist techniques for pushing apps to the cloud. That runs against the model that has been maturing for a while now: allowing orgs to buy cloud infrastructure (and with it the control that 'owning' infrastructure gives) and then offload particular apps in a readily scaleable manner (scaling both up and down).

Azure has now woken up to this model and appears to be a very attractive offering, especially considering the ease with which servers can be spun up in the Azure cloud, or in a private cloud, and transferred between the two according to demand and locality issues.

It goes without saying that this comes hand in hand with a new Hyper-V offering, and MS were keen to demonstrate just how performant their latest virtualisation platform is. The numbers look very good indeed: far better than previously, and genuinely in line with, or better than, what you'd expect from VMware or a.n.other competitor virtualisation infrastructure provider.

Taken together, WinServ2012 and Hyper-V, managed through SCOM2012, look pretty awesome. HTML5/Metro apps look nice and the demos were all very slick indeed (only one had to 'drop' to a backup machine, and to be honest that wasn't really a 'money shot' demo).

Considering, as mentioned earlier, that this isn't really an interest area of mine, I was quite intrigued by it all, especially by the extent to which we might be able to drive our SharePoint infrastructure to Azure (or a.n.other cloud service) in the future.

Food for thought, which was more than I was expecting.


Self-Service BI in SharePoint 2010


In many ways this was a knowledge 'rinse and repeat' rather than new content. The content was good, however, and the takeaway for me is the need to do two things:

  1. Generate some Business Intelligence Semantic Models (BISM) in Analysis Services and push them out to our report developers across the University. My first thought here is admissions and the data we hold there regarding the nexus of applicant demography and course end points. My institution should be making decisions regarding course approval and continuation based on good predictive data, and whilst they have access to a variety of data, I'm not convinced it's tied together perfectly, certainly not in the natural 'slice and dice' environment of PerformancePoint or PowerPivot.
  2. I need to demonstrate some of this to the right people. At present we function off a fairly innocuous non-cube conglomeration of tables. I need to show the variety of end points reachable through a cube, and tie that together with built-in KPIs and trend metrics, to show what a really extensible and explorable model a true BISM provides.
In other words, it's time to show where we should be heading next.


Developing SharePoint apps with VS2012


Another good session with some recap and some new content. VS2012 usefully does away with a number of solution templates, replacing many with project-level items that can be added once a solution is created. There are some real advances with regard to the design and packaging of features and WSPs, making that side of things a breeze. All of the usual development endpoints are present, and vastly easier to develop against than in MOSS2007 (our upgrade to SP2010 should be beginning soon!).

The VS team has also been hard at work providing us with some tools that would traditionally be found within SharePoint Designer (like the list creator), which will speed up solution development as well as provide better code management, since list definitions and instances can now be packaged up within a solution/feature-set.

For me the really new bit was the quick look at Visual Studio LightSwitch. This looks like the beginnings of an MS advance into a space traditionally occupied by third-party vendors who hook you with a promise of 'RAD' data-driven apps and then tie you in with a long-term commitment to a hybridised and inaccessible data layer coupled to non-standard code. We have a system like this in place at my institution right now (I'm sure we all have them somewhere!) and LightSwitch looks to me like a genuinely plausible route out worth investigating for a possible migration. We would need to seriously investigate our underlying data architecture (a polite way of saying that our DBs are a bit of a mess in places), but my intuition tells me that a demo on my return might convince some people that we should begin the move away from our legacy environment sooner rather than later.

I'll be going to a session dedicated to LightSwitch tomorrow, so will doubtless write more about that afterwards!


Making Agile Estimation Work [slide deck]


Seriously brilliant session with Joel and Stephen from Telerik (and elsewhere). Massive thumbs up for a brilliantly engaging and enjoyable session, bringing to life some Agile concepts I've been playing around with and turning over in my mind at work.

The focus here was the perennial issue... You're stopped in the corridor by your archetypal 'big boss' and they say: "I've had an idea... I want a website, I want it to be blue and I want it to deliver everything our current site doesn't. How long will it take?"

The problem is two-fold. Firstly, to seriously answer such a question (a) without requesting more information and (b) with an exact number rather than a range is to misunderstand the nature of estimation; an estimate is, by definition, based on inaccurate and incomplete data. Secondly, the 'big boss' intends to hold the underling to account for this initial guess (an estimate needs some actual data, fgs!), whereas development should be pointing to the mass of accumulated historical data regarding software projects and saying: "Look! You don't want this estimate, we both know it will be wrong, possibly by a factor of 4X; we won't have a more useful estimate of delivery until about 5 weeks into development."
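
To make the 'range, not number' point concrete, here's a minimal sketch of my own (not from the session) using the classic Cone of Uncertainty multipliers as published by Steve McConnell, roughly 0.25x to 4x at initial concept:

```python
# Illustrative only: the classic Cone of Uncertainty multipliers
# (roughly per Steve McConnell's published figures; the session may
# have used different numbers).
CONE = {
    "initial concept":       (0.25, 4.0),
    "approved definition":   (0.50, 2.0),
    "requirements complete": (0.67, 1.5),
    "design complete":       (0.80, 1.25),
}

def estimate_range(point_estimate_weeks, phase):
    """Turn a single-number guess into the honest range for a project phase."""
    low, high = CONE[phase]
    return point_estimate_weeks * low, point_estimate_weeks * high

# The corridor answer "about 12 weeks", given at initial concept,
# honestly means anywhere between 3 and 48 weeks.
print(estimate_range(12, "initial concept"))  # (3.0, 48.0)
```

That 3-to-48 spread is exactly why a single number offered in a corridor is worse than useless.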

There were some really useful practical tips in here. To begin with, when working with user stories and story points, start from a user story that you can confidently assign a story point value to (adding a login screen, perhaps) and then use that as the yardstick to derive comparative values for the other user stories in the product backlog.
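
As a quick sketch of that relative-sizing idea (my own illustration; the story names and point values are entirely hypothetical):

```python
# Hypothetical backlog: size every story relative to one well-understood
# reference story instead of guessing absolute time for each.
reference_story, reference_points = "Add a login screen", 2

backlog_points = {
    "Add a login screen":       2,  # the confident reference value
    "Password reset via email": 3,  # a little bigger than login
    "Reporting dashboard":      8,  # several logins' worth of effort
    "Fix footer typo":          1,  # smaller than login
}

total = sum(backlog_points.values())
print(f"Backlog: {total} points, sized relative to "
      f"'{reference_story}' at {reference_points} points")
```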

Next, accept that the Cone of Uncertainty is unavoidable and use it to your advantage. It is completely ridiculous, either as a developer or as a manager of developers, to claim you can beat it. It's based on real data; it's not some dreamt-up management consultant's way of making a fast buck (well, Joel and Stephen didn't charge me for it, yet). Instead, accept it and use it to drive the calculation of your team's velocity, and then use it as a mechanism for re-calculating and re-prioritising that velocity and the product backlog as each sprint completes.

Once you know what a sprint can realistically deliver (after about 4-5 sprints) you can much more confidently estimate (it's STILL an estimate, though, just a more accurate one) your team's velocity (I kind of prefer the term efficacy here, but that's not Agile terminology) and then apply that forward across the remaining items on the product backlog (there's always more to do, remember! Archetypal managers are also intimately acquainted with the phrase "Oh, by the way...") It's instructive to note that both Joel and Stephen suggested having short sprints of two weeks at most for exactly this purpose.
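
And a minimal sketch of the velocity-based forecasting that follows (again my own hypothetical numbers, not the session's worked example):

```python
# Hypothetical sprint history: story points actually completed per sprint.
# Real completions already absorb the drag of bugs and surprises, which is
# why measured velocity beats any up-front guess.
completed_per_sprint = [8, 11, 9, 12, 10]  # sprints 1-5

velocity = sum(completed_per_sprint) / len(completed_per_sprint)

remaining_points = 120  # what's left on the product backlog today
sprints_left = remaining_points / velocity

print(f"Observed velocity: {velocity:.1f} points/sprint")
print(f"Estimated sprints remaining: {sprints_left:.1f}")
# Still an estimate! Re-run after every sprint as the backlog shifts
# and the 'Oh, by the way...' items land on it.
```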

This connection between estimation, planning poker (the suggested mechanism for arriving at story point assignments) and actual calculated team velocity is not something I've really thought about in detail before, but I am now pretty wise to the value of bringing them together. It was also good to see that they factor bugs and unexpected complications into a standard velocity calculation, again with the overall aim of providing a more accurate estimate for the current project.

It strikes me that you kill two birds with one stone by adopting an approach like this. Not only do you slowly begin to claw back some intellectual ground from the business who, quite justifiably, believe that IT always delivers late and over budget; you also give your development team a chance to render their actual velocity visible against real projects in progress right now, and that is an extremely useful thing as a developer. It's actually (believe it or not) positive feedback, and if accompanied by the right words from a manager it should serve to drive performance.

The side of things that will be most difficult for the business to accept here is that the Cone of Uncertainty applies to every project, every time. It will be hard to counter the feeling that "when we've done this once, we'll just know how long every other project will take." That evades the message the Cone really delivers, however. The whole point is that the Cone applies to every project, each time, and estimates (no matter how much experience they're based on) are still a function of the inaccuracy of the data being fed into them, not of the people being asked to produce them. It's absolutely true that a good manager will learn over time how to read story point planning poker exercises and will begin to estimate project delivery times accordingly, but the caveat of the Cone of Uncertainty should always be remembered: it will take a number of sprints for the business to surface all of its requirements for any given product, and a number of sprints more for the development team to unpick these and start delivering a more predictable velocity, and a good manager should always say so.

Oh, and finally, I'd like to stand by my original estimate of 20,000 sq ft for Joel's house. With a bathroom that size, there's no way it's any less than that ;-)

More anon.

- rob

p.s. Hello to m'colleagues from the University of Southampton: Chris, Emad (Electronically Mad!), Dan and Rick.
