Nov 20 2013

Don’t push the wall, climb it

I haven’t been writing for a while. A lot has happened in the meantime, with setbacks obviously, but it seems truthful to say that nothing is gained if there is no cost to pay.

With this short story I want to summarize the period from July 2012 till the beginning of this year. During that time, after a long run of corporate comfort, I tried to prove to myself what I can do and to rediscover myself professionally.

After 7 years at MS, a period which completely redefined me, I wasn’t looking for a serious job.
I just wanted to keep my comfort zone fed, working at home and for the family.

For myself I only wanted to check my old skills against the new ones. I was confused about the long-term future, so I decided to start with short-term thinking only (literally). I started a self-employed freelance practice to serve my old customers and to sell my time to new ones. Simply put, a freelancer.

With software development, consulting and pre-sales experience from the past, and great connections from all periods of my work, I easily put myself into a stack of various activities, including:

- software development leadership (coaching, advising, keeping devs doing their job)
- software development consultancy  (high-level architecture)
- business development assistance (pre-sales mostly)
- business development strategy (new developments, new directions, blending tech and business acumen to build up new offerings and business models)
- reformation of consulting and sales teams (operational stuff and it was fun to be CEO’s hidden agent)
- training & public speaking (conferences and in-class trainings included)
- book writing and editing (hi to Packt!)

All very different things. Some paid, some done for fun and to gain new experience on the fly.
Different from each other, and squeezed into a limited time.

The short summary I can outline from all of the above: don’t experiment with your time this way if you’re not ready to pay the tax!

It was, and I still think is, amazing to recap my skills and the personal directions in which to take my professional path, but..

There is always a but.. and my “but” was time management and work/life balance.
It’s the typical fault of freelancing, and with these rants I’m probably not saying anything new. Still, as a warning I want to highlight it again: sometimes too much is too much, even for simple discoveries.

My Gains:
- financial benefits. If you can handle it, it comes (seriously, an annual salary made in a quarter is possible, but if you appreciate creation more than money it won’t give you happiness. It didn’t go that way for me either)
- discovery of the strong versus weak sides of your own personality (but at a huge cost). I know better than ever what I’m capable of, yet I had to pay taxes beyond imagination.
- great refreshment from corporate stagnation (if you’ve been there before, you probably know what it means to build up the need for a heavy kick in the face to wake up)

Losses:
- work & life balance. I don’t want to articulate it; those who know me know the words I would use, and that’s enough. But seriously, never forget about your close ones and set the right priorities there. You simply can’t manage emotions; they just come and depart.. apparently.
- a completely different kind of stress you have to learn to handle. I’m used to it now; distance helps to build perspective and to push the horizon forward. Yet I admit, the stress of recovering from old comfort zones can be a killing machine.

Good but costly learning which, in the end, has put me in yet another bucket, both professionally and privately.
New hobbies, new people, new roles in life.

Professionally I’m now strongly involved in the VC world, in startups, and in handling CTO responsibility for an emerging consulting firm.

With my past in engineering, consulting and a long track record of tech evangelism, it looks like a natural evolution.
It deserves a separate story to be written down, so let me finish now.

The reinvigoration of this blog seems to be under way.


Apr 26 2013

AWS Summit 2013 – London Edition

One day has passed since I came back home from London, where I attended Amazon’s roadshow about their cloud offering.

A technical conference with keynote speakers including Werner Vogels himself. If you don’t know who Werner Vogels is, I recommend googling him a bit. In short, he is the CTO of Amazon. His keynote at the Amazon conference in Las Vegas, held last November, intrigued me heavily, especially with famous sentences like “academic research is done”, “everything that was to be researched is already researched” and “all the architectures we dreamed about are finally possible”. I’m not sure how exact my memory of those sentences is, so please do not treat them as real quotes; the meaning, however, stands.

I flew to London as a complete newbie to Amazon. I had some pre-sales knowledge and a general understanding of Amazon’s direction, but as a technical person, architecture leader and consultant, that’s obviously not enough to fully utilize the potential. I was curious, and I hoped that my visit to London and the AWS Summit would feed my hunger for knowledge and new experience, and leave me better prepared to compare the big vendors’ offerings. By big vendors I obviously mean Google, Microsoft and Amazon.

I’m happy, as I got my answers. As a person who was with Windows Azure from the very beginning (PDC 2008) and who was on the Azure SWAT team at Microsoft Poland helping to sell the message in the local market, I found many aspects really familiar. Often my natural comment was “yeah.. Microsoft has it too”; then I kept reminding myself that yes, but Amazon was there in 2006, while it’s still MS who chases the rabbit.

Aside from feature-by-feature comparisons, I finally figured out how to position cloud computing platforms. At Microsoft we desperately tried to sell Azure to developers who:

- did not care

- were afraid that legacy architectures would kill their years-long efforts

- did not understand the value

- were afraid that something they used to have for free (a dev environment) would now have to be paid for on an on-demand basis

Then their bosses, mostly from the traditional Microsoft ecosystem of ISVs (with boxed solutions and easy-to-define, on-premise-based business models), simply could not see the ROI.

What I learnt at the AWS Summit is that, paradoxically, a cloud computing platform is not for developers in the first place. They have their role of course, and choices that let them stay in their comfort zones, but making developers happy about staying with .NET, Python or whatever to deliver cloud services is not enough to show the complete perspective. The big picture of cloud computing is designed for the IT and solution level of architecture, not the software development level.

If I dared to explain my learning in detail, of course all the dots would become connected, but the trick is in the messaging. Messaging where I don’t try to say that IT is not needed anymore because Microsoft will take care of it (it doesn’t; it just provides a basic level of high availability).

IT in the world of cloud computing is needed more than ever, yet it has to transform rapidly.

I loved the speech by the CIO of News International, who said that before he joined the company they had an IT department. After years of his operations they no longer have an IT department; IT has evolved into a Technology department. The subtle change in name suggests one thing: an IT department is perceived through its IT operations. Those can be heavily automated and standardized by cloud-based services. If that whole ecosystem of solutions can be designed, delivered and maintained well, IT can do what modern IT is supposed to do: be much better connected to the business and become a Technology department whose mission is not to run IT operations but to deliver innovation.

For such a department, the KPIs change. IT operations are often measured by cost-to-performance ratio; that’s how budgets are negotiated and challenged. Innovation goes down a completely different road: it’s about value-to-performance ratio. With such positioning, IT gets the tools to understand, measure and control ROI much better, and those arguments make for a much more efficient discussion about budgets.

That’s my biggest enlightenment from the AWS Summit, especially if some details of their business model are considered a natural helper to unblock the conversation.

From a technical point of view, I was mostly bootstrapping myself into the ecosystem, so I learnt the basics. I touched all the components I needed to realize some pros and cons of particular architectural choices. At the end I summarized it and personally compared it with Google’s and Microsoft’s offerings. I challenged a few Amazon representatives with interesting questions. Many brought answers like “we’re not ready to announce anything about it yet”, which to me, with my corporate experience, means “yes, we do have plans, and I cannot tell you when they will be announced, but I’d say ‘_no_, we do not intend to’ if it weren’t a matter for this year or next”.

All my notes, about 30–40 kB of raw text, gave me enough answers to say that Amazon’s offering is the most mature and most complete. Some would say “still”; I’m saying I’m happy that I’ve finally met the stack, the company and its value proposition.

It was quite fun to be on the other side of such an event and to observe how Amazon organizes events, touches the community and builds relations with developers. I’m glad I chose to fly to London for it.


Apr 18 2013

Simple Skeleton Framework for Cocoa OSX OpenGL application

I found that even though the official documentation on Apple’s Developer Network is quite rich, many people over the web still ask for a basic tutorial on how to jump into OpenGL programming on the OSX/iOS platforms.

One of the reasons, I presume, is that even if the documentation and the library are complete in many aspects, the details are quite.. disconnected. For that reason I decided to write a small tutorial on how to start with OpenGL programming on the OSX platform.

It’s not a tutorial about OpenGL itself; I assume the developer knows it very well, or well enough to continue on their own, yet is not very familiar with the general development skills required to target Apple’s platform.

That was my case when, after more than a decade of sticking to Microsoft’s platform, I decided to learn OSX/iOS programming.
With that personal experience, I crafted this text carefully, looking for Microsoft analogies to ease the learning path for developers coming from that ecosystem.

Initial requirements:

If you’re a complete newbie to Apple’s software development ecosystem, please mind that to continue with this tutorial you need:
- An OSX platform (one of the Mac computers)

Note: I played with a Hackintosh some time ago, and aside from the legal part of that experience, when it comes to graphics programming it simply lacks good drivers and is very unstable. If you started your Mac experience from such a configuration, I really recommend you buy a Mac to continue. Honestly speaking, I started with a Hackintosh virtual machine back when I worked at Microsoft, just out of curiosity to check the competitive platform without having to buy it and without alternatives that could help me evaluate its value proposition. I fell in love and purchased the real hardware, and for the main topic of this article, as I said, I really recommend you go and buy one too.

- Xcode tools – available for download from the App Store (free of charge)
- Some additional OpenGL libraries you may like to use (GLM, for example)

Initializing the OpenGL project

My approach in this tutorial is very raw, to keep things simple on those aspects that require integration with the native Cocoa platform.
So our starting point is simply creating a new app project targeting OSX (Cocoa Application).

From that point you should have a MainMenu.xib file, which is an XML format describing the UI and its basic behavior; it’s quite comparable to XAML as an idea. In the bottom part of the right pane you should have the Object library. Select the Data Views section to ease the search for the right view control, and drag an OpenGL View onto the window. Change the properties to let this view cover the whole area of the window, and set the window size according to your initial expectations.

Creating Custom OpenGL view

The standard view still needs some massaging to fit your individual requirements. For that, a custom Objective-C class has to be created; in my example let’s call it OGLDemoView. Its interface and implementation code look as follows:

<HEADER>

 #import <Cocoa/Cocoa.h>
 #include "tut01_renderer.h" // I'll explain this include later;
                             // shortly, it's our pure C/C++ renderer

@interface OGLDemoView : NSOpenGLView
{
    // system timer, needed to synchronize the frame rate
    NSTimer* renderTimer;
    // our C++ renderer, as I aim to minimize the
    // Objective-C footprint and use clean C/C++ only, if possible
    tut01_renderer renderer;
}
// analogous to the WM_PAINT event in Windows
- (void) drawRect: (NSRect)bounds;
@end

<BODY>

 #import "OGLDemoView.h"
 @implementation OGLDemoView
- (id)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];
 //below code helps optimize Open GL context
 // initialization for the best available resolution 
 // important for Retina screens for example
    if (self) {    
       [self wantsBestResolutionOpenGLSurface];
}
   return self;
}
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] 
         setValues:&swapInt 
         forParameter:NSOpenGLCPSwapInterval];
    renderer.init();
}
-(void)awakeFromNib
{
//when UI is created and properly initialized,
// we set the timer to continual, real-time rendering
//a 1ms time interval
   renderTimer = [NSTimer timerWithTimeInterval:0.001  
                     target:self
                     selector:@selector(timerFired:)
                    userInfo:nil
                     repeats:YES];
   [[NSRunLoop currentRunLoop] addTimer:renderTimer
                                 forMode:NSDefaultRunLoopMode];
//Ensure timer fires during resize
    [[NSRunLoop currentRunLoop]
          addTimer:renderTimer
          forMode:NSEventTrackingRunLoopMode];
}
// Timer callback method
- (void)timerFired:(id)sender
{
// it's the update routine for our C/C++ renderer
   renderer.update();
//it sets the flag that windows has to be redrawn
   [self setNeedsDisplay:YES];
}
// Each time window has to be redrawn, this method is called
- (void)drawRect:(NSRect)bounds
{
  //below code sets the viewport of Open GL context into
  //correct size (assuming resize, fullscreen operations may trigger change)
  NSRect backingBounds = [self convertRectToBacking:[self bounds]];    
    glViewport(0,0, backingBounds.size.width, backingBounds.size.height);
 //our renderer's drawing routine
   renderer.render();
}
@end

Having the above code defined correctly in the project, the last step is to assign it as the class handling the OpenGL view placed on our window (in the right pane, the class property assigns this class to the visual object).

Toggling Fullscreen

The easiest way to toggle our OpenGL application between fullscreen and windowed modes is shown by the code below, which I put into my app delegate and assigned as a menu item’s action:

- (IBAction)fullscreenToggled:(id)sender {
    if (![self isFullscreen])
    {
        [self.view enterFullScreenMode:[NSScreen mainScreen]
                   withOptions: nil];
        self.isFullscreen = true;
    } else {
        [self.view exitFullScreenModeWithOptions:nil];
        self.isFullscreen = false;
    }
}

This simplified approach can easily be extended to cover all possible scenarios; the above implementation is far from complete, but in certain circumstances it works quite stably as a base point for further investigation. One consequence of the above implementation: when your app goes fullscreen, then comes back to windowed mode and you manually resize the window, OpenGL rendering will stop working, because your OpenGL context is not aware of all the circumstances of the change. I’m not covering that in this post. To keep it simple stupid, you can block the window’s resizing by setting the same values for min/max width/height and continue.

C/C++ renderer and coding continuation without Objective-C/Cocoa impact

If all your application needs from that point on is to render 2D/3D images with OpenGL, then that’s all you need to know from the platform perspective. Of course, if you need input (keyboard/mouse) interaction and handling of other events, then the fun continues.

If we want to continue with the rendering code from a pure C/C++ perspective, our next step is to build a base C++ class for our renderer:

class base_renderer
{
public:
    virtual void init() = 0;
    virtual void render() = 0;
    virtual void update() = 0;
protected:
    void clear(float r=0,
               float g=0,
               float b=0,
               float a=1,
               bool depth=true);
    void flush();
};

The class is abstract, so the only important (and base) elements are those which clear the buffers and flush; their OpenGL code is shown below:

void base_renderer::clear(float r, float g, float b,
                          float a, bool depth)
{
    glClearColor(r, g, b, a);
    glClear(depth ? (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                  : GL_COLOR_BUFFER_BIT);
}
void base_renderer::flush()
{
    glFlush();
}

Now let’s see our tut01_renderer example, which I used in the OGLDemoView code above. It’s not sophisticated or visually compelling, but it shows how to make a real-time renderer in OpenGL bound to the basic infrastructure above:

<HEADER>

class tut01_renderer : public base_renderer
{
public:
    virtual void init();
    virtual void update();
    virtual void render();
private:
    float shift;
    float shift_direction;
    void draw_triangles();
};

<BODY>

void tut01_renderer::init()
{
    shift_direction = 1;
    shift = 0.0f;
}
void tut01_renderer::update()
{
#define SHIFT_MOVE 0.005f
    if (shift_direction == 1)
    {
        shift += SHIFT_MOVE;
        if (shift >= 1.0)
            shift_direction = 0;
    } else
    {
        shift -= SHIFT_MOVE;
        if (shift <= 0.0)
            shift_direction = 1;
    }
}
void tut01_renderer::render()
{
    clear();
    draw_triangles();
    flush();
}
void tut01_renderer::draw_triangles()
{
    glColor3f(1.0f, 0.85f, 0.35f);
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0 + shift,  1.0, 0.0);
    glVertex3f(-1.0, -1.0, 0.0);
    glVertex3f( 1.0, -1.0, 0.0);
    glColor3f(1.0f, 0.0f, 0.35f);
    glVertex3f( 1.0 - shift,  1.0, 0.0);
    glVertex3f(-1.0, -1.0, 0.0);
    glVertex3f( 1.0, -1.0, 0.0);
    glEnd();
}

Summary

The above example is very simple and doesn’t cover all possible scenarios, but for people who are familiar with OpenGL (or 3D programming in general) and are starting with OSX development, it should be enough to let them focus on learning OpenGL, with a basic skeleton framework on OSX already done.


Dec 11 2012

Starting up with NodeJs from the beginning with TDD in mind

I started playing with NodeJs. Why and how is a separate topic, but my overall approach was that from the beginning I wanted the TDD habit included in my checkout/code/run/test/build/commit daily workflow.

For me, being a complete newbie in the NodeJs sphere, it took about an hour to set things up correctly, which in the end I can say is very intuitive; yet with the lack of good documentation, a few Google searches were needed, mostly followed by reading StackOverflow.

I decided to describe the whole process in this blog post, as in my searches I didn’t find any single place where it’s well explained with a complete newbie in mind.

First of all you need NodeJs, and if you don’t know where and how to find it, then this post probably isn’t for you. Node comes with an interesting package manager called NPM, which ships with a handy command-line tool of the same name, in lower case.

Using it you can add additional components to your NodeJs project, and those components will be nicely encapsulated in the ./node_modules/* folder structure.

You can start by installing a few critical modules that are really helpful in test-driven development, where you need handy tools for manual unit testing and for growing your unit-testing base in a natural, code-first way. The additional tools I recommend help with code coverage, which is supplemental to all the TDD tactics known to me.

So, let’s begin with:

npm install mocha
#which installs Mocha – a great and very usable unit testing environment

npm install should
#which you will love for its great assertion syntax that complements the core Mocha and NodeJs assertion routines

And the last component you need is node-jscoverage, which I initially also added through:

npm install jscoverage

but for some reason the package pushed to NPM is kind of broken, so for now I quickly decided to clone node-jscoverage directly from GitHub (https://github.com/visionmedia/node-jscoverage) and manually copy all the sources into my ./node_modules/jscoverage folder.

JsCoverage contains a lot of C/C++ code which you need to compile; the easiest way (and the one which is not broken, compared to NPM) is to run this sequence of commands:

./configure
make

and in the root directory you should get some executables, including jscoverage and jscoverage-server.
Now let’s add some code of our own targeting NodeJs, in a structure like:

./node_modules/ #you leave that alone, just utilize in your scripts

./src #folder with your source code
./src/server.js #let’s call it main routine for your application
./src/someClasses.js #let’s call it additional includes for your project

./tests/ #folder with your unit tests
./tests/someClasses.test.js #unit test package targeting ../src/someClasses.js

Obviously, and assuming correct code inside all the files, to run your NodeJs server in a dev environment you only need to type:

node src/server.js

or, more often, if in the root folder of your project you create some index.js with all the required includes, the command will look like:

node index.js

We’re good. But what about testing? Mocha comes with an executable located in ./node_modules/.bin/mocha, and we will use it.

To run the tests you need to make the following call:

./node_modules/.bin/mocha tests/*.test.js

That’s it. In response you will get a result like the one below:

․․․

3 tests complete (3 ms)

Or longer and more detailed, if your tests fail.

If you want to check the coverage of your tests, you have to rig jscoverage up with mocha. It’s relatively simple.
The command jscoverage [SRC] [DST] copies your code from one folder to another; in the destination folder it creates a copy with the additional instrumentation markers needed to run the coverage report smoothly. It cannot replace an existing folder, so each time you call it, make sure you have deleted (or scripted the deletion of) the older instance.

In my case it’s:

./node_modules/node-jscoverage/jscoverage ./src ./src-cov

Having done that, you have to re-run mocha with an additional reporter parameter for code coverage.
I encourage you to experiment with two outputs, HTML and JSON, with the variations below:

./node_modules/mocha/bin/mocha -R html-cov tests/*.test.js > report.html
./node_modules/mocha/bin/mocha -R json-cov tests/*.test.js > report.json

One last, very important thing. If you keep requiring files from your original ‘src’ folder, your code coverage will show nothing; you need to point your includes at the ‘src-cov’ folder. To avoid slowing down regular tests with the additional code that jscoverage produces, I recommend making a script that, prior to coverage testing, sets an environment variable, and checking that variable in your unit tests, like in the example below:

var myClasses = process.env.COV_REPORT ?
    require('../src-cov/someClasses.js') :
    require('../src/someClasses.js');

In my case, I additionally wanted to stay in the shell for all the runs, so I crafted a little JavaScript program, executed by Node, to show me a summary from the JSON report, and I created a complete code-coverage script. The two files look like this:

File: ./runCoverage

#!/bin/csh
#setting env for mocha

setenv COV_REPORT 1

if ( -d "src-cov" ) then
    rm -dfR src-cov
endif

./node_modules/node-jscoverage/jscoverage src src-cov
./node_modules/mocha/bin/mocha -R html-cov tests/*.test.js > report.html
./node_modules/mocha/bin/mocha -R json-cov tests/*.test.js > report.json

unsetenv COV_REPORT

node ./coverageReportSummary.js

File: coverageReportSummary.js 

var coverageReport = require('./report.json');

console.log('\nCode coverage report:');
console.log('* SLOC Total: \t\t' + coverageReport.sloc);
console.log('* SLOC hit with tests: \t' + coverageReport.hits);
console.log('* SLOC missing tests: \t' + coverageReport.misses);
console.log('* Code Coverage (%): \t' + coverageReport.coverage.toFixed(2));
console.log('More detailed report available in *report.html*\n');

And the final output of the runCoverage script looks like this:

Code coverage report:
* SLOC Total: 54
* SLOC hit with tests: 32
* SLOC missing tests: 22
* Code Coverage (%): 59.26
More detailed report available in *report.html*

Just some tweaking, or massaging as some of my friends would say, but I like it. Happy coding!

 


Dec 10 2012

The moment I switched to Mac

(revised; the morning wasn’t kind when I checked the number of typos and errors in the text below :D )

 

Two years ago I was curious. Among my friends who share my professional passion for computers, I was the only one who had Windows.. and one of two who had a PC. The other guy was a passionate and proud 100% Ubuntu-based Linux user.

We’re talking about a band of ten, and the easy comment that comes to mind is that every cliché has its niche, which is true.

Still, working at Microsoft and attending various events, I noticed more and more Macs around, and on those guys I always saw extremely happy and passionate faces. Something that was rare in MS communities, which still somehow sought reassurance that the future was bright.

My initial experience with the Mac was very raw. I decided to check this thing out, mainly to form my own opinion, as a tech-savvy person who tries to make choices consciously.

With that approach, I said I needed to check it out, but I wouldn’t buy one just for what would probably be a week of testing. As if I didn’t already have other toys around gathering dust. So, from party to party, I played with friends’ computers, asked questions, dug deeper. On some machines I installed Xcode, even though those guys will never get their hands dirty with coding.

At some point I decided to search the Internet for the infamous Hackintosh solution. I managed to configure and install one under VirtualBox, only to quickly realize that, although I was more free to experiment with it compared to friends’ machines, it wasn’t the EXPERIENCE I really wanted.

In the meantime I purchased an iPad for similar reasons and fell in love with it. More and more, I missed integration.

Within a minute I decided I needed to purchase a real Mac and check what it’s like to code on it, play with it, and share it with my own family as the home computer. Already having 3 laptops at home, I purchased an iMac, as I never planned to move it around; initially it was planned as a secondary computer, just for testing and trend evaluation.

The fun part: it quickly became my primary computer. After two weeks I learnt not to bring the company’s laptop home. It stayed permanently locked by a Kensington lock at the office, and I only took it home when I had a business trip planned the next day.

I installed Windows on the Mac, and when I needed to do something on Windows I did, sharing my folders through Dropbox; the next day, after coming back to the office, I was just happy not to be carrying almost 5 kg of gear with me, and everything synced just fine.

The five kilos are not the point here. The still hidden and sort of illegal (at MS) fascination with the new user experience is the thing that has been triggering my imagination ever since. My days at MS were doomed ;-)

Now that I can consider myself a freelancer with complete freedom of technological choice, I have still stayed with Macs, and in the summer I purchased a second one, replacing the old company laptops I had to give back when leaving. This time it was a MacBook Pro, as I need a mobile computer capable of doing things a tablet still cannot do well.

Why stay with Macs, and how has it convinced me?

After these two years of crouching-tiger, hidden-dragon experiences with the desktop machine, I discovered that it’s a perfect and universal dev-kit station. With it, I can target everything from a single stand, with great performance. The only “optional” extension that keeps performance high is RAM; I expanded my computer to 16 GB to run virtual machines smoothly. Plus, of course, Fusion or Parallels if you want good performance on a desktop Windows installation, also run virtually from your Bootcamp partition. That’s $100 + $50 of extra expense on top of the core value proposition from the store.

Having all that in a single box, I have the best of Unix/Linux and Windows, with a quite fresh user experience (at least to me, still a fresh baby in the non-MS world).

By the best of Windows, I mean the huge and vast ecosystem of great applications, with amazing usability coming from the core operating system.
By the best of Unix/Linux, I mean everything a developer needs, already available under their fingertips or easily added to the system, well integrated, and often great value for free.

The second part (the best of Unix) might be challenging for a hardcore Windows developer, who might find it very difficult to come back to the command-line-driven mantra. Fortunately or unfortunately, it’s still very popular in old and new innovations targeting Unix devs and Unix-based dev environments. I found that Visual Studio had initially made me very lazy: click here and there, and if I cannot click somewhere then it’s not there. Nothing could be more wrong. Many command-line tools are so powerful that one can really ask “why pay for an IDE?”. If you want to stay with a powerful IDE, then of course Xcode offers a lot. I really mean a lot, and it’s free.

If you want to stay with the core of what Apple offers developers, then it’s a perfectly fine and similar experience; just new shortcuts, menu items and popping-up windows have to be learnt again.

For me, Xcode is often not enough, as I said I try to experiment with various platforms, where Android, web development and other targets are involved. For those, editors like Sublime Text and Emacs are really handy. For those, terminal commands like ‘ab’ or ‘gcov’ are pretty neat to automate with ‘ninja’- and ‘tup’-like systems.

As for the shell-based edit/compile/debug/test/build mantra: some people love it, some people hate it; it’s a preference. I’m relearning this lore, forgotten for many years. I like the learning curve: after so many years with a quite stable and slowly evolving environment, mostly over-bloating itself with each new version, it’s an ass-kicking motivator that keeps me technically alive.

Last but not least about Macs, especially when I work directly with customers: there are some who are aware of Apple’s proposition, and some who just repeat slogans and have mixed feelings about it. If you leave the slogans aside and show them how you work with the machine, even as a simple user, it’s amazing how self-evangelizing this product is.

Just 5 minutes of everyday typing and touch-pad swiping, and people comment: “I have to consider it as my next computer purchase”. Often it’s a +5 in a general conversation about anything, just because for everybody the experience is still fresh.

Business users mostly say so; consumers still complain about the price, which, I have to agree, is a valid point, especially here in Poland, where a computer is a commodity but at current average salaries a MacBook is not.

Aside from the price, people are really bored with Windows. This is where non-MS tablets represent a real challenge to Redmond. I discovered that myself. A change to something completely different, yet usable and useful, and above all commercially successful, is something everybody needs from time to time, to keep technological curiosity positive and to push innovation forward.

That's the reason why Windows 8 is, and should be, super important to Microsoft. Yet to me it's already just one of many platforms, not the most critical one. A fair warning to my former colleagues: it's so easy today to abandon one platform and still be happy and productive.


Oct 26 2012

Weekend doesn’t count. D-3 to the end of one long chapter..

It was November 2005. I was in London, on a sponsored trip to sign a contract with a local Microsoft Dynamics partner.

At the same time I was in conversations with Microsoft in Denmark and Poland, but I didn't believe I was grown enough to be seriously considered.

So I joked to one of the recruiters that I was flying to London anyway, as that job was secured, but MS was the priority: if they called me before Thursday, I'd fly to thank England for the consideration and the ticket, and then immediately come back to Warsaw to sign the contract with the almighty…

… and the HR person called me on Wednesday night (11:30pm), just to tell me that I could fly to London as a touristic trip, and that when I came back I should immediately drive to Al. Jerozolimskie, to the office of Microsoft's Polish subsidiary, to sign the contract as we had previously agreed.

That was the beginning of my seven-year-long adventure with the biggest ISV in the world, as MS likes to call itself. The adventure practically ended in June; officially it ends at the end of October.

Seven years.. many things happened during that time. I joined the blue badge ranks as a passionate technical geek with strong, fresh programming skills, which I'd always joked about: "I know how to make origami out of the keyboard".

I’ve learnt a lot and I’ve changed. I hit my glass ceiling, and at the end I helped myself a little bit to make the hard decision and leave.

Now, as I'm at the end of the incubation period of leaving, I've decided to summarize this long story somehow.. and in parts, as my tribute to all the friends, partners, customers and anybody I had the pleasure to cooperate, collaborate and co-work with…

Back then in 2005 I promised myself: "I'm joining MS as Neo, not Mr. Anderson, and the first sign of accepting the Matrix is the moment to take the right pill and flush myself from the system.."

It has taken seven years to discover the first marks of the flip side… Or maybe a little less, as it's the history of a pirate flag hoisted above my office desk and the middle finger shown to all who were on the wrong side of the wall.

I don’t regret anything from those last seven years, and this is not the end..

This is the new beginning!


Jun 16 2011

Digital versus Classic Journalism – Coming back from the debate

Two days ago I participated in a public debate opened with the following question: "If and how has the Internet changed journalism?". The debate took place in Warsaw, Poland, and concerned strictly the Polish media market. It was organized by a fairly popular news and community journalism portal, http://www.wiadomosci24.pl/.

The leader of the debate and chief editor of Wiadomosci24.pl, Tomasz Kowalski, invited interesting people to the conversation.
On one side we had very popular tech bloggers (in the Polish Internet, of course) like Przemyslaw “Spider” Pajak (http://www.spidersweb.pl), Maciej Budzich (http://blog.mediafun.pl/) and Krystian Kozerawski* (http://www.mackozer.pl). On the other, we had only one representative of the classic press: Agaton Koziński, editor of the Polska The Times daily newspaper. In the middle, and in between, sat guys like me, invited to be joint company for those two very different groups of writers. I was there as a Microsoft employee, a person who understands technology and how it has driven changes in journalism too.

I'd be 100% ignorant if I said I was the most important guy in that company. We had two senior experts from the academic world, two well-recognized Polish professors: Maciej Mrozowski and Włodzimierz Gogołek, who I suppose were invited to share their wisdom and distant maturity, probably unknown to the rest. That was the first fail of the debate itself.

I met those two fellows for the first time in my life. Post-mortem, I have huge respect for Mr. Włodzimierz Gogołek, who took the position of a person trying to understand the Internet and the big changes it has introduced to our society. Mr. Maciej Mrozowski presented nothing but ignorance and arrogance in almost every word he said. Too bad; he disrespected the audience. From a quite interesting opening question and perspective, the discussion quickly switched to a war between amateur/enthusiast writing (blogs) and professional journalism, which is "about serious stuff". I write "war" because it was the kind of smalltalk I often have with my friends during bar conversations. I expected a higher level and argumentation worthy of a public debate.

I was terrified by the lack of understanding of the Internet as a medium. I was shocked that Agaton Koziński was more interested in finding funds to send correspondents to the Middle East than in first asking whether we here in Poland are interested in yet another piece of news about the conflict in Palestine. Maybe we're more interested in how our freeways are constructed and why the heck we need to pay so much for them? I'm not sure which is more important, but now I understand why, when I buy a magazine, out of 30-40 articles I'm interested in maybe four. If editors do not ask themselves critical questions first, they land in the abyss of falling sales! In effect, no money for the Middle East guy.

I had a feeling that traditional journalists in Poland are a bunch of celebrities who demand exclusive attention. I'm no expert, but I see two paths of career development in journalism: the way of the classic celebrity, with all the steps up the food chain to gain respect, and the way of the rioter, who jumps over the food chain and pisses on it. Regardless of which path folks take, they should realize there are maybe five seats on the throne for the winners, not more. The rest are forgotten and should demand nothing, just keep fighting for attention with good quality content and modesty. A good example is Mr. Agaton Koziński: before the debate I didn't even know who he was. No offense intended; I just don't read daily newspapers.

I don't, because I have different sources of information that are faster. Honestly, everything I know about the earthquake in Japan I got from the Internet. I don't represent the majority yet; TV is probably still the winner. But the audience that takes the Internet as its primary source of information is growing. Already, many Internet events are pumped up by classic media sources. Soon I'll find info about a new TV show on the Internet, turn the TV on for that show, and come back to the Internet after it. Maybe everything will be broadcast over the Internet and the TV will be just a flat screen, yet another monitor in the living room.

At the debate I realized this is freaking out many traditional media guys. And they should be afraid. The sooner they realize they are not attractive to the younger, connected generations, the sooner they will wake up. And some say we have about 20+ million Internet users in Poland. Lots of people who are changing their habits right now!

They have to wake up. We all have issues to solve together, like privacy, or the literacy of people who read true garbage and only that.
The Internet represents a very unique place where true democracy, as I understand it, works. And from this first-hand experience I must say: democracy is very close to anarchy. Anarchy, where arrogance has its beautiful yet unproductive power.

That is mostly what I'll remember from the debate.

*) edited: Apologies to Krystian, I mistyped his surname a little bit. Thanks (Dominic Warkiewicz) for pointing it out.


May 11 2011

The tablet story – a few lessons learnt

I am fascinated by modern tablets. By modern, I mean the form factor represented by pads, slates & e-book readers: just a screen, with multi-touch capabilities for human-computer interaction.

I've been playing with Tablet PCs since I started working for Microsoft. One of the very first computers I got here was Toshiba's tablet with Windows XP (back in 2005). I liked it, but it never became my main computer. The reason was simple: the touch screen was an optional, secondary way of interacting with the machine. Some (in fact, a few) apps were designed to take advantage of the digital pen, but the overall feel of the OS and most apps was plain: they were designed with the keyboard and mouse as the primary tools of interaction.

It has taken me a while to fully understand why Microsoft postponed taking the original Slate idea to market. Then, when I started playing with the iPad, I realized how much the OS and all the apps I'd downloaded were designed with the screen as the primary tool of interaction. New developers designing apps for such devices should consider this a mandatory part of UX-focused design: not only the beauty of high-DPI vector-based graphics, but well-crafted and tested interaction through available and individually invented gestures.
If your design requires a keyboard, it's worth checking how many pad users actually buy this optional accessory. I don't have the numbers, but I don't believe it's mainstream. If that's confirmed, a simple question should be asked: is my one-dollar app good enough to convince the user to go to the store and buy a physical keyboard for an additional $60? This is universal knowledge regardless of the device, and it impacts the design process even more when a given device simply has no such accessory available.

So this is the main reason, and now I understand why, the well-known user experience of current Windows is not the best shot in the consumer world of tablets. A different UX, for both the OS and apps, is a must. I have great hope that Windows v.Next will show Microsoft's progress on the subject. But I haven't played only with the iPad: I've had the Galaxy and the Xoom in my hands, and I have several slates with Windows 7. Even though I agree that the current benchmark of quality is measured by the iPad's success, I disagree that you can succeed only with the bitten apple logo on the back of the screen. I found many imperfections in both the iPad 1 and 2; I found many opportunities lying in Android's state of the art, called Honeycomb; and I found many interesting use cases where Windows 7 based slates are the easiest to navigate. The world is not that clear-cut, and I believe the real battle for tablets hasn't started yet.

A few examples:
- I hate watching movies on the iPad because of the process of moving my movies there. If you have great video-streaming services in your country, that might not be an issue, but for me, to watch a movie I have to rip my DVD and transcode it to the right codec & size that is iPad compatible. It is time consuming and heavily frustrates me. Then I have to run iTunes, find the settings for my video-playing app, and upload my movies to that app's isolated storage. Why can't I just plug in a pen drive and have it automatically discovered and handled, like on my game console?
- I'm not a big fan of isolated storage; sharing data between apps is painful.
- Switching between apps is not so fortunate either.
- There are several non-complementary content licensing systems. I'm not sure I prefer the iTunes/App Store-only way of purchasing new content, but when an individual app has its own system for acquiring content, it's not always as stable as I'd expect. I lost a few issues of my Wired magazine with one update of the app. That put me on hold toward new purchases.
- There was a lot of discussion about having copy/paste functionality. For me it's a must-have, and I'd add fast app switching to this critical group. I still miss some fast, alt+tab or ctrl+c/ctrl+v-like way of doing it on the screen. Current implementations barely help doing it subconsciously.

I could add a lot more small defects and imperfections, but there is no sense in that. After all, it's a great device and a great user experience to play with. Still, as I said, I perceive lots of space for innovations and improvements better than a different color, shape or camera.

I've been playing with the Samsung Galaxy, which is no competition at all: its performance disqualifies everything. I've been playing with the Motorola Xoom, which is kind of cool, but as I read, it hasn't got traction yet. Windows 7 based slates with good hardware capabilities (the Asus Eee Slate is cool enough) are really nice for commercial scenarios, especially as closed OEM boxes. But I think that's where Windows Embedded was designed to go, so it's unclear how to position those devices. Windows 8 should answer all the questions, but it's too early to say anything more than speculation. With a few tablets already in my hands, I have learnt a few things though:

- Apple for sure is the big perception winner; nevertheless, it's far from perfect
- To win in the modern tablet space as a vendor you have to consider:
     – high quality hardware (in terms of CPU/GPU/RAM/storage performance and a screen with 2+ touch points that is really responsive)
     – there is no need, I believe, to own the hardware part of the business, but you have to ensure the quality of the hardware delivered (OEM certification, formal hardware requirements, etc.)
     – you have to have apps (from the OS to 3rd party apps) designed with the touch screen as the primary interaction tool
     – you have to have successful electronic distribution instruments as your business model proposal for developers making consumer apps
     – you have to know how to build strong, long-term relations with developers of business applications that require direct selling. App stores do not fit; alternative business models have to be constructed or polished.

Considering all that, I believe that in the future, just as with the iPhone, the iPad will again land in the exclusive area of high-end consumer devices, for people who can afford it as a portable yet home computer. But to reach the mainstream, measured by the PC's current reach, I think the real battle hasn't started yet. It will begin, and it will include more vendors. Which is good, for all the innovations it may bring.


Mar 27 2011

This is all new stuff, amazing

I must say, I’m confused and amazed at the same time.

Confused, because real commercial apps show how pointless the discussion is about which platform is better: native (not necessarily C++) or web, bound to the browser.

Amazed, because several apps I've been using constantly for a while show how far we have moved forward in the user experience and aesthetic design of our apps.

Whether truly web or native, but still utilizing the power of the Internet, those apps do not change my definition of, or priorities for, why I value the Internet. So no big revolution in the Web itself; it's just more mature, I'd say.

The Internet has always been about three priorities to me:
 - to communicate (meet and chat with the right people)
 - to learn and get to know (gaining access to the right information, which can extend my knowledge and experience and help me make better decisions)
 - simply.. to get the stuff, whatever "stuff" means.

Entertainment and consumer apps are the easiest example; adoption of new trends has always touched those first. But I see interesting new stuff for collaboration and the information worker (IW) stack of solutions, like document management and knowledge management, too.

I stopped using OneNote some time ago and switched to Evernote. I stopped tracking tasks in Outlook, as even without tasks it's filled with so much info that I'm lost and frustrated every time I have to open it.

I envied Mac users their Things. I'm glad to have found Wunderlist. Evernote and Wunderlist are perfect examples of replacing one huge solution (call it Office) with smaller but dedicated pieces. And this is hard to admit: I work for Microsoft, I have my free copy of Office, but I still preferred to uninstall OneNote (why the heck would I need it now?).

There are many more solutions like that. Users' preferences change; an organization can block or encourage them, the same way Facebook could have been blocked in many offices. Users still have those preferences, and frustrations, if they cannot choose the best possible solution they think is within reach.

This perception should encourage you, if you make those decisions, to look not only at software but also at hardware from a different angle. Smaller is often better. Remember not what, but how employees work; with whom and why they share information. Assess that, and then pick IT solutions based on real use cases, not ones modelled by business or software requirements.

I'm deeply amazed by apps like Evernote and Wunderlist, and by new social networks like Quora or Convore, which bind people by the professional interests they share, not only by the will to share rubbish over the web (like Facebook and Twitter). It's only the beginning, and I'm addicted to a new hobby: searching for new stuff around me. Some of it is a real digital-life changer.


Jan 11 2011

Live long HTML, death to the browsers!

Last summer Wired Magazine published an article about the Web. They spread the vision that the Internet will obviously keep evolving, but the Web, as we know it, will die. There will be no need for websites; instead we'll come back to apps and app-bound content (think e-books, comics with a dedicated reader, radio, music, video streaming, etc.). If you're interested in reading that article, please go to this page.

When I read it, it triggered some loud thinking. I'm not sure they're completely right, but there are signals of such a vision coming true right now. It's well visible especially on mobile and portable devices, as well as on game consoles.

Instead of raw web pages to access services exposed by vendors, we choose dedicated applications. On some devices we have no choice: the Xbox 360 does not give you an opportunity to browse the web freely, but gives you apps to access Facebook, Twitter and Last.FM. Those are good examples of an online service with multi-platform clients, where the web page is just one among them, and not always the most important or richest in features.

For comparison, even on the PS3, which has a web browser, it's a much better user experience to work with applications. In Poland, where I live, the market for those apps is not very well developed, and the only app I see on my PS3's dashboard is the AXN network in the TV section. This particular app is in fact completely hopeless: it offers only a few clips (I wouldn't even call them movies), just like a YouTube player but with some very limited AXN-sponsored content. Sony's problem; the opportunity is huge.

The YouTube player is another good example: for mobile/portable devices it provides a dedicated application instead of a link (icon) to the web page. I believe this originally came from technological limitations: they stream videos using Flash and have experimental support for the HTML5 <video> tag. There is a big escape from Flash on mobile devices, and not every device and platform supports HTML5 well yet. So YT gives you an app to search and play their videos. And I think that, having those apps, the guys behind YT realize the potential of a dedicated UX coming through an application rather than a web page.

If you look at electronic publications, a slow migration to e-paper is coming and killing the need for rich content on web pages. Why? There's a great business opportunity hidden there. Some believe it's a rescue wheel for publishers who struggle to monetize the web well, while paper subscriptions are in constant decline.

We're so used to seeing web pages for free that, as Internet users, we still barely like to pay to read anything at a particular URL. Sites like www.nytimes.com give content away for free and install ads anywhere possible. Of course they have a subscription for online content; WSJ has it too, but I suspect that most of their subscribers are corporate/business customers, not consumers.

Now take the NY Times Reader for PC, phone, iPad, whatever. Designed to emulate the daily issue, paid for the same way the daily issue comes to your door. Full of dedicated content, with a user experience suited to the device. A highly welcomed product; among my friends I find people eager to pay for it.

What a rescue for newspapers indeed. Electronic distribution of many different kinds of goods (online content) wins more and more share, and with our shift toward the public cloud, it's simply sentenced to win.

Now, going forward: what if all that becomes true? We will have Internet A, with the browser and web pages: obsolete, full of legacy stuff. Another Internet, B, will come with apps that just use the infrastructure for connectivity. A little bit like in the good old times, but with much more mature platforms, APIs and standards than in the 90s (web services and JSON are good examples we didn't have in previous decades).

So what about the current mindset war of RIA versus HTML5 improvements? If apps win, is HTML5 really relevant? I think yes. In many scenarios it's right now the easiest way to start a project portable across all these different platforms and screens. The biggest risk is that it's still immature (in both the standards and the browsers' implementations) and fragmented.

Many HTML5 enthusiasts yell that HTML5 kills the need for any other development platform, and first of all the need for Flash and Silverlight as RIA platforms for the Web.

I've played with HTML5 a little bit, and to set it 1:1 against Flash or Silverlight, I must say it's incomplete. To really compare both at the feature level, the left side has to include:

HTML5 + JS DOM extensions coming with HTML5 + CSS3 + SVG

Only then can I start talking about any comparison to what SWF and XAP files can render by themselves.

But this post is not about bitching at HTML5 versus plugin-based RIA. In fact, I believe that in reasonable time the maturity of HTML5 will finally come true: the standard will become complete and all popular browsers will support all the relevant features. To handle the HTML+JS+CSS+SVG complexity, we just need different tools, which are absent right now. But I believe they will come too; they just have to be split to suit the needs of both coders and designers (the way Visual Studio and Expression Studio are positioned in the MS offer, for example). Then, and only then, can we start verifying HTML5 as the dominant platform for the Web.

But wait a second, let's go back to the beginning. Wired said the Internet will survive (the infrastructure level and connectivity across devices), but the Web will become obsolete. So why HTML5, if not for web pages? For web apps!

Right now, for more and more platforms, we can build web apps behaving like regular client apps for the device.

Pads and slates are a good example of devices which I call hardware browsers.
So if my window to the Internet is a hardware device, not a browser application on a PC, why would I need a software browser inside it?

There is still one core feature currently dominant in daily Web usage: search. For websites and online services, even for content and app downloads, you need a search provider to find them in the endless wastes of the Internet. But when we talk about web apps, search can easily be replaced by something we see becoming mainstream right now: app marketplaces, app stores, Steams, or whatever the service is called.

I'm not a conjurer, I can't predict the future, but some things are happening right now. And if it all becomes true, think about why Google is building a marketplace for web apps that run online and are not for download. To be prepared..

Think how many businesses will change operationally if that happens. SEO, for example: big risk of being killed, or of being 100% controlled by the app store vendor. App stores are closed, and I see no reason why they'd open up. If SEO is killed, we will most probably come back to the traditional, billboard-like model of advertising.

Purchasing C-class infrastructure to set up a farm of SEO servers to pump up your pages faster will just not work. You'll have to go to the app store vendor and, just like with NYC's Times Square LCD screens, pay a heck of a lot of money to be on the main page as app of the week.