Monday, June 28, 2010

#11, 3D Graphics, Part 1

This week I'm going to talk about 3D graphics on the Mac. Please refer to the previous posts on graphics and programming for some background. What I did in the past few days was (1) test some sample 3D applications and (2) write some basic 3D programs. The short version of the result: Mac hardware and software don't seem to be particularly weaker than the PC, and in some cases they even do slightly better, but in other cases, especially real-time rendering using hardware accelerators, they may not be as efficient, possibly due to less efficient drivers. The basic 3D API on the Mac is OpenGL, which is not as good as Direct3D on Windows but is pretty much portable, at least for basic stuff.

Before I get to some details, let's make sure we all know the basics of 3D graphics.

==========
Very Very Quick Intro to 3D Graphics
==========

2D computer graphics is all about images, i.e. two-dimensional arrays of points with different colour values (a.k.a. pixels, or picture elements). 3D computer graphics, on the other hand, is about points in three-dimensional space. We call these points vertices. Each vertex has three location values (XYZ) and a colour value. But things are more complicated than this, for a couple of reasons.

3D points form 3D objects when connected. The simplest type of 3D object is the polygon, i.e. a set of (usually 3 or 4) connected vertices that lie on the same plane. Polygons are then connected to form a mesh, which is the basis for complex 3D objects. A mesh can then have a material (properties like colour and reflection) and/or a texture (a 2D image covering the surface of the 3D object). Materials and textures define the colour of the vertices and also the colour of the points inside the polygons. A sketch of these data structures follows.
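To make this a bit more concrete, here is a minimal sketch in C of how such geometry data might be laid out (the struct and field names are mine, for illustration only, and not from any particular API):

/* a vertex: a point in 3D space with a colour */
typedef struct {
    float x, y, z;        /* position */
    float r, g, b;        /* colour */
} Vertex;

/* a polygon: here a triangle, three indices into a vertex array */
typedef struct {
    int indices[3];
} Triangle;

/* a mesh: vertices plus the polygons connecting them */
typedef struct {
    Vertex   *vertices;
    int       vertexCount;
    Triangle *triangles;
    int       triangleCount;
} Mesh;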

Now here's the second complication. 3D scenes have light sources. This means that the colour of a point in 3D is a function of its material/texture and also the light sources.

Finally we have another new concept and that is viewpoint. A 2D image has no notion of viewpoint. Although physically you can look at it from different points and potentially see it differently, this has nothing to do with how your computer is generating the image. 3D scenes on the other hand need to be transformed to a 2D image before they can be shown on screen (let's not talk about real 3D displays). In order to decide what your 3D world will look like, you need to decide where your eyes are.

The process of creating a 3D world is called modelling. Making the objects in this 3D space move is animation. Transforming your 3D world to a 2D image is called rendering, sometimes called shading. The rendering process is controlled by a series of parameters and is usually done in two steps:

** Vertex Shading transforms all your vertices to points in 2D. This depends on the location/scale/orientation of your objects in the 3D world and the location/orientation and properties of your camera (e.g. field of view).

** Pixel Shading is the second step and decides what colour to use for each pixel, based on the material and texture of the objects and also the light sources.
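To give a rough feel for the vertex-shading step, here is the core of a simple perspective projection in C. This is only a sketch under simplifying assumptions (camera at the origin looking down the negative Z axis, image plane at distance d); real pipelines use 4x4 matrices:

#include <stdio.h>

/* project a 3D point onto the image plane at distance d */
void project(float x, float y, float z, float d, float *sx, float *sy)
{
    *sx = d * x / -z;   /* objects farther away end up closer to the centre */
    *sy = d * y / -z;
}

int main(void)
{
    float sx, sy;
    project(1.0f, 1.0f, -5.0f, 2.0f, &sx, &sy);
    printf("screen point: (%.2f, %.2f)\n", sx, sy);   /* prints (0.40, 0.40) */
    return 0;
}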

Rendering used to be (and still can be) done in software. Graphics accelerators allowed it to be done in specialized hardware with fixed functions. These then evolved into programmable GPUs with their own programming languages for writing shaders. High Level Shader Language (HLSL) is the shader language supported by Direct3D; OpenGL Shading Language (GLSL) is the alternative for OpenGL. These languages have their own versions, and different graphics cards are compatible with different versions, or Shader Models.

When you watch a 3D animated film, every frame has been rendered in advance. When you play a 3D game, every frame is rendered in real time.

==========
Application Tests
==========

The first 3D application I tested was Maya 2011.

512x512 render using Mental Ray
Windows, Maya 32-bit, 3.48 sec
Windows, Maya 64-bit, 3.08 sec
OSX, Maya 64-bit, 3.02 sec

For comparison, I tried the same file on my desktop PC (Quad Core, 3GHz, 6GB RAM, Performance Index 5.9) and it took 2.07 sec. FYI, the Performance Index for my Windows 7 installation on the Mac is 5.2.

1024x768 render of another scene, without Mental Ray, using the Maya Software renderer
Windows, Maya 64-bit, 0.07 sec
OSX, Maya 64-bit, 0.10 sec

When I used Maya Hardware, both took less than 0.01 sec and I couldn't see any difference. I will run longer tests later for a better comparison of Maya Hardware rendering.

Next I tried a 3D game. I chose Half-Life 2, which is now available for Mac. Valve has released Steam for Mac, and some of their games, including The Orange Box, are now available. One good thing is that if you have the Windows version, you don't need to buy the Mac version and can download it for free. I tested the Windows and Mac versions at default settings (1280x800 for Windows and 1152x720 for OSX). I only tried HL2 on OSX for a short period. It seems to work fine, but even at the lower resolution a slight performance problem could be noticed: the rendering seemed a little slower, and you could see the screen being refreshed with a little delay. This was not a big issue at all, and hardly noticeable, but my guess is that in more complex scenes it could get worse. Needs more testing.

Overall, Mac (or I'd better say OSX, since my Windows is running on Mac hardware too) doesn't seem to have a major issue with 3D. My guess is that the OpenGL drivers for Mac are not as efficient as the Direct3D drivers on Windows. This could be due to slight inferiority of OpenGL itself, the drivers, the general difficulty of programming for the Mac, or the popularity of the Windows platform, which makes programmers work harder and better for it.

The next step of my test was to compare OpenGL on OSX and Windows.

==========
3D Programming
==========

OpenGL is the basic 3D API for Mac; even other APIs (like CL) are created on top of it. So to test 3D programming it seems that we should focus on OpenGL. For comparison on Windows we could use Direct3D or OpenGL, and I started with OpenGL on both platforms. Windows and Visual Studio don't come with all the libraries for OpenGL programming, but they are easily available. This includes OpenGL itself and GLUT (utility functions for OpenGL). There is also GLAUX, which is only available for Windows, so I won't use it. OSX, on the other hand, is completely OpenGL-based, and Xcode comes with the OpenGL and GLUT frameworks. Here are the basic steps to write a simple OpenGL program on OSX:

1- Start a new project of type Command Line Tool with C. Unfortunately this is the only non-Cocoa C-based option. Don't worry, you can still do graphics with it using OpenGL.
2- Right-click on the project name and select Add, then Existing Framework. Select GLUT and OpenGL.
3- Add the following code:

#include <stdlib.h>       /* standard C library */
#include <OpenGL/gl.h>    /* OpenGL itself */
#include <GLUT/glut.h>    /* GLUT utility functions */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);   /* erase the window */

    /* draw a square as a single polygon with four 2D vertices */
    glBegin(GL_POLYGON);
        glVertex2f(-0.5, -0.5);
        glVertex2f(-0.5, 0.5);
        glVertex2f(0.5, 0.5);
        glVertex2f(0.5, -0.5);
    glEnd();

    glFlush();                      /* make sure everything is drawn */
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);  /* single-buffered RGB window */
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("simple");
    glutDisplayFunc(display);   /* register our display function */
    glutMainLoop();             /* hand control over to GLUT */
    return 0;
}

Let me explain this code a little bit for those who are not familiar with OpenGL:

** The #include lines allow you to use OpenGL and GLUT. Remember that in C/C++ you need an #include for the compiler to know about a library's functions. The linker then links your program to the actual library code (here, the frameworks you added).

** The main() function is the entry point of your C program. It initializes OpenGL, creates a window, sets a Display Function, and calls the OpenGL Main Loop. The Display Function draws the graphics in your window: you provide a function and pass its name to glutDisplayFunc(). The Main Loop processes user commands. If you don't register handlers of your own, the default loop doesn't really do anything special.

** The display() function is what you pass to glutDisplayFunc(), and you basically have to have one (it can be called anything). In this simple example, our display function only creates a polygon by defining four vertices in 2D; notice that OpenGL can be used for 2D graphics as well as 3D. This function will be called every time the window needs to be redrawn. Other callbacks can be registered the same way, as the sketch below shows for keyboard input.
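As an example of processing user commands, here is a sketch of a keyboard callback (the handler name and keys are mine); you would register it with glutKeyboardFunc(keyboard) before glutMainLoop(), just like the display function:

/* needs <stdlib.h> for exit() */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 27)            /* ESC: quit the program */
        exit(0);
    else if (key == 'r')      /* ask GLUT to redraw the window */
        glutPostRedisplay();
}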

To run this on Windows, you create a Win32 empty console application, add the OpenGL libraries to the linker inputs, and add a new file with these headers:
#include <windows.h>
#include <gl/gl.h>
#include <gl/glut.h>

You'll notice that you need windows.h, and also that the OpenGL header files/directories have different names. The rest of the code is the same.
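As a side note, with Visual C++ you can also specify the OpenGL libraries in code instead of in the project's linker settings. A sketch (the GLUT library name depends on which Windows GLUT build you installed):

/* MSVC-specific pragmas; put these after the #include lines */
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glut32.lib")   /* name may differ for your GLUT build */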

The next step is to add some 3D code. I used a simple Camera class that allows moving the camera, added a Main Loop handler that lets the user control the camera, and modified the Display Function to create and draw some simple 3D objects. You can find the full code here:

http://www.csit.carleton.ca/~arya/b2cc/opengl/
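Conceptually, the camera just positions the viewer before anything is drawn. A minimal stand-in for such a Render() call could use GLU's gluLookAt (this is only a sketch, not the exact class from the linked code):

#include <OpenGL/glu.h>   /* <gl/glu.h> on Windows */

typedef struct {
    float eye[3];      /* where the camera is */
    float target[3];   /* the point it looks at */
} SimpleCamera;

void SimpleCamera_Render(const SimpleCamera *c)
{
    gluLookAt(c->eye[0], c->eye[1], c->eye[2],
              c->target[0], c->target[1], c->target[2],
              0.0, 1.0, 0.0);   /* the Y axis is "up" */
}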

My Display Function looks like this:

void Display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Don't forget the depth buffer!
    glLoadIdentity(); //Load a new modelview matrix -> we can apply new transformations
    Camera.Render();
    glLightfv(GL_LIGHT0, GL_POSITION, LightPos);
    glutSolidSphere(1.0, 20, 20);
    glTranslatef(2.0, 0.0, 0.0);
    glutSolidSphere(1.0, 20, 20);
    glPushMatrix(); //save the current transformation...
    glTranslatef(0.0, 2.0, 0.0);
    glutSolidSphere(1.0, 20, 20);
    glPopMatrix(); //...and restore it
    glTranslatef(0.0, 0.0, 2.0);
    glutSolidSphere(1.0, 20, 20);
    glTranslatef(-4.0, 0.0, 0.0);
    glRotatef(TorusRotated, 0.0, 1.0, 0.0);
    glutSolidTorus(0.3, 1.0, 8, 16);
    glFlush(); //Finish rendering
    glutSwapBuffers(); //Swap the buffers -> make the result of rendering visible
}

You can see that simple 3D objects are created and then placed in the 3D world. Everything is static except a torus whose rotation depends on the variable TorusRotated. I used two methods for changing this variable. Look at main() in the code and locate these lines (only one of them should be used):

glutIdleFunc(Idle);
glutTimerFunc(400,Timer,0);

Both calls are similar to the one that registered our Display Function, but they register an Idle function (called whenever the application is doing nothing) and a Timer function (called at certain time intervals). Each of these methods simply changes the TorusRotated variable, so the next time the scene is rendered you see the torus with a different orientation. The timer method is preferred when you want to control how many frames per second you get.

For your quick reference, here are the Idle and Timer functions:

void Idle(void)
{
    TorusRotated += 2.0;
    Display();
}

void Timer(int value)
{
    TorusRotated += 2.0;
    Display();
    glutTimerFunc(400, Timer, value); //need to call the timer again
}

It is not very efficient to do the actual drawing inside the Idle and Timer functions, but optimization is not an issue here (a slightly cleaner variant is sketched below). The code will work with no change on Windows.
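That cleaner variant only updates the state in the callback and lets GLUT schedule the redraw itself via glutPostRedisplay(). For example, a timer aiming at roughly 25 frames per second (a sketch):

void Timer(int value)
{
    TorusRotated += 2.0;
    glutPostRedisplay();                      /* GLUT will call Display() when appropriate */
    glutTimerFunc(1000 / 25, Timer, value);   /* re-arm for ~25 fps */
}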

For this simple example I didn't notice any particular rendering performance difference, but the Idle and Timer functions don't behave the same on OSX and Windows. More specifically, the Idle function on OSX is called a lot more frequently, and the Timer intervals can go as short as 1 millisecond, while on Windows you won't get anything less than 10 milliseconds. Direct3D programs have other ways to get higher timer resolution, but apparently the OpenGL/GLUT implementation on Windows doesn't use them and stops at 10 milliseconds, the default resolution of the Windows timer.
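If you want to verify this yourself, you can measure the actual interval between Timer calls with glutGet(GLUT_ELAPSED_TIME), which returns the number of milliseconds since glutInit(). A sketch (needs <stdio.h>):

void Timer(int value)
{
    static int last = 0;
    int now = glutGet(GLUT_ELAPSED_TIME);   /* ms since glutInit() */
    printf("interval: %d ms\n", now - last);
    last = now;
    glutTimerFunc(1, Timer, value);         /* ask for 1 ms and see what you get */
}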


This week's tests don't seem to be very conclusive, but hopefully they are a good starting point for understanding how things work and for performing more advanced tests, both with existing applications and games and by developing 3D programs.

I'm off to Barcelona and then Vancouver in a few days, so expect some delays but no worries.

I'LL BE BACK!

Monday, June 21, 2010

#10, 2D Graphics and GUI Programming Summary

I heard last week that Xcode 4 is being released and that it has Interface Builder integrated. I find it interesting that Apple calls this (and other Xcode features) "revolutionary" while they are still way behind industry standards for an IDE, but anyway, it's good to see this change. I look forward to seeing some other changes related to problems I have mentioned here. Today I want to wrap up my quick review of basic Cocoa programming and of creating simple GUI and 2D graphics applications. But first, let me mention something that I didn't discuss before.

Back in the old days (the 1990's), when I was using the Microsoft Foundation Classes (MFC) in Visual C++, I was introduced to the concept of document-based vs. form-based applications. MFC had (and still has) three types of applications: Multiple Document, Single Document, and Dialog-based. The first two are variations of what is generally called a document-based application, where you can open an existing document or create a new one; the framework provides programmers with base classes for the Document (application data) and the View (user interface). Dialog-based or form-based applications, on the other hand, have a dialog box (or form) as their main interface, although they may still open files, and there is no framework-imposed separation between data and UI. In MFC, dialog-based applications were easier to make than document-based ones, especially because it was easier to define the program as a series of event handlers responding to the UI objects on the form. Visual C# made creating dialog-based applications (now called forms) even easier and got rid of the document-based application style (you can still make them if you want, but you have to create the document class yourself). Cocoa provides similar options: the default application in Xcode is a "form-based" one, but you may select the document-based option, and some classes will be added to your code to support the document-based architecture.

Just as I didn't like document-based applications in Visual C++, I didn't like them in Xcode. In fact I disliked them even more, because they are harder to use and more confusing than their VC++ counterparts. After a short try, I decided not to work with document-based applications. If you are interested in learning about them, check this page as your starting point:

http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/Documents/Tasks/ImplementingDocApp.html

Now let's focus on the form-based applications and basic GUI and 2D graphics programming with Cocoa. Here are the basic things you need to know:

1- Cocoa and .NET provide more or less similar functionality for creating basic GUI and 2D graphics apps, but the Cocoa classes are slightly more complicated to use. As for the IDE, Xcode is A LOT more complicated to use than Visual Studio. Also, Objective-C as a programming language is not as convenient as C# for typical C/C++ programmers.

2- Once you create a default form-based Cocoa application, an NSApplicationDelegate class is made for you. This can be your starting point for adding variables and event handlers. It already has a member that points to your application window.

3- You design your form (user interface) in Interface Builder, which can be launched by double-clicking on your xib or nib file in Xcode. In IB, add the UI objects from the Library to your form. Save and go back to Xcode.

4- You add two important things to your AppDelegate class: member variables representing UI objects, and methods acting as UI event handlers. Variables representing UI objects are marked with the type IBOutlet, and event-handler methods with IBAction. You don't need to specify the exact type of the object; Cocoa uses dynamic typing. Here is an example:

=======
//TestAppDelegate.h
=======

#import <Cocoa/Cocoa.h>

@interface TestAppDelegate : NSObject <NSApplicationDelegate> {
    NSWindow *window; //added automatically

    IBOutlet id textView; //outlet for text view
    IBOutlet id customView; //outlet for custom view showing images
    IBOutlet id textfield; //outlet for a simple text field
}
- (IBAction) clearText: sender; //method for clearing the text view
- (IBAction) setView: sender; //set custom view properties

@property (assign) IBOutlet NSWindow *window;

@end


=======
//TestAppDelegate.m
=======

#import "TestAppDelegate.h"

@implementation TestAppDelegate

@synthesize window;

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // Insert code here to initialize your application
}

//////////
// This is the part where we implement the methods we added
//
- (IBAction) clearText: sender
{
    [textView setString: @" "];
}

- (IBAction) setView: sender
{
    //some code here
}

@end

5- Once you've added the code in Xcode and the UI objects in IB, you go back to IB and link the UI objects to the code. This is done by holding down the Control key and dragging item A (source) to item B (target). For member variables, the source is the AppDelegate object in the IB Documents window and the target is the UI object on the form. For event handlers, the source is the UI object and the target is the AppDelegate. In both cases, once you drop, you get a list of options (the class members that you want to use).

6- Instead of using the automatically-added AppDelegate class, you may create a new class based on NSObject and do the exact same thing with that; a minimal sketch follows.
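For example, a minimal controller class could look like this (a sketch; the class and member names are mine). In IB you would then drag an NSObject from the Library into the document window, set its class to MyController, and make the connections as in step 5:

//MyController.h
#import <Cocoa/Cocoa.h>

@interface MyController : NSObject {
    IBOutlet id textField;         //connected to a text field in IB
}
- (IBAction) doSomething: sender;  //connected to a button in IB
@end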

7- Another thing you can do is add classes that extend existing Cocoa classes. For example, we created a new class derived from NSView to control our custom views and make them draw images. It already had some default methods, and we added our code to the drawRect method (see previous postings). We can also add member variables and new methods:

=======
TstView.h
=======
#import <Cocoa/Cocoa.h>


@interface TstView : NSView
{
    int x; //new variable
}

- (id) setx; //new method

@end
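The matching implementation file could then look something like this (a sketch; here setx just changes the state and asks the view to redraw itself):

//TstView.m
#import "TstView.h"

@implementation TstView

- (void) drawRect: (NSRect) rect
{
    [[NSColor redColor] setFill];
    NSRectFill(NSMakeRect(x, 0, 100, 100));   //use the new variable
}

- (id) setx
{
    x += 10;                        //change the state...
    [self setNeedsDisplay:YES];     //...and request a redraw
    return self;
}

@end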

8- Once you have added a new class, you need to go back to IB, select the UI object, and choose the new class for it.

9- Finally, you use classes like NSPoint, NSColor, and NSImage to do 2D graphics, as discussed before, generally in your drawRect method. See previous posts for details.

Hopefully this quick review gave you the chance to learn a little bit about Cocoa programming. It was a good learning experience for me. Although I didn't find either of the two systems significantly superior in terms of functionality, I found the Windows programming frameworks and tools more convenient and effective. Of course, I'm not done yet. My next step is to try 3D programming and also to test and compare some operations in common 3D programs.


I'LL BE BACK!

Monday, June 14, 2010

#9, Graphics Programming, Part 3

It's summer, and between sunny days, travel plans, and the tons of projects left for the summer, there's not much time to explore OSX, although I've been using my MBP all the time. Anyway, I'm following the plan, and this week I'll continue with graphics programming by trying some basic image functions, such as displaying an image and drawing on it.

We learned about some basic concepts in graphics programming last week: Rect, Point, and Color. This week I introduce two new ones that are just as fundamental: Path and Image. We will see which Cocoa (and in fact Quartz) classes control them, and how.

==========
Path
==========

A Path is a line (straight or curved lines, arcs, rectangles, etc.) that is defined by points and some math to connect them. Paths can draw themselves into a view (see last week's post to remember what a view is), and they can be changed or combined. NSBezierPath is the Cocoa class that represents these lines, or in general Bezier curves (parametric curves with control points, http://en.wikipedia.org/wiki/Bezier_curve). Let's look at some examples of creating and drawing Path objects. The code can go inside the drawRect method of your view class from last week.

The first example draws a straight line between two points with a gray colour:

//create start and end points
NSPoint startPoint = { 21, 21 };
NSPoint endPoint = { 128,128 };

//create the path object
//the bezierPath static method in NSBezierPath class returns a new and empty object
NSBezierPath * path = [NSBezierPath bezierPath];

//define the path's start and end, and the path type (a straight line)
[path moveToPoint: startPoint];
[path lineToPoint: endPoint];

//set the line width for drawing
[path setLineWidth: 4];

//define the current colour
[[NSColor grayColor] set];

//draw the path
[path stroke];

Now let's make things a little more complicated and draw a curved line with two control points. All we need to do is replace the call to lineToPoint with a call to curveToPoint, which requires two extra parameters. These parameters are the Bezier control points, and we use the NSMakePoint function to create them:

[path curveToPoint: endPoint
controlPoint1: NSMakePoint ( 128, 21 )
controlPoint2: NSMakePoint ( 21,128 )];


Just as we could fill a Rect with a colour, we can fill a Path with a colour. Simply call the fill method before your call to stroke:

[[NSColor whiteColor] set];
[path fill];

//draw the path
[path stroke];

And here is the code for creating a rectangular Path:

//create a Rect
NSPoint origin = { 21,21 };
NSRect rect;
rect.origin = origin;
rect.size.width = 128;
rect.size.height = 128;

//create the Path using a call to bezierPathWithRect method
NSBezierPath * path;
path = [NSBezierPath bezierPathWithRect:rect];

Or a circular Path:

NSBezierPath * path = [NSBezierPath bezierPath];
NSPoint center = { 128,128 };

[path moveToPoint: center];
[path appendBezierPathWithArcWithCenter: center
radius: 64
startAngle: 0
endAngle: 321];

The .NET framework provides similar functionality through the Point, Color, and Pen classes, and the families of Draw and Fill methods in the Graphics class. Here is an example in C# (startPoint, endPoint, and the control points are assumed to be Point objects defined earlier):

System.Drawing.Pen myPen = new System.Drawing.Pen(System.Drawing.Color.Red);
System.Drawing.Graphics formGraphics = this.CreateGraphics();
//draw line by specifying X and Y for start and end
formGraphics.DrawLine(myPen, 0, 0, 200, 200); //from (0,0) to (200,200)
//we could also use a Point object
//draw a Bezier curve using four points
formGraphics.DrawBezier(myPen, startPoint, controlPoint1, controlPoint2, endPoint);


==========
Image
==========

Next stop is the Image, probably the most used object in 2D graphics. The NSImage class represents 2D images and handles drawing at different sizes and opacity levels, as well as reading from and writing to disk. Here is how we do the two basic operations, reading from a file and drawing:

//create a Rect to draw the image in
NSPoint imageOrigin = NSMakePoint(0,0);
NSSize imageSize = {100,100};
NSRect destRect;
destRect.origin = imageOrigin;
destRect.size = imageSize;

//load the image
NSString * file = @"/Library/Desktop Pictures/Plants/Leaf Curl.jpg";
NSImage * image = [[NSImage alloc] initWithContentsOfFile:file];

//draw the image into the Rect
[image drawInRect: destRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];

One interesting part of this process is the fraction argument of the drawInRect method, which controls the opacity.
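For example, drawing the same image half transparent only requires changing that one argument:

//draw the image at 50% opacity
[image drawInRect: destRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 0.5];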

Before we draw, we can "flip" the image:

[image setFlipped:YES];

This is because views can be standard or flipped (i.e. the origin being at the bottom-left or the top-left).

We can combine images and paths to, for example, draw a frame around the image. Simply add the path code after the image code, as in this sketch:
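//stroke a frame around the image we just drew
NSBezierPath * frame = [NSBezierPath bezierPathWithRect: destRect];
[frame setLineWidth: 4];
[[NSColor blackColor] set];
[frame stroke];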

In .NET, the Image class and the DrawImage method of the Graphics class provide similar functionality. For example, the following code draws an image into a parallelogram (it doesn't need to be a rectangle):

// Create image.
Image newImage = Image.FromFile("SampImag.jpg");
// Create parallelogram for drawing image.
Point tlCorner = new Point(100, 100); //top-left
Point trCorner = new Point(550, 100); //top-right
Point blCorner = new Point(150, 250); //bottom-left
Point[] destPara = {tlCorner, trCorner, blCorner}; //bottom-right can be calculated
// Draw image to screen.
e.Graphics.DrawImage(newImage, destPara);

For drawing, .NET supports opacity levels through alpha-blending (alpha being the fourth component of a colour, after red, green, and blue). DrawImage has other overloads that support image attributes (including opacity).



As we can see, Cocoa and .NET provide similar functionality in similar (and sometimes slightly different) ways. I'm not sure about you, but I didn't see any particular advantage in either. There are details in both that are a little easier or more powerful on one side or the other, but for basic 2D graphics programming, so far there is no winner!


I'LL BE BACK!

Monday, June 7, 2010

#8, Graphics Programming, Part 2

My objective this week is to explore the basic concepts of graphics programming on OSX. I continue to use Objective-C and the Cocoa framework, but this time through one of the underlying APIs, Quartz. In previous posts I talked about making a simple Notepad application using Cocoa and introduced some fundamentals such as Xcode and Interface Builder. Our Notepad used an existing library object that could take care of all the text operations, and the application didn't really need to do anything except some simple UI, like a button click to clear the view. For our simple graphics programming example, we use another library object, Custom View, that allows us to do "custom" graphics. For this simple example I only draw coloured rectangles. But first, let's review some basic concepts in Quartz.

A rectangle, represented by the NSRect structure (remember that Cocoa classes and structs start with NS), is an area in which 2D graphics operations are performed. It has four basic members: x, y, width, and height. The standard coordinate system in Quartz puts the origin at the bottom-left. Many other systems (including the web and Windows) put the origin at the top-left; this is called a "flipped" coordinate system in Quartz and can be set if preferred. A point, represented by NSPoint, is a pair of x and y values, and an NSColor object is a system colour.

Here are some code examples:

// make a point at coordinate 20,20
NSPoint newPoint = NSMakePoint ( 20, 20 );

// make a size, then use the previous point and size to make a rect
NSSize newSize = NSMakeSize ( 100, 100 );
NSRect newRect = NSMakeRect ( newPoint.x,
                              newPoint.y,
                              newSize.width,
                              newSize.height );

// also can just do this
NSRect newRect = NSMakeRect ( 20, 20, 100, 100 );

//make a red colour object
//create a new object by calling redColor of NSColor class
NSColor * xColor = [NSColor redColor];

Notice how C-style functions like NSMakePoint are used, as well as Objective-C method calls for NSColor. This is good, and also confusing!

These variables are not objects. If you need an object (and in some cases you will), you can use NSValue:

//create an object by calling valueWithRect method of NSValue class
NSValue * rectObject = [NSValue valueWithRect: newRect];

Once you have a rectangle, you can perform operations on it, such as filling the area with a colour:

//set current colour
NSColor * white = [NSColor whiteColor];
[white set];

// fill a rect
NSRect rect1 = NSMakeRect ( 21,21,210,210 );
NSRectFill ( rect1 );

You see that NSRectFill is another C-style function; it uses a rectangle and the "current colour", which has to be set in advance.

Now here are the steps we go through to create our simple application (see previous posts for details of the steps we have covered before):

1- Start Xcode and make an empty Cocoa application
2- Add a new file: Cocoa classes, Objective-C class, NSView
3- Double-click on the .xib or .nib file to bring up Interface Builder
4- From Library/Objects find Custom View and add to application window
5- Choose Tools > Identity Inspector and then choose your newly created view class from the menu to associate the custom view to your class
6- Make other changes like size and attributes (optional)
7- Save and exit Interface Builder
8- Back in Xcode, go to your view class and notice the three methods that have been created for you. initWithFrame and isFlipped don't need to be changed
9- In drawRect method, add the following code:
NSColor * xColor = [NSColor redColor];
[xColor setFill];
NSRect xRect = NSMakeRect(0,0,100,100);
NSRectFill(xRect);
10- You may add other graphics operations here. They will be performed every time the custom view is drawn.

This is relatively straightforward. Now let's compare with C#/.NET:

1- Create an empty C# project.
2- Add a pictureBox or another view object to your form
3- Add an event handler for Paint event for that object
4- Inside event handler write the following code:
Rectangle srcRect = new Rectangle(0, 0, 100,100);
e.Graphics.FillRectangle(new SolidBrush(Color.Red), srcRect);

As you can see, the process is basically the same, but the C# version has some advantages:

1- You need less code.
2- The view class is more easily customizable if you have more than one custom view. In the Cocoa code, if you add two custom views they both do the same thing; in the .NET example, each can have its own variable name and event handler. Of course these things are possible in Cocoa too, but they are not the default and require extra work, which I will discuss in other posts.

So far I still feel the same way as before: Windows programming is more intuitive and straightforward than OSX programming. Next week I'll show some example functionality provided by Quartz that demonstrates some of its advantages. As I mentioned before, despite the rather cumbersome programming style, Quartz provides programmers with more built-in 2D functions than DirectX or other Windows APIs.

Well, this is it for today.

I'LL BE BACK!

Wednesday, June 2, 2010

#7

Last week I was on vacation, and for the first time in ages I didn't take my laptop with me. No computer, no work, only spending time with family and some people I hadn't seen in 17 years! It was great, but the result is nothing to report. So please stay tuned for next week's post.

I'LL BE BACK!