Thursday, May 20, 2010

#6, Introduction to Objective-C and Cocoa

This post is a couple of days early because I'm off to visit my mom all the way on the other side of the world, after 2+ years (and to see some people after 15+ years!). I'm going to talk about programming again (with you here, not with my mom). What I really wanted to do was to jump into more exciting 3D graphics programming stuff, but as any of my IMD-2004 students can tell you, you have to learn to walk before you can run! So let's start with the fundamentals of programming for Mac/OSX.

As I told you before, OSX programs are written on top of a set of APIs, from OpenGL to Quartz and Carbon, all the way up to Cocoa. You can write C/C++ code to work with the lower-level ones, but what Apple is encouraging developers to do is to use the newer API, Cocoa, and its programming language, Objective-C. I thought I'd go through them kinda from top to bottom, starting with the new kids on the block, Cocoa and Objective-C. In a manner of speaking, Apple and Microsoft went through a similar process; they both created new frameworks with new programming languages (for Microsoft it is .NET and C#), but while Cocoa is more like a high-level API on top of others like OpenGL and Quartz (although it can bypass them sometimes), .NET is a completely new application structure, kinda like a virtual machine (i.e. Windows and .NET executables have completely different formats). The differences are even clearer when it comes down to the language. Neither Objective-C nor C# is theoretically tied to Cocoa or .NET (and Objective-C isn't even new; it goes back to the mid-80s and NeXT), but in practice you almost always use the Cocoa and .NET libraries with them. While C# continues the well-known syntax and notation of C/C++, Objective-C defines its own object-oriented structure and syntax on top of C. To understand OSX programming, we first need to learn the language.

Before I move forward, here are a few useful links:
* http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/ObjectiveC/Introduction/introObjectiveC.html
This is from the Apple Developer Library. Comprehensive, but not a very easy-to-use starting point.
* http://en.wikipedia.org/wiki/Objective-C
Wikipedia is always a good starting point
* http://cocoadevcentral.com
This is a pretty good place to start actual coding but the description is for older versions of Interface Builder (see AppController in the Cocoa section of this post)
* http://www.otierney.net/objective-c.html
This site is ok
* http://mobileappmastery.com/objective-c-tutorial
And this is not bad either

=========
Objective-C
=========

It all started with C, the mother of all popular programming languages (in your face, Lisp programmers :-). C uses a well-known syntax and structure that many other languages adopted, such as the way functions and variables are defined. C++ is a superset of C and tries to look like C as much as possible. For example, classes look like C structures, member function calls look like regular function calls with the object name added in front, and C++ continues the header/implementation file structure. C# and Java use many of these ideas, but they both got rid of header files ;-) The Objective-C designers, on the other hand, made a different decision. Similar to C++ and unlike C# and Java, Objective-C is a superset of C. This means you can have C code mixed with your Objective-C code, both in the same file (.m) and as separate files (.c). Objective-C uses the same idea of header files (.h) for class definitions. But the similarity to C++ ends right there. The syntax is almost completely different: Objective-C takes its object model and message syntax from Smalltalk, while C++ follows Simula. Whose brilliant idea was that? Well, if you want to get into more behind-the-scenes technical stuff, the Smalltalk style is claimed to have some advantages over the Simula style, such as ease of dynamic binding, but I'm not buying that.

The first thing to know about Objective-C is how to define classes. Here it is:
@interface classname : baseclassname
{
// instance variables
}
+classMethod1;
+(return_type)classMethod2;
+(return_type)classMethod3:(param1_type)param1_name;
-(return_type)instanceMethod1:(param1_type)param1_name :(param2_type)param2_name;
@end

Objective-C uses the keyword @interface instead of class. "Class method" and "instance method" are the names for static and normal member functions, and they are marked with + and - signs respectively. Finally, return and parameter types go inside parentheses, and each parameter follows a colon. Here is the same code in C++:
class classname : baseclassname
{
public:
// instance variables

// Class (static) functions
static void* classMethod1();
static return_type classMethod2();
static return_type classMethod3(param1_type param1_name);
// Instance (member) functions
return_type instanceMethod1(param1_type param1_name, param2_type param2_name);
};

Here is an example with the implementation file:
//Integer.h
#import <objc/Object.h> //import is like include, but never includes the same file twice

@interface Integer : Object
{
int integer;
}

- (int) integer;
- (id) integer: (int) _integer;
- (id) add: (int) _integer;
- (id) hello; //declared here so the implementation below matches the interface
@end

//Integer.m
#import "Integer.h"
#import <stdio.h> //for printf, used in hello below

@implementation Integer
- (int) integer
{
return integer;
}

- (id) integer: (int) _integer
{
integer = _integer;

return self;
}

- (id) add: (int) _integer
{
integer += _integer;

return self;
}

- (id) hello
{
printf("hello!\n");

return self;
}
@end

Objects are instances of classes and are created by sending the class a "new" message (unlike C++, where new is a keyword, in Objective-C it is just a method inherited from the root class):
//Objective-C: classname * objectname = [classname new];
//C++: classname * objectname = new classname;
Integer * x = [Integer new];

And finally, you access members:
//method call
Integer * x = [Integer new];
[x hello]; //calling the method "hello"
[x integer: 8]; //calling the method "integer:" with parameter 8
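
A side note of my own (not from the tutorials above): since every method in Integer returns self, calls can be chained. Also, in Cocoa code, where classes derive from NSObject, you will more often see objects created with alloc and init; new is basically shorthand for alloc followed by init.

//a small sketch of my own, using the Integer class defined above
Integer * y = [Integer new]; //in Cocoa style: [[SomeClass alloc] init]
[[y integer: 8] add: 34]; //chaining works because each method returns self
printf("%d\n", [y integer]); //prints 42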

As you can see in the method "hello", you can use standard C (including struct) in Objective-C.
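
To make that concrete, here is a small sketch of my own (the PointPrinter class is made up for illustration) mixing a plain C struct and C library calls into an Objective-C class:

//PointPrinter.m (hypothetical, for illustration only)
#import <objc/Object.h>
#import <stdio.h>

typedef struct {
int x;
int y;
} Point; //a plain C struct, perfectly fine in a .m file

@interface PointPrinter : Object
- (id) printPoint: (Point) p;
@end

@implementation PointPrinter
- (id) printPoint: (Point) p
{
int sum = p.x + p.y; //plain C arithmetic
printf("point (%d, %d), sum %d\n", p.x, p.y, sum); //plain C library call
return self;
}
@end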

=========
Cocoa
=========

Cocoa is based on the NeXTSTEP framework, hence the NS prefix in all the class names. You can use OpenGL, Quartz and other APIs from within your Cocoa applications, which are usually made using Xcode and Interface Builder. Now here is another strange design choice: Xcode is supposed to be an Integrated Development Environment, but Interface Builder is a separate program. When you modify your application UI in Interface Builder, you have to save it, go back to Xcode, and recompile. Not to mention (again) that they both suffer from bad UI themselves! But once you get past that, creating your first Cocoa application is pretty easy. Here are the steps for a simple NotePad:
1- Start Xcode and go to File/New Project
2- Select Cocoa Application, give it a name and select a location
3- Build and run your program to see an empty form (then quit the program!)
4- In Xcode, under Groups and Files, select Interface Builder Files and double-click on the one with the .nib extension
5- Now you are in Interface Builder and should be able to see your form (app), its menu, the Library window, and the Document window (and possibly other things).
6- From Library, select Text View (you can search for it in the search box at the bottom of Library).
7- Drag and drop the Text View to your form and resize it if you want (notice that your menu has a Format item by default that will allow you to format text)
8- Save your Interface Builder file, go back to Xcode, build and run

If you compare this to C# and .NET, the process is pretty much the same, except that the default application in C# doesn't have a menu, and formatting needs to be handled by the programmer. The .NET framework does have classes for showing a Font dialog, but a few lines of code are needed to create the UI objects and handle their events to show the format controls and apply the user's selection.

Now let's see how we add more stuff to our simple Cocoa application. For example, we want to add a button that clears the text. Here are the steps:
1- In Xcode, click on Classes in the Groups and Files window, then go to File/New File…
2- Select Objective-C Class and name it AppController. Now you have two new files AppController.h and AppController.m. This class will control an Interface Builder object in your application.
3- In AppController.h, add an outlet variable (marked with the IBOutlet keyword) to your class. This is a link to an Interface Builder object. Also add a method that's supposed to clear the text view.

@interface AppController : NSObject {
IBOutlet id textView; //outlet
}
- (IBAction) clearText: (id) sender; //method for clearing
@end

4- In AppController.m, write the implementation of the new method.

@implementation AppController
- (IBAction) clearText: (id) sender
{
[textView setString: @""]; //an empty string (not a space) clears the text
}
@end

5- Open Interface Builder by double-clicking on the .nib file (see step 4 of the simple NotePad above)
6- In the Library window, select Classes, find AppController and drag/drop it onto the Document window to create an instance of that class. (In previous versions you had to drag AppController.h from the Xcode window and drop it on the Interface Builder document window.)
7- Hold the Control key and drag/drop the AppController from the Document window to the text view object in your form (design window), and select textView from the list. This was a little different in older versions of Interface Builder, so be careful when you read old tutorials.
8- From the Library window, under Objects, find Push Button and drag/drop it onto the form. In the Attributes panel, change the text of the new button to Clear.
9- Hold the Control key and drag/drop from the new button to the AppController in the Document window. Select clearText.
10- Save and run a simulation of the interface (File/Simulate Interface).
11- Go back to Xcode, build and run. You should be able to type text and clear it with the button. (Once the outlet and action are connected, you can do a bit more in code; see the small sketch right after these steps.)
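
And here is the small sketch I promised (my own addition, not part of the steps): once the outlet is connected in Interface Builder, Cocoa calls awakeFromNib on your controller after loading the nib file, and that's a handy place to use the outlet, for example to put default text in the text view:

//in AppController.m, next to clearText:
- (void) awakeFromNib
{
[textView setString: @"Type here..."];
}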

Now if you compare this with C#.NET, you'll see that the Cocoa version is more complicated, mainly because:
1- Xcode and Interface Builder are not integrated
2- In C#.NET, there is a class automatically generated for you to control the form.
3- Adding a member variable for the Text View and an event handler for the Button is much more intuitive in C#.NET
4- In C#.NET, you only add one line of code to the event handler to clear the text view.

To conclude, the IDE for Windows/.NET is clearly better. Cocoa does provide some default features that make standard tasks easier, but when you get to real coding, it seems more complex and difficult. My next step is to do some 2D graphics programming. That's again one of the cases where OSX claims to have better built-in features, mostly the Quartz functions. We'll see.

My #7 post will be a little late for the reason I explained at the beginning but don't worry,

I'LL BE BACK!

Monday, May 17, 2010

#5, Ubuntu

For many people the words PC and Windows always come hand in hand. The reality is a little more complicated than that. Personal computers started in the 70s as smaller, simpler alternatives to mainframes and mini-computers, with machines like the Kenbak-1, the Xerox Alto, the Apple II, and finally the IBM PC. Since the release of the IBM PC (and then the PC XT and PC AT), the term Personal Computer (PC) has been used not for any "personal computer" but for those compatible with the IBM PC architecture, which itself evolved through time. This hardware architecture generally ran Microsoft operating systems, i.e. MS-DOS and later Windows, which made the term PC synonymous with a DOS-based (and now Windows-based) personal computer. Such computers use an Intel or Intel-compatible CPU, hence the term Wintel. But even in the old DOS times this was a bit of an over-simplification. Back in the 80s and early 90s, while PCs were running DOS/Windows, Unix machines were usually servers, mini-computers and high-end workstations such as the Sun SPARCstation. But due to the popularity of the PC architecture, the Unix community (which had many different versions, BTW) started to release PC versions of their operating systems. One of the relatively successful ones was Xenix, licensed by Microsoft from AT&T in the late 70s and later owned by the Santa Cruz Operation (SCO), who released it in the early 80s. But the best-known version of Unix for the PC is Linux, created by Linus Torvalds in the early 90s as an open-source project. Linux is now freely available through many sources and companies, one of the most successful being Ubuntu.

Although I started my blog mainly to compare Windows and OSX, no comparison of OS choices is complete without considering Linux. That, and the fact that I didn't have time to do much testing this week, made me install Ubuntu 10.04 on a laptop and add that to my evaluation. I chose not to install it on my MBP for a few reasons: (1) I wasn't sure if I could install it as another boot option, (2) I didn't want to test it in a virtual machine, and (3) I wanted to have a separate laptop with Linux on it (I have Linux on my media centre at home). I will probably install Ubuntu as a virtual machine on my MBP later, though.

So here are a few preliminary observations about Linux/Ubuntu:

1- It's free :-)
2- The installation is pretty straightforward and there was no problem with drivers. I installed it on 3 different machines.
3- The installation is not particularly fast. But the start-up and shutdown are faster than Windows.
4- It doesn't come with popular audio/video codecs and you have to install 3rd party components (that are available through Ubuntu Software Centre, one of the system tools)
5- I installed Eclipse. It came as an empty IDE and I had to install components for different languages. I had problems setting those up through the Software Centre. I tried downloading them directly but couldn't add them to the Applications menu.
6- Customizing the desktop (e.g. the Applications menu I just mentioned, or adding shortcuts to system tools) is not very clear or easy. I couldn't set a shortcut (Launcher) to My Documents!
7- It comes with a text editor but not a good paint program (just like OSX). It does have a sound recorder and a video editor, though.
8- Games are not good :-( I installed a couple of 3D games from Software Centre but they are rather primitive.
9- Ubuntu comes with OpenOffice which is handy.
10- Ubuntu has built-in OpenGL support.

Next week, I'm going to be on vacation so I don't expect to have any serious post till June. Till then, hope you have fun!

I'LL BE BACK

Sunday, May 9, 2010

#4, Graphics Programming, Part 1

Graphics programming has always been a topic of interest for me as it combines two of my big passions: graphics and programming (duh!). Last week I mentioned the love I had for that miracle of digital electronics, the 8051 micro-controller (in fact they were a family: the 8051 was an all-in-one controller with RAM and ROM on chip; the 8031 had no ROM and the 8751 had EPROM). We lived happily together till the mid-90s when she passed away (CPU years are even shorter than dog years!). In all that time, the only thing that came close to the 8051 in my heart was the VGA card. Video Graphics Array (VGA), coming after CGA and EGA, was the first serious and worthy standard for graphics cards that allowed you to do real graphics. The EGA/VGA Programmer's Guide was my bible. It taught me the beauty of graphics done in hardware rather than software, and that is a key idea in the series of posts I'm going to write on graphics programming in OSX and Windows. But first, and for most of this week, I have to explain some basics to make sure we are all on the same page. To those of you who already know these things, sorry. Just read it as a story, but don't you dare skip ahead.

Back in the late 80s and early 90s, I was working on a real-time signal acquisition system that needed to plot the value of external signals (read through an analog-to-digital card), kinda like what you see in labs and hospitals. It would look like a continuously shifting plot where new data comes in from one side and old data leaves from the other. This basically involves scrolling the screen. The first solution that comes to the mind of a novice programmer is to redraw everything with each new data point: erase the old pixels and draw them again shifted in position, plus the new values. One of the earliest things you learn as a graphics programmer is that drawing pixels on screen one by one is slow. Your graphics card has its own memory that holds the screen data. This memory, usually called the video buffer, is slow for the CPU to access, so writing to it should be avoided as much as possible. The common technique (which my IMD-2004 students should know about) is the infamous double buffering: using a secondary buffer in RAM (the primary buffer being the one on the graphics card), drawing all the pixels for a frame in the secondary buffer, and then copying the whole buffer over only once. For normal operations, especially these days, this is fast enough. On our old PCs and with the real-time data we had, it wasn't. Now, this whole process is a graphic operation done in software, meaning by your CPU. My first pleasant surprise with the VGA card was its ability to help speed this up. Assuming your graphics card had more memory than needed for the screen data, you could define the starting address of the screen, i.e. where in video memory the visible screen begins. Hopefully some of you can guess what this means. By changing the starting address you could scroll through the data and only write the new values. I was amazed by how fast I could shift the plot on screen, and it took me a while before I explained this miracle to my clueless colleagues. This was the first example I saw of doing things "in hardware" (the VGA card also had built-in support for split screens, so you could have a shifting part and a static part on screen. Now that really made some other programmers jealous).
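
Since I keep talking about these two tricks, here is a minimal sketch of both in plain C (which, remember, is also valid Objective-C). Treat it as an illustration: blit_to_screen and outportb stand in for whatever your compiler or driver provides, and the register numbers are from my memory of the VGA CRT controller:

//Part 1: double buffering (mode 13h sizes: 320x200, 256 colours)
#include <string.h>

#define WIDTH 320
#define HEIGHT 200

static unsigned char backBuffer[WIDTH * HEIGHT]; //secondary buffer in RAM

void blit_to_screen(const unsigned char *frame, unsigned long size); //assumed, hardware/driver specific

void draw_frame(void)
{
memset(backBuffer, 0, sizeof backBuffer); //draw the whole frame in RAM...
backBuffer[100 * WIDTH + 160] = 15; //...for example, one white pixel...
blit_to_screen(backBuffer, sizeof backBuffer); //...then copy to video memory ONCE
}

//Part 2: the VGA start-address trick. CRT controller registers 0x0C/0x0D hold
//the offset where the visible screen begins, so scrolling becomes two port
//writes instead of copying 64,000 bytes.
void outportb(unsigned short port, unsigned char value); //compiler specific

void set_start_address(unsigned short offset)
{
outportb(0x3D4, 0x0C); outportb(0x3D5, (offset >> 8) & 0xFF); //high byte
outportb(0x3D4, 0x0D); outportb(0x3D5, offset & 0xFF); //low byte
}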

Those were of course simple examples. But that was the start of thinking about the graphics card as a processor capable of processing data rather than just refreshing the screen. Obviously people started to think of more advanced processing done in hardware (so your CPU time didn't have to be spent on it). In 2D graphics, one example was copying multiple buffers and sprites to the graphics card and using them intelligently when needed. But the real beauty of "graphics accelerators" came with 3D. When you create a 3D world, you have XYZ values for all the points, and you define lighting, camera location and that sort of thing. The image you see on screen is usually 2D, though, so some processing is needed to map your 3D world to a 2D image. In 3D applications this mapping is called rendering (or sometimes shading, although they are not exactly the same thing). Performing rendering in software is a time-consuming process, and that's why it took the computer industry a while to have real 3D (there were cheats, like the one in Doom). The real cause of advances in 3D programs was the invention of graphics cards with built-in support for 3D rendering. This support was provided to software through display drivers, the code written by the hardware manufacturer that receives data and commands from software and passes them to the hardware. Eventually this resulted in the Graphics Processing Unit (GPU), a real processor (now with its own programming languages) that sits on the graphics card instead of the computer's motherboard.
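
To give you an idea of what rendering has to do at a minimum, here is a toy version of the 3D-to-2D mapping (perspective projection) for a single point; a real renderer does this for every vertex and adds transformations, clipping, lighting and rasterization on top:

//toy perspective projection: divide by depth, so farther points
//end up closer to the centre of the image
typedef struct { float x, y, z; } Vec3;

void project(Vec3 p, float f, float *screenX, float *screenY)
{
*screenX = f * p.x / p.z; //f is the viewing (focal) distance
*screenY = f * p.y / p.z; //assumes p.z > 0, i.e. the point is in front of the camera
}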

Hold this thought for a minute.

Operating systems provide a series of software modules and standard methods for applications to use, for example for memory management, disc access, and graphics. We usually call these the Application Programming Interface (API). There are two reasons to use APIs: (1) to make life easier for programmers so they don't have to write all the code themselves, and (2) to make sure access to system resources is done in a safe way (for example, so two programs don't write to each other's memory or windows). In Windows, the initial APIs were Win32 and its display-related part, the Graphics Device Interface (GDI). To be safe and general enough, these initial graphics APIs had a lot of overhead and were quite slow. Microsoft soon realized that this could cause problems for applications that needed high-performance graphics, e.g. games. They came up with a couple of unsuccessful solutions like GameAPI, and finally developed DirectDraw, a new API that allowed direct and fast access to the graphics card. It also supported the new graphics accelerators we were talking about before. DirectDraw later evolved into a set of modules called DirectX, including audio, input devices, and Direct3D for 3D graphics (and also 2D in later versions, by practically removing DirectDraw and having just one API, sometimes called DirectX Graphics). DirectX is obviously a Windows-specific API. People in the Unix and workstation world (led by SGI), on the other hand, came up with a similar 3D API called OpenGL, which is now available for all major platforms and supports 2D graphics as well as 3D. Similar to Direct3D, OpenGL supports graphics accelerators too. After a couple of other approaches, Apple decided to use OpenGL as the foundation for the graphics system in Unix-based OSX. So far this means that comparing professional graphics programming in Windows and OSX comes down to comparing Direct3D and OpenGL. But there are a few complications here:

1- On Windows, you can use OpenGL or Direct3D.
2- On Windows, you can still use GDI (for apps that are not graphics-intensive).
3- On Windows (more exactly the .NET framework) you may have extra APIs on top of Direct3D that simplify special things like game programming. The best example is XNA, a game development API that is cross-platform across Microsoft devices.
4- On both Windows and OSX, you can use 3rd party APIs which in turn use the native APIs of the operating system, so they are obviously less efficient but can be easier.
5- On Mac, OpenGL is just the start. Apple, who like Microsoft is not a big fan of open standards and interoperability (I'll talk about this general issue later), also came up with a whole series of other APIs that are OSX-specific and usually (but not always) run on top of OpenGL. These APIs are the recommended methods of programming on OSX, are not portable, and have different features and performance compared to basic OpenGL. Examples are OpenCL, Core Graphics, Core Video, Core Animation, Quartz, and Cocoa.
6- All of these APIs, as I mentioned before, are "somehow" supported on other platforms by the same company but I'm not going to get into that right now.

So what does this mean? For Windows, we have standard Direct3D and Direct3D on the .NET framework (I'll explain the difference shortly). For OSX, we have OpenGL plus all those other Apple APIs. Since OpenGL on Windows is effectively an add-on (it comes with the graphics card drivers rather than being the primary native API), I won't consider it as a Windows option for the comparison. To compare, I consider run-time performance, complexity of programming, and available features. Finally, here are some initial observations:

1- Standard Direct3D on Windows is basically a C/C++ API. When Microsoft released the .NET framework as a basis for interoperability between its different platforms, they developed C# as its native language. The Direct3D API is available for C# with pretty much the same structure as the C/C++ version. The differences are mainly due to the differences between C# and C/C++ and usually mean the C# version is somewhat easier. If you go to XNA it gets even easier than that, without a huge loss in performance ("huge" being the keyword here).

2- OpenGL is a portable standard. This means that if you stick to it, you can compile your code for all major platforms with minimal change. On the other hand, it is more complicated than Direct3D and a lot more complicated than XNA, and it doesn't give a major performance advantage either. In fact, because most hardware manufacturers consider Windows the major non-console platform for high-performance graphics (a market thing), the Direct3D drivers are usually better than the OpenGL ones, which means using Direct3D is not only easier but also better performance-wise. (To see what portability means in practice, look at the small OpenGL fragment right after this list.)

3- The 2D graphics APIs on OSX provide more functionality than the Windows ones. Many features in Core Video, Core Animation, and Quartz are not available through Windows APIs and need extra programming and libraries. When it comes to 3D (e.g. games), though, Windows has better performance and more native features (such as XNA, if we consider it native).

4- The OSX C/C++ APIs are a bit more complex than the Windows ones, and considerably more complex than the .NET versions.

5- As I said before, Apple is pushing for its new Cocoa framework that is based on Objective-C. I haven't had the chance to do much with it yet, but it doesn't seem to be better than the C/C++ APIs in terms of complexity or performance.
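
And here is the small OpenGL fragment I promised in observation 2 (a sketch of my own, in the immediate-mode style you see in old tutorials). The drawing code compiles unchanged on Windows, OSX and Linux; only the header path and the window/context creation (GLUT, Cocoa, Win32...) differ per platform:

#include <OpenGL/gl.h> //on OSX; use <GL/gl.h> on Windows/Linux

void draw_triangle(void) //assumes a GL context was already created
{
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.5f, -0.5f);
glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(0.0f, 0.5f);
glEnd();
}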

All of these are still initial observations based on typical examples like displaying an image or accessing bitmap or 3D model data. For now it seems to me that OSX provides more built-in 2D graphics features, while the Windows APIs have done a better job providing 3D features. In terms of code complexity and performance (especially 3D), Windows is more likely to be the winner.

My next step is focusing on 2D and 3D programming in more detail, but before I finish, here are some of the new tools I found for my MBP:
1- iAntiVirus on OSX, free, targets Mac-only threats so it's faster and lighter
2- LimeWire on OSX, free, file-sharing on the Gnutella and BitTorrent networks (also available on Windows)
3- HFS Explorer on Windows, free, for access to the Mac HD when booted into Windows (VMWare Fusion allows such access when using a virtual machine)

I'LL BE BACK!

Sunday, May 2, 2010

#3, Development Environments

I wrote my first computer program in 1985 when I was a university freshman studying Electrical Engineering. Michael Jackson's Thriller, hair bands and breakdance were the popular things, and girls looked so colourful with big hair, shoulder pads, and big belts. The PC era had just started. Our school was still primarily using IBM mainframes. If you were lucky, you'd get to use a dumb terminal, a text-based IO device that connected users to the mainframe computer. The Terminal (or Console or Command Prompt) window in today's computers simulates that experience. For us poor first-years, off-line batch processing was the way. We wrote our code on a punch card machine (http://en.wikipedia.org/wiki/Punch_card). Each line of code had to be written on one card, and the set of cards would then go to the computer room as a "job". After the batch of jobs was processed, we would get our results as a print-out, usually the next day. Imagine our disappointment when we saw a simple syntax error had wasted our whole day! BTW, my first programming language was FORTRAN. Later I moved to simple personal systems like the Commodore 64 and ZX Spectrum to write and run BASIC programs in real-time (the joy of that!). By the time I got to my senior year, we were using IBM PCs (the 8-bit XT and 16-bit AT) and I had moved to Assembly programming for the Z80 and later, my only successful love affair, the 8051. MS-DOS had no GUI, so we used a simple text editor and then the command-line compiler/assembler and linker. I experienced the pleasure of an Integrated Development Environment (IDE) and C/C++ programming when I graduated and started working as a computer engineer. My first IDE was Borland C++. My first PC at work was a 386DX that could actually fly compared to the old XT and AT. I did AutoLisp programming on it for AutoCAD, which needed 10 floppy discs to install and had a real setup program that looked so confusing to me that I had to call a friend to come to our office and help me without anybody noticing. I was considered an AutoCAD pro! (Nobody else knew how to write code for AutoCAD. Neither did I, but I claimed I did, and I learned quickly.) I saw my first OS GUI on the Amiga and then Windows 3, which wasn't really an OS because you still needed DOS to boot. And so started a long journey of coding in DOS, Windows, and Unix.

Considering my history as a software programmer, naturally one of my major curiosities when I started using a Mac was application development. This of course was intensified by all the recent talk about Apple and application developers. I've always believed that the popularity of PCs was due to the availability of applications and the ease of programming and customization (and of course lower prices). The Mac became an iconic graphics workstation because of the graphics software made available for it (thanks to Adobe! It seems that history has a sense of irony). It is now becoming more popular because more applications are being developed for it (not to deny other reasons). After all, applications are what we want, not computers by themselves. A smart OS/hardware company makes it as easy as possible for developers to build new applications. What I really find unacceptable about Apple's policies is the way they force developers to use Apple's technologies. I don't believe these restrictions are about, for example, Flash not being a good technology. On the iPhone, developers can't use any non-Apple technology, no matter how good or bad. It is Apple's intention to keep their control over the app market, not their care for HTML5 or users. They could support other, older technologies while there is no strong alternative, and let users decide for themselves what they want to use. On the Mac, the situation is not much different from the iPhone, and Apple (just like Microsoft) is pushing its own proprietary technologies (the Objective-C language, for example), but in a less restrictive fashion than in the iPhone case. Anyway, my first impression of the native development tools on Mac is what I want to talk about this week.

Let me start by giving you a kinda big picture idea first.
1- Although there are 3rd party development tools and languages (especially for web applications) such as Flash/ActionScript, the primary tools for application development are Microsoft Visual Studio and Apple Xcode.
2- Both Windows and OSX provide support for basic console applications written in standard C/C++, and also common technologies such as OpenGL. Windows no longer comes with up-to-date built-in OpenGL support (current versions ship with the graphics card drivers), but the library is easily available. OSX comes with built-in OpenGL, but Apple also promotes its own OpenCL.
3- Both Windows and OSX come with their own native APIs for multimedia and GUI programming in C/C++. Programs using them won't be compatible with other platforms due to the use of OS-specific libraries such as Win32/GDI and DirectX on Windows, and Carbon and Quartz on OSX.
4- Both Windows and OSX have offered new frameworks with their own native languages and the promise of more features and similar development across multiple hardware (e.g. mobile devices from the same company). These are the .NET framework on Windows with the C# language, and the Cocoa framework on OSX with Objective-C. The .NET Compact Framework and Cocoa Touch are the mobile versions of these frameworks.

For comparing Windows and OSX development tools, I use the following criteria:
1- User Interface
2- API Functionality and Complexity
3- Compatibility
4- Documentation
5- Samples

So far, my review and comparison seems easiest when it comes to development tools. Pretty much on all the above criteria, Windows is superior! This may sound like too quick an opinion (and it probably is). I still have a primitive knowledge of the Mac development tools and frameworks, and I expect my judgment to change as I get to know them better. Also, many things on the Mac seem difficult simply because I'm used to Windows and Visual Studio. So expect some of the following statements to be revised later, but for now here is what I've seen.

Microsoft has definitely done a good job providing easy-to-use development tools with lots of functionality at your fingertips. Compared to Visual Studio, Xcode looks awkward, unintuitive, and not properly integrated. For example, Interface Builder is a separate program from Xcode. Object Browser, Class/Function/Resource views, and debugging tools are limited, non-existent, or well-hidden in Xcode. Visual Studio provides way more information in the source code editor (e.g. function syntax, object members, go-to-definition, etc.). The interface in Visual Studio is more customizable and its toolbars are way more effective and manageable (Xcode has only one toolbar). Even things like Find and Replace are harder to use in Xcode and not well integrated with the source code editor. Visual Studio's hierarchical view from Solution (non-existent in Xcode) and Project down to File/Class/Function/Resource, and its arrangement of the Output/Find windows, are more practical.

It is too early for me to compare the full functionality and complexity of the Windows and OSX APIs. Apple is pushing for Cocoa, which is based on Objective-C. The C-based APIs are being phased out and are not supported by the Xcode project wizard (you don't get them as an option for a new project). In contrast, Visual Studio still supports a variety of applications and APIs based on C/C++. In return for using the proprietary .NET and C#, Windows programmers get an easy-to-use class library and GUI designer for making applications. The least I can say about Cocoa and Objective-C at this point is that programming is certainly not as easy as C#, partly because of the interface and partly because of complexities inherited from C that don't exist in C#. The relative complexity of Objective-C does come with an advantage: it is easier to combine C and Objective-C code than it is to combine C and C#.

As far as compatibility is concerned, Xcode and Visual Studio are not that different. Unless you stick to standard C/C++ or use portable libraries like OpenGL (or my favourite educational tool, Allegro), you will lose compatibility. Porting your code to other devices like smartphones is tricky in both cases. Microsoft does provide libraries like XNA that are cross-platform across its own devices, and .NET provides a certain level of code portability between Windows on the PC and other Microsoft platforms. I'm not so sure about the differences, for example, between Cocoa and Cocoa Touch code and how portable they are.

The MSDN library is the central knowledge base for Windows and .NET programming. Compared to it, Apple's documentation for Xcode and the supported APIs is weak, hard to use, and limited. There is also much more 3rd party online support and documentation for Windows programming. The same applies to sample code.

Finally, two quick general notes:
1- Mac startup is definitely faster. From power-up to successfully opening Chrome: OSX 40 seconds, Windows 80 seconds
2- Why doesn't the Mac come with a simple Paint program? There are free tools, though. I downloaded a decent one called Paintbrush (the old name of Paint in Windows). I found Gimp a little awkward and slow. Another missing simple tool is Sound Recorder, which BTW was much better in XP than the Win7/Vista version. On the Mac I installed the free Audacity, which is more than a simple sound recorder but is easy to use and does the job.

My next step: working on API functionality

I'LL BE BACK!