Sunday, December 20, 2009

VIM Plugin: matchit.vim

By default, VIM has a match feature. For example, if you edit a .c file and press '%' when the cursor is on a {, [, or ( character, the corresponding }, ], or ) is highlighted, and the cursor jumps back and forth between the pair. It is a very handy feature. In an HTML file, a tag like <table> is also a pair, with </table> as the end tag, but '%' by itself is not smart enough to find this pair. In the Best of VIM Tips, the VIM Wiki's editors' choice, matchit.vim is listed in the "Really useful" section.

I first installed it on my Windows work computer, and it works fine. Basically, you download the file into the VIM installation path, such as "E:\Program Files\MyProgram\VIM\". The package has two folders: doc for the syntax help information and plugin for the plugin vim file. Now I am going to install it on my iMac.

First, download the matchit.vim package. Copy the contents of the downloaded zip file, doc\matchit.txt and plugin\matchit.vim, to the local ~/.vim/doc and ~/.vim/plugin folders:

[MyMac]:~ [user]$ cp ~/Downloads/matchit/doc/matchit.txt ~/.vim/doc
[MyMac]:~ [user]$ cp ~/Downloads/matchit/plugin/matchit.vim ~/.vim/plugin

Now % should work. To rebuild the help tags for the plugin, type this command in VIM:
  :helptags ~/.vim/doc

This match extension works well for HTML tags. Place your cursor anywhere within a tag and press %; the cursor will jump to the corresponding tag. It is a really useful extension.


Saturday, December 12, 2009

PsExec.exe Access Denied

I thought PsExec.exe would never fail to start a process on a remote machine. I got my first failure last week while working on a project that needed a remote execution call similar to one I had done before against a Windows XP Professional box. I had had no problem running PsExec.exe to zip a file on a Windows XP box before. However, this time, on that Windows XP box, I got a failure: Access denied.

I did notice a different file sharing property window when I tried to share a folder, and I suspected that some Windows settings differed between this box and the one I had worked on before. Here is the file sharing property:



I googled "PsExec Access Denied" and found some resolutions. Basically, PsExec.EXE uses SCM API (OpenSCManager, CreateService, StartService), where SCM is Service Control Manager. It is similar as Process class in .Net, or the core might be the same. Those APIs are very much based on Window security settings. Anything is not correct, then PsExec.exe may fail.

The box where I run PsExec.exe successfully has the following folder sharing property:



Simple File Sharing

This difference confirms that the failing box has some setting issues, as indicated in this discussion in the Sysinternals Forums. The first setting is easy to change. From Windows Explorer's menu Tools|Folder Options..., in the View tab, clear the setting "Use simple file sharing":



Turn off "network users identify as guests"

The second change is a security setting. From Control Panel, open the Administrative Tools snap-in, then Local Security Settings, and navigate to Local Policies->Security Options. Look for the line "Network access: Sharing and security model for local accounts". The value of this "key" was the default, "Guest only - local users authenticate as Guest". Change it back to "Classic - local users authenticate as themselves".
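
If I remember correctly, on Windows XP this policy is backed by the ForceGuest value under the LSA registry key (1 = "Guest only", 0 = "Classic"), so the setting can also be verified programmatically. A small read-only C# probe, offered as an assumption rather than as part of the original fix:

using System;
using Microsoft.Win32;

class ForceGuestProbe
{
    static void Main()
    {
        using (RegistryKey lsa = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Control\Lsa"))
        {
            // Expect 0 ("Classic") after applying the policy change above.
            Console.WriteLine("ForceGuest = {0}", lsa.GetValue("ForceGuest"));
        }
    }
}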



After all those changes, I did not reboot the box; I just waited for a while. Then I got PsExec.exe working, remotely zipping files on the remote box with the 7-zip tool!


Sunday, December 06, 2009

SysInternals Tool: PsExec.exe

PsExec supports remote processes; that means you can run a process on a remote Windows machine. Recently I was working on a project that required running a process on a remote Windows box. I tried to use WMI to start the remote process. It works on one box (Windows XP), but the same code does not work on a Windows 2008 server.

I quickly found a solution: the SysInternals tool PsExec.exe. It is very small and it works well. To start a process and wait until it terminates, the command is:

PsExec.exe \\computerName -u userName -p pwd -i program args...

The option -i is used to start the program interactively.
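
If you need to drive PsExec from .Net instead of a console, a Process wrapper works well. Here is a hedged sketch of that idea; the computer name, credentials, and the 7-Zip command line are placeholders, not values from the original post:

using System;
using System.Diagnostics;

class RemoteZip
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "PsExec.exe",
            Arguments = @"\\RemoteBox -u domain\user -p secret " +
                        @"""C:\Program Files\7-Zip\7z.exe"" a C:\backup\logs.zip C:\logs\*.log",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (Process proc = Process.Start(psi))
        {
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();   // block until the remote process terminates
            Console.WriteLine(output);
            Console.WriteLine("PsExec exit code: " + proc.ExitCode);
        }
    }
}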


Tuesday, December 01, 2009

Zip Files with PowerShell Script

Recently I have been working on backing up files from a remote server on the network to another server PC. I use the SyncToy tool to sync files from the remote machine to the server. Each time you run SyncToy, it generates a log file such as "C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\SyncToy\2.0\SyncToyLog.log". What I need to do is copy that log file to a specified location and zip it into a monthly archive, for example "C:\synclog\synctoy_122009.zip".

This job could easily be done in a .Net project, but I was required to write a script instead of another program. As I knew very little about PowerShell, I spent about 2-3 days finding a solution. Basically, you can access almost any .Net class in PS. I have used the DotNetZip library before, which provides a very simple and nice class for zipping files. What I need is to load this library, create an instance of its class, and call its methods to detect and zip files. In PS, it is very easy to do that.

# ZIP dll library file in the local PC folder:
$ZIP_DLL = "C:\bin\Ionic.Zip\Ionic.Zip.dll"
$assemblyLoaded = [System.Reflection.Assembly]::LoadFrom($ZIP_DLL);
# Zip class
$zipClass = "Ionic.Zip.ZipFile";

Here I use a variable to hold the result of LoadFrom(...) in order to suppress the output of the loading result; the variable itself is not needed afterwards. In PS, assigning a method's return value to a variable is a handy strategy whenever you want to prevent unwanted output.

To zip files, I created a function to do the job. The function zips a group of files (the source given as a string such as "C:\temp\*.log"), with a constraint that the last-modified date stamp falls within a given number of days from now, into a destination folder. In addition, I pass a flag to the function to control whether the path is included in the zip.

#*============================================
# FUNCTION DEFINITIONS
#*============================================
function ZipUpFiles (
  [string]$p_Source = $(throw "Missing parameter source"),
  [string]$p_DestFolder = $(throw "Missing parameter destination folder"),
  [int]$p_days = $(throw "Missing parameter int days"),
  $p_zipFile,
  [bool]$p_PathInZip,
  $p_zipClass
  )
{
...
}
...
#*============================================
# END OF FUNCTION DEFINITIONS
#*============================================

In PS, you actually don't need to define input parameters. You can declare an empty () list and still call the function with a list of arguments; within the function, you can read them through $args. However, declaring parameters is much clearer. You can think of them as variable definitions.

The first thing to do in the function is to get a list of files:

  $checkFileDate = ($p_days -ne 0)
  # adjust timestamp by days for comparing
  $dateToCompare = (Get-date).AddDays(-$p_days)
  $zipCount = 0;
  # get all the files matched and timestamp > comparing date
  $fs = Get-Item -Path $p_source | Where-Object {!$_.PSIsContainer -and (!$checkFileDate -or ($checkFileDate -and $_.lastwritetime -gt $dateToCompare))}
  if ( $fs -ne $null )
  {
    ...

The code is pretty much straightforward. Get-Item resolves the path, and the pipe checks each item against the requirements: not a sub-directory, and, if a day limit is specified, a last-modified date greater than the comparison date. The result is the collection of files to be zipped.

In PS, all the comparison and logical operators are spelled as words prefixed with -. For example, -gt for greater than, -eq for equal to, and -or for logical OR. This is very handy and easy to understand. It also makes the blog's HTML much easier: there is no need to convert "<" to "&lt;".

Next, the function zips the files in a loop. It takes one parameter as the zip file name. If it is specified, all the files are zipped into that file with a {mmyyyy}.zip suffix. If it is not specified, each file is zipped individually with that suffix.
    $zipObj = $null
    if ( $p_zipFile -ne $null )
    {
      $zipFile = "{0}{1}" -f $p_DestFolder, $p_zipFile
      $zipObj = new-object $p_zipClass($zipFile);
    }
    foreach ($file in $fs)
    {
      $addFile = $file.Name
      if ( $p_zipFile -eq $null )
      {
        $zipFile = "{0}{1}.zip" -f $p_DestFolder, $addFile
        $zipObj = new-object $p_zipClass($zipFile);
      }
      # Trim drive name out as key to check if file already in zip?
      if ( ($zipObj.Count -eq 0) -or 
                (!$p_PathInZip -and ($zipObj[$file.Name] -eq $null)) -or
                ($p_PathInZip -and ($zipObj[$file.FullName.Substring(3)] -eq $null))
                )
      {
        Write-Output "Zipping file $addFile to $zipFile..."
        $pathInZip = ""
        if ( $p_PathInZip )
        {
          $pathInZip = $file.Directory
        }
        $e= $zipObj.AddFile($file.FullName, $pathInZip)
        $zipCount += 1
      }
      if ( $p_zipFile -eq $null -and $zipCount -gt 0 )
      {
        $zipObj.Save()
        $zipObj.Dispose()
        $zipObj = $null
        $zipFile = $null
      }
    }
    if ( $zipObj -ne $null -and $zipCount -gt 0 )
    {
      $zipObj.Save()
      $zipObj.Dispose()
      $zipObj = $null
    }

Here $zipObj is an instance of the .Net class, so all of its methods are available in PS. You can refer to the class definition in Visual Studio or Reflector to view the class structure. Before adding a file to the zip, I check whether the file is already in the zip file (two cases: path in zip or not). If it is already there, the file is skipped.

At the end of the function, $zipObj has to be saved and cleared if any files were added:

...
    }
    if ( $zipObj -ne $null -and $zipCount -gt 0 )
    {
      $zipObj.Save()
      $zipObj.Dispose()
      $zipObj = $null
    }
  }
  if ( $zipcount -eq 0 )
  {
    Write-Output "Nothing to zip"
  }
}


Finally, in my PS script, after the function definition (which has to appear before it is called), here is the main entry point:

#*============================================
#* SCRIPT BODY
#*============================================
# Example parameters:
# E:\Temp\*.bak E:\Temp\BackupZips\ 50 backup.zip
Write-Debug "Starting ZipFiles.ps1"
# check input arguments
$argsLen = 0 
if ($args -ne $null )
{
  $argsLen = $args.length
}
if ( $argsLen -lt 2 -or $argsLen -gt 5 )
{
  HelpInfo
  return
}
$i = 0;
# Get input parameters
$sourcePath = $args[$i++]
$destPath = $args[$i++]
if ( !$destPath.EndsWith("\") )
{
   $destPath += "\"
}
[int]$numOfDays = 0
$zipFile = $null
[bool]$pathInZip = $true
if ( $argsLen -gt $i )
{
  $r = [int]::TryParse($args[$i++], [ref]$numOfDays)
  if ( $argsLen -gt $i )
  {
    $zipFile = $args[$i++]
    if ( $zipFile -eq $null -or $zipFile.length -eq 0 )
    {
      $zipFile = $null
    }
    if ( $argsLen -gt $i )
    {
      $pathInZip = ($args[$i++] -eq 1)
    }
  }
}

# Test source & destiantion
if ( !(Test-Path $sourcePath) -or !(Test-Path $destPath) )
{
  Write-Output "Nothing to do. Either ""$sourcePath"" or ""$destPath"" is empty or does not exist."
  return
}

# ZIP library is from http://www.codeplex.com/DotNetZip
# ZIP dll library file in the local PC folder:
$ZIP_DLL = "C:\bin\Ionic.Zip\Ionic.Zip.dll"
$assemblyLoaded = [System.Reflection.Assembly]::LoadFrom($ZIP_DLL);
# Zip class
$zipClass = "Ionic.Zip.ZipFile";

Write-Debug "Start zip process ($sourcePath > $destPath)..."
ZipUpFiles $sourcePath $destPath $numOfDays $zipFile $pathInZip $zipClass

$assemblyLoaded = $null

#*============================================
#* END OF SCRIPT BODY
#*============================================

The first section of the main body parses the input parameters. As I mentioned, $args is the PS variable holding the arguments. If there are fewer or more parameters than required, the function HelpInfo is called, which just outputs the usage of the script (omitted here). Once all the required parameters are parsed, the function ZipUpFiles is called.


Monday, November 23, 2009

Syntax Highlighting

I posted a blog on code syntax highlighting last year, and I have tried to add syntax highlighting to the code in my blogs since then. The color settings were based on css styles embedded in my Blogger HTML settings. The css classes I used were based on the ASP.NET forum editor page settings. The problem is that those classes are very limited: they only cover C#, HTML and SQL code. It is very hard to expand my css classes to cover a wide range of languages.

Later on, I changed my syntax highlighting strategy from HTML classes to direct style settings. The advantage is that I can easily apply another editor's syntax color scheme to the code in my blog. VIM is a great editor for this. The following are the major steps I use.

Use VIM to Edit Source Codes

First, I use VIM as the editor for my code. VIM supports a wide range of languages, and for a new language without a bundled syntax vim file, there is usually one available on the web. For example, I found and installed ps1.vim for Microsoft PowerShell scripts.

In addition to VIM's syntax highlighting feature, I also use VIM's colour schemes. I have put some of my favourite colour scheme files in my ~/.vim/colors folder. These are my current colour scheme files:

[Home] .vim $ ls colors
fog.vim  spring.vim torte.vim


The Google Code project VIM Color Scheme Test contains hundreds of colour scheme files. I got my colour files from this site.

To change to a different color scheme in VIM, just type the command

:colorscheme ...


and tab through the different colours.

Convert to HTML

To convert the colourful code to html, use the command

:TOhtml


This command opens another window on top containing the html code. Note: by default, my VIM displays line numbers, which is very convenient for locating code but not something I want converted to html. So normally I disable line numbers first:

:set nonumber


Here is an example of my VIM:



The top window contains the complete html of an HTML page. What I need is just the part of the code from <font> to </font>.



Clean up HTML Codes

Use the following command to clean up the code:

:%s/\n//g


This command searches for newlines globally and replaces them with nothing, that is, it removes all the newlines. Note that to replace something with line breaks, the command is

:%s<b>/\r/g


Removing all the extra newlines is needed for Blogger content.

Finally, add a div tag around the html code (<div class="outlinebox4wrappercodes">):



Copy and paste this html code into my blog. Job done!

Note: if you use a colour scheme in VIM, you may need to copy the background colour from the html <body bgcolor="..."> tag and apply it next to the div, like <span style="background-color:...;" >



Conclusion

The limitation of this strategy is that it is not easy to manage or maintain the blog code, especially when I need to correct, delete or add a long section of code. Normally, I keep using VIM as the editor for my source code and repeat these steps whenever changes are needed.

The advantage of this method is that there is no dependency on css classes at all. All the syntax settings are applied directly with html font tags and colour settings. Managing classes in Blogger's header to cover various languages is very difficult, and any change to the classes may break previous blogs.


Saturday, November 21, 2009

VIM Syntax Settings

Recently I started learning and working on Windows PowerShell scripts. PowerGUI is a nice tool as a script IDE or editor. However, when I wanted to write a blog on PowerShell with some example script code, I had to use VIM or MacVim to convert the code to HTML. By default, my VIM does not have a syntax vim file for PowerShell, since it is a new scripting language. So I searched for ps1.vim on the web; it did not take long to find one in vim.org's syntax library.

This was the first time I added a syntax file to VIM. It was a challenge, and it took me about two hours last evening to figure it out. VIM is an excellent editor for programmers, and it provides configuration settings for adding syntax files. Basically, the settings live in two different areas. The first is to add the syntax file to a specific directory. In most cases, syntax files can be found on the web, written by the many VIM fans and gurus; I don't need to write one myself, even though there is detailed documentation about writing syntax files. Taking PowerShell script as the example, the syntax file should be ps1.vim.

Add Syntax File

On my Mac OS X (a UNIX system), there are two places the file can go: one for all users and one for the login user. All the syntax files that come with VIM are installed on my Mac in the /usr/share/vim/vim72/syntax/ folder. I found this location by using the following command in Terminal:


[Home] $ find / -name "html.vim" -print


Since I had never added a syntax file myself before, there were no syntax files specific to my login name. I had to create the following folder for my syntax files:


[Home] $ mkdir ~/.vim/syntax
[Home] $ cp ~/Downloads/ps1.vim ~/.vim/syntax/ps1.vim


After that, I thought I would be able to open a ps1 file in VIM with syntax colors. But I did not see any syntax colour when I opened a test file, test.ps1. It took a while to figure out how to manually load and test the syntax file, using this command in vim:


:source ~/.vim/syntax/ps1.vim



The loading process failed because there were syntax errors in the ps1.vim file. I think these were actually line-break problems from downloading the file from the web: some line breaks were ^M chars. Then I found another, updated ps1.vim from tomsr. I copied the source code from the web page and replaced the whole content of ps1.vim. After that, the loading process was OK.

NOTE on September 16, 2010: for Windows XP, the ps1.vim from the VIM syntax library is OK. I have to copy this file to my VIM\vimfiles\syntax folder.

Define File Types

Still I could not see syntax colour. This is because VIM requires another setting to define file types; this second setting is documented under *new-filetype*. I chose option C to define my new file type, copying the following code into a file at ~/.vim/filetype.vim:

if exists("did_load_filetypes")
  finish
endif
augroup filetypedetect
  au! BufRead,BufNewFile *.ps1    setfiletype ps1
augroup END


Then I restarted VIM. Finally I could see my test.ps1 file in VIM with syntax colours.



NOTE on September 16, 2010: for Windows XP, the file filetype.vim should be copied to VIM\vimfiles.


Thursday, November 19, 2009

Google Chrome OS

Today Google announced its Chrome OS progress, revealing more details and an early stage of the Chrome OS. I watched some of the videos, and it is a new way to explore the usage of the web. Its new structure certainly opens a different view of an OS. It is new, but it is a result built on the development of computer hardware, software, and the web over the past 10 years.


More videos are available at YouTube.com: Google Chrome Themes.

I really like its approach to an OS for netbooks: the OS is based on the web browser, with web applications providing the applications for end users. That's not new. The only problem is that it depends heavily on the web or cloud servers; when no web connection is available, it would be useless. That's why Google built Chrome OS to provide OS-level APIs and storage (solid-state drive) with offline features, so the OS still lives. As many critics recall, a lightweight OS is not a new thing. Early in the 21st century, Sun Microsystems tried to provide Java-based NCs (network computers). I remember attending a tech talk by Sun in Calgary about computer terminals based on the Jini project. The speaker said an NC should be like a telephone device, very cheap and placeable anywhere: a network-based computer system. I was expecting that change to happen, but it looks like it was a failure.

However, I think Chrome OS will be very different. It will be a great challenge for Microsoft and Windows. For Apple it is a challenge as well, but I think Apple is better positioned to move its OS forward. I believe a UNIX-based OS has a great future, since it is built on solid and proven computing concepts. 2010 will be a very interesting year for new OSes and netbook hardware. Chrome OS will have a great impact in all those areas and on our daily lives.


Wednesday, November 11, 2009

Go: New Open Source Language from Google

Last night I read a CNET news item about a new, experimental and promising language targeting today's multi-processor computing trend: Google hopes to remake programming with Go. Then I went to the Google Go language site, golang.org, and watched two videos there: an introduction to Go lasting about an hour, and a promotional clip of about 2 minutes. Here is the introduction on YouTube:


Show link: The Go Programming Language

It is very impressive. I really like it, even though I did not understand all the code in the new syntax. The code looks very similar to C/C++, and some of it like C# or Java, but many new features are introduced in Go. The team behind Go is a group of very talented people, including original creators of UNIX and of Chrome's JavaScript engine and compiler. As it is put on Go's web page: Go is ...simple, ...fast, ...safe, ...concurrent, ...fun, and ...open source.

I am very impressed with the compiler's speed: 13k lines of code compiled in about 209 milliseconds, and the whole math library and other calculation libraries compiled in about 9 seconds. The current version is only available for Linux and Mac OS.

It looks like Go has a great future. Compared to current computer languages, Go's performance will be very attractive to programmers. It is built from the ground up with a clean design. I guess a future Google OS may be based on Go, and more great applications written in Go will be coming out soon. Still, as Rob Pike said in the talk, (Go) is early yet ... more will be coming soon.


Sunday, November 08, 2009

SyncToy 2.0 - A Windows 7 Tool by Microsoft Home Server Team

Last week, I was asked to investigate backing up some files from remote network drives. The basic concept, I thought, is to find out whether a file has changed: the file name, the modified date, and in more complicated cases renames, deletions, and content changes. This must be a very common problem on the Windows platform, I thought, and there must be libraries or tools available for it.

Soon I found SyncToy 2.0, a free Windows utility offered by Microsoft, actually built for Windows 7. The tool is based on the Microsoft Sync Framework File Synchronization Provider; in other words, you can use the tool, or use the library from .Net. The tool provides three basic actions:

  • Sync: bi-directional sync; any change on either the left or the right side is synced to the opposite side. New and updated files are copied both ways. Renames and deletes on either side are repeated on the other.
  • Echo: any change on the left side is echoed to the right side, but changes on the right side are not carried back to the left. New and updated files are copied left to right. Renames and deletes on the left are repeated on the right.
  • Contribute: only new files on the left side are added to the right side. New and updated files are copied left to right. Renames on the left are repeated on the right. No deletions.



For my case, Contribute is the right action. I tried the tool and it works well. If there are a lot of files on the left side, the first run may take a long time, since all the files are copied to the right side. Later syncs are much faster when few or no files have changed. It can be used against network drives, and the sync can run as a scheduled job by sync name. Each sync is defined as a pair of left and right, with additional options such as file exclusion, hidden files and more.

The tool writes a log under the current user profile; I think the log file is saved at %UserProfile%\Local Settings\Application Data\Microsoft\SyncToy\2.0. On both the left and right sides, there are some hidden dat files, which store sync information for the next sync.

There are also some examples of how to use the Sync Framework directly. I think this gives much more control over how to sync, as well as real-time sync.

Reference: a blog on SyncToy 2.0 by Microsoft Home Server Team.


Wednesday, November 04, 2009

SuperUser.com and WidExplorer

StackOverflow has another site for computer Q&As: Super User. The structure of this site is very similar to StackOverflow: any user can ask and answer computer-related questions, vote on Q&As, and make comments, and users gain reputation and badges. It is a complement to StackOverflow's developer site. I, as David.Chu.ca with 163 reputation currently, have asked several questions and answered a few.

Today, I asked a question about using a browser to display multiple web sites in one page or tab. This could be done by a browser add-in or by a web service; I prefer a web service, since it is less dependent on the browser. I got some answers, but soon I found a web service doing what I want: a wide view of the web (WE). Basically, the web service has the following features:

  • Search engine: it provides a text field for search. The search results are a list from Google, Bing, Yahoo, Wikipedia, YouTube and more, displayed on one page as a wide view of those results.
  • An edit UI for multiple URLs. From there you can enter a list of URLs and get a wide view of the results.
  • Short URLs. Since the view is one page or tab with the results of the web sites laid out horizontally, the browser view has a long URL. At the end of the view (far right side), WE provides a UI for editing the URLs and a short or tiny URL for the current view. For example, this tiny URL (http://tinyurl.com/ybvhcdv) is a view of a list of stocks.


In addition, the wide view provides "-" and "+" to shrink and expand each URL view. The wide view and UIs are based on JavaScript code on the client side, so the view is very fast. One issue is that each URL view takes its full width without its own horizontal scroll bar. Another is that some empty URLs are displayed in expanded mode instead of shrunk, or should not be displayed at all.


Saturday, October 31, 2009

AutoMapping Library

I watched another great show on DNRTV: Jimmy Bogard on AutoMapper. It is a tool or framework for mapping one class to another, almost automatically. Of course, you have to follow its default conventions when defining the mapped class; if you prefer different names for the mapped properties, you can still use methods from the framework to configure that.
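
As a flavour of what the show covers, here is a minimal sketch against the classic static AutoMapper API of that time; Order, Customer and OrderDto are hypothetical types chosen so that the flattening convention (Customer.Name to CustomerName) does all the work:

using System;
using AutoMapper;

public class Customer { public string Name { get; set; } }
public class Order    { public Customer Customer { get; set; } public decimal Total { get; set; } }
public class OrderDto { public string CustomerName { get; set; } public decimal Total { get; set; } }

class Demo
{
    static void Main()
    {
        // Convention-based mapping: Customer.Name flattens to CustomerName.
        Mapper.CreateMap<Order, OrderDto>();
        var order = new Order { Customer = new Customer { Name = "Bob" }, Total = 9.99m };
        OrderDto dto = Mapper.Map<Order, OrderDto>(order);
        Console.WriteLine("{0}: {1}", dto.CustomerName, dto.Total);
    }
}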

Although its use may be limited to mapping one class to another, I am very impressed by the library's design and interface; that is the most important point I took from the show. The library is available from the CodePlex web site, and I subscribed to Jimmy Bogard's blog as well.


Monday, October 26, 2009

Pex and Code Contract

Last week I found two new .Net libraries from Microsoft Research: Pex and Code Contracts. Both are currently available as preview versions for Visual Studio 2008 Professional and will be part of .Net Framework 4.0 and Visual Studio 2010.

Pex is a library with tools for unit testing. It looks very handy and easy to use, and it removes a lot of the tedious code needed to create data-driven tests. Pex exploration, integrated with Visual Studio, can generate tests with data to cover up to 100% of the code; Pex is smart enough to search all the possible branches with data and tests. I remember creating lots of tests to cover all the possible cases, and finding the branches was tedious and endlessly repetitive. The biggest issue in maintaining unit tests is that when the source code changes, the coverage changes again, and re-examining the source and tests to cover the new cases takes a lot of effort. Pex looks much better at handling all those cases. That's really awesome.

In addition to auto-generating test code, Pex also covers mocking data. Actually, I found Pex through the recent DNRTV show White Box Testing with Pex, where a mocking demo simulates reading a file. I'll be very interested in trying this great library.

Another very interesting library is the Code Contracts library. This is really cool stuff I have been looking for. Basically, it is used to define contracts for class members: method parameter preconditions and method return expectations. The contract is expressed through a static class within .Net code. Within VS, you can enable static checking and run-time checking; any violation generates an exception, as the contract specifies. In addition, it can generate code documentation. The syntax is not too complicated, and it makes coding much easier. This is definitely a great enhancement to .Net 4.0 and the coming VS 2010.
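
To make the idea concrete, here is a small sketch in the Code Contracts style described above: preconditions on the parameters and a postcondition on the return value. Treat it as an illustration of the preview API rather than production code:

using System.Diagnostics.Contracts;

public static class Accounts
{
    public static decimal Withdraw(decimal balance, decimal amount)
    {
        // Preconditions: checked on entry when contract checking is enabled.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        // Postcondition: the returned balance can never be negative.
        Contract.Ensures(Contract.Result<decimal>() >= 0);
        return balance - amount;
    }
}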


Friday, October 23, 2009

MEF Framework for Dependency Injection

Recently I had time to read about MEF (the Managed Extensibility Framework) on Microsoft's open source web site, CodePlex. I have known of it for quite a while, and I decided to spend the time reading the whole document about the framework because it keeps appearing when I search for new stuff on IoC/DI. Many places also mention that MEF will be part of .Net Framework 4.0 and Visual Studio 2010.

The framework seems very simple. Basically it uses the attribute classes Import and Export to define dependency relationships. Export is used to export a class, and in most cases the class should implement an interface. Import is then used by MEF to find a matching class and obtain an instance; MEF also supports a collection of Imports of the same type. In this way, parts are separated across applications and libraries, linked by MEF purely through the types specified in Import and Export. One advantage of MEF is that it uses reflection, through a Catalog, to discover the DI relationships; as a result, there is no need to define them in xml or config files.
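
Here is a minimal sketch of that Import/Export pattern, written against the MEF API as it later shipped in .Net 4 (the preview bits differed slightly); IGreeter and the classes are made up for the illustration:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IGreeter { string Greet(); }

[Export(typeof(IGreeter))]
public class EnglishGreeter : IGreeter
{
    public string Greet() { return "Hello from MEF"; }
}

public class Program
{
    [Import]
    public IGreeter Greeter { get; set; }

    public static void Main()
    {
        var program = new Program();
        // The catalog discovers the Export by reflection - no xml wiring.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(program);  // satisfies the [Import]
        }
        Console.WriteLine(program.Greeter.Greet());
    }
}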

I have used StructureMap and I really like its strength in IoC/DI, as described in my previous blogs. SM also separates the dependent parts between providers and consumers very well. I think I'll try MEF in my projects to find out how it really works.


Tuesday, October 13, 2009

Microsoft Channel 9 Shows on Functional Programming Languages

Last night I watched two new lectures in the Functional Programming Fundamentals series by Erik Meijer. I had watched his discussion of Visual Studio 2010's Inversion of Enumerations before, which was very impressive. This series explores more of the fundamentals of functional languages.

I had watched DNRTV on F# before. My impression of the language was that it went back to an old style of programming with structured functions, far from object-oriented languages such as C# and VB.Net. However, Erik's talks demonstrate some unique powers of functional languages. I really like the statelessness and easy testing.

One question about functional languages is how they integrate with OOP languages such as C# and VB.NET. F# will be part of VS 2010, so there must be easy ways to integrate it with C#. When I finish Erik's talks, I think I'll have a much clearer picture of FP's role.

One interesting point is that Erik starts from the popular FP language Haskell 98 instead of Microsoft's F#. The first talk is mainly the history of FP, and the second jumps into Haskell. The tool, or FP shell, he uses for loading and running Haskell scripts is Hugs.


Thursday, September 17, 2009

LINQPad as a .Net Snippet Code IDE

I posted a very simple question about the "a ? b : c" expression to Stackoverflow last night. I was at a cafe with my friend discussing the case where b and c are different types. Since we did not have Visual Studio available to test the code, I posted the question to Stackoverflow. Within minutes, we got several answers confirming my guess: a cast can be applied to either one to bring them to the same type. The question is indeed a very simple one. However, it led to a new discovery, at least for me: a great IDE tool, a single 2MB EXE, for .Net C# code snippets.
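
For the record, this is the kind of code we were debating; a minimal sketch (with made-up values) of why the cast is needed when the two branches have no common type:

using System;

class TernaryDemo
{
    static void Main()
    {
        bool someFlag = true;
        // "a ? b : c" needs b and c to share a type the compiler can infer.
        // int and null have none, so the cast is what makes this compile:
        int? result = someFlag ? (int?)42 : null;
        // The same trick works for otherwise unrelated branch types:
        object mixed = someFlag ? (object)42 : "forty-two";
        Console.WriteLine("{0}, {1}", result, mixed);
    }
}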

The tool is called LINQPad. Its user interface is very simple: menus and toolbars at the top, a left panel for the database connection and database structure, and a right panel split into a code editor on top and a result view below. The tool supports C# & VB expressions, statements, programs, and SQL, and it is very easy to use. However, it is quite buggy. This morning I gave it a try: after I added a reference to my library and added several of my namespaces, I could not get a simple Console output to work in a new tab when I tried to show it to my work colleagues. I had to restart the application and remove the references and namespaces to get it back to work.

All the code can be saved as an xml file, with all the references, namespaces, and code snippets stored inside. It is really cool. No wonder the user who answered my question recommended that I try this tool; I guess he knew I had no Visual Studio available. We actually had a Mac computer.

The project is not open source; it is a closed project. The standard version is free, and an advanced version with auto-completion is for sale. After giving the back engine some thought, I guess such an application is not hard to create. I would use the .Net CodeDom namespace as the compiler engine to create a dynamic project from a template with the snippet plugged in, just as I described in my previous blogs. The dynamic project could be a simple console application: if it compiles OK, run it through the Process class, hidden, with all the output redirected back to the application. The interface could be built with the MEF framework, so that each view could be plugged with various UIs: an IDE code editor (supporting syntax color schemes for various languages), a result view (grid or table layout), and other views.

With this structure, LINQPad could be an open source project so that talented developers could make it much better and more extendable. One person's dedication is great, but with the web available, great resources from around the world should be utilized.


Tuesday, September 15, 2009

My Stackoverflow Reputation Points Reach 1K

Today my reputation score at Stackoverflow reached 1007 points! The recent question on Parse String to Enum Type boosted my score over 1K. I could have reached this milestone earlier, but my main intention in using the site is to get the best and quickest resources and answers for my programming questions. I have not focused on gaining scores or badges at all; I only spend time searching and answering other people's questions when I have time.

The following are my statistics:

  • Score: 1007
  • Badges: 15 bronze badges
    • Popular questions: 7
    • Tumbleweed: 1
    • Scholar: 1
    • Organizer: 1
    • Editor: 1
    • Commentator: 1
    • Teacher: 1
    • Supporter: 1
    • Student: 1
  • Questions asked: 86
  • Questions answered: 34
  • Votes: 70
  • Tags: 66

Stackoverflow has been a great resource for resolving my questions and issues. I have gotten many great answers and explanations for my programming questions. Normally, if I cannot resolve an issue within a short period of time, or I have concerns about my strategy, or I just want to consult experts, I post my question there. In most cases, I get answers in less than 5 minutes; sometimes I have to wait longer. I soon learned not to mark the quickest response as the answer right away. People compete on Stackoverflow to earn points, but the good answers may not be the quick ones. Just wait for the good ones.

Unfortunately, I recently cannot access Stackoverflow from work using my Blogger OpenID: Husky would not allow me to reach my Blogger login page. I had to create another account (David Chu) using my Google OpenID; that account has about 185 points. I was reluctant to use it to ask questions at first, since I preferred to accumulate score on one account. However, I do need to get my issues resolved during work most of the time, so recently I started to use that account more. Still, if I can wait, I post my questions after work or early in the morning. That is why my reputation score has grown slowly. By the way, the combined score of my two accounts passed 1K more than a month ago.

Anyway, I set 1K as the milestone to celebrate my reputation on Stackoverflow.

Cheers!


Sunday, September 13, 2009

Free CodeRush Xpress Tool by DevExpress

DevExpress released a free tool for .Net Visual Studio 2008 users: CodeRush Xpress. I found this out from DNRTV show 143: Mark Miller on CodeRush Xpress.

I have known about this tool for a long time, but I never tried it because I used Resharper before. With the free offering, I tried this refactoring tool right away. It is a nice tool for .Net developers, but I found that some features were not working, such as Tab to Next Reference and the template snippets for switch and for loops; maybe there are some options I have not set up yet. I also noticed that the tool provides hints on the left scroll bar to indicate questionable code, such as greying out unreferenced using statements, much like Resharper does (which provides hints as yellow marks).



Sunday, September 06, 2009

CodeDom and Expression Calculator (2)

In order to compile a code snippet dynamically, I need to define a template in the ExpressionEvaluation class to use as a base. The template contains several parameters that will be replaced (a class name, a data type and an expression). I defined it as a string with parameters enclosed in {} so that they can easily be replaced with dynamic values (string.Format(template, parameters...)).

Here is the code template in the class:


using System;
using System.Reflection;
using System.CodeDom;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public class ExpressionEvaluation { // class holding the template
  private const string _InitalizeValueCode = "private {0} _value = {1};"; // valType, valExp
  private const string _ClassName = "_CalculatorWithFormula";  // {0}
  private const string _MethodAnswer = "Answer";  // {1}
  private const string _SourceCodeTemplate = @"
using System;
public class {0} {{ // {{0}} class name
  {2}
  public {3} {1}()  // {{1}} method to be called to get result
  {{
    return _value;
  }}
}}";
}


The key methods in the class are BuildCodes() and GetAnswerByBuildAndRunCodes():

private static string BuildCodes(
    string valueExp,
    string varType)
{
    string initializeValueCodes = string.Format(
            _InitalizeValueCode, varType, valueExp);

    string codes = string.Format(_SourceCodeTemplate,
        _ClassName, _MethodAnswer, initializeValueCodes, varType);

    return codes;
}

private static T GetAnswserByBuildAndRunCodes<T>(
    string sourceCodes) where T : struct
{
    object value = default(T);
    CompilerResults cr = GetCompiler(sourceCodes);

    var instance = cr.CompiledAssembly.CreateInstance(_ClassName);
    Type type = instance.GetType();
    MethodInfo m = type.GetMethod(_MethodAnswer);
    value = m.Invoke(instance, null);

    return (T)value;
}

Those two methods are very straightforward. In GetAnswerByBuildAndRunCodes(), the method GetCompiler() is called to get a C# CompilerResults object in the current context:

private static CompilerResults GetCompiler(string codes)
{
    CSharpCodeProvider codeProvider = new CSharpCodeProvider();

    CompilerParameters parameters = new CompilerParameters();
    parameters.GenerateExecutable = false;
    parameters.GenerateInMemory = true;
    parameters.IncludeDebugInformation = false;

    foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
    {
        parameters.ReferencedAssemblies.Add(asm.Location);
    }

    return codeProvider.CompileAssemblyFromSource(parameters, codes);
}

Here is the complete source code for download.


Sunday, August 30, 2009

RESTful Service Start Kit

I have been very interested in RESTful services, since they are services based on the HTTP GET, PUT and POST verbs. Basically, you can expose your resources to end users through URLs with a simple web protocol. One way to explore such a service is with a web browser, which is a URL-based application for handling resources such as HTML and XML.

I have watched the Pluralsight screencast demos on the Microsoft WCF REST Starter Kit. The Starter Kit project was launched last year as Preview 1 and is now at Preview 2. According to the MSDN article A Developer's Guide to the WCF REST Starter Kit, this framework will be part of future versions of the .Net Framework.

However, when I tried Preview 2 last week, I realized that some of the template projects behave differently from Preview 1 as shown in the screencast. For example, the class Service.basic.svc.cs is missing. From Service.svc.cs I can see base types such as CollectionServiceBase and ICollectionService, but they are not available for editing, which makes it impossible to customize the help descriptions, implementations and templates. I am not sure whether these capabilities are available through other classes or interfaces. I think I have to spend a little more time reading about Preview 2 to get it working as I expected.

At the same time, I posted my questions on StackOverflow about the changes in Preview 2 and asked whether there is an open source alternative. It seems there are very few REST kits or libraries for the .Net platform. One alternative is OpenRasta, which looks like a very good package. Its architecture is based on three elements: resources, handlers and codecs. Resources are the information to be exposed by the REST service, handlers are the HTTP handlers for GET, PUT and POST across various request patterns, and codecs do the encoding/decoding of HTTP requests.

Though the person behind OpenRasta is a very smart .Net developer, he has very limited resources, time and effort to move it forward. So far it is not widely used, and it does not appear to support Atom and feed services. Maybe the new release coming next month will bring some enhancements.

I'll keep an eye on this library, and in the meantime look at Preview 2 at a deeper level. Enjoy exploring!


Sunday, August 23, 2009

CodeDom and Expression Calculator (1)

I subscribe to the .Net Rocks podcast and enjoy listening to it weekly. At the beginning of almost every show, Carl gives a small segment about one .Net framework library: very brief information about the library or namespace and some description of how to use it. Recently he mentioned something about the compiler namespace for compiling code, but admitted that he did not know where or how to use it. He just threw it out there.

This immediately reminded me of the System.CodeDom namespace. I have used it to dynamically build source code, compile it, create an instance and get a result by calling a method on the instance. It is very cool stuff. I use this feature to evaluate an expression such as:

    24*60*60

so that I can use expressions in a configuration XML file. Since the expression is evaluated by building a snippet of dynamic code, the expression can also be a C# expression like:
   DateTime.Parse(DateTime.Now.ToShortDateString())

in the configuration file, giving the current date as a short date string. Eventually, the value becomes a DateTime property value.

I created a class called ExpressionCalculator. Here is an example of how to use the class to evaluate expressions:


int iVal = 0;
ExpressionCalculator expCalc = new ExpressionCalculator();
expCalc.SetConfiguration("24*60*60",
    ExpressionCalculator.ECValueType.Integer).
    GetAnswer<int>(ref iVal);

Consider the code snippet needed to evaluate an expression. It is very simple. Here is an example:

using System;
public class ExpressionEvaluation { // class name
  private int _value = 24*60*60; // type and expression
  public int GetAnswer()         // method to get result as type
  {
    return _value;
  }
}

So all I need to evaluate an expression is to generate this code snippet with the expression and the expected type dynamically substituted. Using the CodeDom classes, the snippet can be built and compiled, an instance of ExpressionEvaluation created, and the result obtained by calling the GetAnswer() method.

Interestingly, I found that the CodeDom compiler uses the same C# compiler to compile the generated source code in the %tmp% directory. The assembly is then loaded, and Reflection is used to get the classes, properties and methods. I found this background detail while debugging a compiler error in a code snippet.


Sunday, August 16, 2009

MSDN Channel 9

Over the past month I have been watching talks and conversations on MSDN Channel 9. Microsoft is becoming more open than before, and I enjoyed those open talks on technology and new stuff. Actually, I think Microsoft attracts many talented people who have been in open development for years, and Microsoft has learned that it can benefit by joining the open world!

I found Channel 9 one day when I tried to find shows or videos about REST web services with .Net or Visual Studio. To my surprise, I found a series of talks on this at Channel 9. The best ones are offered by PluralSight.com (search its site for REST); PluralSight has better quality videos. Based on the talks, I found that Microsoft provides a package for REST web service development, and it is very impressive.

In addition, through Channel 9 I found many other great open source projects posted to CodePlex, Microsoft's open source web site. I have used many open source libraries there, including Json.Net, MVC...


Sunday, August 09, 2009

Json.NET and Its Usage (3)

In this blog, I'll continue with the issues related to XML. In many cases, I have XML strings as a data source, such as configuration files and data in XML format from ADO.Net. Json.Net provides some APIs to convert XML to JSON strings and vice versa; based on those APIs, I have added some methods to my wrapper class.

The first method is to convert from a Json string to an XML string:

public static string ConvertToXMLString(string jsonString)
{
  XmlNode xmlNode = JsonConvert.DeserializeXmlNode(jsonString);
  string xmlString = xmlNode.OuterXml;
  
  return xmlString;
}

Note: the JSON string must have a single root item at the top. If the JSON string contains more than one property value at the root, this conversion will throw an exception, since the converted XML must have a single root node.

The opposite method is to convert an XML string to a JSON string:

public static string ConvertToJsonString(string xmlString)
{
  var xmlDoc = new XmlDocument();
  xmlDoc.Load(new StringReader(xmlString));
  string jsonString = JsonConvert.SerializeXmlNode(xmlDoc.DocumentElement);

  return jsonString;
}

public static string ConvertToFormattedJsonString(
    string jsonString,
    bool quoteName)
{
  string jsonStr;
  using (MemoryStream msJson = new MemoryStream(jsonString.StrToByteArray()))
  {
    using (MemoryStream ms = new MemoryStream())
    {
       using (JsonTextWriter jtw = new JsonTextWriter(new StreamWriter(ms)))
       {
          jtw.QuoteName = quoteName;
          jtw.Formatting = Newtonsoft.Json.Formatting.Indented;
          jtw.WriteToken(new JsonTextReader(new StreamReader(msJson)));
          jtw.Flush();

          ms.Flush();
          ms.Position = 0;
          using (StreamReader sr = new StreamReader(ms))
          {
              jsonStr = sr.ReadToEnd();
          }
       }
    }
  }

  return jsonStr.Replace(@"\r\n", Environment.NewLine);
}

Depending on the usage, you may want a nicely formatted JSON string or just one long JSON string.
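
For illustration, here is a minimal round trip through the converters above. The JSON literal is made up; note the single root property required by the earlier note:

string json = @"{ ""person"": { ""name"": ""Ada"", ""age"": 36 } }";
string xml = JsonXMLHelper.ConvertToXMLString(json);
// xml: <person><name>Ada</name><age>36</age></person>
string backToJson = JsonXMLHelper.ConvertToFormattedJsonString(
    JsonXMLHelper.ConvertToJsonString(xml), true);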

One reason I added some XML API methods to my wrapper class is that I found it very easy to manipulate XML strings using an XML parser or the XmlDocument class. For example, when I get an XML string from a configuration file, I have to prepare the XML in the correct form before converting it to an instance. At the time I picked up the Json.Net library, it did not support XML strings with comments (the author promised to handle this issue), so I had to remove all the comments before converting the XML string to a JSON string.

Another example: I may have only one node in the XML file for a property value, but to convert that property value to an array of property values, I have to add a dummy node to the XML so that the JSON string derived from it has the correct layout before I map it to an instance. Therefore, I have the following API methods to cover those cases:


public static int AddNewNodeToRefNode(
  ref string xmlString,
  string refNodeXPath,
  string newNodeName,
  string newInnerText,
  bool beforeOrAfter,
  bool firstOrAll)
{
  //  Add a new node relative to a reference node
  //  ...
}

public static int AddNewNodeToRefNodeAsChildNode(
  ref string xmlString,
  string refNodeXPath,
  string newNodeName,
  string newInnerText,
  bool firstOrLastChildren,
  bool firstOrAll)
{
  // Add a new node as a child of a reference node
  // ...
}

public static int GetXMLNodesCount(
  string xmlString,
  string nodeXPath)
{
  // Get the count of a node in an XML string
  // Use this method to get the count before adding new dummy nodes
  // ...
}

public static string RemoveComments(
  string xmlString)
{
  // Remove all comments in an XML string
  // ...
}

public static bool UpdateXMLNodes(
  ref string xmlString,
  string nodeXPath,
  string newInnerText,
  bool firstNodeOrAll)
{
  // Use this method to update XML node content
  // if you use XML as input to a JSON string, then to an instance,
  // and save the changes in the instance back to XML
  // ...
}


This concludes my series of brief introductions to the Json.Net library and my wrapper class. I'll continue with some examples of how to use the library.


Saturday, July 25, 2009

LINQ and Lambda Expressions in .Net 3.5

I switched to the .Net 3.5 framework and Visual Studio 2008 a couple of months ago. As an experienced .Net 2.0 developer of many years, I did not notice a big difference at the beginning. When I created a new project or class, Visual Studio added System.Linq, System.Data and System.Xml automatically; since I did not use any classes from those namespaces, I removed them manually, which I found quite annoying. I had basically been writing .Net 2.0 code. Of course, I took advantage of the automatic property feature from the start, and that alone makes the code much simpler than before.

Not until last week did I start to realize the simplicity and power of the .Net 3.5 framework. It got my attention through people's blogs, open source code, and especially Stackoverflow. The initial trigger was searching for something in a collection. I have plenty of experience using anonymous delegates to search for items in a collection, so I asked about doing the same search with LINQ and lambda expressions.


I got so many great answers and alternative ways to do the job using LINQ and lambdas. I really like them: they are more descriptive and shorter. The performance difference is not a big deal, even though LINQ is marginally slower. Now that I have started to use them, I find the approach much easier; because the code is more descriptive, maintenance is easier and I can recall the logic of the code much faster.
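
To show the kind of comparison I was asking about, here is the same search written three ways over a made-up list: the .Net 2.0 anonymous delegate I was used to, then the lambda and LINQ query forms:

using System;
using System.Collections.Generic;
using System.Linq;

class SearchDemo
{
    static void Main()
    {
        List<string> names = new List<string> { "Ada", "Alan", "Grace" };

        // .Net 2.0 style: anonymous delegate
        string d = names.Find(delegate(string n) { return n.StartsWith("A"); });

        // .Net 3.5 lambda
        string l = names.Find(n => n.StartsWith("A"));

        // LINQ query syntax
        string q = (from n in names where n.StartsWith("A") select n).FirstOrDefault();

        Console.WriteLine("{0} {1} {2}", d, l, q);
    }
}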

So I have to keep changing and updating my skills over time. It is good to upgrade my knowledge and skills to another level, and it has been a really enjoyable experience.


Sunday, July 19, 2009

My Favorite Browser Still FireFox

Google launched Chrome a while ago. It is very impressive in terms of speed and simplicity, and I have enjoyed it very much. A Mac version is available but still in the development stage.

Apple released Safari for Windows quite a while ago, but the most impressive version is the recent 4.0. It is blazingly fast and has a very nice interface. I like its Top Sites and History with the Cover Flow interface. I have actually been using it a lot; for most of my web surfing, I start with Safari.

However, as a developer, I cannot live without FireFox. The main reason is my favorite add-ins: Vimperator and Firebug. In addition, its context menu item "View Selection Source" is very convenient for viewing just part of a page's html source; Safari, on the other hand, does not have this option at all! So I actually use two browsers most of the time: if I cannot get what I need from Safari, I switch to FireFox. From my experience, this is the best arrangement; depending on only one browser is not practical at all. The biggest problem with FireFox is that it crashes frequently and is sometimes very slow (launching at startup and loading pages) compared to Safari. I think this might be caused by the add-ins.

Safari has a list of very convenient shortcut keys. The ones I use most are:

  • Command-L: jump to URL address area
  • Command-W: close tab
  • Command-T: new tab
  • Command-R: refresh tab

With those keys, I can do the same quick actions as with Vimperator in FireFox. However, one thing missing in Safari is undoing a closed tab!

Talking about speed, I found Opera is actually very fast in one case. On the vimcolorschemetest web site, there are links to lists of vim color schemes (C, for example). Each scheme is rendered in a frame, hundreds of frames per page. Both FireFox and Safari are very slow to load all the frames, but Opera is fast! Even so, I rarely use Opera, since I can get most of what I need from Safari and FireFox.


Network Drive Mapping Class

Today my friend asked me how to map a network share as a local logical drive with a user name and password. I recalled that I had written a utility class to do exactly that.

Basically, this static class contains two methods: MappingDrive() and UnMappingDrive(). The first maps a network share by its path, user name and password to a specified local logical drive name; the unmap method takes just one parameter, the mapped drive name.

There are many ways to do this. I saw many people using the Windows API method, but I don't like that approach, for two reasons: 1) the API is unmanaged code in dynamic-link libraries; and 2) it can produce un-catchable exceptions, which are very nasty for applications, and when mapping a network share it is quite possible to pass incorrect parameters. Instead I found a simple way: use a .Net Process to shell out to net.exe, one of the core Windows utilities. I have used it many times in the cmd console, and I suspect most Windows UI tools, including Windows Explorer, rely on it for this job.
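
Here is a minimal sketch of that net.exe approach; the real class has more validation, and the drive letter, path, and credentials below are placeholders:

using System;
using System.Diagnostics;

public static class NetworkDriveSketch
{
    // Shell "net use X: \\server\share password /user:name" and wait.
    public static void MapDrive(string drive, string uncPath, string user, string password)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "net.exe",
            Arguments = string.Format(@"use {0} ""{1}"" {2} /user:{3}",
                                      drive, uncPath, password, user),
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
            if (p.ExitCode != 0)
                throw new InvalidOperationException("net use failed for " + drive);
        }
    }

    // Unmapping is the same idea: "net use X: /delete".
}

A call would look like NetworkDriveSketch.MapDrive("Z:", @"\\server\share", "user", "password").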

When I revisited my code today, I saw that the MappingDrive() method first checks whether the specified drive is already mapped; if it is, it un-maps and remaps it. This happens behind the scenes and could un-map someone else's mapping. I kept this logic and reminded my friend about it.

I also updated the class with some changes. First, I made the previously private method IsMappedDrive() public. Second, I added two methods: GetLocalDrives() and GetLocalUnusedDrives(). The first is very simple, based on System.IO.Directory.GetLogicalDrives(); the second, which I think is more useful, returns all the unused local logical drives, building on the first.

Finally, I uploaded this simple NetworkDrive class to my code project. Enjoy it!


Wednesday, July 08, 2009

Json.NET and Its Usage (2)

In this blog, I'll continue describing my wrapper class based on Json.Net. The main reason I wrote a wrapper class to hide Json.Net is to expose only the APIs I am interested in. Json.Net provides a rich framework of classes, interfaces, types, and enums; I don't know all of them, and in practice there is no need to.

The second reason is to break the direct dependency on Json.Net. My wrapper class provides all the APIs I need. In the future there may be a better library than Json.Net, or some problem may prevent me from using it; then all I need to do is rewrite the wrapper's internal implementation without changing my applications or the libraries that depend on the wrapper class. Actually, I find this the best practice for using any third-party library or component.

I created a common .Net library, DotNetCommonLibrary, with some common and generic classes, including my wrapper class for Json.Net. The first thing I need to do is add a reference to Newtonsoft.Json to the library.

Next, I include the following namespaces in the wrapper class:

# region Using
using System;
using System.IO;
using System.Xml;
using System.Xml.Linq;
using System.Xml.XPath;
using Newtonsoft.Json;
using JsonFormatting = Newtonsoft.Json.Formatting;
using Newtonsoft.Json.Converters;
# endregion

The wrapper class is called JsonXMLHelper. It does not contain any private instance data; therefore, it is a static class.

public static class JsonXMLHelper
{
...
}

The first method is a simple one: convert a JSON string to an instance.

public static T GetInstance<T>(
  string jsonString)
  where T : class
{
  T instance = JsonConvert.DeserializeObject(jsonString, typeof(T)) as T;
  return instance;
}


Vice versa, there is a method to convert an instance to a JSON string:


public static string GetJsonString<T>(
  T instance) where T : class
{
  return GetJsonString<T>(instance, false);
}

public static string GetJsonString<T>(
  T instance,
  bool indented)
  where T : class
{
  return GetJsonString<T>(
    instance, indented, true, null);
}

public static string GetJsonString<T>(
  T instance,
  bool indented,
  bool quoteName,
  string dtFormat) where T : class
{
  JsonFormatting f = indented ?
    JsonFormatting.Indented :
    JsonFormatting.None;
  string jsonStr;
  // Date format available?
  if (!string.IsNullOrEmpty(dtFormat))
  {
    JsonSerializerSettings jss =
      new JsonSerializerSettings();
    IsoDateTimeConverter dtConvert =
      new IsoDateTimeConverter();
    dtConvert.DateTimeFormat = dtFormat;
    jss.Converters.Add(dtConvert);
    jsonStr = JsonConvert.SerializeObject(
      instance, f, jss);
  }
  else
  {
    jsonStr = JsonConvert.SerializeObject(
      instance, f);
  }

  if (!quoteName)
  {
    jsonStr = ConvertToFormattedJsonString(
      jsonStr, quoteName);
  }

  return jsonStr;
}

GetJsonString() has several overloads, which provide options to specify whether the JSON string is indented, whether the quote char " is used for JSON names, and what format to use for DateTime values. By default, the JSON string is not indented (one long string), names are quoted with ", and DateTime values are rendered in the Date(ticks) format. I need to log some instances as JSON strings in a nice, readable format, and these overloads provide the options I need.

I use this GetJsonString() method a lot. It saves me a lot of time otherwise spent overriding the ToString() method to format an instance as a nice string. The most frustrating thing is that when I update class property names, I often forget to update ToString(); with this API method, I don't need to worry about that.

In fact, I don't need to override ToString() any more: I can call this method directly with an instance to get a nice JSON string. It is the handiest way to print or log .Net or third-party class instances. You can even print or log an instance of a List<T> type, but be prepared for very long strings.
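
As a closing illustration, here is logging an instance through the wrapper; Person is a hypothetical type made up for this demo, and the call uses the four-parameter overload defined above:

using System;

public class Person   // hypothetical type, used only for this demo
{
    public string Name { get; set; }
    public DateTime Born { get; set; }
}

class LogDemo
{
    static void Main()
    {
        var person = new Person { Name = "Ada", Born = new DateTime(1815, 12, 10) };
        // Indented, quoted names, custom date format - no ToString() override needed.
        Console.WriteLine(JsonXMLHelper.GetJsonString(person, true, true, "yyyy-MM-dd"));
    }
}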
