Study material from MSDN, TechNet and others

Plan for availability for SharePoint sites

http://technet.microsoft.com/en-us/library/cc748824%28v=office.14%29.aspx

Customize / Code behind for list forms

http://thesharepointdive.wordpress.com/2012/03/20/list-forms-deployment-for-sharepoint-2010-part-1-of-4/ 

Repeated authentication prompts from SharePoint

Some users complained about repeated prompts for user name and password whenever they accessed the site, which is very irritating. For some users the site loaded after entering credentials a few times; for others it prompted ten times before showing the site, and sometimes it simply resulted in a blank, empty page. While typing the credentials so many times, some users accidentally entered a wrong password and ended up locking their accounts. When I faced the issue, I tried switching the browser, and voila, it worked.


If you search for the issue, you will find plenty of references to authentication providers, registry issues, trusted sites or intranet zone settings, or users browsing on the server itself with the loopback check not disabled.
But why did it sometimes work and sometimes not?
I found that, to some extent, Content Advisor, Websense, firewall or proxy settings could each be a cause as well.


It turned out the users were behind a proxy server, with proxy settings deployed to desktops via group policy. The group policy forces the browser to ask for authentication on any content that the policy considers potentially inappropriate. The proxy would behave differently every time it was asked for a new web site address, and could trigger an authentication prompt for every image on the page.

You can confirm this yourself by temporarily switching to a browser other than your default one. If the new browser does not pick up the policies or Content Advisor settings that your default browser uses and the site works, you have found your answer. You can then ask the network admins to add the web site to the proxy exception list, fix the proxy server itself, switch users over to a different browser, or revise the Content Advisor settings.

Note: This article does not cover repeated authentication prompts on the web server itself caused by the loopback check or by Alternate Access Mappings. For that, refer to the Guide to Alternate Access Mappings in SharePoint below.

Guide to Alternate Access Mappings in SharePoint

Below are the steps you can use to configure Alternate Access Mappings for your SharePoint site.

1. Add the URL to DNS with a pointer to the server's IP address, or add an entry to the HOSTS file (an example entry is shown after the note below).
2. Add a Public URL in Alternate Access Mappings.
3. Add a binding for the web site in Internet Information Services (SharePoint does not add it for you).
4. Add the site to the Local intranet zone in IE to avoid a logon prompt.
5. Verify access.
Note: If you cannot verify access on the web server itself, check whether the loopback check is enabled.
A workaround for the above issue is available in the Microsoft article
http://support.microsoft.com/kb/926642/en-us
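As a rough sketch of step 1 and of the loopback workaround: the IP address and host name below are made-up placeholders, so substitute your own values.

HOSTS file entry (C:\Windows\System32\drivers\etc\hosts) on a machine that cannot resolve the URL through DNS:
192.168.1.10    intranet.contoso.com

On the web server itself, the usual registry workarounds for the loopback check are either adding your host names to the BackConnectionHostNames multi-string value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 (recommended), or disabling the check entirely by creating a DisableLoopbackCheck DWORD value set to 1 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa. Follow the linked Microsoft article for the exact steps.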

For a complete article reference, use the link below:
http://social.technet.microsoft.com/Forums/zh/sharepoint2010setup/thread/7ab4f87c-2246-4531-91bf-e0ec9f07a10a

How to set up a robots.txt file for your site

What is a robots.txt file

Robots.txt is a text (not HTML) file placed in the root of your site to tell search robots/spiders which pages should and should not be visited, indexed or crawled. It is not mandatory for search engines to adhere to the instructions in robots.txt, but generally search engines obey what they are asked not to do.

It is important to note that a robots.txt does not actually prevent search engines from crawling your site (i.e. it is not a firewall); having a robots.txt file on your site is something like putting a "Please do not enter" note on your unlocked front door. Put simply, it will not prevent thieves from coming in, but the good guys will not open the door and enter.

It goes without saying, therefore, that if you have sensitive data, you cannot rely 100% on a robots.txt to protect it from being indexed and displayed in search results.

The location of robots.txt is very important. It must be in the main directory, because otherwise user agents (search engines) will not be able to find it. They do not search the whole site for a file named robots.txt. Instead, they look first in the main directory (i.e. http://www.sitename.com/robots.txt), and if they don't find it there, they simply assume that the site does not have a robots.txt file and index everything they find along the way. So, if you don't put robots.txt in the right place, don't be surprised if search engines index your whole site.

Purpose of a Robots.txt


The main reason a robots.txt is used is to keep certain pages, for example those containing sensitive information, from being crawled and shown in search results.

How to set up the robots.txt file for your site



Launch Notepad.
Put the following in your robots.txt file. This example blocks all robots from the entire site; a variant that blocks only specific folders is sketched after the placement notes below.

User-agent: *
Disallow: /

Save the file as: robots.txt

Adding a robots.txt file to the root of your public anonymous site.
You can add it to the root directory of your Visual Studio web site project, or
you can place it directly in the virtual directory at the root level of your website folder.
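If you want to keep only specific areas out of search results while letting the rest of the site be crawled, the Disallow lines can target individual folders instead of the whole site. The folder names here are just examples; list the paths you actually want hidden:

User-agent: *
Disallow: /internal/
Disallow: /drafts/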


Adding a robots.txt file to the root of your public anonymous SharePoint site.
Open your root site in SharePoint Designer.
Double-click the All Files folder.
Drag and drop the newly created robots.txt into the All Files folder.
Exit SharePoint Designer.
Alternatively, you can create the robots.txt from within SharePoint Designer itself.

To verify the file is accessible to search engines, go to your site URL and append "/robots.txt". Example: http://www.sitename.com/robots.txt

For additional reading, see
http://www.robotstxt.org/robotstxt.html

Note:
What if you forgot or skipped adding a robots.txt file before you deployed your pages, and search engines have already crawled the sensitive content?
In that case you have to use the webmaster tools of each search engine and explicitly request that each page be removed from search results.

How to set up Google Analytics for your site

Google Analytics is a tool for tracking visits to your site. It gives the site owner an idea of how the site is used: the most commonly used pages, frequent visitors, how they arrived, how users navigate through the site, the all-time visitor count and numerous other metrics.

We will walk through a basic Google Analytics setup for your site:

1. Browse to the Google Analytics home page.
2. Create your free Google Analytics account. If you already have a Gmail account, you can use it to sign in.
3. After you log in, open the Admin tab and click New Account.
4. Enter your website name (this is used as the title) and the actual URL of the site.
Other settings are optional. Click the Get Tracking ID button and accept the license agreement.
5. After this you will see a small code snippet labelled "This is your tracking code. Copy and paste it into the code of every page you want to track." Embed this code in your website pages to start tracking visits to your site.
  • For a custom .NET page, you can embed it by opening the page in Code View in your Visual Studio project.
  • For a custom SharePoint page, you can edit the page in SharePoint Designer and paste in your tracking code.
  • If you also want to track visits to application pages, you can put the tracking code directly in the site's default master page (a placement sketch follows these steps).

6. You will see a tracking ID of the form
UA-<unique number set for you>-<serial number>.
This combination of characters uniquely identifies the tracking property for your site.
7. Next, open the Reporting tab. This opens the Audience Overview page, which gives reports such as "% of visits". However, data is not populated immediately for a freshly created site.
8. To verify that your site is set up properly, click Real-Time in the left navigation and open the Overview page. Then browse your site in another tab. Switch back to the Google Analytics tab while keeping your site open; you should see the active visitor count increase in the "Right now" section.
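As a rough illustration of the master page option above, the tracking snippet normally goes just before the closing </head> tag so that it loads on every page that uses the master page. The fragment below is a generic ASP.NET master page head, not SharePoint's actual v4.master, and the comment marks where to paste the code that Google Analytics generates for your UA-XXXXXXX-X property:

<head runat="server">
    <title>My Site</title>

    <!-- Paste the Google Analytics tracking snippet here, just above the
         closing </head> tag. It will contain your own UA-XXXXXXX-X tracking ID. -->

</head>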

Understanding cookies

One of the state management techniques in .NET development is using data from cookies. A cookie is simply a small piece of text, stored as a file or in browser memory, that holds user- and session-related information.



Cookies can be created and stored in two places: the local file system and the current browser session.

Cookies created in the file system are usually located in the Temporary Internet Files directory, which you can find from your browser's history settings.
The exact location of the cookie in the file system varies with the browser and the operating system in use.
The browser reads these cookies as long as they have not expired.
However, old expired cookies are not necessarily deleted automatically by your browser.

Cookies created only in the browser are temporary (session) cookies, usually holding a very small amount of data, e.g. current user details, the previous page URL, temporary variables, etc.
These cookies are flushed automatically when the browser is closed.
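A minimal ASP.NET Web Forms sketch showing both kinds of cookie; the cookie names "UserTheme" and "LastVisitedPage" are made-up examples:

using System;
using System.Web;

public partial class CookieDemo : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Persistent cookie: Expires is set, so the browser writes it to the
        // file system and keeps sending it until that date.
        HttpCookie theme = new HttpCookie("UserTheme", "dark");
        theme.Expires = DateTime.Now.AddDays(30);
        Response.Cookies.Add(theme);

        // Session cookie: no Expires, so it lives only in browser memory
        // and is flushed when the browser closes.
        HttpCookie lastPage = new HttpCookie("LastVisitedPage", Request.RawUrl);
        Response.Cookies.Add(lastPage);

        // Reading a cookie on a later request; it is null if the cookie is
        // missing or has already expired on the client.
        HttpCookie saved = Request.Cookies["UserTheme"];
        if (saved != null)
        {
            string themeValue = saved.Value;
        }
    }
}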

Under work...

Black screen on Remote Desktop Connection (RDC)

Scenario: When you try to connect to a remote desktop using the mstsc command, the screen appears black after you log in successfully. It seems the remote server has stopped responding.
However, the system is actually processing your request in the background.

Solution
1. Check whether too many users are connected to the server. If so, ask them to free up their sessions.
2. Check whether too many processes, services or tasks are running.

To do so, the trick is to press 'Ctrl + Alt + End' (the remote-session equivalent of Ctrl + Alt + Del) and start Task Manager from the screen that appears. In Task Manager you can check both: the number of users logged on, in the Users tab, and the running processes, in the Processes tab.
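If the session is too unresponsive even for Task Manager, the same information can be gathered from a command prompt on the server, or remotely using the /server switch; YOURSERVER below is a placeholder for the actual server name:

query user /server:YOURSERVER            lists logged-on users and their session IDs
qwinsta /server:YOURSERVER               lists all sessions, including disconnected ones
tasklist /s YOURSERVER                   lists the processes running on the server
logoff <session id> /server:YOURSERVER   logs off a stale session by its ID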