P2P Explained: Introducing the Windows Communication Foundation Peer Channel

WCF is a means by which .NET developers can create peer applications

Kevin Hoffman's Blog

To quickly recap from the previous article, a peer network is a logical graph of computers (or applications, depending on your abstraction level) which are connected in some way. In a pure serverless peer network, there is no single designated machine in the network that holds more or less state than any other computer. Hybrid variations of peer networks involve peer communication for some tasks and client/server communication with a central server for other tasks.

The Windows Communication Foundation (WCF) peer channel is a means by which .NET developers can create peer applications. Using the simple endpoint-channel paradigm, every application that is a member of the peer network creates an endpoint through which that application communicates with other computers on the mesh.

The interesting thing here is that any given application can communicate with multiple peer meshes at the same time and, depending on what you need to do, you can even have a single application that uses multiple meshes for different things. For example, you might have a data transmission/synchronization mesh and you might have another general-purpose mesh that allows users of your application to chat with each other. The segmentation of traffic over multiple meshes is an architectural decision that is entirely up to the application developer.
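To sketch what that segmentation might look like in code (the contract types and mesh addresses below are my own invention, not from any particular application), one process simply opens a channel per mesh, each identified by a distinct net.p2p:// address:

```csharp
using System.ServiceModel;

// Hypothetical one-way contracts, one per mesh.
[ServiceContract]
public interface IChat
{
    [OperationContract(IsOneWay = true)]
    void SendChatMessage(string msg);
}

[ServiceContract]
public interface IDataSync
{
    [OperationContract(IsOneWay = true)]
    void Publish(byte[] data);
}

class TwoMeshes
{
    static void Main()
    {
        var binding = new NetPeerTcpBinding();

        // Each mesh is just a distinct net.p2p:// address.
        var chat = new ChannelFactory<IChat>(
            binding, "net.p2p://myapplication/chat").CreateChannel();
        var sync = new ChannelFactory<IDataSync>(
            binding, "net.p2p://myapplication/datasync").CreateChannel();

        chat.SendChatMessage("hi");  // floods only the chat mesh
        sync.Publish(new byte[16]);  // floods only the data mesh
    }
}
```

Traffic sent through one factory never touches the other mesh; the two graphs of peers are entirely independent even though they live in the same process.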

So how does it work? Basically you create a WCF service host the same way you would in any other WCF application. The difference is that the service you are hosting is on the WCF peer channel and has an endpoint address that looks something like this:

net.p2p://myapplication/ or net.p2p://myapplication/subcontext

The format of the P2P URL is entirely up to you and totally arbitrary with the exception of the net.p2p:// URL scheme identifier. When your application opens a P2P endpoint on a given mesh (identified using the URLs shown above), it communicates with a Peer Name Resolver and registers the name. If your app is the first endpoint on that mesh, then you're essentially alone in your own world. As other applications are brought up on the same mesh name, they are added to the mesh and become "peers" to your application. When your app invokes a method on the service proxy for that mesh, all the other peers in the mesh (usually, but I'll discuss that later) will have that method invoked on their own endpoints, effectively multicasting method calls across the peer network.
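As a rough illustration of how such an endpoint is typically wired up (the endpoint name, address, and contract here are hypothetical), an app.config for a peer client might look something like this:

```xml
<system.serviceModel>
  <client>
    <!-- The mesh is identified by the net.p2p:// address -->
    <endpoint name="ChatEndpoint"
              address="net.p2p://myapplication/chat"
              binding="netPeerTcpBinding"
              bindingConfiguration="PeerBinding"
              contract="IChat" />
  </client>
  <bindings>
    <netPeerTcpBinding>
      <binding name="PeerBinding">
        <!-- PNRP is the default peer name resolver -->
        <resolver mode="Pnrp" />
      </binding>
    </netPeerTcpBinding>
  </bindings>
</system.serviceModel>
```

The first application to open an endpoint with that address effectively creates the mesh; every subsequent application that opens the same address joins it as a peer.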

Messages are propagated out to the peer network using the "flood" method. Basically this means that when you call a method, let's say SendChatMessage("Hello Peers!"), that method is converted into a WCF message. That message is then sent to every single one of the nodes that are connected directly to you on the peer channel. These nodes are said to have a "hop count" of 1 and are technically referred to as neighbors (for obvious reasons). Each of these neighbors then takes the message, turns it into a local method call, and has their local SendChatMessage(string msg) method invoked with the appropriate parameters. Additionally, that message is re-sent to each of that neighbor's neighbors, except the neighbor that sent the message. The message is then "flooded" throughout the peer network until everyone in the network has received the message once and only once.
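To make the flooding concrete, here is a minimal sketch (contract and endpoint names are illustrative, not from the article) of the duplex pattern the peer channel uses: the contract serves as its own callback contract, so one call to SendChatMessage on the proxy causes the same method to fire on every node in the mesh, including the sender's own instance:

```csharp
using System;
using System.ServiceModel;

// The contract is one-way and acts as its own callback contract,
// so every peer both sends and receives.
[ServiceContract(CallbackContract = typeof(IChat))]
public interface IChat
{
    [OperationContract(IsOneWay = true)]
    void SendChatMessage(string msg);
}

// Combines the contract with channel management for the proxy.
public interface IChatChannel : IChat, IClientChannel { }

public class ChatPeer : IChat
{
    // Invoked locally whenever any peer floods a message to the mesh.
    public void SendChatMessage(string msg)
    {
        Console.WriteLine("Received: " + msg);
    }

    public static void Main()
    {
        var peer = new ChatPeer();
        var factory = new DuplexChannelFactory<IChatChannel>(
            new InstanceContext(peer),
            new NetPeerTcpBinding(),
            new EndpointAddress("net.p2p://myapplication/chat"));

        IChatChannel proxy = factory.CreateChannel();
        proxy.Open(); // joins the mesh via the peer name resolver

        // This single call is flooded to every node in the mesh.
        proxy.SendChatMessage("Hello Peers!");

        Console.ReadLine();
        proxy.Close();
    }
}
```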

You don't have control over the message distribution mechanism when using the WCF peer channel, nor over who is considered a neighbor or the connection routes among peers. The only thing you can control at the peer protocol level is how far your message can travel. If you set the PeerHopCount attribute on an outbound data message, that hop count is decremented each time the message is retransmitted, and messages with a hop count of 0 are not forwarded any further down the line. This becomes ridiculously useful if you want to send a quick message to your local neighbors to get some data without bothering the rest of the peer mesh. Be careful when using this mechanism, however: it's easy to forget that you were distance-limiting your messages, and you'll have a devil of a time tracking down the resulting erratic behavior in your application.
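In code, the hop count rides along as a message header declared in a message contract; a sketch, assuming a simple text payload (the names here are illustrative):

```csharp
using System.ServiceModel;

// A message limited to direct neighbors only.
[MessageContract]
public class NeighborQuery
{
    // Decremented at each retransmission; when it reaches 0
    // the message is not forwarded any further.
    [PeerHopCount]
    public int HopCount = 1; // only hop-count-1 neighbors see this

    [MessageBodyMember]
    public string Question;
}

[ServiceContract]
public interface INeighborQueries
{
    [OperationContract(IsOneWay = true)]
    void Ask(NeighborQuery query);
}
```

A message without the hop-count header floods the entire mesh as usual, which is why it's worth commenting loudly anywhere you deliberately cap it.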

Not only does the WCF peer channel let instances of your application communicate with other instances of your application on the same network segment (I believe some routing between segments is possible, but I have never tried it so I can't speak to the accuracy of this), it also allows other applications to communicate with your application. Think about this hypothetical scenario: you have a school network. On this network there are teacher applications and student applications. Teachers can administer tests and publish information for public consumption, which can also be seen by other teachers. Teachers can also publish information on a "teacher only" mesh and share data that way. Students can submit answers to administered tests, and so on. Obviously it's a contrived example, but, if you spend a little time thinking, you may come to the same conclusion that I came to a long time ago:

If you are building a new application for Windows XP/Vista, then given how easy it has become to create peer applications using the .NET Framework, you should be asking yourself, "Why is my app not peer-enabled?" Obviously there are some good reasons not to peer-enable, but how much happier would your users be if your apps were peer-enabled, could communicate with each other, and allowed users of the same application to collaborate with each other in real-time?

P.S. I know I mentioned peer name resolvers in this blog entry; I plan on discussing them at length in another article. For now, you just need to know that the default name resolver for the WCF peer channel is PNRP (the Peer Name Resolution Protocol), which comes standard with XP SP2+ and Vista.

More Stories By Kevin Hoffman

Kevin Hoffman, editor-in-chief of SYS-CON's iPhone Developer's Journal, has been programming since he was 10 and has written everything from DOS shareware to n-tier, enterprise web applications in VB, C++, Delphi, and C. Hoffman is coauthor of Professional .NET Framework (Wrox Press) and co-author with Robert Foster of Microsoft SharePoint 2007 Development Unleashed. He authors The .NET Addict's Blog at .NET Developer's Journal.

