

3    RPC Programming Interface

For most applications, you do not need the information in this chapter; you can simply use the automatic features of the rpcgen protocol compiler (described in Chapter 2). This chapter requires an understanding of network theory; it is for programmers who must write customized network applications using Open Network Computing remote procedure calls (ONC RPC), and who need to know about the RPC mechanisms hidden by rpcgen.




3.1    RPC Layers

The ONC RPC interface consists of three layers: highest, middle, and lowest. For ONC RPC programming, only the middle and lowest layers are of interest; the highest layer is transparent to the operating system, machine, and network upon which it is run. For a complete specification of the routines in the remote procedure call library, see the rpc(3) reference page.

The middle layer routines are adequate for most applications. This layer is "RPC proper" because you do not need to write additional programming code for network sockets, the operating system, or any other low-level implementation mechanisms. At this level, you simply make remote procedure calls to routines on other machines. For example, you can make simple ONC RPC calls by using the following system routines:

  -  registerrpc, which registers a procedure with the RPC service package

  -  callrpc, which executes a remote procedure call on a named machine

  -  svc_run, which waits for incoming RPC requests and dispatches them to the registered procedures

The middle layer is not suitable for complex programming tasks because it sacrifices flexibility for simplicity. Although it is adequate for many tasks, the middle layer does not enable the following:

  -  Control of timeouts or the number of retries

  -  Choice of transport; the middle layer uses UDP only

  -  Use of your own sockets

  -  Authentication other than the default (none)

The lowest layer is suitable for programming tasks that require greater efficiency or flexibility. The lowest layer routines include client creation routines such as:

  -  clnt_create

  -  clntudp_create

  -  clnttcp_create

The following sections describe the middle and lowest RPC layers.




3.1.1    Middle Layer of RPC

The middle layer is the simplest RPC program interface; from this layer you make explicit RPC calls and use the functions callrpc and registerrpc.




3.1.1.1    Using callrpc

The simplest way to make remote procedure calls is through the RPC library routine callrpc. The programming code in Example 3-1, which obtains the number of remote users, shows the usage of callrpc.

Example 3-1: Using callrpc

/*
 * Print the number of users on a remote system using callrpc
 */

#include <stdio.h>
#include <rpc/rpc.h>
#include <rpcsvc/rusers.h>

main(argc, argv)
        int argc;
        char **argv;
{
        unsigned long nusers;
        int stat;

        if (argc != 2) {
                fprintf(stderr, "usage: nusers hostname\n");
                exit(1);
        }
        if ((stat = callrpc(argv[1], RUSERSPROG, RUSERSVERS,
            RUSERSPROC_NUM, xdr_void, 0, xdr_u_long, &nusers)) != 0) {
                clnt_perrno(stat);
                exit(1);
        }
        printf("%d users on %s\n", nusers, argv[1]);
        exit(0);
}

The callrpc library routine has eight parameters. In Example 3-1 the first parameter, argv[1], is the name of the remote server machine. The next three, RUSERSPROG, RUSERSVERS, and RUSERSPROC_NUM, are the program, version, and procedure numbers that together identify the procedure to be called. The fifth and sixth parameters are an XDR filter (xdr_void) and an argument (0) to be encoded and passed to the remote procedure. You provide an XDR filter procedure to encode or decode machine-dependent data to or from the XDR format.

The final two parameters are an XDR filter, xdr_u_long, for decoding the results returned by the remote procedure and a pointer, &nusers, to the storage location of the procedure results. Multiple arguments and results are handled by embedding them in structures.

If callrpc completes successfully, it returns zero; otherwise, it returns a nonzero value. The return codes are found in <rpc/clnt.h>. The callrpc library routine needs the type of the RPC argument as well as a pointer to the argument itself (and similarly for the result). For RUSERSPROC_NUM, the return value is an unsigned long. This is why callrpc has xdr_u_long as its first return parameter, which means that the result is of type unsigned long, and &nusers as its second return parameter, which is a pointer to the location that stores the unsigned long result. RUSERSPROC_NUM takes no argument, so the argument parameter of callrpc is xdr_void. In such cases, the argument must be NULL.

If callrpc gets no answer after trying several times to deliver a message, it returns with an error code. Methods for adjusting the number of retries or for using a different protocol require you to use the lower layer of the RPC library, discussed in Section 3.1.2.

The remote server procedure corresponding to the callrpc usage example might look like the one in Example 3-2.

Example 3-2: Remote Server Procedure

unsigned long *
nuser(indata)
        char *indata;
{
        static unsigned long nusers;

 
        /*
         * Code here to compute the number of users
         * and place result in variable nusers.
         */
        return (&nusers);
}

This procedure takes one argument, a pointer to the input of the remote procedure call (ignored in the example), and returns a pointer to the result. In the current version of C, character pointers are the generic pointers, so the input argument and the return value can be cast to char *.




3.1.1.2    Using registerrpc

Normally, a server registers all of the RPC calls it plans to handle and then goes into an infinite loop waiting to service requests. If you use rpcgen to generate the server, it also generates a server dispatch function for you. Alternatively, you can write a server yourself by using registerrpc. Example 3-3 is a program segment showing how you would use registerrpc in the main body of a server program that registers a single procedure; the remote procedure call passes a single unsigned long.

Example 3-3: Using registerrpc in the Main Body of a Server Program

#include <stdio.h>
#include <rpc/rpc.h>            /* required */
#include <rpcsvc/rusers.h>      /* for prog, vers definitions */

 
unsigned long *nuser();
 
main()
{
        registerrpc(RUSERSPROG, RUSERSVERS, RUSERSPROC_NUM,
            nuser, xdr_void, xdr_u_long);
        svc_run();              /* Never returns */
        fprintf(stderr, "Error: svc_run returned!\n");
        exit(1);
}

The registerrpc routine registers a procedure as corresponding to a given RPC procedure number. The first three parameters, RUSERSPROG, RUSERSVERS, and RUSERSPROC_NUM, are the program, version, and procedure numbers of the remote procedure to be registered; nuser is the name of the local procedure that implements the remote procedure; and xdr_void and xdr_u_long are the XDR filters for the remote procedure's arguments and results, respectively. (Multiple arguments or multiple results are passed as structures.)

The underlying transport mechanism for registerrpc is UDP.

Note

The UDP transport mechanism can handle only arguments and results that are less than 8K bytes in length.

After registering the local procedure, the main procedure of the server program calls svc_run, the remote procedure dispatcher for the RPC library; svc_run calls the remote procedures in response to RPC requests and decodes remote procedure arguments and encodes results. To do this, it uses the XDR filters specified when the remote procedure was registered with registerrpc.




3.1.1.3    Passing Arbitrary Data Types

RPC can handle arbitrary data structures, regardless of machine conventions for byte order and structure layout, by converting them to a network standard called External Data Representation (XDR) before sending them over the network. The process of converting from a particular machine representation to XDR format is called serializing, and the reverse process is called deserializing. The type field parameters of callrpc and registerrpc can be a built-in procedure like xdr_u_long (in the previous example), or one that you supply. XDR has the following built-in routines:
Built-in XDR primitive routines:

     xdr_hyper          xdr_u_hyper          xdr_enum
     xdr_int            xdr_u_int            xdr_bool
     xdr_long           xdr_u_long           xdr_wrapstring
     xdr_longlong_t     xdr_u_longlong_t
     xdr_short          xdr_u_short
     xdr_char           xdr_u_char

Other built-in XDR routines:

     xdr_array          xdr_bytes            xdr_reference
     xdr_vector         xdr_union            xdr_pointer
     xdr_string         xdr_opaque

You cannot use the xdr_string routine with either callrpc or registerrpc, each of which passes only two parameters to an XDR routine. Instead, use xdr_wrapstring, which takes only two parameters and calls xdr_string.
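The following fragment sketches the idea behind xdr_wrapstring; it is not the library source, but it shows how a two-parameter routine can simply call xdr_string with a maximum length of its own choosing:

bool_t
xdr_wrapstring(xdrsp, sp)
        XDR *xdrsp;
        char **sp;
{
        /* pass the largest possible maximum length to xdr_string */
        return (xdr_string(xdrsp, sp, (u_int)~0));
}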




3.1.1.4    User-Defined Routines

Suppose that you want to send the following structure:

struct simple {
        int a;
        short b;
} simple;

To send it, you would use the following callrpc call:

callrpc(hostname, PROGNUM, VERSNUM, PROCNUM,
        xdr_simple, &simple ...);

With this call to callrpc, you could define the routine xdr_simple as in the following example:

#include <rpc/rpc.h>

 
xdr_simple(xdrsp, simplep)
        XDR *xdrsp;
        struct simple *simplep;
{
        if (!xdr_int(xdrsp, &simplep->a))
                return (0);
        if (!xdr_short(xdrsp, &simplep->b))
                return (0);
        return (1);
}

An XDR routine returns nonzero (evaluates to TRUE in C) if it completes successfully; otherwise, it returns zero. For a complete description of XDR, refer to XDR Protocol Specification: RFC 1014 and Appendix A of this manual.

Note

It is best to use rpcgen to generate XDR routines. Use the -c option of rpcgen to generate only the _xdr.c file.

As another example, if you want to send a variable array of integers, you might package them as a structure like this:

struct varintarr {
        int *data;
        int arrlnth;
} arr;

Then, you would make an RPC call such as this:

callrpc(hostname, PROGNUM, VERSNUM, PROCNUM,
        xdr_varintarr, &arr...);

You could then define xdr_varintarr as shown:

xdr_varintarr(xdrsp, arrp)
        XDR *xdrsp;
        struct varintarr *arrp;
{
        return (xdr_array(xdrsp, &arrp->data, &arrp->arrlnth,
                MAXLEN, sizeof(int), xdr_int));
}

This routine takes as parameters the XDR handle, a pointer to the array, a pointer to the size of the array, the maximum allowable array size, the size of each array element, and an XDR routine for handling each array element.

If you know the size of the array in advance, you can use xdr_vector, which serializes fixed-length arrays, as shown in the following example:

int intarr[SIZE];

 
xdr_intarr(xdrsp, intarr)
        XDR *xdrsp;
        int intarr[];
{
        return (xdr_vector(xdrsp, intarr, SIZE, sizeof(int),
            xdr_int));
}




3.1.1.5    XDR Serializing Defaults

XDR always converts quantities to 4-byte multiples when serializing. If the examples in Section 3.1.1.4 had used characters instead of integers, each character would occupy 32 bits. This is why XDR has the built-in routine xdr_bytes, which is like xdr_array except that it packs characters. The xdr_bytes routine has four parameters, similar to the first four of xdr_array. For null-terminated strings, XDR provides the built-in routine xdr_string, which is the same as xdr_bytes without the length parameter.
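For example, serializing a 100-character buffer element by element with xdr_array and xdr_char produces 4 bytes per character (about 400 bytes of encoded data plus the element count), whereas xdr_bytes packs the same characters into 100 bytes of data, padded to the next 4-byte boundary and preceded by a 4-byte length.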

When serializing, XDR gets the string length from strlen, and on deserializing it creates a null-terminated string. The following example calls the user-defined routine xdr_simple, as well as the built-in functions xdr_string and xdr_reference (which locates pointers):

struct finalexample {
        char *string;
        struct simple *simplep;
} finalexample;

 
xdr_finalexample(xdrsp, finalp)
        XDR *xdrsp;
        struct finalexample *finalp;
{
        if (!xdr_string(xdrsp, &finalp->string, MAXSTRLEN))
                return (0);
        if (!xdr_reference(xdrsp, &finalp->simplep,
            sizeof(struct simple), xdr_simple))
                return (0);
        return (1);
}

Note that xdr_simple could be called here instead of xdr_reference.




3.1.2    Lowest Layer of RPC

Examples in previous sections show how RPC handles many details automatically through defaults. The following sections describe how to change the defaults by using the lowest layer of the RPC library.

The following capabilities are available only with the lowest layer of RPC:

  -  Using a transport protocol other than UDP (for example, TCP)

  -  Specifying and managing the sockets that transport the data

  -  Setting per-try and total timeouts for calls

  -  Using authentication other than the default (none)




3.1.2.1    The Server Side and the Lowest RPC Layer

The server for the nusers program in Example 3-4 does the same work as the previous nuser program that used registerrpc (see Example 3-3). However, it uses the lowest layer of RPC.

Example 3-4: Server Program Using Lowest Layer of RPC

#include <stdio.h>
#include <rpc/rpc.h>
#include <utmp.h>
#include <rpcsvc/rusers.h>

 
main()
{
        SVCXPRT *transp;
        int nuser();

        transp = svcudp_create(RPC_ANYSOCK);
        if (transp == NULL) {
                fprintf(stderr, "can't create an RPC server\n");
                exit(1);
        }
        pmap_unset(RUSERSPROG, RUSERSVERS);
        if (!svc_register(transp, RUSERSPROG, RUSERSVERS,
            nuser, IPPROTO_UDP)) {
                fprintf(stderr, "can't register RUSER service\n");
                exit(1);
        }
        svc_run();              /* Never returns */
        fprintf(stderr, "should never reach this point\n");
}

nuser(rqstp, transp)
        struct svc_req *rqstp;
        SVCXPRT *transp;
{
        unsigned long nusers;

        switch (rqstp->rq_proc) {
        case NULLPROC:
                if (!svc_sendreply(transp, xdr_void, 0))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        case RUSERSPROC_NUM:
                /*
                 * Code here to compute the number of users
                 * and assign it to the variable nusers
                 */
                if (!svc_sendreply(transp, xdr_u_long, &nusers))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        default:
                svcerr_noproc(transp);
                return;
        }
}

In this example, the server gets a transport handle for receiving and replying to RPC messages. If the argument to svcudp_create is RPC_ANYSOCK, the RPC library creates a socket on which to receive and reply to RPC calls. Otherwise, svcudp_create expects its argument to be a valid socket number. If you specify your own socket, it can be bound or unbound. If it is bound to a port by the user, the port numbers of svcudp_create and clntudp_create (the low-level client routine) must match. The registerrpc routine uses svcudp_create to get a UDP handle. If you need a more reliable protocol, call svctcp_create instead.

After the transport handle is created, the next step is to call pmap_unset so that if the nusers server crashed earlier, any previous trace of it is erased before it restarts. More precisely, pmap_unset erases the entry for RUSERSPROG from the portmapper's tables.

Finally, svc_register associates the program number RUSERSPROG and the version RUSERSVERS with the procedure nuser; the last argument specifies the protocol, which in this case is IPPROTO_UDP. Unlike registerrpc, there are no XDR routines involved in the registration process, and registration is at the program level rather than the procedure level.

A service can register its port number with the local portmapper service by specifying a non-zero protocol number in the final argument of svc_register. A client determines the server's port number by consulting the portmapper on its server machine. Specifying a zero port number in clntudp_create or clnttcp_create does this automatically.
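If you want to perform the lookup yourself, the portmapper client routine pmap_getport (declared in <rpc/pmap_clnt.h>) asks a remote portmapper for the port of a registered service. In the following sketch, server_addr is assumed to be a filled-in sockaddr_in for the server host, as in Example 3-5:

u_short port;

port = pmap_getport(&server_addr, RUSERSPROG, RUSERSVERS,
    IPPROTO_UDP);
if (port == 0)
        fprintf(stderr, "rusers service is not registered\n");
else
        printf("rusers service is at UDP port %d\n", port);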

The user routine nuser must call and dispatch the appropriate XDR routines based on the procedure number. The nuser routine explicitly handles two cases that are taken care of automatically by registerrpc:

  -  The NULLPROC procedure (procedure number zero), which returns no results and is commonly used by clients to check whether the server is responding

  -  Invalid procedure numbers, which are rejected with svcerr_noproc

The nuser service routine serializes the results and returns them to the RPC caller via svc_sendreply. Its first parameter is the SVCXPRT handle, the second is the XDR routine, and the third is a pointer to the data to be returned. It is not necessary to declare the nusers variable as static here because svc_sendreply is called within the service routine itself.

To show how a server handles an RPC program that receives data, you could add to the previous example a procedure called RUSERSPROC_BOOL, which takes an argument nusers and returns TRUE or FALSE depending on whether the number of users logged on is equal to nusers. It would look like this:

case RUSERSPROC_BOOL: {
        int bool;
        unsigned nuserquery;

 
        if (!svc_getargs(transp, xdr_u_int, &nuserquery)) {
                svcerr_decode(transp);
                return;
        }
        /*
         * Code to set nusers = number of users
         */
        if (nuserquery == nusers)
                bool = TRUE;
        else
                bool = FALSE;
        if (!svc_sendreply(transp, xdr_bool, &bool))
                fprintf(stderr, "can't reply to RPC call\n");
        return;
}

Here, the svc_getargs routine takes as arguments an SVCXPRT handle, the XDR routine, and a pointer to where the input is to be placed.




3.1.2.2    The Client Side and the Lowest RPC Layer

When you use callrpc, you cannot control the RPC delivery mechanism and socket that transport the data. The lowest layer of RPC enables you to modify these parameters, as shown in Example 3-5, which calls the nusers service.

Example 3-5: Using Lowest RPC Layer to Control Data Transport and Delivery

#include <stdio.h>
#include <rpc/rpc.h>
#include <rpcsvc/rusers.h>
#include <sys/time.h>
#include <netdb.h>

 
main(argc, argv)
        int argc;
        char **argv;
{
        struct hostent *hp;
        struct timeval pertry_timeout, total_timeout;
        struct sockaddr_in server_addr;
        int sock = RPC_ANYSOCK;
        register CLIENT *client;
        enum clnt_stat clnt_stat;
        unsigned long nusers;

        if (argc != 2) {
                fprintf(stderr, "usage: nusers hostname\n");
                exit(-1);
        }
        if ((hp = gethostbyname(argv[1])) == NULL) {
                fprintf(stderr, "can't get addr for %s\n", argv[1]);
                exit(-1);
        }
        pertry_timeout.tv_sec = 3;
        pertry_timeout.tv_usec = 0;
        bcopy(hp->h_addr, (caddr_t)&server_addr.sin_addr,
            hp->h_length);
        server_addr.sin_family = AF_INET;
        server_addr.sin_port = 0;
        if ((client = clntudp_create(&server_addr, RUSERSPROG,
            RUSERSVERS, pertry_timeout, &sock)) == NULL) {
                clnt_pcreateerror("clntudp_create");
                exit(-1);
        }
        total_timeout.tv_sec = 20;
        total_timeout.tv_usec = 0;
        clnt_stat = clnt_call(client, RUSERSPROC_NUM, xdr_void, 0,
            xdr_u_long, &nusers, total_timeout);
        if (clnt_stat != RPC_SUCCESS) {
                clnt_perror(client, "rpc");
                exit(-1);
        }
        printf("%d users on %s\n", nusers, argv[1]);
        clnt_destroy(client);
        exit(0);
}

In this example, the CLIENT pointer is encoded with the transport mechanism. The callrpc routine uses UDP and calls clntudp_create to get a CLIENT pointer; to get TCP, you would use clnttcp_create.

The parameters to clntudp_create are the server address, the program number, the version number, a timeout value (between tries), and a pointer to a socket. When the sin_port is 0, the remote portmapper is queried to find out the address of the remote service.

The low-level version of callrpc is clnt_call, which takes a CLIENT pointer rather than a host name. The parameters to clnt_call are a CLIENT pointer, the procedure number, the XDR routine for serializing the argument, a pointer to the argument, the XDR routine for deserializing the return value, a pointer to where the return value will be placed, and the time in seconds to wait for a reply. If the client does not hear from the server within the time specified in pertry_timeout, the request may be sent again to the server. The number of tries that clnt_call makes to contact the server is equal to the clnt_call timeout divided by the clntudp_create timeout.
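For example, with the values used in Example 3-5 (a 3-second per-try timeout and a 20-second total timeout), clnt_call retransmits the request roughly 20 / 3, or about 6, times before returning a timeout error.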

The clnt_destroy call always deallocates the space associated with the CLIENT handle. It closes the socket associated with the CLIENT handle only if the RPC library opened it. If the socket was opened by the user, it remains open. This makes it possible, in cases where there are multiple client handles using the same socket, to destroy one handle without closing the socket that other handles are using.

To make a stream connection, the call to clntudp_create is replaced with clnttcp_create:

clnttcp_create(&server_addr, prognum, versnum, &sock,
               inbufsize, outbufsize);

Here, there is no timeout argument; instead, the receive and send buffer sizes must be specified. When the clnttcp_create call is made, a TCP connection is established. All RPC calls using that CLIENT handle would use this connection. The server side of an RPC call using TCP has svcudp_create replaced by svctcp_create:

transp = svctcp_create(RPC_ANYSOCK, 0, 0);

The last two arguments to svctcp_create are "send" and "receive" sizes, respectively. If, as here, 0 is specified for either of these, the system chooses default values.

The simplest routine that creates a client handle is clnt_create:

clnt = clnt_create(server_host, prognum, versnum, transport);

The parameters here are the name of the host on which the service resides, the program and version number, and the transport to be used. The transport can be either udp for UDP or tcp for TCP. You can change the default timeouts by using clnt_control. For more information, refer to Section 2.6.
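For instance, the following sketch raises the total timeout on a handle obtained from clnt_create; CLSET_TIMEOUT is the standard clnt_control request for this, and the 30-second value is an arbitrary choice:

struct timeval tv;

tv.tv_sec = 30;                 /* new total timeout */
tv.tv_usec = 0;
clnt_control(clnt, CLSET_TIMEOUT, (char *)&tv);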




3.1.2.3    Memory Allocation with XDR

To enable memory allocation, the second parameter of xdr_array is the address of the pointer to the array, rather than the pointer to the array itself. If that pointer is NULL, xdr_array allocates space for the array and returns a pointer to it, putting the size of the array in the third argument. For example, the following XDR routine, xdr_chararr1, handles a fixed array of bytes with length SIZE:

xdr_chararr1(xdrsp, chararr)
        XDR *xdrsp;
        char chararr[];
{
        char *p;
        int len;

 
        p = chararr;
        len = SIZE;
        return (xdr_bytes(xdrsp, &p, &len, SIZE));
}


Here, if space has already been allocated in chararr, it can be called from a server like this:

char chararr[SIZE];

 
svc_getargs(transp, xdr_chararr1, chararr);

If you want XDR to do the allocation, you must rewrite this routine in this way:

xdr_chararr2(xdrsp, chararrp)
        XDR *xdrsp;
        char **chararrp;
{
        int len;

 
        len = SIZE;
        return (xdr_bytes(xdrsp, chararrp, &len, SIZE));
}

The RPC call might look like this:

char *arrptr;

 
arrptr = NULL;
svc_getargs(transp, xdr_chararr2, &arrptr);
/*
 * Use the result here
 */
svc_freeargs(transp, xdr_chararr2, &arrptr);

After using the character array, you can free it with svc_freeargs; this will not free any memory if the variable indicating it is NULL. For example, in the earlier routine xdr_finalexample, if finalp->string was NULL, it would not be freed. The same is true for finalp->simplep.

To summarize, each XDR routine is responsible for serializing, deserializing, and freeing memory as follows:

  -  When an XDR routine is called from callrpc or clnt_call, it serializes (encodes) the data.

  -  When it is called from svc_getargs, it deserializes (decodes) the data, allocating memory if necessary.

  -  When it is called from svc_freeargs, it frees the memory that was allocated during deserialization.

When building simple examples as shown in this section, you can ignore the three modes. See Appendix A for examples of more sophisticated XDR routines that determine mode and any required modification.




3.2    Raw RPC

Raw RPC refers to the use of pseudo-RPC interface routines that do not use any real transport at all. These routines, clntraw_create and svcraw_create, help in debugging and testing the non-communications oriented aspects of an application before running it over a real network. Example 3-6 shows their use.

Example 3-6: Debugging and Testing Noncommunication Parts of an Application

/*
 * A simple program to increment the number by 1
 */
#include <stdio.h>
#include <rpc/rpc.h>
#include <rpc/raw.h>        /* required for raw */

 
struct timeval TIMEOUT = {0, 0};
static void server();

main(argc, argv)
        int argc;
        char **argv;
{
        CLIENT *clnt;
        SVCXPRT *svc;
        int num = 0, ans;

        if (argc == 2)
                num = atoi(argv[1]);
        svc = svcraw_create();
        if (svc == NULL) {
                fprintf(stderr, "Could not create server handle\n");
                exit(1);
        }
        svc_register(svc, 200000, 1, server, 0);
        clnt = clntraw_create(200000, 1);
        if (clnt == NULL) {
                clnt_pcreateerror("raw");
                exit(1);
        }
        if (clnt_call(clnt, 1, xdr_int, &num, xdr_int, &ans,
            TIMEOUT) != RPC_SUCCESS) {
                clnt_perror(clnt, "raw");
                exit(1);
        }
        printf("Client: number returned %d\n", ans);
        exit(0);
}

static void
server(rqstp, transp)
        struct svc_req *rqstp;
        SVCXPRT *transp;
{
        int num;

        switch (rqstp->rq_proc) {
        case 0:
                if (svc_sendreply(transp, xdr_void, 0) == NULL) {
                        fprintf(stderr, "error in null proc\n");
                        exit(1);
                }
                return;
        case 1:
                break;
        default:
                svcerr_noproc(transp);
                return;
        }
        if (!svc_getargs(transp, xdr_int, &num)) {
                svcerr_decode(transp);
                return;
        }
        num++;
        if (svc_sendreply(transp, xdr_int, &num) == NULL) {
                fprintf(stderr, "error in sending answer\n");
                exit(1);
        }
        return;
}

In this example:

  -  The client and server routines run in the same process; the raw transport is a buffer within the process's address space, so no network messages are generated.

  -  The server is registered with a protocol number of 0, so it is not registered with the portmapper; svc_register simply associates the dispatch routine with the raw transport handle.

  -  The client's clnt_call request is passed through the raw transport directly to the server routine, which makes it possible to test the application logic and XDR routines without any networking.




3.3    Miscellaneous RPC Features

The following sections describe other useful features for RPC programming.




3.3.1    Using Select on the Server Side

Suppose a process simultaneously responds to RPC requests and performs another activity. If the other activity periodically updates a data structure, the process can set an alarm signal before calling svc_run. However, if the other activity must wait on a file descriptor, the svc_run call does not work. The code for svc_run is as follows:

void
svc_run()
{
        fd_set readfds;
        int dtbsz = getdtablesize();

 
        for (;;) {
                readfds = svc_fds;
                switch (select(dtbsz, &readfds, NULL, NULL, NULL)) {
                case -1:
                        if (errno != EBADF)
                                continue;
                        perror("select");
                        return;
                case 0:
                        continue;
                default:
                        svc_getreqset(&readfds);
                }
        }
}

You can bypass svc_run and call svc_getreqset if you know the file descriptors of the sockets associated with the programs you are waiting on. In this way, you can have your own select that waits on the RPC socket, and you can have your own descriptors. Note that svc_fds is a bit mask of all the file descriptors that RPC uses for services. It can change whenever any RPC library routine is called, because descriptors are constantly being opened and closed; for example, for TCP connections.
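The following sketch shows one way to write such a private dispatch loop. It assumes the svc_fds and svc_getreqset interface shown in the svc_run listing above, and mydesc is a hypothetical descriptor belonging to the application:

void
my_svc_run(mydesc)
        int mydesc;
{
        fd_set readfds;
        int dtbsz = getdtablesize();

        for (;;) {
                readfds = svc_fds;              /* RPC service descriptors */
                FD_SET(mydesc, &readfds);       /* plus our own descriptor */
                switch (select(dtbsz, &readfds, NULL, NULL, NULL)) {
                case -1:
                        if (errno == EINTR)
                                continue;
                        perror("select");
                        return;
                case 0:
                        continue;
                default:
                        if (FD_ISSET(mydesc, &readfds)) {
                                /* handle the application's own activity */
                                FD_CLR(mydesc, &readfds);
                        }
                        svc_getreqset(&readfds); /* dispatch RPC requests */
                }
        }
}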

Note

If you are handling signals in your application, do not make any system call that accidentally sets errno. If this happens, reset errno to its previous value before returning from your signal handler.




3.3.2    Broadcast RPC

The portmapper required by broadcast RPC is a daemon that converts RPC program numbers into DARPA protocol port numbers. The main differences between broadcast RPC and normal RPC are the following:

  -  Normal RPC expects one answer, whereas broadcast RPC expects many answers (one or more from each responding machine).

  -  Broadcast RPC works only with connectionless, packet-oriented protocols such as UDP/IP.

  -  Broadcast requests are sent to the portmap port, so only services that register themselves with their portmapper are accessible through broadcast RPC.

  -  Broadcast RPC filters out unsuccessful responses; if a version mismatch exists between the broadcaster and a remote service, the broadcaster never sees the error response.

In the following example, the procedure eachresult is called each time a response is obtained. It returns a Boolean that indicates whether or not the user wants more responses. If the argument to eachresult is NULL, clnt_broadcast returns without waiting for any replies:

#include <rpc/pmap_clnt.h>

.
.
.
enum clnt_stat clnt_stat;
.
.
.
clnt_stat = clnt_broadcast(prognum, versnum, procnum,
                inproc, in, outproc, out, eachresult)
        u_long    prognum;         /* program number */
        u_long    versnum;         /* version number */
        u_long    procnum;         /* procedure number */
        xdrproc_t inproc;          /* xdr routine for args */
        caddr_t   in;              /* pointer to args */
        xdrproc_t outproc;         /* xdr routine for results */
        caddr_t   out;             /* pointer to results */
        bool_t    (*eachresult)(); /* call with each result gotten */

In the following example, if done is TRUE, broadcasting stops and clnt_broadcast returns successfully. Otherwise, the routine waits for another response. The request is rebroadcast after a few seconds of waiting. If no responses come back in a default total timeout period, the routine returns with RPC_TIMEDOUT:

bool_t done;

.
.
.
done = eachresult(resultsp, raddr)
        caddr_t resultsp;
        struct sockaddr_in *raddr;    /* Addr of responding server */
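The following is a minimal sketch of an eachresult routine, assuming the reply is an unsigned long as in the rusers examples; inet_ntoa is the usual Internet library routine for converting the responder's address to a printable string:

bool_t
eachresult(resultsp, raddr)
        caddr_t resultsp;
        struct sockaddr_in *raddr;
{
        printf("%lu users on %s\n", *(unsigned long *)resultsp,
            inet_ntoa(raddr->sin_addr));
        return (FALSE);         /* keep collecting responses */
}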

For more information, refer to Section 2.7.1.




3.3.3    Batching

In normal RPC, a client sends a call message and waits for the server to reply that the call succeeded; the client must sit idle while the server processes the call. This is inefficient if the client does not want or need an acknowledgment for every message sent. RPC batching avoids this: calls made by the client are buffered and cause no processing on the server until the pipeline is flushed, at which point a normal RPC request is sent, processed by the server, and answered with a reply.

RPC messages can be placed in a "pipeline" of calls to a desired server; this is called batching, in which:

  -  Each RPC call in the pipeline requires no response from the server, and the server does not send a response message.

  -  The pipeline of calls is transported on a reliable byte-stream transport such as TCP/IP.

Because the server does not respond to every call, the client can generate new calls in parallel with the server executing previous calls. Also, the TCP/IP implementation holds several call messages in a buffer and sends them to the server in one write system call. This overlapped execution greatly decreases the interprocess communication overhead of the client and server processes, and the total elapsed time of a series of calls. Because the batched calls are buffered, the client must eventually do a nonbatched call to flush the pipeline.

In the following example of batching, assume that a string-rendering service (for example, a window system) has two similar calls: one provides a string and returns void results, and the other provides a string and does nothing else. The service (using the TCP/IP transport) may look like Example 3-7.

Example 3-7: Batching RPC Messages

#include <stdio.h>
#include <rpc/rpc.h>
#include <suntool/windows.h>

 
void windowdispatch();
 
main()
{
        SVCXPRT *transp;

        transp = svctcp_create(RPC_ANYSOCK, 0, 0);
        if (transp == NULL) {
                fprintf(stderr, "can't create an RPC server\n");
                exit(1);
        }
        pmap_unset(WINDOWPROG, WINDOWVERS);
        if (!svc_register(transp, WINDOWPROG, WINDOWVERS,
            windowdispatch, IPPROTO_TCP)) {
                fprintf(stderr, "can't register WINDOW service\n");
                exit(1);
        }
        svc_run();              /* Never returns */
        fprintf(stderr, "should never reach this point\n");
}

void
windowdispatch(rqstp, transp)
        struct svc_req *rqstp;
        SVCXPRT *transp;
{
        char *s = NULL;

        switch (rqstp->rq_proc) {
        case NULLPROC:
                if (!svc_sendreply(transp, xdr_void, 0))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        case RENDERSTRING:
                if (!svc_getargs(transp, xdr_wrapstring, &s)) {
                        fprintf(stderr, "can't decode arguments\n");
                        /*
                         * Tell caller he erred
                         */
                        svcerr_decode(transp);
                        return;
                }
                /*
                 * Code here to render the string "s"
                 */
                if (!svc_sendreply(transp, xdr_void, NULL))
                        fprintf(stderr, "can't reply to RPC call\n");
                break;
        case RENDERSTRING_BATCHED:
                if (!svc_getargs(transp, xdr_wrapstring, &s)) {
                        fprintf(stderr, "can't decode arguments\n");
                        /*
                         * We are silent in the face of protocol errors
                         */
                        break;
                }
                /*
                 * Code here to render string s, but send no reply!
                 */
                break;
        default:
                svcerr_noproc(transp);
                return;
        }
        /*
         * Now free string allocated while decoding arguments
         */
        svc_freeargs(transp, xdr_wrapstring, &s);
}

In this example, the service could have one procedure that takes the string and a Boolean to indicate whether or not the procedure will respond. For a client to use batching effectively, the client must perform RPC calls on a TCP-based transport, and the actual calls must have the following attributes:

  -  The XDR routine for the result is NULL.

  -  The call's timeout is zero.

If a UDP transport is used instead, the client call becomes a message to the server and the RPC mechanism becomes simply a message passing system, with no batching possible. In Example 3-8, a client uses batching to supply several strings; batching is flushed when the client gets a null string (EOF).

Example 3-8: Client Batching

#include <stdio.h>
#include <rpc/rpc.h>
#include <suntool/windows.h>

 
main(argc, argv)
        int argc;
        char **argv;
{
        struct timeval total_timeout;
        register CLIENT *client;
        enum clnt_stat clnt_stat;
        char buf[1000], *s = buf;

        if ((client = clnt_create(argv[1], WINDOWPROG, WINDOWVERS,
            "tcp")) == NULL) {
                perror("clnttcp_create");
                exit(-1);
        }
        total_timeout.tv_sec = 0;       /* set timeout to zero */
        total_timeout.tv_usec = 0;
        while (scanf("%s", s) != EOF) {
                clnt_stat = clnt_call(client, RENDERSTRING_BATCHED,
                    xdr_wrapstring, &s, NULL, NULL, total_timeout);
                if (clnt_stat != RPC_SUCCESS) {
                        clnt_perror(client, "batching rpc");
                        exit(-1);
                }
        }

        /* Now flush the pipeline */
        total_timeout.tv_sec = 20;
        clnt_stat = clnt_call(client, NULLPROC, xdr_void, NULL,
            xdr_void, NULL, total_timeout);
        if (clnt_stat != RPC_SUCCESS) {
                clnt_perror(client, "batching rpc");
                exit(-1);
        }
        clnt_destroy(client);
        exit(0);
}

In this example, the server sends no reply to the batched calls, so the client cannot be notified of any failures that occur; the client must handle any errors itself. This example was run to render all of the lines (approximately 2000) in the file /etc/termcap. The rendering service simply discarded the entire file. The example was run in four configurations, in different amounts of time:

Running only fscanf on /etc/termcap requires 6 seconds. These timings show the advantage of protocols that enable overlapped execution, although they are difficult to design.




3.3.4    Authentication of RPC Calls

In the examples presented so far, the caller never identified itself to the server, nor did the server require it from the caller. Every RPC call is authenticated by the RPC package on the server, and similarly, the RPC client package generates and sends authentication parameters. Just as different transports (TCP/IP or UDP/IP) can be used when creating RPC clients and servers, different forms of authentication can be associated with RPC clients; the default authentication type is none. The authentication subsystem of the RPC package, with its ability to create and send authentication parameters, can support commercially available authentication software. This manual describes only one type of authentication, authentication through the operating system.




3.3.5    Authentication Through the Operating System

The following sections describe client and server side authentication through the operating system.




3.3.5.1    The Client Side

Assume that a caller creates the following new RPC client handle:

clnt = clntudp_create(address, prognum, versnum, wait, sockp)

By default, the transport associates the following authentication handle with the new client handle:

clnt->cl_auth = authnone_create();

The RPC client can choose to use authentication that is native to the operating system by setting clnt->cl_auth after creating the RPC client handle:

clnt->cl_auth = authunix_create_default();

This causes each RPC call associated with clnt to carry with it the following authentication credentials structure:

/*
 * credentials native to the operating system
 */
struct authunix_parms {
        u_long   aup_time;          /* credentials creation time */
        char     *aup_machname;     /* host name where client is */
        int      aup_uid;           /* client's UNIX effective uid */
        int      aup_gid;           /* client's current group id */
        u_int    aup_len;           /* element length of aup_gids */
        int      *aup_gids;         /* array of groups user is in */
};

In this example, the fields are set by authunix_create_default by invoking the appropriate system calls. Because the RPC user created this new style of authentication, the user is responsible for destroying it (to save memory) with the following:

auth_destroy(clnt->cl_auth);
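As a minimal sketch of the whole sequence on the client side (the host name, program, and version values are placeholders), a client might create the handle, attach the operating-system credentials, make its calls, and then clean up:

CLIENT *clnt;

if ((clnt = clnt_create("serverhost", PROGNUM, VERSNUM,
    "udp")) == NULL) {
        clnt_pcreateerror("clnt_create");
        exit(1);
}
clnt->cl_auth = authunix_create_default();
/*
 * ... make clnt_call requests as usual; each call carries the
 * credentials structure shown above ...
 */
auth_destroy(clnt->cl_auth);    /* free the credentials */
clnt_destroy(clnt);             /* free the client handle */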




3.3.5.2    The Server Side

It is difficult for service implementors to handle authentication because the RPC package passes the service dispatch routine a request that has an arbitrary authentication style associated with it. Consider the fields of a request handle passed to a service dispatch routine:

/*
 * An RPC Service request
 */
struct svc_req {
        u_long   rq_prog;           /* service program number */
        u_long   rq_vers;           /* service protocol vers num */
        u_long   rq_proc;           /* desired procedure number */
        struct opaque_auth rq_cred; /* raw credentials from wire */
        caddr_t  rq_clntcred;       /* credentials (read only) */
};

The rq_cred structure is mostly opaque, except for one field, oa_flavor, which indicates the style of authentication credentials:

/*
 * Authentication info.  Mostly opaque to the programmer.
 */
struct opaque_auth {
        enum_t        oa_flavor;    /* style of credentials */
        caddr_t       oa_base;      /* address of more auth stuff */
        u_int         oa_length;    /* not to exceed MAX_AUTH_BYTES */
};

The RPC package guarantees the following to the service dispatch routine:

  -  The request's rq_cred field is well formed; the service implementor can inspect rq_cred.oa_flavor to determine which style of authentication the caller used.

  -  The request's rq_clntcred field is either NULL or points to a well-formed structure corresponding to a supported style of authentication credentials.

The rq_clntcred field also could be cast to a pointer to an authunix_parms structure. If rq_clntcred is NULL, the service implementor can inspect the other (opaque) fields of rq_cred to determine whether the service knows about a new type of authentication that is unknown to the RPC package.

Example 3-9 extends the previous remote users service (see Example 3-3) so that it computes results for all users except UID 16.

Example 3-9: Modifying the Remote Users Service

nuser(rqstp, transp)
        struct svc_req *rqstp;
        SVCXPRT *transp;
{
        struct authunix_parms *unix_cred;
        int uid;
        unsigned long nusers;

 
        /*
         * we don't care about authentication for null proc
         */
        if (rqstp->rq_proc == NULLPROC) {
                if (!svc_sendreply(transp, xdr_void, 0))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        }
        /*
         * now get the uid
         */
        switch (rqstp->rq_cred.oa_flavor) {
        case AUTH_UNIX:
                unix_cred =
                    (struct authunix_parms *)rqstp->rq_clntcred;
                uid = unix_cred->aup_uid;
                break;
        case AUTH_NULL:
        default:
                /* return weak authentication error */
                svcerr_weakauth(transp);
                return;
        }
        switch (rqstp->rq_proc) {
        case RUSERSPROC_NUM:
                /*
                 * make sure caller is allowed to call this proc
                 */
                if (uid == 16) {
                        svcerr_systemerr(transp);
                        return;
                }
                /*
                 * Code here to compute the number of users
                 * and assign it to the variable nusers
                 */
                if (!svc_sendreply(transp, xdr_u_long, &nusers))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        default:
                svcerr_noproc(transp);
                return;
        }
}

As in this example, it is not customary to check the authentication parameters associated with NULLPROC (procedure number zero). Also, if the authentication parameter type is not suitable for your service, have your program call svcerr_weakauth.

Ideally, the service protocol itself should return a status for access denied; in this example the protocol does not, so the server instead calls the service primitive svcerr_systemerr. RPC deals only with authentication, not with the access control of an individual service. The services themselves must implement their own access control policies and reflect these policies as return statuses in their protocols.




3.3.6    Using the Internet Service Daemon (inetd)

You can start an RPC server from inetd. The only difference from the usual code is that it is best to have the service creation routine called in the following form because inetd passes a socket as file descriptor 0:

transp = svcudp_create(0);     /* For UDP */
transp = svctcp_create(0,0,0); /* For listener TCP sockets */
transp = svcfd_create(0,0,0);  /* For connected TCP sockets */

Also, call svc_register as follows, with the last parameter flag set to 0, because the program is already registered with the portmapper by inetd:

svc_register(transp, PROGNUM, VERSNUM, service, 0);

If you want to exit from the server process and return control to inetd, you must do so explicitly, because svc_run never returns.
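One way to return control to inetd is to replace svc_run with a private loop that exits after a period with no requests. The following sketch assumes the svc_fds and svc_getreqset interface shown in Section 3.3.1, and the two-minute idle period is an arbitrary choice:

for (;;) {
        fd_set readfds;
        struct timeval idle;

        idle.tv_sec = 120;              /* two minutes */
        idle.tv_usec = 0;
        readfds = svc_fds;
        switch (select(getdtablesize(), &readfds, NULL, NULL, &idle)) {
        case 0:                         /* no requests: give the socket back */
                exit(0);
        case -1:
                if (errno == EINTR)
                        continue;
                perror("select");
                exit(1);
        default:
                svc_getreqset(&readfds);
        }
}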

The format of entries in /etc/inetd.conf for RPC services is in one of the following two forms:

p_name/version dgram  rpc/udp wait/nowait user server args

 
p_name/version stream rpc/tcp wait/nowait user server args

The variable p_name is the symbolic name of the program as it appears in the file /etc/rpc, version is the version number of the service, and server is the program implementing the server. For more information, see inetd.conf(5).

If the same program handles multiple versions, then the version number can be a range, as in this example:

rstatd/1-2 dgram rpc/udp wait root /usr/etc/rpc.rstatd




3.4    Additional Examples

The following sections present additional examples for server and client sides, TCP, and callback procedures.




3.4.1    Program Versions on the Server Side

By convention, the first version of program PROG is designated PROGVERS_ORIG and the most recent version is PROGVERS. Suppose there is a new version of the rusers program that returns an unsigned short rather than an unsigned long. If you name this version RUSERSVERS_SHORT, then a server that wants to support both versions must register both. It is not necessary to create another server handle for the new version, as shown in this segment of code:

if (!svc_register(transp, RUSERSPROG, RUSERSVERS_ORIG,
  nuser, IPPROTO_TCP)) {
        fprintf(stderr, "can't register RUSER service\n");
        exit(1);
}
if (!svc_register(transp, RUSERSPROG, RUSERSVERS_SHORT,
  nuser, IPPROTO_TCP)) {
        fprintf(stderr, "can't register new service\n");
        exit(1);
}

You can handle both versions with the same C procedure, as in Example 3-10.

Example 3-10: C Procedure that Returns Two Different Data Types

nuser(rqstp, transp)
    struct svc_req *rqstp;
    SVCXPRT *transp;
{
    unsigned long nusers;
    unsigned short nusers2;

 
        switch (rqstp->rq_proc) {
        case NULLPROC:
                if (!svc_sendreply(transp, xdr_void, 0))
                        fprintf(stderr, "can't reply to RPC call\n");
                return;
        case RUSERSPROC_NUM:
                /*
                 * Code here to compute the number of users
                 * and assign it to the variable nusers
                 */
                nusers2 = nusers;
                switch (rqstp->rq_vers) {
                case RUSERSVERS_ORIG:
                        if (!svc_sendreply(transp, xdr_u_long, &nusers))
                                fprintf(stderr, "can't reply to RPC call\n");
                        break;
                case RUSERSVERS_SHORT:
                        if (!svc_sendreply(transp, xdr_u_short, &nusers2))
                                fprintf(stderr, "can't reply to RPC call\n");
                        break;
                }
                return;
        default:
                svcerr_noproc(transp);
                return;
        }
}




3.4.2    Program Versions on the Client Side

The network can have different versions of an RPC server. For example, one server might run RUSERSVERS_ORIG, and another might run RUSERSVERS_SHORT.

If the version of the server running does not match the version number in the client creation routines, clnt_call fails with an RPC_PROGVERSMISMATCH error. You can determine the version numbers supported by the server and then create a client handle with an appropriate version number. To do this, use clnt_create_vers (refer to rpc(3) for more information) or the routine shown in Example 3-11.

Example 3-11: Determining Server-Supported Versions and Creating Associated Client Handles

main()
{
        enum clnt_stat status;
        u_short num_s;
        u_long num_l;
        struct rpc_err rpcerr;
        struct timeval to;
        register CLIENT *clnt;
        char *host = "remotehost";      /* replace with the server's host name */
        int maxvers, minvers;

        clnt = clnt_create(host, RUSERSPROG, RUSERSVERS_SHORT, "udp");
        if (clnt == NULL) {
                clnt_pcreateerror("clnt");
                exit(-1);
        }
        to.tv_sec = 10;                 /* set the timeouts */
        to.tv_usec = 0;
        status = clnt_call(clnt, RUSERSPROC_NUM, xdr_void, NULL,
            xdr_u_short, &num_s, to);
        if (status == RPC_SUCCESS) {
                /* We found the latest version number */
                clnt_destroy(clnt);
                printf("num = %d\n", num_s);
                exit(0);
        }
        if (status != RPC_PROGVERSMISMATCH) {
                /* Some other error */
                clnt_perror(clnt, "rusers");
                exit(-1);
        }
        clnt_geterr(clnt, &rpcerr);
        maxvers = rpcerr.re_vers.high;  /* highest version supported */
        minvers = rpcerr.re_vers.low;   /* lowest version supported */
        if (RUSERSVERS_ORIG < minvers || RUSERSVERS_ORIG > maxvers) {
                /* doesn't meet minimum standards */
                clnt_perror(clnt, "version mismatch");
                exit(-1);
        }
        /* This version not supported */
        clnt_destroy(clnt);             /* destroy the earlier handle */
        clnt = clnt_create(host, RUSERSPROG, RUSERSVERS_ORIG, "udp");
        if (clnt == NULL) {             /* try different version */
                clnt_pcreateerror("clnt");
                exit(-1);
        }
        status = clnt_call(clnt, RUSERSPROC_NUM, xdr_void, NULL,
            xdr_u_long, &num_l, to);
        if (status == RPC_SUCCESS) {
                /* We found the latest version number */
                printf("num = %d\n", num_l);
        } else {
                clnt_perror(clnt, "rusers");
                exit(-1);
        }
}




3.4.3    Using the TCP Transport

Example 3-12 works like the remote file copy command rcp; that is, the initiator of the RPC call, snd, takes its standard input and sends it to the server rcv, which prints it on standard output; the RPC call uses TCP. The example also shows how an XDR procedure behaves differently on serialization than on deserialization.

Example 3-12: RPC Call That Uses TCP Protocol

/*
 * The xdr routine:
 *                on decode, read from wire, write onto fp
 *                on encode, read from fp, write onto wire
 */
#include <stdio.h>
#include <rpc/rpc.h>

 
xdr_rcp(xdrs, fp)
        XDR *xdrs;
        FILE *fp;
{
        unsigned long size;
        char buf[BUFSIZ], *p;

        if (xdrs->x_op == XDR_FREE)     /* nothing to free */
                return (1);
        while (1) {
                if (xdrs->x_op == XDR_ENCODE) {
                        if ((size = fread(buf, sizeof(char), BUFSIZ,
                            fp)) == 0 && ferror(fp)) {
                                fprintf(stderr, "can't fread\n");
                                return (0);
                        }
                }
                p = buf;
                if (!xdr_bytes(xdrs, &p, &size, BUFSIZ))
                        return (0);
                if (size == 0)
                        return (1);
                if (xdrs->x_op == XDR_DECODE) {
                        if (fwrite(buf, sizeof(char), size, fp)
                            != size) {
                                fprintf(stderr, "can't fwrite\n");
                                return (0);
                        }
                }
        }
}

/*
 * The sender routines
 */
#include <stdio.h>
#include <netdb.h>
#include <rpc/rpc.h>
#include <sys/socket.h>
#include "rcp.h"                /* for prog, vers definitions */

main(argc, argv)
        int argc;
        char **argv;
{
        int xdr_rcp();
        int err;

        if (argc < 2) {
                fprintf(stderr, "usage: %s servername\n", argv[0]);
                exit(-1);
        }
        if ((err = callrpctcp(argv[1], RCPPROG, RCPPROC_FP, RCPVERS,
            xdr_rcp, stdin, xdr_void, 0)) != 0) {
                clnt_perrno(err);
                fprintf(stderr, "can't make RPC call\n");
                exit(1);
        }
        exit(0);
}

callrpctcp(host, prognum, procnum, versnum, inproc, in, outproc, out)
        char *host, *in, *out;
        xdrproc_t inproc, outproc;
{
        struct sockaddr_in server_addr;
        int socket = RPC_ANYSOCK;
        enum clnt_stat clnt_stat;
        struct hostent *hp;
        register CLIENT *client;
        struct timeval total_timeout;

        if ((hp = gethostbyname(host)) == NULL) {
                fprintf(stderr, "can't get addr for '%s'\n", host);
                return (-1);
        }
        bcopy(hp->h_addr, (caddr_t)&server_addr.sin_addr,
            hp->h_length);
        server_addr.sin_family = AF_INET;
        server_addr.sin_port = 0;
        if ((client = clnttcp_create(&server_addr, prognum, versnum,
            &socket, BUFSIZ, BUFSIZ)) == NULL) {
                clnt_pcreateerror("rpctcp_create");
                return (-1);
        }
        total_timeout.tv_sec = 20;
        total_timeout.tv_usec = 0;
        clnt_stat = clnt_call(client, procnum, inproc, in,
            outproc, out, total_timeout);
        clnt_destroy(client);
        return ((int)clnt_stat);
}

/*
 * The receiving routines
 */
#include <stdio.h>
#include <rpc/rpc.h>
#include "rcp.h"                /* for prog, vers definitions */

main()
{
        register SVCXPRT *transp;
        int rcp_service(), xdr_rcp();

        if ((transp = svctcp_create(RPC_ANYSOCK, BUFSIZ, BUFSIZ))
            == NULL) {
                fprintf(stderr, "svctcp_create: error\n");
                exit(1);
        }
        pmap_unset(RCPPROG, RCPVERS);
        if (!svc_register(transp, RCPPROG, RCPVERS, rcp_service,
            IPPROTO_TCP)) {
                fprintf(stderr, "svc_register: error\n");
                exit(1);
        }
        svc_run();              /* never returns */
        fprintf(stderr, "svc_run should never return\n");
}

rcp_service(rqstp, transp)
        register struct svc_req *rqstp;
        register SVCXPRT *transp;
{
        switch (rqstp->rq_proc) {
        case NULLPROC:
                if (svc_sendreply(transp, xdr_void, 0) == 0)
                        fprintf(stderr, "err: rcp_service");
                return;
        case RCPPROC_FP:
                if (!svc_getargs(transp, xdr_rcp, stdout)) {
                        svcerr_decode(transp);
                        return;
                }
                if (!svc_sendreply(transp, xdr_void, 0))
                        fprintf(stderr, "can't reply\n");
                return;
        default:
                svcerr_noproc(transp);
                return;
        }
}




3.4.4    Callback Procedures

It is sometimes useful to have a server become a client and make an RPC call back to the process that is its client. An example of this is remote debugging, where the client is a window-system program and the server is a debugger running on the remote machine. Most of the time, the user clicks a mouse button at the debugging window, and the window program converts the click into a debugger command and makes an RPC call to the server (where the debugger is actually running), telling it to execute that command. However, when the debugger reaches a breakpoint, the roles are reversed, and the debugger wants to make an RPC call to the window program so that it can tell the user that a breakpoint has been reached.

Callbacks are also useful when the client cannot block (that is, wait) to hear back from the server (possibly because of excessive processing in serving the request). In such cases, the server could acknowledge the request and use a callback to reply.

To do an RPC callback, you need a program number on which to make the RPC call. Because the program number is generated dynamically, it must be in the transient range, which begins at 0x40000000. The routine gettransient returns a valid program number in the transient range and registers it with the portmapper. It communicates only with the portmapper running on the same machine as the gettransient routine itself.

The call to pmap_set is a test-and-set operation, because it indivisibly tests whether or not a program number has been registered; if not, it is reserved. The following example shows the gettransient routine:

#include <stdio.h>
#include <rpc/rpc.h>

 
gettransient(proto, vers, portnum)
        int proto;
        u_long vers;
        u_short portnum;
{
        static u_long prognum = 0x40000000;

        while (!pmap_set(prognum++, vers, proto, portnum))
                continue;
        return (prognum - 1);
}

Note that no call to ntohs is needed for portnum because it is already passed in host byte order (as pmap_set expects).

The following list describes how the client/server programs in Example 3-13 use the gettransient routine:

  -  The client creates a UDP transport handle, calls gettransient to obtain a transient program number (registering its port in the process), and registers its callback routine under that number.

  -  The client then calls the server's EXAMPLEPROC_CALLBACK procedure, passing the transient program number, and enters svc_run to wait for the callback.

  -  The server saves the program number it receives and, when the alarm signal arrives, uses callrpc to call procedure 1 of that transient program, which invokes the client's callback routine.

In Example 3-13, both the client and the server are on the same machine; otherwise, host name handling would be different.

Example 3-13: Client-Server Usage of gettransient Routine

/*
 * client
 */
#include <stdio.h>
#include <rpc/rpc.h>
#include "example.h"

 
int callback();
 
main()
{
        int tmp_prog;
        char hostname[256];
        SVCXPRT *xprt;
        int stat;

        gethostname(hostname, sizeof(hostname));
        if ((xprt = svcudp_create(RPC_ANYSOCK)) == NULL) {
                fprintf(stderr, "rpc_server: svcudp_create\n");
                exit(1);
        }
        if ((tmp_prog = gettransient(IPPROTO_UDP, 1,
            xprt->xp_port)) == 0) {
                fprintf(stderr, "failed to get transient number\n");
                exit(1);
        }
        fprintf(stderr, "client gets prognum %d\n", tmp_prog);

        /* protocol is 0 - gettransient does registering */
        (void)svc_register(xprt, tmp_prog, 1, callback, 0);
        stat = callrpc(hostname, EXAMPLEPROG, EXAMPLEVERS,
            EXAMPLEPROC_CALLBACK, xdr_int, &tmp_prog, xdr_void, 0);
        if (stat != RPC_SUCCESS) {
                clnt_perrno(stat);
                exit(1);
        }
        svc_run();
        fprintf(stderr, "Error: svc_run shouldn't return\n");
}

callback(rqstp, transp)
        register struct svc_req *rqstp;
        register SVCXPRT *transp;
{
        switch (rqstp->rq_proc) {
        case 0:
                if (!svc_sendreply(transp, xdr_void, 0)) {
                        fprintf(stderr, "err: exampleprog\n");
                        return (1);
                }
                return (0);
        case 1:
                fprintf(stderr, "client got callback\n");
                if (!svc_sendreply(transp, xdr_void, 0)) {
                        fprintf(stderr, "err: exampleprog\n");
                        return (1);
                }
        }
        return (0);
}

/*
 * server
 */
#include <stdio.h>
#include <rpc/rpc.h>
#include <sys/signal.h>
#include "example.h"

char *getnewprog();
char hostname[256];
int docallback();
int pnum = -1;          /* program number for callback routine */

main()
{
        gethostname(hostname, sizeof(hostname));
        registerrpc(EXAMPLEPROG, EXAMPLEVERS, EXAMPLEPROC_CALLBACK,
            getnewprog, xdr_int, xdr_void);
        signal(SIGALRM, docallback);
        alarm(10);
        svc_run();
        fprintf(stderr, "Error: svc_run shouldn't return\n");
}

char *
getnewprog(pnump)
        int *pnump;
{
        pnum = *(int *)pnump;
        return NULL;
}

docallback()
{
        int ans;

        if (pnum == -1) {
                signal(SIGALRM, docallback);
                alarm(10);      /* rearm and try again later */
                return;         /* program number not yet received */
        }
        ans = callrpc(hostname, pnum, 1, 1, xdr_void, 0,
            xdr_void, 0);
        if (ans != RPC_SUCCESS)
                fprintf(stderr, "server: %s\n", clnt_sperrno(ans));
}