============================================
Remote Processor Messaging (rpmsg) Framework
============================================

.. note::

  This document describes the rpmsg bus and how to write rpmsg drivers.
  To learn how to add rpmsg support for new platforms, check out remoteproc.rst
  (which also resides in this directory).

Introduction
============

Modern SoCs typically employ heterogeneous remote processor devices in
asymmetric multiprocessing (AMP) configurations, which may be running
different operating system instances, whether Linux or some flavor of
real-time OS.

OMAP4, for example, has dual Cortex-A9 cores, dual Cortex-M3 cores and a
C64x+ DSP. Typically, the dual Cortex-A9 cluster runs Linux in an SMP
configuration, and each of the other three cores (two M3 cores and a DSP)
runs its own RTOS instance in an AMP configuration.

Typically AMP remote processors employ dedicated DSP codecs and multimedia
hardware accelerators, and therefore are often used to offload CPU-intensive
multimedia tasks from the main application processor.

These remote processors could also be used to control latency-sensitive
sensors, drive random hardware blocks, or just perform background tasks
while the main CPU is idling.

Users of those remote processors can either be userland apps (e.g. multimedia
frameworks talking with remote OMX components) or kernel drivers (controlling
hardware accessible only by the remote processor, reserving kernel-controlled
resources on behalf of the remote processor, etc..).

Rpmsg is a virtio-based messaging bus that allows kernel drivers to communicate
with remote processors available on the system. In turn, drivers could then
expose appropriate user space interfaces, if needed.

When writing a driver that exposes rpmsg communication to userland, please
keep in mind that remote processors might have direct access to the
system's physical memory and other sensitive hardware resources (e.g. on
OMAP4, remote cores and hardware accelerators may have direct access to the
physical memory, gpio banks, dma controllers, i2c bus, gptimers, mailbox
devices, hwspinlocks, etc..). Moreover, those remote processors might be
running an RTOS where every task can access the entire memory and all
devices exposed to the processor. To minimize the risk of rogue (or buggy)
userland code exploiting remote bugs, and thereby taking over the system,
it is often desirable to limit userland to the specific rpmsg channels
(see definition below) it may send messages on and, if possible, to
minimize how much control it has over the content of those messages.

Every rpmsg device is a communication channel with a remote processor (thus
rpmsg devices are called channels). Channels are identified by a textual name
and have a local ("source") rpmsg address, and remote ("destination") rpmsg
address.
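
These three identifiers are what the rpmsg core uses to describe a channel.
As a point of reference, here is a minimal sketch of struct rpmsg_channel_info
(as found in include/linux/rpmsg.h; exact field types may vary between kernel
versions)::

  struct rpmsg_channel_info {
	char name[RPMSG_NAME_SIZE];	/* textual channel name */
	u32 src;			/* local ("source") rpmsg address */
	u32 dst;			/* remote ("destination") rpmsg address */
  };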

When a driver starts listening on a channel, its rx callback is bound to
a unique local rpmsg address (a 32-bit integer). This way, when inbound messages
arrive, the rpmsg core dispatches them to the appropriate driver according
to their destination address (this is done by invoking the driver's rx handler
with the payload of the inbound message).


User API
========

::

  int rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);

sends a message across to the remote processor from the given endpoint.
The caller should specify the endpoint, the data it wants to send,
and its length (in bytes). The message will be sent on the specified
endpoint's channel, i.e. its source and destination address fields will be
set, respectively, to the endpoint's src address and its parent channel's
dst address.

In case there are no TX buffers available, the function will block until
one becomes available (i.e. until the remote processor consumes
a tx buffer and puts it back on virtio's used descriptor ring),
or a timeout of 15 seconds elapses. When the latter happens,
-ERESTARTSYS is returned.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.
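
As an illustration only, a blocking send from a driver that already holds
its rpmsg device pointer could look roughly like this (my_driver_say_hello
is a made-up helper)::

  static int my_driver_say_hello(struct rpmsg_device *rpdev)
  {
	char msg[] = "hello!";
	int ret;

	/* may block for up to 15 seconds waiting for a free TX buffer */
	ret = rpmsg_send(rpdev->ept, msg, strlen(msg));
	if (ret)
		dev_err(&rpdev->dev, "rpmsg_send failed: %d\n", ret);

	return ret;
  }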

::

  int rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len, u32 dst);

sends a message across to the remote processor from a given endpoint,
to a destination address provided by the caller.

The caller should specify the endpoint, the data it wants to send,
its length (in bytes), and an explicit destination address.

The message will then be sent to the remote processor to which the
endpoint's channel belongs, using the endpoint's src address,
and the user-provided dst address (thus the channel's dst address
will be ignored).

In case there are no TX buffers available, the function will block until
one becomes available (i.e. until the remote processor consumes
a tx buffer and puts it back on virtio's used descriptor ring),
or a timeout of 15 seconds elapses. When the latter happens,
-ERESTARTSYS is returned.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.
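
As an illustration, a driver that has learned the address of another service
on the same remote processor (REMOTE_LOGGER_ADDR below is a made-up value)
could address it explicitly while still transmitting from its own endpoint::

  /* hypothetical address of another service on the same remote processor */
  #define REMOTE_LOGGER_ADDR	153

  static int my_driver_log_remotely(struct rpmsg_device *rpdev,
				    void *buf, int len)
  {
	/* src is taken from rpdev->ept, dst is given explicitly */
	return rpmsg_sendto(rpdev->ept, buf, len, REMOTE_LOGGER_ADDR);
  }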

::

  int rpmsg_send_offchannel(struct rpmsg_endpoint *ept, u32 src, u32 dst,
							void *data, int len);


sends a message across to the remote processor, using the src and dst
addresses provided by the user.

The caller should specify the endpoint, the data it wants to send,
its length (in bytes), and explicit source and destination addresses.
The message will then be sent to the remote processor to which the
endpoint's channel belongs, but the endpoint's src and channel dst
addresses will be ignored (and the user-provided addresses will
be used instead).

In case there are no TX buffers available, the function will block until
one becomes available (i.e. until the remote processor consumes
a tx buffer and puts it back on virtio's used descriptor ring),
or a timeout of 15 seconds elapses. When the latter happens,
-ERESTARTSYS is returned.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.

::

  int rpmsg_trysend(struct rpmsg_endpoint *ept, void *data, int len);

sends a message across to the remote processor from a given endpoint.
The caller should specify the endpoint, the data it wants to send,
and its length (in bytes). The message will be sent on the specified
endpoint's channel, i.e. its source and destination address fields will be
set, respectively, to the endpoint's src address and its parent channel's
dst address.

In case there are no TX buffers available, the function will immediately
return -ENOMEM without waiting until one becomes available.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.
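
Since the call never blocks, the caller has to deal with -ENOMEM itself;
a rough sketch (my_driver_try_send and its drop policy are made up for
illustration)::

  static int my_driver_try_send(struct rpmsg_device *rpdev, void *buf, int len)
  {
	int ret;

	ret = rpmsg_trysend(rpdev->ept, buf, len);
	if (ret == -ENOMEM) {
		/* no TX buffer was free; drop, defer or retry later */
		dev_dbg(&rpdev->dev, "no tx buffer available, dropping\n");
	}

	return ret;
  }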

::

  int rpmsg_trysendto(struct rpmsg_endpoint *ept, void *data, int len, u32 dst);


sends a message across to the remote processor from a given endpoint,
to a destination address provided by the caller.

The caller should specify the endpoint, the data it wants to send,
its length (in bytes), and an explicit destination address.

The message will then be sent to the remote processor to which the
endpoint's channel belongs, using the endpoint's src address, and the
user-provided dst address (thus the channel's dst address will be ignored).

In case there are no TX buffers available, the function will immediately
return -ENOMEM without waiting until one becomes available.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.

::

  int rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept, u32 src, u32 dst,
							void *data, int len);


sends a message across to the remote processor, using source and
destination addresses provided by the user.

The caller should specify the endpoint, the data it wants to send,
its length (in bytes), and explicit source and destination addresses.
The message will then be sent to the remote processor to which the
endpoint's channel belongs, but the endpoint's src and the channel's dst
addresses will be ignored (and the user-provided addresses will be used
instead).

In case there are no TX buffers available, the function will immediately
return -ENOMEM without waiting until one becomes available.

The function can only be called from a process context (for now).
Returns 0 on success and an appropriate error value on failure.

::

  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
					  rpmsg_rx_cb_t cb, void *priv,
					  struct rpmsg_channel_info chinfo);

every rpmsg address in the system is bound to an rx callback (so when
inbound messages arrive, they are dispatched by the rpmsg bus using the
appropriate callback handler) by means of an rpmsg_endpoint struct.

This function allows drivers to create such an endpoint, and by that,
bind a callback, and possibly some private data too, to an rpmsg address
(either one that is known in advance, or one that will be dynamically
assigned for them).

Simple rpmsg drivers need not call rpmsg_create_ept, because an endpoint
is already created for them when they are probed by the rpmsg bus
(using the rx callback they provided when they registered with the rpmsg bus).

So things should just work for simple drivers: they already have an
endpoint, their rx callback is bound to their rpmsg address, and when
relevant inbound messages arrive (i.e. messages whose dst address
equals the src address of their rpmsg channel), the driver's handler
is invoked to process them.

That said, more complicated drivers might need to allocate
additional rpmsg addresses and bind them to different rx callbacks.
To accomplish that, those drivers need to call this function.
Drivers should provide their channel (so the new endpoint binds
to the same remote processor their channel belongs to), an rx callback
function, optional private data (which is handed back when the
rx callback is invoked), and the address they want to bind to the
callback. If that address is RPMSG_ADDR_ANY, rpmsg_create_ept will
dynamically assign an available rpmsg address (drivers should have a
very good reason not to use RPMSG_ADDR_ANY here).

Returns a pointer to the endpoint on success, or NULL on error.
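
A rough sketch of a driver that binds a second rx callback to a dynamically
assigned address from its probe() function (the my_* names are made up and
error handling is abbreviated)::

  static int my_ctrl_cb(struct rpmsg_device *rpdev, void *data, int len,
						void *priv, u32 src)
  {
	/* handle messages arriving on the extra address */
	return 0;
  }

  static int my_driver_probe(struct rpmsg_device *rpdev)
  {
	struct rpmsg_channel_info chinfo = {
		.src = RPMSG_ADDR_ANY,	/* let the core pick a free address */
		.dst = RPMSG_ADDR_ANY,
	};
	struct rpmsg_endpoint *ctrl_ept;

	strscpy(chinfo.name, rpdev->id.name, sizeof(chinfo.name));

	ctrl_ept = rpmsg_create_ept(rpdev, my_ctrl_cb, NULL, chinfo);
	if (!ctrl_ept)
		return -ENOMEM;

	/* stash the endpoint and rpmsg_destroy_ept() it in remove() */
	dev_set_drvdata(&rpdev->dev, ctrl_ept);

	return 0;
  }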

::

  void rpmsg_destroy_ept(struct rpmsg_endpoint *ept);


destroys an existing rpmsg endpoint. The caller should provide a pointer
to an rpmsg endpoint that was previously created with rpmsg_create_ept().

::

  int register_rpmsg_driver(struct rpmsg_driver *rpdrv);


registers an rpmsg driver with the rpmsg bus. The caller should provide
a pointer to an rpmsg_driver struct, which contains the driver's
->probe() and ->remove() functions, an rx callback, and an id_table
specifying the names of the channels this driver wants to be
probed with.

::

  void unregister_rpmsg_driver(struct rpmsg_driver *rpdrv);


unregisters an rpmsg driver from the rpmsg bus. The caller should provide
a pointer to a previously-registered rpmsg_driver struct.
Returns 0 on success, and an appropriate error value on failure.


Typical usage
=============

The following is a simple rpmsg driver that sends a "hello!" message
on probe(), and dumps the content of every incoming message to the
console.

::

  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <linux/rpmsg.h>

  static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len,
						void *priv, u32 src)
  {
	print_hex_dump(KERN_INFO, "incoming message:", DUMP_PREFIX_NONE,
						16, 1, data, len, true);

	return 0;
  }

  static int rpmsg_sample_probe(struct rpmsg_device *rpdev)
  {
	int err;

	dev_info(&rpdev->dev, "chnl: 0x%x -> 0x%x\n", rpdev->src, rpdev->dst);

	/* send a message on our channel */
	err = rpmsg_send(rpdev->ept, "hello!", 6);
	if (err) {
		pr_err("rpmsg_send failed: %d\n", err);
		return err;
	}

	return 0;
  }

  static void rpmsg_sample_remove(struct rpmsg_device *rpdev)
  {
	dev_info(&rpdev->dev, "rpmsg sample client driver is removed\n");
  }

  static struct rpmsg_device_id rpmsg_driver_sample_id_table[] = {
	{ .name	= "rpmsg-client-sample" },
	{ },
  };
  MODULE_DEVICE_TABLE(rpmsg, rpmsg_driver_sample_id_table);

  static struct rpmsg_driver rpmsg_sample_client = {
	.drv.name	= KBUILD_MODNAME,
	.id_table	= rpmsg_driver_sample_id_table,
	.probe		= rpmsg_sample_probe,
	.callback	= rpmsg_sample_cb,
	.remove		= rpmsg_sample_remove,
  };
  module_rpmsg_driver(rpmsg_sample_client);

.. note::

   a similar sample which can be built and loaded can be found
   in samples/rpmsg/.

Allocations of rpmsg channels
=============================

At this point we only support dynamic allocations of rpmsg channels.

This is possible only with remote processors that have the VIRTIO_RPMSG_F_NS
virtio device feature set. This feature bit means that the remote
processor supports dynamic name service announcement messages.

When this feature is enabled, creation of rpmsg devices (i.e. channels)
is completely dynamic: the remote processor announces the existence of a
remote rpmsg service by sending a name service message (which contains
the name and rpmsg addr of the remote service, see struct rpmsg_ns_msg).
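
For reference, the announcement itself is a small, fixed-layout message;
struct rpmsg_ns_msg looks roughly like the sketch below (exact field types
and header location vary between kernel versions)::

  struct rpmsg_ns_msg {
	char name[RPMSG_NAME_SIZE];	/* name of the remote service */
	u32 addr;			/* rpmsg address of the remote service */
	u32 flags;			/* RPMSG_NS_CREATE or RPMSG_NS_DESTROY */
  } __packed;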

This message is then handled by the rpmsg bus, which in turn dynamically
creates and registers an rpmsg channel (which represents the remote service).
If/when a relevant rpmsg driver is registered, it will be immediately probed
by the bus, and can then start sending messages to the remote service.

The plan is also to add static creation of rpmsg channels via the virtio
config space, but it's not implemented yet.