Discussion:
[RFC][PATCH v6 00/23] virtagent: host/guest RPC communication agent
(too old to reply)
Michael Roth
2011-01-17 13:14:54 UTC
Permalink
These patches apply to master (1-14-2011), and can also be obtained from:
git://repo.or.cz/qemu/mdroth.git virtagent_v6

CHANGES IN V6:

- Added a sentinel value to reliably detect the start of an "http" hdr. Used to skip past partially sent http content from previous "sessions"
- Added http hdr tag (currently hardcoded for testing, will switch to uuid) to filter out valid-but-unexpected content in channel from previous "sessions"
- Added timeout mechanism to avoid hanging monitor when agent isn't running
- Added timed back-off on read's from a virtio-serial that result in ret=0 to avoid spinning if host isn't connected.
- Added daemonize flags to qemu-va
- Added sane defaults for channel type and virtio-serial port path
- Various bug fixes for state machine/job handling logic

CHANGES IN V5:

- Dependency on virtproxy dropped, virtagent now handles transport and multiplexing of bi-directional RPCs internally
- Removed duplification of qemu_set_fd_handler()-centered i/o code. Support for interacting with objects that use qemu_set_fd_handler() now available to tools via qemu-tools.c and a set of generalized utility functions
- Fixed memory leaks in client/monitor functions
- Various cleanups

CHANGES IN V4:

- Added guest agent capabilities negotiation
- Added RPC/monitor command to invoke guest shutdown/reboot/powerdown
- Added RPC/monitor command to the guest agent
- Added guest startup notification ("hello")
- Added syslog()'ing of guest agent RPCs
- Various cleanups

CHANGES IN V3:

- Integrated virtagent server into virtproxy chardev. Usage examples below.
- Consolidated RPC server/client setup into a pair of init routines
- Fixed buffer overflow in agent_viewfile() and various memory leaks

CHANGES IN V2:

- All RPC communication is now done using asynchronous/non-blocking read/write handlers
- Previously fork()'d RPC server loop is now integrated into qemu-vp/virtproxy i/o loop
- Cleanups/suggestions from previous RFC

OVERVIEW:

There are a wide range of use cases motivating the need for a guest agent of some sort to extend the functionality/usability/control offered by QEMU. Some examples include graceful guest shutdown/reboot and notifications thereof, copy/paste syncing between host/guest, guest statistics gathering, file access, etc.

Ideally these would all be served by a single, easilly extensible agent that can be deployed in a wide range of guests. Virtagent is an XMLRPC server, integrated into QEMU and a simple guest daemon, aimed at providing this type of functionality.

DESIGN:

There are actually 2 RPC servers:

1) a server in the guest agent which handles RPC requests from QEMU
2) a server in the host, integrated into the virtagent chardev, to handle RPC requests sent by the guest agent (mainly for handling asynchronous events reported by the agent).

Communication is done via RPCs (XMLRPC/HTTP between host and guest), albeit with a non-standard implementation that allows for multiplexing server/client RPC over a single virtio/isa serial channel.

EXAMPLE USAGE:

- Configure guest agent to talk to host via virtio-serial
# start guest with virtio-serial/virtagent. for example (RHEL6):
qemu \
-chardev virtagent,id=test0 \
-device virtio-serial \
-device virtserialport,chardev=test0,name=virtagent0 \
-monitor stdio
...
# in the guest:
./qemu-va -c virtio-serial -p /dev/virtio-ports/virtagent0
...
# monitor commands
(qemu) agent_viewdmesg
[139311.710326] wlan0: deauthenticating from 00:30:bd:f7:12:d5 by local choice (reason=3)
[139323.469857] wlan0: deauthenticating from 00:21:29:cd:41:ee by local choice (reason=3)
...
[257683.375646] wlan0: authenticated
[257683.375684] wlan0: associate with AP 00:30:bd:f7:12:d5 (try 1)
[257683.377932] wlan0: RX AssocResp from 00:30:bd:f7:12:d5 (capab=0x411 status=0 aid=4)
[257683.377940] wlan0: associated

(qemu) agent_viewfile /proc/meminfo
MemTotal: 3985488 kB
MemFree: 400524 kB
Buffers: 220556 kB
Cached: 2073160 kB
SwapCached: 0 kB
...
Hugepagesize: 2048 kB
DirectMap4k: 8896 kB
DirectMap2M: 4110336 kB
(qemu) agent_shutdown powerdown
(qemu)

KNOWN ISSUES/PLANS:
- Implement RPC for guest script/command execution
- Use UUIDs for generating unique tags for each http request/response
- Scan for sentinel value while reading http content as well to immediately detect truncated requests/responses and avoid accidentally consuming new ones
- switch to standard logging/trace mechanisms
- outstanding requests should be reset if we get a hello notification (this implies guest/guest agent restart)
- the client socket that qemu connects to send RPCs is a hardcoded filepath. This is unacceptable as the socket is channel/process specific and things will break when multiple guests are started.

[RFC][PATCH v6 01/23] Move code related to fd handlers into utility functions
[RFC][PATCH v6 02/23] Add qemu_set_fd_handler() wrappers to qemu-tools.c
[RFC][PATCH v6 03/23] Make qemu timers available for tools
[RFC][PATCH v6 04/23] virtagent: common code for managing client/server rpc jobs
[RFC][PATCH v6 05/23] virtagent: transport definitions read/send callback functions
[RFC][PATCH v6 06/23] virtagent: base client definitions
[RFC][PATCH v6 07/23] virtagent: base server definitions
[RFC][PATCH v6 08/23] virtagent: add va.getfile RPC
[RFC][PATCH v6 09/23] virtagent: add agent_viewfile qmp/hmp command
[RFC][PATCH v6 10/23] virtagent: add va.getdmesg RPC
[RFC][PATCH v6 11/23] virtagent: add agent_viewdmesg qmp/hmp commands
[RFC][PATCH v6 12/23] virtagent: add va.shutdown RPC
[RFC][PATCH v6 13/23] virtagent: add agent_shutdown qmp/hmp commands
[RFC][PATCH v6 14/23] virtagent: add va.ping RPC
[RFC][PATCH v6 15/23] virtagent: add agent_ping qmp/hmp commands
[RFC][PATCH v6 16/23] virtagent: add agent_capabilities qmp/hmp commands
[RFC][PATCH v6 17/23] virtagent: add client capabilities init function
[RFC][PATCH v6 18/23] virtagent: add va.hello RPC
[RFC][PATCH v6 19/23] virtagent: add "hello" notification function for guest agent
[RFC][PATCH v6 20/23] virtagent: add va.capabilities RPC
[RFC][PATCH v6 21/23] virtagent: add virtagent guest daemon
[RFC][PATCH v6 22/23] virtagent: integrate virtagent server/client via chardev
[RFC][PATCH v6 23/23] virtagent: various bits to build QEMU with virtagent

Makefile | 4 +-
Makefile.objs | 2 +-
Makefile.target | 2 +-
configure | 32 ++
cpus.c | 83 +----
hmp-commands.hx | 80 ++++
monitor.c | 1 +
qemu-char.c | 44 +++
qemu-char.h | 4 +
qemu-ioh.c | 208 +++++++++++
qemu-ioh.h | 43 +++
qemu-tool.c | 115 ++++++-
qemu-tool.h | 26 ++
qemu-va.c | 238 ++++++++++++
qerror.c | 8 +
qerror.h | 6 +
qmp-commands.hx | 164 +++++++++
virtagent-common.c | 1028 ++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent-common.h | 73 ++++
virtagent-server.c | 350 ++++++++++++++++++
virtagent-server.h | 34 ++
virtagent.c | 642 ++++++++++++++++++++++++++++++++
virtagent.h | 50 +++
vl.c | 86 +----
25 files changed, 3177 insertions(+), 148 deletions(-)
Michael Roth
2011-01-17 13:14:56 UTC
Permalink
This adds state information for managing fd handlers to qemu-tools.c so
that tools that build against it can implement an I/O loop for
interacting with objects that use qemu_set_fd_handler()

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
qemu-tool.c | 25 ++++++++++++++++++++++++-
1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/qemu-tool.c b/qemu-tool.c
index 392e1c9..78d3532 100644
--- a/qemu-tool.c
+++ b/qemu-tool.c
@@ -22,6 +22,8 @@
QEMUClock *rt_clock;

FILE *logfile;
+static QLIST_HEAD(, IOHandlerRecord) io_handlers =
+ QLIST_HEAD_INITIALIZER(io_handlers);

struct QEMUBH
{
@@ -103,11 +105,32 @@ void qemu_bh_delete(QEMUBH *bh)
qemu_free(bh);
}

+/* definitions to implement i/o loop for fd handlers in tools */
int qemu_set_fd_handler2(int fd,
IOCanReadHandler *fd_read_poll,
IOHandler *fd_read,
IOHandler *fd_write,
void *opaque)
{
- return 0;
+ return qemu_set_fd_handler3(&io_handlers, fd, fd_read_poll, fd_read,
+ fd_write, opaque);
+}
+
+int qemu_set_fd_handler(int fd,
+ IOHandler *fd_read,
+ IOHandler *fd_write,
+ void *opaque)
+{
+ return qemu_set_fd_handler2(fd, NULL, fd_read, fd_write, opaque);
+}
+
+void qemu_get_fdset(int *nfds, fd_set *rfds, fd_set *wfds, fd_set *xfds)
+{
+ return qemu_get_fdset2(&io_handlers, nfds, rfds, wfds, xfds);
+}
+
+void qemu_process_fd_handlers(const fd_set *rfds, const fd_set *wfds,
+ const fd_set *xfds)
+{
+ return qemu_process_fd_handlers2(&io_handlers, rfds, wfds, xfds);
}
--
1.7.0.4
Michael Roth
2011-01-17 13:14:55 UTC
Permalink
This allows us to implement an i/o loop outside of vl.c that can
interact with objects that use qemu_set_fd_handler()

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
Makefile.objs | 2 +-
qemu-char.h | 4 ++
qemu-ioh.c | 115 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
qemu-ioh.h | 34 +++++++++++++++++
vl.c | 86 ++++++++----------------------------------
5 files changed, 170 insertions(+), 71 deletions(-)
create mode 100644 qemu-ioh.c
create mode 100644 qemu-ioh.h

diff --git a/Makefile.objs b/Makefile.objs
index c3e52c5..0078921 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -14,7 +14,7 @@ oslib-obj-$(CONFIG_POSIX) += oslib-posix.o
# block-obj-y is code used by both qemu system emulation and qemu-img

block-obj-y = cutils.o cache-utils.o qemu-malloc.o qemu-option.o module.o
-block-obj-y += nbd.o block.o aio.o aes.o qemu-config.o
+block-obj-y += nbd.o block.o aio.o aes.o qemu-config.o qemu-ioh.o
block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o

diff --git a/qemu-char.h b/qemu-char.h
index e6ee6c4..7d0794a 100644
--- a/qemu-char.h
+++ b/qemu-char.h
@@ -7,6 +7,7 @@
#include "qemu-config.h"
#include "qobject.h"
#include "qstring.h"
+#include "qemu-ioh.h"

/* character device */

@@ -118,4 +119,7 @@ int qemu_set_fd_handler(int fd,
IOHandler *fd_read,
IOHandler *fd_write,
void *opaque);
+void qemu_get_fdset(int *nfds, fd_set *rfds, fd_set *wfds, fd_set *xfds);
+void qemu_process_fd_handlers(const fd_set *rfds, const fd_set *wfds,
+ const fd_set *xfds);
#endif
diff --git a/qemu-ioh.c b/qemu-ioh.c
new file mode 100644
index 0000000..cc71470
--- /dev/null
+++ b/qemu-ioh.c
@@ -0,0 +1,115 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+#include "qemu-ioh.h"
+#include "qlist.h"
+
+/* XXX: fd_read_poll should be suppressed, but an API change is
+ necessary in the character devices to suppress fd_can_read(). */
+int qemu_set_fd_handler3(void *ioh_record_list,
+ int fd,
+ IOCanReadHandler *fd_read_poll,
+ IOHandler *fd_read,
+ IOHandler *fd_write,
+ void *opaque)
+{
+ QLIST_HEAD(, IOHandlerRecord) *io_handlers_ptr = ioh_record_list;
+ IOHandlerRecord *ioh;
+
+ if (!fd_read && !fd_write) {
+ QLIST_FOREACH(ioh, io_handlers_ptr, next) {
+ if (ioh->fd == fd) {
+ ioh->deleted = 1;
+ break;
+ }
+ }
+ } else {
+ QLIST_FOREACH(ioh, io_handlers_ptr, next) {
+ if (ioh->fd == fd)
+ goto found;
+ }
+ ioh = qemu_mallocz(sizeof(IOHandlerRecord));
+ QLIST_INSERT_HEAD(io_handlers_ptr, ioh, next);
+ found:
+ ioh->fd = fd;
+ ioh->fd_read_poll = fd_read_poll;
+ ioh->fd_read = fd_read;
+ ioh->fd_write = fd_write;
+ ioh->opaque = opaque;
+ ioh->deleted = 0;
+ }
+ return 0;
+}
+
+/* add entries from ioh record list to fd sets. nfds and fd sets
+ * should be cleared/reset by caller if desired. set a particular
+ * fdset to NULL to ignore fd events of that type
+ */
+void qemu_get_fdset2(void *ioh_record_list, int *nfds, fd_set *rfds,
+ fd_set *wfds, fd_set *xfds)
+{
+ QLIST_HEAD(, IOHandlerRecord) *io_handlers = ioh_record_list;
+ IOHandlerRecord *ioh;
+
+ QLIST_FOREACH(ioh, io_handlers, next) {
+ if (ioh->deleted)
+ continue;
+ if ((rfds != NULL && ioh->fd_read) &&
+ (!ioh->fd_read_poll ||
+ ioh->fd_read_poll(ioh->opaque) != 0)) {
+ FD_SET(ioh->fd, rfds);
+ if (ioh->fd > *nfds)
+ *nfds = ioh->fd;
+ }
+ if (wfds != NULL && ioh->fd_write) {
+ FD_SET(ioh->fd, wfds);
+ if (ioh->fd > *nfds)
+ *nfds = ioh->fd;
+ }
+ }
+}
+
+/* execute registered handlers for r/w events in the provided fdsets. unset
+ * handlers are cleaned up here as well
+ */
+void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
+ const fd_set *wfds, const fd_set *xfds)
+{
+ QLIST_HEAD(, IOHandlerRecord) *io_handlers = ioh_record_list;
+ IOHandlerRecord *ioh, *pioh;
+
+ QLIST_FOREACH_SAFE(ioh, io_handlers, next, pioh) {
+ if (!ioh->deleted && ioh->fd_read && FD_ISSET(ioh->fd, rfds)) {
+ ioh->fd_read(ioh->opaque);
+ }
+ if (!ioh->deleted && ioh->fd_write && FD_ISSET(ioh->fd, wfds)) {
+ ioh->fd_write(ioh->opaque);
+ }
+
+ /* Do this last in case read/write handlers marked it for deletion */
+ if (ioh->deleted) {
+ QLIST_REMOVE(ioh, next);
+ qemu_free(ioh);
+ }
+ }
+}
diff --git a/qemu-ioh.h b/qemu-ioh.h
new file mode 100644
index 0000000..7c6e833
--- /dev/null
+++ b/qemu-ioh.h
@@ -0,0 +1,34 @@
+#ifndef QEMU_IOH_H
+#define QEMU_IOH_H
+
+#include "qemu-common.h"
+#include "qlist.h"
+
+/* common i/o loop definitions */
+
+typedef struct IOHandlerRecord {
+ int fd;
+ IOCanReadHandler *fd_read_poll;
+ IOHandler *fd_read;
+ IOHandler *fd_write;
+ int deleted;
+ void *opaque;
+ /* temporary data */
+ struct pollfd *ufd;
+ QLIST_ENTRY(IOHandlerRecord) next;
+} IOHandlerRecord;
+
+/* XXX: fd_read_poll should be suppressed, but an API change is
+ necessary in the character devices to suppress fd_can_read(). */
+int qemu_set_fd_handler3(void *io_handlers_ptr,
+ int fd,
+ IOCanReadHandler *fd_read_poll,
+ IOHandler *fd_read,
+ IOHandler *fd_write,
+ void *opaque);
+void qemu_get_fdset2(void *ioh_record_list, int *nfds, fd_set *rfds,
+ fd_set *wfds, fd_set *xfds);
+void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
+ const fd_set *wfds, const fd_set *xfds);
+
+#endif
diff --git a/vl.c b/vl.c
index 0292184..d3bdfec 100644
--- a/vl.c
+++ b/vl.c
@@ -148,6 +148,7 @@ int main(int argc, char **argv)
#include "qemu-config.h"
#include "qemu-objects.h"
#include "qemu-options.h"
+#include "qemu-ioh.h"
#ifdef CONFIG_VIRTFS
#include "fsdev/qemu-fsdev.h"
#endif
@@ -1007,18 +1008,6 @@ void pcmcia_info(Monitor *mon)
/***********************************************************/
/* I/O handling */

-typedef struct IOHandlerRecord {
- int fd;
- IOCanReadHandler *fd_read_poll;
- IOHandler *fd_read;
- IOHandler *fd_write;
- int deleted;
- void *opaque;
- /* temporary data */
- struct pollfd *ufd;
- QLIST_ENTRY(IOHandlerRecord) next;
-} IOHandlerRecord;
-
static QLIST_HEAD(, IOHandlerRecord) io_handlers =
QLIST_HEAD_INITIALIZER(io_handlers);

@@ -1031,31 +1020,8 @@ int qemu_set_fd_handler2(int fd,
IOHandler *fd_write,
void *opaque)
{
- IOHandlerRecord *ioh;
-
- if (!fd_read && !fd_write) {
- QLIST_FOREACH(ioh, &io_handlers, next) {
- if (ioh->fd == fd) {
- ioh->deleted = 1;
- break;
- }
- }
- } else {
- QLIST_FOREACH(ioh, &io_handlers, next) {
- if (ioh->fd == fd)
- goto found;
- }
- ioh = qemu_mallocz(sizeof(IOHandlerRecord));
- QLIST_INSERT_HEAD(&io_handlers, ioh, next);
- found:
- ioh->fd = fd;
- ioh->fd_read_poll = fd_read_poll;
- ioh->fd_read = fd_read;
- ioh->fd_write = fd_write;
- ioh->opaque = opaque;
- ioh->deleted = 0;
- }
- return 0;
+ return qemu_set_fd_handler3(&io_handlers, fd, fd_read_poll, fd_read,
+ fd_write, opaque);
}

int qemu_set_fd_handler(int fd,
@@ -1066,6 +1032,17 @@ int qemu_set_fd_handler(int fd,
return qemu_set_fd_handler2(fd, NULL, fd_read, fd_write, opaque);
}

+void qemu_get_fdset(int *nfds, fd_set *rfds, fd_set *wfds, fd_set *xfds)
+{
+ return qemu_get_fdset2(&io_handlers, nfds, rfds, wfds, xfds);
+}
+
+void qemu_process_fd_handlers(const fd_set *rfds, const fd_set *wfds,
+ const fd_set *xfds)
+{
+ return qemu_process_fd_handlers2(&io_handlers, rfds, wfds, xfds);
+}
+
/***********************************************************/
/* machine registration */

@@ -1295,7 +1272,6 @@ void qemu_system_powerdown_request(void)

void main_loop_wait(int nonblocking)
{
- IOHandlerRecord *ioh;
fd_set rfds, wfds, xfds;
int ret, nfds;
struct timeval tv;
@@ -1316,22 +1292,7 @@ void main_loop_wait(int nonblocking)
FD_ZERO(&rfds);
FD_ZERO(&wfds);
FD_ZERO(&xfds);
- QLIST_FOREACH(ioh, &io_handlers, next) {
- if (ioh->deleted)
- continue;
- if (ioh->fd_read &&
- (!ioh->fd_read_poll ||
- ioh->fd_read_poll(ioh->opaque) != 0)) {
- FD_SET(ioh->fd, &rfds);
- if (ioh->fd > nfds)
- nfds = ioh->fd;
- }
- if (ioh->fd_write) {
- FD_SET(ioh->fd, &wfds);
- if (ioh->fd > nfds)
- nfds = ioh->fd;
- }
- }
+ qemu_get_fdset(&nfds, &rfds, &wfds, &xfds);

tv.tv_sec = timeout / 1000;
tv.tv_usec = (timeout % 1000) * 1000;
@@ -1342,22 +1303,7 @@ void main_loop_wait(int nonblocking)
ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
qemu_mutex_lock_iothread();
if (ret > 0) {
- IOHandlerRecord *pioh;
-
- QLIST_FOREACH_SAFE(ioh, &io_handlers, next, pioh) {
- if (!ioh->deleted && ioh->fd_read && FD_ISSET(ioh->fd, &rfds)) {
- ioh->fd_read(ioh->opaque);
- }
- if (!ioh->deleted && ioh->fd_write && FD_ISSET(ioh->fd, &wfds)) {
- ioh->fd_write(ioh->opaque);
- }
-
- /* Do this last in case read/write handlers marked it for deletion */
- if (ioh->deleted) {
- QLIST_REMOVE(ioh, next);
- qemu_free(ioh);
- }
- }
+ qemu_process_fd_handlers(&rfds, &wfds, &xfds);
}

slirp_select_poll(&rfds, &wfds, &xfds, (ret < 0));
--
1.7.0.4
Gerd Hoffmann
2011-01-17 13:56:57 UTC
Permalink
Hi,
Post by Michael Roth
+/* XXX: fd_read_poll should be suppressed, but an API change is
+ necessary in the character devices to suppress fd_can_read(). */
FYI: Amit (Cc'ed) looks at this api issue too for other reasons.

cheers,
Gerd
Michael Roth
2011-01-17 13:15:05 UTC
Permalink
Add commands to view guest dmesg output. Currently it is a 16K buffer.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
hmp-commands.hx | 16 +++++++++
qmp-commands.hx | 35 +++++++++++++++++++++
virtagent.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 3 ++
4 files changed, 146 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index a3e5e27..f60c64f 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1344,6 +1344,22 @@ STEXI
Echo the file identified by @var{filepath} on the guest filesystem
ETEXI

+ {
+ .name = "agent_viewdmesg",
+ .args_type = "",
+ .params = "",
+ .help = "View guest dmesg output",
+ .user_print = do_agent_viewdmesg_print,
+ .mhandler.cmd_async = do_agent_viewdmesg,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_viewdmesg
+@findex agent_viewdmesg
+View guest dmesg output
+ETEXI
+
STEXI
@end table
ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 9dca7b9..0db38bd 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -828,6 +828,41 @@ Example:
EQMP

{
+ .name = "agent_viewdmesg",
+ .args_type = "",
+ .params = "",
+ .help = "View guest dmesg output",
+ .user_print = do_agent_viewdmesg_print,
+ .mhandler.cmd_async = do_agent_viewdmesg,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_viewdmesg
+@findex agent_viewdmesg
+View guest dmesg output
+ETEXI
+SQMP
+agent_viewdmesg
+--------
+
+View guest dmesg output
+
+Arguments:
+
+(none)
+
+Example:
+
+-> { "execute": "agent_viewdmesg" }
+<- { "return": {
+ "contents": "[353487.942215] usb 1-4: USB disconnect, address 9\n..."
+ }
+ }
+
+EQMP
+
+ {
.name = "qmp_capabilities",
.args_type = "",
.params = "",
diff --git a/virtagent.c b/virtagent.c
index cd5caf1..0976afe 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -252,3 +252,95 @@ int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
xmlrpc_DECREF(params);
return ret;
}
+
+void do_agent_viewdmesg_print(Monitor *mon, const QObject *data)
+{
+ QDict *qdict;
+ const char *contents = NULL;
+ int i;
+
+ qdict = qobject_to_qdict(data);
+ if (!qdict_haskey(qdict, "contents")) {
+ goto out;
+ }
+
+ contents = qdict_get_str(qdict, "contents");
+ if (contents != NULL) {
+ /* monitor_printf truncates so do it in chunks. also, file_contents
+ * may not be null-termed at proper location so explicitly calc
+ * last chunk sizes */
+ for (i = 0; i < strlen(contents); i += 1024) {
+ monitor_printf(mon, "%.1024s", contents + i);
+ }
+ }
+
+out:
+ monitor_printf(mon, "\n");
+}
+
+static void do_agent_viewdmesg_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ char *dmesg = NULL;
+ int ret;
+ xmlrpc_env env;
+ QDict *qdict = qdict_new();
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ goto out_no_resp;
+ }
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ ret = -1;
+ goto out_no_resp;
+ }
+
+ xmlrpc_parse_value(&env, resp, "s", &dmesg);
+ if (va_rpc_has_error(&env)) {
+ ret = -1;
+ goto out;
+ }
+
+ if (dmesg != NULL) {
+ qdict_put(qdict, "contents", qstring_from_str(dmesg));
+ }
+
+out:
+ xmlrpc_DECREF(resp);
+out_no_resp:
+ if (mon_cb) {
+ mon_cb(mon_data, QOBJECT(qdict));
+ }
+}
+
+/*
+ * do_agent_viewdmesg(): View guest dmesg output
+ */
+int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ int ret;
+
+ xmlrpc_env_init(&env);
+
+ params = xmlrpc_build_value(&env, "()");
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "va.getdmesg", params, do_agent_viewdmesg_cb, cb,
+ opaque);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index 1bd7595..b67abc3 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -33,5 +33,8 @@ int va_client_close(void);
void do_agent_viewfile_print(Monitor *mon, const QObject *qobject);
int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
+void do_agent_viewdmesg_print(Monitor *mon, const QObject *qobject);
+int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:14:57 UTC
Permalink
To be able to use qemu_mod_timer() and friends to register timeout
events for virtagent's qemu-va tool, we need to do the following:

Move several blocks of code out of cpus.c that handle initialization
of qemu's io_thread_fd and working with it via
qemu_notify_event()/qemu_event_read()/etc, and make them accessible
as backend functions to both the emulator code and qemu-tool.c via
wrapper functions within cpus.c and qemu-tool.c, respectively. These
have been added to qemu-ioh.c, where similar treatment was given to
qemu_set_fd_handler() and friends.

Some of these wrapper functions lack declarations when being
built into tools, so we add those via qemu-tool.h, which can be included
by a tool to access them. With these changes we can drive timers in a
tool linking it against qemu-timer.o and then implementing something
similar to the main i/o loop in vl.c:

init_clocks();
configure_alarms("dynticks");
if (init_timer_alarm() < 0) {
errx(EXIT_FAILURE, "could not initialize alarm timer");
}

while (running) {
//do work
qemu_run_all_timers();
}

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
cpus.c | 83 +++++++---------------------------------------------
qemu-ioh.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
qemu-ioh.h | 9 ++++++
qemu-tool.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
qemu-tool.h | 26 ++++++++++++++++
5 files changed, 229 insertions(+), 74 deletions(-)
create mode 100644 qemu-tool.h

diff --git a/cpus.c b/cpus.c
index 0309189..2f1adf6 100644
--- a/cpus.c
+++ b/cpus.c
@@ -163,90 +163,24 @@ static int io_thread_fd = -1;

static void qemu_event_increment(void)
{
- /* Write 8 bytes to be compatible with eventfd. */
- static const uint64_t val = 1;
- ssize_t ret;
-
- if (io_thread_fd == -1)
- return;
-
- do {
- ret = write(io_thread_fd, &val, sizeof(val));
- } while (ret < 0 && errno == EINTR);
-
- /* EAGAIN is fine, a read must be pending. */
- if (ret < 0 && errno != EAGAIN) {
- fprintf(stderr, "qemu_event_increment: write() filed: %s\n",
- strerror(errno));
- exit (1);
- }
-}
-
-static void qemu_event_read(void *opaque)
-{
- int fd = (unsigned long)opaque;
- ssize_t len;
- char buffer[512];
-
- /* Drain the notify pipe. For eventfd, only 8 bytes will be read. */
- do {
- len = read(fd, buffer, sizeof(buffer));
- } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
+ return iothread_event_increment(&io_thread_fd);
}

static int qemu_event_init(void)
{
- int err;
- int fds[2];
-
- err = qemu_eventfd(fds);
- if (err == -1)
- return -errno;
-
- err = fcntl_setfl(fds[0], O_NONBLOCK);
- if (err < 0)
- goto fail;
-
- err = fcntl_setfl(fds[1], O_NONBLOCK);
- if (err < 0)
- goto fail;
-
- qemu_set_fd_handler2(fds[0], NULL, qemu_event_read, NULL,
- (void *)(unsigned long)fds[0]);
-
- io_thread_fd = fds[1];
- return 0;
-
-fail:
- close(fds[0]);
- close(fds[1]);
- return err;
+ return iothread_event_init(&io_thread_fd);
}
#else
HANDLE qemu_event_handle;

-static void dummy_event_handler(void *opaque)
-{
-}
-
static int qemu_event_init(void)
{
- qemu_event_handle = CreateEvent(NULL, FALSE, FALSE, NULL);
- if (!qemu_event_handle) {
- fprintf(stderr, "Failed CreateEvent: %ld\n", GetLastError());
- return -1;
- }
- qemu_add_wait_object(qemu_event_handle, dummy_event_handler, NULL);
- return 0;
+ return win32_event_init(&qemu_event_handle);
}

static void qemu_event_increment(void)
{
- if (!SetEvent(qemu_event_handle)) {
- fprintf(stderr, "qemu_event_increment: SetEvent failed: %ld\n",
- GetLastError());
- exit (1);
- }
+ win32_event_increment(&qemu_event_handle);
}
#endif

@@ -296,11 +230,10 @@ void qemu_cpu_kick(void *env)
return;
}

-void qemu_notify_event(void)
+static void qemu_stop_all_vcpus(void)
{
CPUState *env = cpu_single_env;

- qemu_event_increment ();
if (env) {
cpu_exit(env);
}
@@ -309,6 +242,12 @@ void qemu_notify_event(void)
}
}

+void qemu_notify_event(void)
+{
+ qemu_event_increment();
+ qemu_stop_all_vcpus();
+}
+
void qemu_mutex_lock_iothread(void) {}
void qemu_mutex_unlock_iothread(void) {}

diff --git a/qemu-ioh.c b/qemu-ioh.c
index cc71470..001e7a2 100644
--- a/qemu-ioh.c
+++ b/qemu-ioh.c
@@ -22,7 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-ioh.h"
+#include "qemu-char.h"
#include "qlist.h"
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif

/* XXX: fd_read_poll should be suppressed, but an API change is
necessary in the character devices to suppress fd_can_read(). */
@@ -113,3 +117,92 @@ void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
}
}
}
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd)
+{
+ /* Write 8 bytes to be compatible with eventfd. */
+ static const uint64_t val = 1;
+ ssize_t ret;
+
+ if (*io_thread_fd == -1)
+ return;
+
+ do {
+ ret = write(*io_thread_fd, &val, sizeof(val));
+ } while (ret < 0 && errno == EINTR);
+
+ /* EAGAIN is fine, a read must be pending. */
+ if (ret < 0 && errno != EAGAIN) {
+ fprintf(stderr, "qemu_event_increment: write() filed: %s\n",
+ strerror(errno));
+ exit (1);
+ }
+}
+
+static void qemu_event_read(void *opaque)
+{
+ int fd = (unsigned long)opaque;
+ ssize_t len;
+ char buffer[512];
+
+ /* Drain the notify pipe. For eventfd, only 8 bytes will be read. */
+ do {
+ len = read(fd, buffer, sizeof(buffer));
+ } while (len == -1 && errno == EINTR);
+}
+
+
+int iothread_event_init(int *io_thread_fd)
+{
+ int err;
+ int fds[2];
+
+ err = qemu_eventfd(fds);
+ if (err == -1)
+ return -errno;
+
+ err = fcntl_setfl(fds[0], O_NONBLOCK);
+ if (err < 0)
+ goto fail;
+
+ err = fcntl_setfl(fds[1], O_NONBLOCK);
+ if (err < 0)
+ goto fail;
+
+ qemu_set_fd_handler2(fds[0], NULL, qemu_event_read, NULL,
+ (void *)(unsigned long)fds[0]);
+
+ *io_thread_fd = fds[1];
+ return 0;
+
+fail:
+ close(fds[0]);
+ close(fds[1]);
+ return err;
+}
+#else
+static void dummy_event_handler(void *opaque)
+{
+}
+
+int win32_event_init(HANDLE *qemu_event_handle)
+{
+ *qemu_event_handle = CreateEvent(NULL, FALSE, FALSE, NULL);
+ if (!qemu_event_handle) {
+ fprintf(stderr, "Failed CreateEvent: %ld\n", GetLastError());
+ return -1;
+ }
+ qemu_add_wait_object(*qemu_event_handle, dummy_event_handler, NULL);
+ return 0;
+}
+
+void win32_event_increment(HANDLE *qemu_event_handle)
+{
+ if (!SetEvent(*qemu_event_handle)) {
+ fprintf(stderr, "qemu_event_increment: SetEvent failed: %ld\n",
+ GetLastError());
+ exit (1);
+ }
+}
+#endif
diff --git a/qemu-ioh.h b/qemu-ioh.h
index 7c6e833..2c714a9 100644
--- a/qemu-ioh.h
+++ b/qemu-ioh.h
@@ -31,4 +31,13 @@ void qemu_get_fdset2(void *ioh_record_list, int *nfds, fd_set *rfds,
void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
const fd_set *wfds, const fd_set *xfds);

+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd);
+int iothread_event_init(int *io_thread_fd);
+#else
+int win32_event_init(HANDLE *qemu_event_handle);
+void win32_event_increment(HANDLE *qemu_event_handle);
+#endif
+
#endif
diff --git a/qemu-tool.c b/qemu-tool.c
index 78d3532..027ea31 100644
--- a/qemu-tool.c
+++ b/qemu-tool.c
@@ -12,6 +12,7 @@
*/

#include "qemu-common.h"
+#include "qemu-tool.h"
#include "monitor.h"
#include "qemu-timer.h"
#include "qemu-log.h"
@@ -19,12 +20,11 @@

#include <sys/time.h>

-QEMUClock *rt_clock;
+QEMUClock *rtc_clock;

FILE *logfile;
static QLIST_HEAD(, IOHandlerRecord) io_handlers =
QLIST_HEAD_INITIALIZER(io_handlers);
-
struct QEMUBH
{
QEMUBHFunc *cb;
@@ -134,3 +134,91 @@ void qemu_process_fd_handlers(const fd_set *rfds, const fd_set *wfds,
{
return qemu_process_fd_handlers2(&io_handlers, rfds, wfds, xfds);
}
+
+#ifndef _WIN32
+static int io_thread_fd = -1;
+
+void qemu_event_increment(void)
+{
+ return iothread_event_increment(&io_thread_fd);
+}
+
+int qemu_event_init(void)
+{
+ return iothread_event_init(&io_thread_fd);
+}
+#else
+HANDLE qemu_event_handle;
+
+int qemu_event_init(void)
+{
+ return win32_event_init(&qemu_event_handle);
+}
+
+void qemu_event_increment(void)
+{
+ win32_event_increment(&qemu_event_handle);
+}
+#endif
+
+void qemu_notify_event(void)
+{
+ qemu_event_increment ();
+}
+
+/*
+ * Creates an eventfd that looks like a pipe and has EFD_CLOEXEC set.
+ */
+int qemu_eventfd(int fds[2])
+{
+#ifdef CONFIG_EVENTFD
+ int ret;
+
+ ret = eventfd(0, 0);
+ if (ret >= 0) {
+ fds[0] = ret;
+ qemu_set_cloexec(ret);
+ if ((fds[1] = dup(ret)) == -1) {
+ close(ret);
+ return -1;
+ }
+ qemu_set_cloexec(fds[1]);
+ return 0;
+ }
+
+ if (errno != ENOSYS) {
+ return -1;
+ }
+#endif
+
+ return qemu_pipe(fds);
+}
+
+void qemu_put_be64(QEMUFile *f, uint64_t v)
+{
+}
+
+uint64_t qemu_get_be64(QEMUFile *f)
+{
+ return 0;
+}
+
+const VMStateInfo vmstate_info_int64;
+int use_icount = 0;
+int vm_running = 1;
+int64_t qemu_icount;
+
+int vmstate_register(DeviceState *dev, int instance_id,
+ const VMStateDescription *vmsd, void *opaque)
+{
+ return 0;
+}
+int64_t cpu_get_icount(void) {
+ return 0;
+}
+
+VMChangeStateEntry *qemu_add_vm_change_state_handler(VMChangeStateHandler *cb,
+ void *opaque)
+{
+ return NULL;
+}
diff --git a/qemu-tool.h b/qemu-tool.h
new file mode 100644
index 0000000..fd693cf
--- /dev/null
+++ b/qemu-tool.h
@@ -0,0 +1,26 @@
+#ifndef QEMU_TOOL_H
+#define QEMU_TOOL_H
+
+#include "qemu-common.h"
+
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif
+
+typedef void VMStateDescription;
+typedef int VMStateInfo;
+
+#ifndef _WIN32
+void qemu_event_increment(void);
+int qemu_event_init(void);
+#else
+int qemu_event_init(void);
+void qemu_event_increment(void);
+#endif
+
+void qemu_put_be64(QEMUFile *f, uint64_t v);
+uint64_t qemu_get_be64(QEMUFile *f);
+int vmstate_register(DeviceState *dev, int instance_id,
+ const VMStateDescription *vmsd, void *opaque);
+
+#endif
--
1.7.0.4
Jes Sorensen
2011-01-21 16:30:40 UTC
Permalink
Post by Michael Roth
diff --git a/qemu-ioh.c b/qemu-ioh.c
index cc71470..001e7a2 100644
--- a/qemu-ioh.c
+++ b/qemu-ioh.c
@@ -22,7 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-ioh.h"
+#include "qemu-char.h"
#include "qlist.h"
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif
/* XXX: fd_read_poll should be suppressed, but an API change is
necessary in the character devices to suppress fd_can_read(). */
@@ -113,3 +117,92 @@ void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
}
}
}
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd)
Please split the WIN32 stuff into it's own file, similar to oslib-posix
and oslib-win32.c etc.
Post by Michael Roth
diff --git a/qemu-ioh.h b/qemu-ioh.h
index 7c6e833..2c714a9 100644
--- a/qemu-ioh.h
+++ b/qemu-ioh.h
@@ -31,4 +31,13 @@ void qemu_get_fdset2(void *ioh_record_list, int *nfds, fd_set *rfds,
void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
const fd_set *wfds, const fd_set *xfds);
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd);
+int iothread_event_init(int *io_thread_fd);
+#else
+int win32_event_init(HANDLE *qemu_event_handle);
+void win32_event_increment(HANDLE *qemu_event_handle);
+#endif
Can you not do something slightly nicer that allows for those to be the
same prototype for all users? Like define a event_handle_t?
Post by Michael Roth
+
+#ifndef _WIN32
+static int io_thread_fd = -1;
Needs splitting into separate files too.
Post by Michael Roth
diff --git a/qemu-tool.h b/qemu-tool.h
new file mode 100644
index 0000000..fd693cf
--- /dev/null
+++ b/qemu-tool.h
@@ -0,0 +1,26 @@
+#ifndef QEMU_TOOL_H
+#define QEMU_TOOL_H
+
+#include "qemu-common.h"
+
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif
+
+typedef void VMStateDescription;
+typedef int VMStateInfo;
+
+#ifndef _WIN32
+void qemu_event_increment(void);
+int qemu_event_init(void);
+#else
+int qemu_event_init(void);
+void qemu_event_increment(void);
+#endif
No matter how long I stare at those prototypes, I fail to see the
difference between the win32 and the posix version :)

Cheers,
Jes
Michael Roth
2011-01-21 17:26:44 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
diff --git a/qemu-ioh.c b/qemu-ioh.c
index cc71470..001e7a2 100644
--- a/qemu-ioh.c
+++ b/qemu-ioh.c
@@ -22,7 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-ioh.h"
+#include "qemu-char.h"
#include "qlist.h"
+#ifdef CONFIG_EVENTFD
+#include<sys/eventfd.h>
+#endif
/* XXX: fd_read_poll should be suppressed, but an API change is
necessary in the character devices to suppress fd_can_read(). */
@@ -113,3 +117,92 @@ void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
}
}
}
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd)
Please split the WIN32 stuff into it's own file, similar to oslib-posix
and oslib-win32.c etc.
Will look into this, but qemu-ioh.c has common code too so we'd end up
with qemu-ioh/qemu-ioh-posix/qemu-ioh-win2.c. We could alternatively
have a "#ifndef _WIN32" around functions in qemu-ioh.c that would be
replaced by win32-specific ones from qemu-ioh-win32. No strong
preference either way, but sometimes I find navigating across too many
files more annoying that #ifdefs, and there's not a whole lot in these.
Post by Jes Sorensen
Post by Michael Roth
diff --git a/qemu-ioh.h b/qemu-ioh.h
index 7c6e833..2c714a9 100644
--- a/qemu-ioh.h
+++ b/qemu-ioh.h
@@ -31,4 +31,13 @@ void qemu_get_fdset2(void *ioh_record_list, int *nfds, fd_set *rfds,
void qemu_process_fd_handlers2(void *ioh_record_list, const fd_set *rfds,
const fd_set *wfds, const fd_set *xfds);
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd);
+int iothread_event_init(int *io_thread_fd);
+#else
+int win32_event_init(HANDLE *qemu_event_handle);
+void win32_event_increment(HANDLE *qemu_event_handle);
+#endif
Can you not do something slightly nicer that allows for those to be the
same prototype for all users? Like define a event_handle_t?
Don't see why not.
Post by Jes Sorensen
Post by Michael Roth
+
+#ifndef _WIN32
+static int io_thread_fd = -1;
Needs splitting into separate files too.
Post by Michael Roth
diff --git a/qemu-tool.h b/qemu-tool.h
new file mode 100644
index 0000000..fd693cf
--- /dev/null
+++ b/qemu-tool.h
@@ -0,0 +1,26 @@
+#ifndef QEMU_TOOL_H
+#define QEMU_TOOL_H
+
+#include "qemu-common.h"
+
+#ifdef CONFIG_EVENTFD
+#include<sys/eventfd.h>
+#endif
+
+typedef void VMStateDescription;
+typedef int VMStateInfo;
+
+#ifndef _WIN32
+void qemu_event_increment(void);
+int qemu_event_init(void);
+#else
+int qemu_event_init(void);
+void qemu_event_increment(void);
+#endif
No matter how long I stare at those prototypes, I fail to see the
difference between the win32 and the posix version :)
Heh, the ordering of course! :) Not sure how I missed this one.

The patch is pretty rough in general, I'll see what I can do about
cleaning things up a bit.
Post by Jes Sorensen
Cheers,
Jes
Jes Sorensen
2011-01-24 07:56:07 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
Post by Michael Roth
diff --git a/qemu-ioh.c b/qemu-ioh.c
index cc71470..001e7a2 100644
--- a/qemu-ioh.c
+++ b/qemu-ioh.c
@@ -22,7 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-ioh.h"
+#include "qemu-char.h"
#include "qlist.h"
+#ifdef CONFIG_EVENTFD
+#include<sys/eventfd.h>
+#endif
/* XXX: fd_read_poll should be suppressed, but an API change is
necessary in the character devices to suppress fd_can_read(). */
@@ -113,3 +117,92 @@ void qemu_process_fd_handlers2(void
*ioh_record_list, const fd_set *rfds,
}
}
}
+
+#ifndef _WIN32
+void iothread_event_increment(int *io_thread_fd)
Please split the WIN32 stuff into it's own file, similar to oslib-posix
and oslib-win32.c etc.
Will look into this, but qemu-ioh.c has common code too so we'd end up
with qemu-ioh/qemu-ioh-posix/qemu-ioh-win2.c. We could alternatively
have a "#ifndef _WIN32" around functions in qemu-ioh.c that would be
replaced by win32-specific ones from qemu-ioh-win32. No strong
preference either way, but sometimes I find navigating across too many
files more annoying that #ifdefs, and there's not a whole lot in these.
No problem having the three files - it is far better than having
#ifdefs. Having the #ifndef that is overloaded by a win32 specific file
is bad, it will make it very confusing for anyone reading the code.

Cheers,
Jes
Michael Roth
2011-01-17 13:14:59 UTC
Permalink
Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-common.c | 415 ++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent-common.h | 1 +
2 files changed, 416 insertions(+), 0 deletions(-)

diff --git a/virtagent-common.c b/virtagent-common.c
index c487252..f8b7d74 100644
--- a/virtagent-common.c
+++ b/virtagent-common.c
@@ -177,6 +177,421 @@ static void va_unset_server_timeout(void)
}

/***********************************************************/
+/* callbacks for read/send handlers */
+
+static void va_client_send_cb(enum va_http_status http_status,
+ const char *content, size_t content_len)
+{
+ VAClientJob *client_job = va_current_client_job();
+
+ TRACE("called");
+ assert(client_job != NULL);
+
+ if (http_status != VA_HTTP_STATUS_OK) {
+ /* TODO: we should reset everything at this point...guest/host will
+ * be out of whack with each other since there's no way to let the
+ * other know job failed (server or client job) if the send channel
+ * is down. But how do we induce the other side to do the same?
+ */
+ LOG("error sending http request");
+ }
+
+ /* request sent ok. free up request xml, then move to
+ * wait (for response) state
+ */
+ XMLRPC_MEMBLOCK_FREE(char, client_job->req_data);
+ assert(va_set_client_state(VA_CLIENT_WAIT));
+}
+
+static void va_server_send_cb(enum va_http_status http_status,
+ const char *content, size_t content_len)
+{
+ VAServerJob *server_job = va_pop_server_job();
+
+ TRACE("called");
+ assert(server_job != NULL);
+ va_unset_server_timeout();
+
+ if (http_status != VA_HTTP_STATUS_OK) {
+ /* TODO: we should reset everything at this point...guest/host will
+ * be out of whack with each other since there's no way to let the
+ * other know job failed (server or client job) if the send channel
+ * is down
+ */
+ LOG("error sending http response");
+ return;
+ }
+
+ /* response sent ok, cleanup server job and kick off the next one */
+ XMLRPC_MEMBLOCK_FREE(char, server_job->resp_data);
+ qemu_free(server_job);
+ va_kick();
+}
+
+static void va_client_read_cb(const char *content, size_t content_len,
+ const char client_tag[64])
+{
+ VAClientJob *client_job;
+
+ TRACE("called");
+ client_job = va_pop_client_job();
+ assert(client_job != NULL);
+ if (--va_state->client_jobs_in_flight == 0) {
+ va_unset_client_timeout();
+ }
+ if (strncmp(client_job->client_tag, client_tag, 64)) {
+ LOG("http client tag mismatch");
+ } else {
+ TRACE("tag matched: %s", client_tag);
+ }
+
+ client_job->cb(content, content_len, client_job->mon_cb,
+ client_job->mon_data);
+ va_kick();
+}
+
+static void va_server_read_cb(const char *content, size_t content_len,
+ const char client_tag[64])
+{
+ int ret;
+
+ TRACE("called");
+ /* generate response and queue it up for sending */
+ ret = va_do_server_rpc(content, content_len, client_tag);
+ if (ret != 0) {
+ LOG("error creating handling remote rpc request: %s", strerror(ret));
+ }
+
+ return;
+}
+
+static void va_http_read_cb(enum va_http_status http_status,
+ const char *content, size_t content_len,
+ const char client_tag[64],
+ enum va_http_type http_type)
+{
+ TRACE("called");
+ if (http_status != VA_HTTP_STATUS_OK) {
+ LOG("error reading http stream (type %d)", http_type);
+ va_cancel_jobs();
+ return;
+ }
+
+ if (http_type == VA_HTTP_TYPE_REQUEST) {
+ TRACE("read request: %s", content);
+ va_server_read_cb(content, content_len, client_tag);
+ } else if (http_type == VA_HTTP_TYPE_RESPONSE) {
+ TRACE("read response: %s", content);
+ va_client_read_cb(content, content_len, client_tag);
+ } else {
+ LOG("unknown http response/request type");
+ va_cancel_jobs();
+ }
+
+ return;
+}
+
+/***********************************************************/
+/* utility functions for handling http calls */
+
+static void va_http_hdr_init(VAHTState *s, enum va_http_type http_type) {
+ const char *preamble;
+
+ TRACE("called");
+ /* essentially ignored in the context of virtagent, but might as well */
+ if (http_type == VA_HTTP_TYPE_REQUEST) {
+ preamble = "POST /RPC2 HTTP/1.1";
+ } else if (http_type == VA_HTTP_TYPE_RESPONSE) {
+ preamble = "HTTP/1.1 200 OK";
+ } else {
+ s->hdr_len = 0;
+ return;
+ }
+ memset(s->hdr, 0, VA_HDR_LEN_MAX);
+ s->hdr_len = sprintf(s->hdr,
+ "%c%s" EOL
+ "Content-Type: text/xml" EOL
+ "Content-Length: %u" EOL
+ "X-Virtagent-Client-Tag: %s" EOL EOL,
+ VA_SENTINEL,
+ preamble,
+ (uint32_t)s->content_len,
+ s->hdr_client_tag[0] ? s->hdr_client_tag : "none");
+}
+
+#define VA_LINE_LEN_MAX 1024
+static void va_rpc_parse_hdr(VAHTState *s)
+{
+ int i, line_pos = 0;
+ bool first_line = true;
+ char line_buf[VA_LINE_LEN_MAX];
+
+ TRACE("called");
+
+ for (i = 0; i < VA_HDR_LEN_MAX; ++i) {
+ if (s->hdr[i] == 0) {
+ /* end of header */
+ return;
+ }
+ if (s->hdr[i] != '\n') {
+ /* read line */
+ line_buf[line_pos++] = s->hdr[i];
+ } else {
+ /* process line */
+ if (first_line) {
+ if (strncmp(line_buf, "POST", 4) == 0) {
+ s->http_type = VA_HTTP_TYPE_REQUEST;
+ } else if (strncmp(line_buf, "HTTP", 4) == 0) {
+ s->http_type = VA_HTTP_TYPE_RESPONSE;
+ } else {
+ s->http_type = VA_HTTP_TYPE_UNKNOWN;
+ }
+ first_line = false;
+ }
+ if (strncmp(line_buf, "Content-Length: ", 16) == 0) {
+ s->content_len = atoi(&line_buf[16]);
+ }
+ if (strncmp(line_buf, "X-Virtagent-Client-Tag: ", 24) == 0) {
+ memcpy(s->hdr_client_tag, &line_buf[24], MIN(line_pos-25, 64));
+ //pstrcpy(s->hdr_client_tag, 64, &line_buf[24]);
+ TRACE("\nTAG<%s>\n", s->hdr_client_tag);
+ }
+ line_pos = 0;
+ memset(line_buf, 0, VA_LINE_LEN_MAX);
+ }
+ }
+}
+
+static int va_end_of_header(char *buf, int end_pos)
+{
+ return !strncmp(buf+(end_pos-2), "\n\r\n", 3);
+}
+
+static void va_http_read_handler_reset(void)
+{
+ VAHTState *s = &va_state->read_state;
+ TRACE("called");
+ s->state = VA_READ_START;
+ s->http_type = VA_HTTP_TYPE_UNKNOWN;
+ s->hdr_pos = 0;
+ s->content_len = 0;
+ s->content_pos = 0;
+ strcpy(s->hdr_client_tag, "none");
+ if (s->content != NULL) {
+ qemu_free(s->content);
+ }
+ s->content = NULL;
+}
+
+/***********************************************************/
+/* read/send handlers */
+
+static void va_http_read_handler(void *opaque)
+{
+ VAHTState *s = &va_state->read_state;
+ enum va_http_status http_status;
+ int fd = va_state->fd;
+ int ret;
+ uint8_t tmp;
+ static int bytes_skipped = 0;
+
+ TRACE("called with opaque: %p", opaque);
+
+ /* until timeouts are implemented, make sure we kick so any deferred
+ * jobs get a chance to run
+ */
+ va_kick();
+
+ switch (s->state) {
+ case VA_READ_START:
+ /* we may have gotten here due to a http error, indicating
+ * a potential unclean state where we are not 'aligned' on http
+ * boundaries. we should read till we hit the next http preamble
+ * rather than assume we're at the start of an http header. since
+ * we control the transport layer on both sides, we'll use a
+ * more reliable sentinal character to mark/detect the start of
+ * the header
+ */
+ while((ret = read(fd, &tmp, 1) > 0) > 0) {
+ if (tmp == VA_SENTINEL) {
+ break;
+ }
+ bytes_skipped += ret;
+ }
+ if (ret == -1) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
+ return;
+ } else {
+ LOG("error reading connection: %s", strerror(errno));
+ goto out_bad;
+ }
+ } else if (ret == 0) {
+ LOG("connected closed unexpectedly");
+ goto out_bad_wait;
+ } else {
+ TRACE("found header, number of bytes skipped: %d",
+ bytes_skipped);
+ bytes_skipped = 0;
+ s->state = VA_READ_HDR;
+ }
+ case VA_READ_HDR:
+ while((ret = read(fd, s->hdr + s->hdr_pos, 1)) > 0
+ && s->hdr_pos < VA_HDR_LEN_MAX) {
+ s->hdr_pos += ret;
+ if (va_end_of_header(s->hdr, s->hdr_pos - 1)) {
+ break;
+ }
+ }
+ if (ret == -1) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
+ return;
+ } else {
+ LOG("error reading connection: %s", strerror(errno));
+ goto out_bad;
+ }
+ } else if (ret == 0) {
+ LOG("connected closed unexpectedly");
+ goto out_bad_wait;
+ } else if (s->hdr_pos >= VA_HDR_LEN_MAX) {
+ LOG("http header too long");
+ goto out_bad;
+ } else {
+ s->content_len = -1;
+ va_rpc_parse_hdr(s);
+ if (s->content_len == -1) {
+ LOG("malformed http header");
+ goto out_bad;
+ } else if (s->content_len > VA_CONTENT_LEN_MAX) {
+ LOG("http content length too long");
+ goto out_bad;
+ }
+ s->content = qemu_mallocz(s->content_len);
+ s->state = VA_READ_BODY;
+ TRACE("read http header:\n<<<%s>>>\n", s->hdr);
+ }
+ case VA_READ_BODY:
+ while(s->content_pos < s->content_len) {
+ ret = read(fd, s->content + s->content_pos,
+ s->content_len - s->content_pos);
+ if (ret == -1) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK
+ || errno == EINTR) {
+ return;
+ } else {
+ LOG("error reading connection: %s", strerror(errno));
+ goto out_bad;
+ }
+ } else if (ret == 0) {
+ LOG("connection closed unexpectedly:"
+ " read %u bytes, expected %u bytes",
+ (unsigned int)s->content_pos, (unsigned int)s->content_len);
+ goto out_bad_wait;
+ }
+ s->content_pos += ret;
+ }
+
+ TRACE("read http content:\n<<<%s>>>\n", s->content);
+ http_status = VA_HTTP_STATUS_OK;
+ goto out;
+ default:
+ LOG("unknown state");
+ goto out_bad;
+ }
+
+out_bad_wait:
+ /* We should only ever get a read = 0 if we're using virtio and the host
+ * is not connected. this would cause a guest to spin, and we can't do
+ * any work in the meantime, so sleep for a bit here. We also know we
+ * may go ahead and cancel any outstanding jobs at this point, though it
+ * should be noted that we're still ultimately reliant on per-job timeouts
+ * since we might not read EOF before host reconnect.
+ */
+ if (!va_state->is_host &&
+ strcmp(va_state->channel_method, "virtio-serial") == 0) {
+ usleep(100 * 1000);
+ }
+out_bad:
+ http_status = VA_HTTP_STATUS_ERROR;
+out:
+ /* handle the response or request we just read */
+ s->read_cb(http_status, s->content, s->content_len, s->hdr_client_tag,
+ s->http_type);
+ /* restart read handler */
+ va_http_read_handler_reset();
+ http_status = VA_HTTP_STATUS_NEW;
+}
+
+static void va_http_send_handler(void *opaque)
+{
+ VAHTState *s = &va_state->send_state;
+ enum va_http_status http_status;
+ int fd = va_state->fd;
+ int ret;
+
+ TRACE("called");
+
+ switch (s->state) {
+ case VA_SEND_START:
+ s->state = VA_SEND_HDR;
+ case VA_SEND_HDR:
+ do {
+ ret = write(fd, s->hdr + s->hdr_pos, s->hdr_len - s->hdr_pos);
+ if (ret <= 0) {
+ break;
+ }
+ s->hdr_pos += ret;
+ } while (s->hdr_pos < s->hdr_len);
+ if (ret == -1) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
+ return;
+ } else {
+ LOG("error writing header: %s", strerror(errno));
+ goto out_bad;
+ }
+ } else if (ret == 0) {
+ LOG("connected closed unexpectedly");
+ goto out_bad;
+ } else {
+ s->state = VA_SEND_BODY;
+ TRACE("sent http header:\n<<<%s>>>", s->hdr);
+ }
+ case VA_SEND_BODY:
+ do {
+ ret = write(fd, s->content + s->content_pos,
+ s->content_len - s->content_pos);
+ if (ret <= 0) {
+ break;
+ }
+ s->content_pos += ret;
+ } while (s->content_pos < s->content_len);
+ if (ret == -1) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
+ return;
+ } else {
+ LOG("error writing content: %s", strerror(errno));
+ goto out_bad;
+ }
+ } else if (ret == 0) {
+ LOG("connected closed unexpectedly");
+ goto out_bad;
+ } else {
+ http_status = VA_HTTP_STATUS_OK;
+ TRACE("set http content:\n<<<%s>>>", s->content);
+ goto out;
+ }
+ default:
+ LOG("unknown state");
+ goto out_bad;
+ }
+
+out_bad:
+ http_status = VA_HTTP_STATUS_ERROR;
+out:
+ s->send_cb(http_status, s->content, s->content_len);
+ qemu_set_fd_handler(fd, va_http_read_handler, NULL, NULL);
+}
+
+/***********************************************************/
/* functions for starting/managing client/server rpc jobs */

static int va_send_server_response(VAServerJob *server_job)
diff --git a/virtagent-common.h b/virtagent-common.h
index 568df5a..6ad8036 100644
--- a/virtagent-common.h
+++ b/virtagent-common.h
@@ -50,6 +50,7 @@
#define VA_SERVER_JOBS_MAX 5 /* max server rpcs we can queue */
#define VA_SERVER_TIMEOUT_MS 5 * 1000
#define VA_CLIENT_TIMEOUT_MS 5 * 1000
+#define VA_SENTINEL 0xFF

typedef struct VAContext {
bool is_host;
--
1.7.0.4
Michael Roth
2011-01-17 13:15:02 UTC
Permalink
Add RPC to retrieve a guest file. This interface is intended
for smaller reads like peeking at logs and /proc and such.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 59 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index c38a9e0..af4b940 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -62,12 +62,71 @@ out:
return ret;
}

+/* RPC functions common to guest/host daemons */
+
+/* va_getfile(): return file contents
+ * rpc return values:
+ * - base64-encoded file contents
+ */
+static xmlrpc_value *va_getfile(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ const char *path;
+ char *file_contents = NULL;
+ char buf[VA_FILEBUF_LEN];
+ int fd, ret, count = 0;
+ xmlrpc_value *result = NULL;
+
+ /* parse argument array */
+ xmlrpc_decompose_value(env, params, "(s)", &path);
+ if (env->fault_occurred) {
+ return NULL;
+ }
+
+ SLOG("va_getfile(), path:%s", path);
+
+ fd = open(path, O_RDONLY);
+ if (fd == -1) {
+ LOG("open failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "open failed: %s", strerror(errno));
+ return NULL;
+ }
+
+ while ((ret = read(fd, buf, VA_FILEBUF_LEN)) > 0) {
+ file_contents = qemu_realloc(file_contents, count + VA_FILEBUF_LEN);
+ memcpy(file_contents + count, buf, ret);
+ count += ret;
+ if (count > VA_GETFILE_MAX) {
+ xmlrpc_faultf(env, "max file size (%d bytes) exceeded",
+ VA_GETFILE_MAX);
+ goto EXIT_CLOSE_BAD;
+ }
+ }
+ if (ret == -1) {
+ LOG("read failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "read failed: %s", strerror(errno));
+ goto EXIT_CLOSE_BAD;
+ }
+
+ result = xmlrpc_build_value(env, "6", file_contents, count);
+
+EXIT_CLOSE_BAD:
+ if (file_contents) {
+ qemu_free(file_contents);
+ }
+ close(fd);
+ return result;
+}
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
} RPCFunction;

static RPCFunction guest_functions[] = {
+ { .func = va_getfile,
+ .func_name = "va.getfile" },
{ NULL, NULL }
};
static RPCFunction host_functions[] = {
--
1.7.0.4
Jes Sorensen
2011-01-21 16:40:54 UTC
Permalink
Post by Michael Roth
Add RPC to retrieve a guest file. This interface is intended
for smaller reads like peeking at logs and /proc and such.
---
virtagent-server.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 59 insertions(+), 0 deletions(-)
diff --git a/virtagent-server.c b/virtagent-server.c
index c38a9e0..af4b940 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
return ret;
}
+/* RPC functions common to guest/host daemons */
+
+/* va_getfile(): return file contents
+ * - base64-encoded file contents
+ */
+static xmlrpc_value *va_getfile(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ const char *path;
+ char *file_contents = NULL;
+ char buf[VA_FILEBUF_LEN];
malloc()!
Post by Michael Roth
+ int fd, ret, count = 0;
+ xmlrpc_value *result = NULL;
+
+ /* parse argument array */
+ xmlrpc_decompose_value(env, params, "(s)", &path);
+ if (env->fault_occurred) {
+ return NULL;
+ }
+
+ SLOG("va_getfile(), path:%s", path);
+
+ fd = open(path, O_RDONLY);
+ if (fd == -1) {
+ LOG("open failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "open failed: %s", strerror(errno));
+ return NULL;
+ }
+
+ while ((ret = read(fd, buf, VA_FILEBUF_LEN)) > 0) {
+ file_contents = qemu_realloc(file_contents, count + VA_FILEBUF_LEN);
+ memcpy(file_contents + count, buf, ret);
Sorry, I brought this up before. This realloc() stuff is a disaster
waiting to happen. Please remove it from the patch series, until you
have an implementation that copies over a page of the time.
Post by Michael Roth
+ count += ret;
Cheers,
Jes
Daniel P. Berrange
2011-01-21 17:20:26 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
Add RPC to retrieve a guest file. This interface is intended
for smaller reads like peeking at logs and /proc and such.
---
virtagent-server.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 59 insertions(+), 0 deletions(-)
diff --git a/virtagent-server.c b/virtagent-server.c
index c38a9e0..af4b940 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
return ret;
}
+/* RPC functions common to guest/host daemons */
+
+/* va_getfile(): return file contents
+ * - base64-encoded file contents
+ */
+static xmlrpc_value *va_getfile(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ const char *path;
+ char *file_contents = NULL;
+ char buf[VA_FILEBUF_LEN];
malloc()!
Post by Michael Roth
+ int fd, ret, count = 0;
+ xmlrpc_value *result = NULL;
+
+ /* parse argument array */
+ xmlrpc_decompose_value(env, params, "(s)", &path);
+ if (env->fault_occurred) {
+ return NULL;
+ }
+
+ SLOG("va_getfile(), path:%s", path);
+
+ fd = open(path, O_RDONLY);
+ if (fd == -1) {
+ LOG("open failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "open failed: %s", strerror(errno));
+ return NULL;
+ }
+
+ while ((ret = read(fd, buf, VA_FILEBUF_LEN)) > 0) {
+ file_contents = qemu_realloc(file_contents, count + VA_FILEBUF_LEN);
+ memcpy(file_contents + count, buf, ret);
Sorry, I brought this up before. This realloc() stuff is a disaster
waiting to happen. Please remove it from the patch series, until you
have an implementation that copies over a page of the time.
I can understand the need of virtagent for lifecycle control/interactions
with the guest OS (reboot, shutdown, ping, screen lock/unlock, etc), but
do we really want to reinvent libguestfs for file access ? A little dev
work could enable users to install the libguestfs agent into a guest OS,
and access it from the host over virtio-serial + the libguestfs API.

This would be quite compelling usage model for app developers, because
it would mean whether the guest OS was running, or shutoff, they can
use the same libguestfs API for processing guest filesystem images.
The level of functionality provided by libguestfs is really quite
considerable now, letting you do pretty much any operation against
files that you could do via local POSIX for non-virt access, as
well as providing many useful higher level constructs

Regards,
Daniel
Michael Roth
2011-01-21 18:23:40 UTC
Permalink
Post by Daniel P. Berrange
Post by Jes Sorensen
Post by Michael Roth
Add RPC to retrieve a guest file. This interface is intended
for smaller reads like peeking at logs and /proc and such.
---
virtagent-server.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 59 insertions(+), 0 deletions(-)
diff --git a/virtagent-server.c b/virtagent-server.c
index c38a9e0..af4b940 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
return ret;
}
+/* RPC functions common to guest/host daemons */
+
+/* va_getfile(): return file contents
+ * - base64-encoded file contents
+ */
+static xmlrpc_value *va_getfile(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ const char *path;
+ char *file_contents = NULL;
+ char buf[VA_FILEBUF_LEN];
malloc()!
Post by Michael Roth
+ int fd, ret, count = 0;
+ xmlrpc_value *result = NULL;
+
+ /* parse argument array */
+ xmlrpc_decompose_value(env, params, "(s)",&path);
+ if (env->fault_occurred) {
+ return NULL;
+ }
+
+ SLOG("va_getfile(), path:%s", path);
+
+ fd = open(path, O_RDONLY);
+ if (fd == -1) {
+ LOG("open failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "open failed: %s", strerror(errno));
+ return NULL;
+ }
+
+ while ((ret = read(fd, buf, VA_FILEBUF_LEN))> 0) {
+ file_contents = qemu_realloc(file_contents, count + VA_FILEBUF_LEN);
+ memcpy(file_contents + count, buf, ret);
Sorry, I brought this up before. This realloc() stuff is a disaster
waiting to happen. Please remove it from the patch series, until you
have an implementation that copies over a page of the time.
I can understand the need of virtagent for lifecycle control/interactions
with the guest OS (reboot, shutdown, ping, screen lock/unlock, etc), but
do we really want to reinvent libguestfs for file access ? A little dev
work could enable users to install the libguestfs agent into a guest OS,
and access it from the host over virtio-serial + the libguestfs API.
File/dmesg/etc access is a bit of a grey area. Technically it's not
lifecycle-specific, but it tends to become a requirement for
higher-level management policies, and being reliant on external tools to
provide what, at least in our case, has been an extremely common
request/requirement, greatly reduces the usefulness of such an agent.

Ultimately however these interfaces would be exposed via libvirt, which
libguestfs already makes use of, so it'd be a logically way to extend it
for disk access to live guests.

getfile() is confusingly named however, it's really just a means to peek
at a text file like /proc/meminfo. general file access will be done via
a stateful interface that implements similar semantics to
open()/read()/write()/close().
Post by Daniel P. Berrange
This would be quite compelling usage model for app developers, because
it would mean whether the guest OS was running, or shutoff, they can
use the same libguestfs API for processing guest filesystem images.
The level of functionality provided by libguestfs is really quite
considerable now, letting you do pretty much any operation against
files that you could do via local POSIX for non-virt access, as
well as providing many useful higher level constructs
Regards,
Daniel
Richard W.M. Jones
2011-01-24 22:08:09 UTC
Permalink
Post by Michael Roth
getfile() is confusingly named however, it's really just a means to
peek at a text file like /proc/meminfo.
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.

There's various ways to do it: Encapsulate the libguestfs API messages
to and from guestfsd into the virtagent protocol. Or give us another
8-bit-clean channel. Or write a libguestfs generator component that
generates virtagent messages.

We got a long way through porting guestfsd to Windows last year when
we thought we needed a Windows-native daemon (since abandoned for
other reasons). It works already for many of the API calls.
Post by Michael Roth
general file access will be done via a stateful interface that
implements similar semantics to open()/read()/write()/close().
This will be very slow.

Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)
http://cocan.org/getting_started_with_ocaml_on_red_hat_and_fedora
Anthony Liguori
2011-01-24 22:26:09 UTC
Permalink
Post by Richard W.M. Jones
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.
To put it in some perspective, libguestfs is just shy of 500K lines of
code now, not including the tools built on top. 150 page manual just
for the core API.
Yeah, but I think that's the reason that it might not be a good
candidate for this use-case.

We need a *simple* interface that we can convince everyone to install by
default and run in their guests. It needs to be flexible enough that we
can do lots of fun things but simple enough that a reasonable person can
audit the code in a short period of time.

It will never replace something as sophisticated as guestfs but that's
not it's point. It's point is to let you do simple things like execute
a command in the guest or peek at /proc/meminfo. You don't need 500k
LOCs for that.

Regards,

Anthony Liguori
Rich.
Richard W.M. Jones
2011-01-24 22:48:08 UTC
Permalink
Post by Anthony Liguori
Post by Richard W.M. Jones
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.
To put it in some perspective, libguestfs is just shy of 500K lines of
code now, not including the tools built on top. 150 page manual just
for the core API.
Yeah, but I think that's the reason that it might not be a good
candidate for this use-case.
We need a *simple* interface that we can convince everyone to
install by default and run in their guests. It needs to be flexible
enough that we can do lots of fun things but simple enough that a
reasonable person can audit the code in a short period of time.
It will never replace something as sophisticated as guestfs but
that's not it's point. It's point is to let you do simple things
like execute a command in the guest or peek at /proc/meminfo. You
don't need 500k LOCs for that.
I don't really want to argue over this, since I think accessing live
VMs like this is a really useful feature, and it complements
libguestfs (image editing) very nicely.

I'll just say that you might not think you need it to start off with
(and we didn't either), but when you notice that "simple"
open/read/write/close in fact has terrible performance so you need to
specialize many operations, and then someone wants to create a
filesystem, and someone else wants a FUSE interface, suddenly you'll
be reimplementing large parts of libguestfs.

The daemon (guestfsd) is 36106 LoC.

Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v
Anthony Liguori
2011-01-24 23:40:05 UTC
Permalink
Post by Richard W.M. Jones
Post by Anthony Liguori
Post by Richard W.M. Jones
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.
To put it in some perspective, libguestfs is just shy of 500K lines of
code now, not including the tools built on top. 150 page manual just
for the core API.
Yeah, but I think that's the reason that it might not be a good
candidate for this use-case.
We need a *simple* interface that we can convince everyone to
install by default and run in their guests. It needs to be flexible
enough that we can do lots of fun things but simple enough that a
reasonable person can audit the code in a short period of time.
It will never replace something as sophisticated as guestfs but
that's not it's point. It's point is to let you do simple things
like execute a command in the guest or peek at /proc/meminfo. You
don't need 500k LOCs for that.
I don't really want to argue over this, since I think accessing live
VMs like this is a really useful feature, and it complements
libguestfs (image editing) very nicely.
I'll just say that you might not think you need it to start off with
(and we didn't either), but when you notice that "simple"
open/read/write/close
Oh I don't think there should be an open/read/write/close interface.
I'm quite happy with the current copyfile interface.
Post by Richard W.M. Jones
in fact has terrible performance so you need to
specialize many operations, and then someone wants to create a
filesystem, and someone else wants a FUSE interface, suddenly you'll
be reimplementing large parts of libguestfs.
Nope. If you want to do fancy things, use libguestfs :-)

BTW, how dependent is guestfsd on the guest that libguestfs uses? I
wasn't even aware that it could be used outside of that context.

Regards,

Anthony Liguori
Post by Richard W.M. Jones
The daemon (guestfsd) is 36106 LoC.
Rich.
Michael Roth
2011-01-25 00:22:24 UTC
Permalink
Post by Richard W.M. Jones
Post by Anthony Liguori
Post by Richard W.M. Jones
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.
To put it in some perspective, libguestfs is just shy of 500K lines of
code now, not including the tools built on top. 150 page manual just
for the core API.
Yeah, but I think that's the reason that it might not be a good
candidate for this use-case.
We need a *simple* interface that we can convince everyone to
install by default and run in their guests. It needs to be flexible
enough that we can do lots of fun things but simple enough that a
reasonable person can audit the code in a short period of time.
It will never replace something as sophisticated as guestfs but
that's not it's point. It's point is to let you do simple things
like execute a command in the guest or peek at /proc/meminfo. You
don't need 500k LOCs for that.
I don't really want to argue over this, since I think accessing live
VMs like this is a really useful feature, and it complements
libguestfs (image editing) very nicely.
I'll just say that you might not think you need it to start off with
(and we didn't either), but when you notice that "simple"
open/read/write/close
Oh I don't think there should be an open/read/write/close interface. I'm
quite happy with the current copyfile interface.
Actually, copyfile is the proposed open/read/write/close interface.
getfile is the current interface, and it seems to be a contentious one.
I've discussed it quite a bit with Jes here and in the last couple RFCs.
I think the current course is that we'll end up ditching
viewfile/viewdmesg in favor of copyfile, and that we should do it now
rather than later.

The upshot is that "viewfile <remote>" is basically equivalent to:
copyfile_open <remote> /dev/stdout -> fd_handle;
copyfile_read fd <offset=0> <count=<MAX_CHUNK_SIZE>;
copyfile_close fd_handle".

Or we can output to a file and potentially introduce a monitor command
that wraps these to provide simple oneliner like we have now, though
there may be some reluctance there as well. But at least it'll be
possible either way.
Post by Richard W.M. Jones
in fact has terrible performance so you need to
specialize many operations, and then someone wants to create a
filesystem, and someone else wants a FUSE interface, suddenly you'll
be reimplementing large parts of libguestfs.
Nope. If you want to do fancy things, use libguestfs :-)
BTW, how dependent is guestfsd on the guest that libguestfs uses? I
wasn't even aware that it could be used outside of that context.
Regards,
Anthony Liguori
Post by Richard W.M. Jones
The daemon (guestfsd) is 36106 LoC.
Rich.
Anthony Liguori
2011-01-25 00:25:20 UTC
Permalink
Post by Michael Roth
Actually, copyfile is the proposed open/read/write/close interface.
getfile is the current interface, and it seems to be a contentious
one. I've discussed it quite a bit with Jes here and in the last
couple RFCs. I think the current course is that we'll end up ditching
viewfile/viewdmesg in favor of copyfile, and that we should do it now
rather than later.
copyfile_open <remote> /dev/stdout -> fd_handle;
copyfile_read fd <offset=0> <count=<MAX_CHUNK_SIZE>;
copyfile_close fd_handle".
I really just want getfile.

I think designing a partial read API at this stage isn't a good idea.
Wait until there's a concrete use case before adding an interface.

Regards,

Anthony Liguori
Richard W.M. Jones
2011-01-25 09:21:13 UTC
Permalink
Post by Anthony Liguori
BTW, how dependent is guestfsd on the guest that libguestfs uses? I
wasn't even aware that it could be used outside of that context.
The daemon is compiled separately -- separate ./configure, make, etc.
You can run it on its own.

On the other hand, it does need to talk to something on the other end
of the virtio-serial guestfsd socket, and that other thing would
usually be the libguestfs library ...

One thing that Dan Berrange did was to patch[1] libguestfs so it could
talk to any existing guestfsd (you pointed it at a Unix domain
socket). He was using this to write test regression tests for
'virt-install': ie. install a guest, put guestfsd inside it, then boot
up the guest and check that everything was installed correctly by
querying it from an external libguestfs.

For various unrelated reasons these patches weren't quite ready to go
upstream, but it's on our ROADMAP[2] to add something like this.

In which case what you would do would be:

(a) put guestfsd into existing guests

(b) add a nice option to guestfish to attach to existing VMs, eg:

guestfish --attach Fedora14
[guestfish live attached to Fedora 14's virtio-serial guestfsd socket]
Post by Anthony Liguori
<fs> copy-in ./dirs /tmp/
"copy-in" would be dangerous currently if used on a live VM, but
in this case it would be quite safe

(c) do the work of porting guestfsd to Windows, FreeBSD etc

Rich.

[1] https://www.redhat.com/archives/libguestfs/2010-July/msg00010.html
refined a bit more later on.

[2] http://libguestfs.org/ROADMAP.txt
"* Allow alternate methods to start the appliance, including through
libvirt and by connecting to an existing appliance. This was
originally planned for 1.8 but we didn't get patches in time."
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)
http://cocan.org/getting_started_with_ocaml_on_red_hat_and_fedora
Anthony Liguori
2011-01-25 15:12:15 UTC
Permalink
Post by Richard W.M. Jones
Post by Anthony Liguori
BTW, how dependent is guestfsd on the guest that libguestfs uses? I
wasn't even aware that it could be used outside of that context.
The daemon is compiled separately -- separate ./configure, make, etc.
You can run it on its own.
On the other hand, it does need to talk to something on the other end
of the virtio-serial guestfsd socket, and that other thing would
usually be the libguestfs library ...
One thing that Dan Berrange did was to patch[1] libguestfs so it could
talk to any existing guestfsd (you pointed it at a Unix domain
socket). He was using this to write test regression tests for
'virt-install': ie. install a guest, put guestfsd inside it, then boot
up the guest and check that everything was installed correctly by
querying it from an external libguestfs.
For various unrelated reasons these patches weren't quite ready to go
upstream, but it's on our ROADMAP[2] to add something like this.
(a) put guestfsd into existing guests
How much infrastructure does guestfd depend on within the guest? Do you
need a full install with all of the bells and whistles or does it
degrade cleanly when certain tools aren't available?

Regards,

Anthony Liguori
Post by Richard W.M. Jones
guestfish --attach Fedora14
[guestfish live attached to Fedora 14's virtio-serial guestfsd socket]
Post by Anthony Liguori
<fs> copy-in ./dirs /tmp/
"copy-in" would be dangerous currently if used on a live VM, but
in this case it would be quite safe
(c) do the work of porting guestfsd to Windows, FreeBSD etc
Rich.
[1] https://www.redhat.com/archives/libguestfs/2010-July/msg00010.html
refined a bit more later on.
[2] http://libguestfs.org/ROADMAP.txt
"* Allow alternate methods to start the appliance, including through
libvirt and by connecting to an existing appliance. This was
originally planned for 1.8 but we didn't get patches in time."
Richard W.M. Jones
2011-01-25 15:43:30 UTC
Permalink
Post by Anthony Liguori
How much infrastructure does guestfd depend on within the guest? Do
you need a full install with all of the bells and whistles or does
it degrade cleanly when certain tools aren't available?
On Linux these are the libraries, both *optional*:

- libselinux
- augeas

It also uses the following external programs if available, but will
degrade gracefully if they are not:

- blkid
- blockdev
- cmp
- cp
- cpio
- df
- du
- various programs from ext2progs if you want to support ext2/3/4
- grep
- grub-install
- hexdump
- ls
- various programs from lvm2 if you want to support LVM ops
- /sbin/mkfs.* depending on what filesystems you want to be able to create
- mount
- mv
- ntfs-3g.probe
- ntfsresize
- mkswap
- parted
- printenv
- rm
- scrub
- sfdisk
- strings
- tar
- wc
- zerofree

As you can probably tell, in many cases the job of guestfsd is to
unpack the structured C remote procedure call arguments, pass these to
an external program, then parse the result and pass it back as a
structured C return value.

There are other guestfsd features which are implemented using POSIX
functions and syscalls directly.

The port to Windows involved rewriting the POSIX bits and bundling
some of the commands above from mingw where that made sense (and where
it didn't, making those calls return sensible and discoverable error
values). This port has likely bitrotted and really needs to be picked
up again.

HTH,

Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)
http://cocan.org/getting_started_with_ocaml_on_red_hat_and_fedora
Richard W.M. Jones
2011-01-26 13:01:53 UTC
Permalink
I posted my thoughts about how this could work here:

https://www.redhat.com/archives/libguestfs/2011-January/msg00066.html

Rich.

PS. You don't need to be a subscriber to post to that list -- I
manually triage any messages sent by non-subscribers.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v
Richard W.M. Jones
2011-01-24 22:20:19 UTC
Permalink
Post by Richard W.M. Jones
You might as well reuse the libguestfs API here because you get the
benefit of all the code that's been written, all the tools on top, and
a far more comprehensive API that would take you another 2 years to
implement.
To put it in some perspective, libguestfs is just shy of 500K lines of
code now, not including the tools built on top. 150 page manual just
for the core API.

Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://et.redhat.com/~rjones/virt-df/
Michael Roth
2011-01-17 13:15:00 UTC
Permalink
Functions for managing client capabilities and creating client RPC jobs.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
qerror.c | 8 +++
qerror.h | 6 ++
roms/seabios | 2 +-
virtagent.c | 158 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 34 ++++++++++++
5 files changed, 207 insertions(+), 1 deletions(-)
create mode 100644 virtagent.c
create mode 100644 virtagent.h

diff --git a/qerror.c b/qerror.c
index ac2cdaf..dea8c5f 100644
--- a/qerror.c
+++ b/qerror.c
@@ -200,6 +200,14 @@ static const QErrorStringTable qerror_table[] = {
.error_fmt = QERR_VNC_SERVER_FAILED,
.desc = "Could not start VNC server on %(target)",
},
+ {
+ .error_fmt = QERR_RPC_FAILED,
+ .desc = "An RPC error has occurred: %(message)",
+ },
+ {
+ .error_fmt = QERR_VA_FAILED,
+ .desc = "An error was reported by virtagent: %(message)",
+ },
{}
};

diff --git a/qerror.h b/qerror.h
index 943a24b..059c0dc 100644
--- a/qerror.h
+++ b/qerror.h
@@ -165,4 +165,10 @@ QError *qobject_to_qerror(const QObject *obj);
#define QERR_VNC_SERVER_FAILED \
"{ 'class': 'VNCServerFailed', 'data': { 'target': %s } }"

+#define QERR_RPC_FAILED \
+ "{ 'class': 'RPCFailed', 'data': { 'code': %i, 'message': %s } }"
+
+#define QERR_VA_FAILED \
+ "{ 'class': 'VirtagentFailed', 'data': { 'code': %i, 'message': %s } }"
+
#endif /* QERROR_H */
diff --git a/roms/seabios b/roms/seabios
index 0ff9051..17d3e46 160000
--- a/roms/seabios
+++ b/roms/seabios
@@ -1 +1 @@
-Subproject commit 0ff9051f756ba739bc2edca77925191c3c6cbc2f
+Subproject commit 17d3e46511aeedc9f09a8216d194d749187b80aa
diff --git a/virtagent.c b/virtagent.c
new file mode 100644
index 0000000..00eccb5
--- /dev/null
+++ b/virtagent.c
@@ -0,0 +1,158 @@
+/*
+ * virtagent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Adam Litke <***@linux.vnet.ibm.com>
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu_socket.h"
+#include "virtagent-common.h"
+
+static VAClientData *va_client_data;
+
+static void va_set_capabilities(QList *qlist)
+{
+ TRACE("called");
+
+ if (va_client_data == NULL) {
+ LOG("client is uninitialized, unable to set capabilities");
+ return;
+ }
+
+ if (va_client_data->supported_methods != NULL) {
+ qobject_decref(QOBJECT(va_client_data->supported_methods));
+ va_client_data->supported_methods = NULL;
+ TRACE("capabilities reset");
+ }
+
+ if (qlist != NULL) {
+ va_client_data->supported_methods = qlist_copy(qlist);
+ TRACE("capabilities set");
+ }
+}
+
+typedef struct VACmpState {
+ const char *method;
+ bool found;
+} VACmpState;
+
+static void va_cmp_capability_iter(QObject *obj, void *opaque)
+{
+ QString *method = qobject_to_qstring(obj);
+ const char *method_str = NULL;
+ VACmpState *cmp_state = opaque;
+
+ if (method) {
+ method_str = qstring_get_str(method);
+ }
+
+ if (method_str && opaque) {
+ if (strcmp(method_str, cmp_state->method) == 0) {
+ cmp_state->found = 1;
+ }
+ }
+}
+
+static bool va_has_capability(const char *method)
+{
+ VACmpState cmp_state;
+
+ if (method == NULL) {
+ return false;
+ }
+
+ /* we can assume method introspection is available */
+ if (strcmp(method, "system.listMethods") == 0) {
+ return true;
+ }
+ /* assume hello is available to we can probe for/notify the host
+ * rpc server
+ */
+ if (strcmp(method, "va.hello") == 0) {
+ return true;
+ }
+
+ /* compare method against the last retrieved supported method list */
+ cmp_state.method = method;
+ cmp_state.found = false;
+ if (va_client_data->supported_methods) {
+ qlist_iter(va_client_data->supported_methods,
+ va_cmp_capability_iter,
+ (void *)&cmp_state);
+ }
+
+ return cmp_state.found;
+}
+
+int va_client_init(VAClientData *client_data)
+{
+ client_data->supported_methods = NULL;
+ client_data->enabled = true;
+ va_client_data = client_data;
+
+ return 0;
+}
+
+int va_client_close(void)
+{
+ va_client_data = NULL;
+ return 0;
+}
+
+static int va_rpc_has_error(xmlrpc_env *env)
+{
+ if (env->fault_occurred) {
+ qerror_report(QERR_RPC_FAILED, env->fault_code, env->fault_string);
+ return -1;
+ }
+ return 0;
+}
+
+static bool va_is_enabled(void)
+{
+ return va_client_data && va_client_data->enabled;
+}
+
+static int va_do_rpc(xmlrpc_env *const env, const char *function,
+ xmlrpc_value *params, VAClientCallback *cb,
+ MonitorCompletion *mon_cb, void *mon_data)
+{
+ xmlrpc_mem_block *req_xml;
+ int ret;
+
+ if (!va_is_enabled()) {
+ LOG("virtagent not initialized");
+ ret = -ENOTCONN;
+ }
+
+ if (!va_has_capability(function)) {
+ LOG("guest agent does not have required capability");
+ ret = -ENOSYS;
+ goto out;
+ }
+
+ req_xml = XMLRPC_MEMBLOCK_NEW(char, env, 0);
+ xmlrpc_serialize_call(env, req_xml, function, params);
+ if (va_rpc_has_error(env)) {
+ ret = -EINVAL;
+ goto out_free;
+ }
+
+ ret = va_client_job_add(req_xml, cb, mon_cb, mon_data);
+ if (ret) {
+ goto out_free;
+ }
+
+ return ret;
+out_free:
+ XMLRPC_MEMBLOCK_FREE(char, req_xml);
+out:
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
new file mode 100644
index 0000000..3e4d4fb
--- /dev/null
+++ b/virtagent.h
@@ -0,0 +1,34 @@
+/*
+ * virt-agent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Adam Litke <***@linux.vnet.ibm.com>
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef VIRTAGENT_H
+#define VIRTAGENT_H
+
+#include "monitor.h"
+
+#define VA_GUEST_PATH_VIRTIO_DEFAULT "/dev/virtio-ports/org.qemu.virtagent"
+#define VA_HOST_PATH_DEFAULT "/tmp/virtagent.sock"
+#define VA_MAX_CHUNK_SIZE 4096 /* max bytes at a time for get/send file */
+
+typedef void (VAClientCallback)(const char *resp_data, size_t resp_data_len,
+ MonitorCompletion *mon_cb, void *mon_data);
+typedef struct VAClientData {
+ QList *supported_methods;
+ bool enabled;
+} VAClientData;
+
+int va_client_init(VAClientData *client_data);
+int va_client_close(void);
+
+#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:08 UTC
Permalink
Do-nothing RPC that can be used to "ping" the RPC server

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 17 +++++++++++++++++
1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index d48c61e..ab8994b 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -219,6 +219,19 @@ out_bad:
return NULL;
}

+/* va_ping(): respond to client. response without error in env
+ * variable indicates successful response
+ * rpc return values: none
+ */
+static xmlrpc_value *va_ping(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ xmlrpc_value *result = xmlrpc_build_value(env, "s", "dummy");
+ SLOG("va_ping()");
+ return result;
+}
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
@@ -231,9 +244,13 @@ static RPCFunction guest_functions[] = {
.func_name = "va.getdmesg" },
{ .func = va_shutdown,
.func_name = "va.shutdown" },
+ { .func = va_ping,
+ .func_name = "va.ping" },
{ NULL, NULL }
};
static RPCFunction host_functions[] = {
+ { .func = va_ping,
+ .func_name = "va.ping" },
{ NULL, NULL }
};
--
1.7.0.4
Michael Roth
2011-01-17 13:14:58 UTC
Permalink
This implements a simple state machine to manage client/server rpc
jobs being multiplexed over a single channel.

A client job consists of sending an rpc request, reading an
rpc response, then making the appropriate callbacks. We allow one
client job to be processed at a time, which will make the following
state transitions:

VA_CLIENT_IDLE -> VA_CLIENT_SEND (job queued, send channel open)
VA_CLIENT_SEND -> VA_CLIENT_WAIT (request sent, awaiting response)
VA_CLIENT_WAIT -> VA_CLIENT_IDLE (response recieved, callbacks made)

A server job consists of recieving an rpc request, generating a
response, then sending the response. We expect to receive one server
request at a time due to the 1 at a time restriction for client jobs.
Server jobs make the following transitions:

VA_SERVER_IDLE -> VA_SERVER_WAIT (recieved/executed request, send
channel busy, response deferred)
VA_SERVER_IDLE -> VA_SERVER_SEND (recieved/executed request, send
channel open, sending response)
VA_SERVER_WAIT -> VA_SERVER_SEND (send channel now open, sending
response)
VA_SERVER_SEND -> VA_SERVER_IDLE (response sent)

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-common.c | 613 ++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent-common.h | 71 ++++++
2 files changed, 684 insertions(+), 0 deletions(-)
create mode 100644 virtagent-common.c
create mode 100644 virtagent-common.h

diff --git a/virtagent-common.c b/virtagent-common.c
new file mode 100644
index 0000000..c487252
--- /dev/null
+++ b/virtagent-common.c
@@ -0,0 +1,613 @@
+/*
+ * virtagent - common host/guest RPC functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Adam Litke <***@linux.vnet.ibm.com>
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "virtagent-common.h"
+
+typedef struct VAClientJob {
+ char client_tag[64];
+ xmlrpc_mem_block *req_data;
+ char *resp_data;
+ size_t resp_data_len;
+ VAClientCallback *cb;
+ QTAILQ_ENTRY(VAClientJob) next;
+ /* for use by QMP functions */
+ MonitorCompletion *mon_cb;
+ void *mon_data;
+} VAClientJob;
+
+typedef struct VAServerJob {
+ char client_tag[64];
+ xmlrpc_mem_block *resp_data;
+ char *req_data;
+ size_t req_data_len;
+ void *opaque;
+ QTAILQ_ENTRY(VAServerJob) next;
+} VAServerJob;
+
+enum va_http_status {
+ VA_HTTP_STATUS_NEW,
+ VA_HTTP_STATUS_OK,
+ VA_HTTP_STATUS_ERROR,
+};
+
+enum va_http_type {
+ VA_HTTP_TYPE_UNKNOWN = 1,
+ VA_HTTP_TYPE_REQUEST,
+ VA_HTTP_TYPE_RESPONSE,
+} va_http_type;
+
+typedef void (VAHTSendCallback)(enum va_http_status http_status,
+ const char *content, size_t content_len);
+typedef void (VAHTReadCallback)(enum va_http_status http_status,
+ const char *content, size_t content_len,
+ const char client_tag[64],
+ enum va_http_type http_type);
+typedef struct VAHTState {
+ enum {
+ VA_SEND_START,
+ VA_SEND_HDR,
+ VA_SEND_BODY,
+ VA_READ_START,
+ VA_READ_HDR,
+ VA_READ_BODY,
+ } state;
+ char hdr[VA_HDR_LEN_MAX];
+ char hdr_client_tag[64];
+ size_t hdr_len;
+ size_t hdr_pos;
+ char *content;
+ size_t content_len;
+ size_t content_pos;
+ VAHTSendCallback *send_cb;
+ VAHTReadCallback *read_cb;
+ enum va_http_type http_type;
+} VAHTState;
+
+typedef struct VAState {
+ bool is_host;
+ const char *channel_method;
+ const char *channel_path;
+ int fd;
+ QEMUTimer *client_timer;
+ QEMUTimer *server_timer;
+ enum va_client_state {
+ VA_CLIENT_IDLE = 0,
+ VA_CLIENT_SEND, /* sending rpc request */
+ VA_CLIENT_WAIT, /* waiting for rpc response */
+ } client_state;
+ enum va_server_state {
+ VA_SERVER_IDLE = 0,
+ VA_SERVER_WAIT, /* waiting to send rpc response */
+ VA_SERVER_SEND, /* sending rpc response */
+ } server_state;
+ VAClientData client_data;
+ VAServerData server_data;
+ int client_job_count;
+ int client_jobs_in_flight;
+ QTAILQ_HEAD(, VAClientJob) client_jobs;
+ int server_job_count;
+ QTAILQ_HEAD(, VAServerJob) server_jobs;
+ /* for use by async send/read handlers for fd */
+ VAHTState send_state;
+ VAHTState read_state;
+} VAState;
+
+static VAState *va_state;
+
+static bool va_set_client_state(enum va_client_state client_state);
+static VAServerJob *va_pop_server_job(void);
+static VAClientJob *va_pop_client_job(void);
+static int va_kick(void);
+static int va_connect(void);
+static void va_http_read_handler(void *opaque);
+static void va_http_read_handler_reset(void);
+
+static VAClientJob *va_current_client_job(void)
+{
+ TRACE("called");
+ return QTAILQ_FIRST(&va_state->client_jobs);
+}
+
+static void va_cancel_jobs(void)
+{
+ VAClientJob *cj, *cj_tmp;
+ VAServerJob *sj, *sj_tmp;
+
+ TRACE("called");
+ /* reset read handler, and cancel any current sends */
+ va_http_read_handler_reset();
+ qemu_set_fd_handler(va_state->fd, va_http_read_handler, NULL, NULL);
+
+ /* cancel/remove any queued client jobs */
+ QTAILQ_FOREACH_SAFE(cj, &va_state->client_jobs, next, cj_tmp) {
+ /* issue cb with failure notification */
+ cj->cb(NULL, 0, cj->mon_cb, cj->mon_data);
+ QTAILQ_REMOVE(&va_state->client_jobs, cj, next);
+ }
+ va_state->client_job_count = 0;
+ va_state->client_jobs_in_flight = 0;
+
+ /* cancel/remove any queued server jobs */
+ QTAILQ_FOREACH_SAFE(sj, &va_state->server_jobs, next, sj_tmp) {
+ QTAILQ_REMOVE(&va_state->server_jobs, sj, next);
+ }
+ va_state->server_job_count = 0;
+
+ va_state->client_state = VA_CLIENT_IDLE;
+ va_state->server_state = VA_SERVER_IDLE;
+}
+
+static void va_global_timeout(void *opaque)
+{
+ LOG("time out while handling a client job or sending RPC response");
+ va_cancel_jobs();
+}
+
+static void va_set_client_timeout(int interval)
+{
+ qemu_mod_timer(va_state->client_timer,
+ qemu_get_clock(rt_clock) + interval);
+}
+
+static void va_unset_client_timeout(void)
+{
+ qemu_del_timer(va_state->client_timer);
+}
+
+static void va_set_server_timeout(int interval)
+{
+ qemu_mod_timer(va_state->server_timer,
+ qemu_get_clock(rt_clock) + interval);
+}
+
+static void va_unset_server_timeout(void)
+{
+ qemu_del_timer(va_state->server_timer);
+}
+
+/***********************************************************/
+/* functions for starting/managing client/server rpc jobs */
+
+static int va_send_server_response(VAServerJob *server_job)
+{
+ VAHTState http_state;
+ TRACE("called");
+ http_state.content = XMLRPC_MEMBLOCK_CONTENTS(char, server_job->resp_data);
+ TRACE("sending response: %s", http_state.content);
+ http_state.content_len = XMLRPC_MEMBLOCK_SIZE(char,
+ server_job->resp_data);
+ http_state.content_pos = 0;
+ http_state.hdr_pos = 0;
+ pstrcpy(http_state.hdr_client_tag, 64, server_job->client_tag);
+ http_state.state = VA_SEND_START;
+ http_state.send_cb = va_server_send_cb;
+ va_http_hdr_init(&http_state, VA_HTTP_TYPE_RESPONSE);
+ va_state->send_state = http_state;
+ qemu_set_fd_handler(va_state->fd, va_http_read_handler,
+ va_http_send_handler, NULL);
+ return 0;
+}
+
+static int va_send_client_request(VAClientJob *client_job)
+{
+ VAHTState http_state;
+ TRACE("called");
+ http_state.content = XMLRPC_MEMBLOCK_CONTENTS(char, client_job->req_data);
+ TRACE("sending request: %s", http_state.content);
+ http_state.content_len = XMLRPC_MEMBLOCK_SIZE(char,
+ client_job->req_data);
+ http_state.content_pos = 0;
+ http_state.hdr_pos = 0;
+ http_state.state = VA_SEND_START;
+ http_state.send_cb = va_client_send_cb;
+ pstrcpy(http_state.hdr_client_tag, 64, client_job->client_tag);
+ va_http_hdr_init(&http_state, VA_HTTP_TYPE_REQUEST);
+ va_state->send_state = http_state;
+ qemu_set_fd_handler(va_state->fd, va_http_read_handler,
+ va_http_send_handler, NULL);
+ return 0;
+}
+
+/* do some sanity checks before setting client state */
+static bool va_set_client_state(enum va_client_state client_state)
+{
+ TRACE("setting client state to %d", client_state);
+ switch (client_state) {
+ case VA_CLIENT_IDLE:
+ assert(va_state->client_state == VA_CLIENT_IDLE ||
+ va_state->client_state == VA_CLIENT_WAIT);
+ break;
+ case VA_CLIENT_SEND:
+ assert(va_state->client_state == VA_CLIENT_IDLE);
+ break;
+ case VA_CLIENT_WAIT:
+ assert(va_state->client_state == VA_CLIENT_SEND);
+ break;
+ default:
+ LOG("invalid client state");
+ return false;
+ }
+ va_state->client_state = client_state;
+ return true;
+}
+
+/* do some sanity checks before setting server state */
+static bool va_set_server_state(enum va_server_state server_state)
+{
+ TRACE("setting server state to %d", server_state);
+ switch (server_state) {
+ case VA_SERVER_IDLE:
+ assert(va_state->server_state == VA_SERVER_IDLE ||
+ va_state->server_state == VA_SERVER_SEND);
+ break;
+ case VA_SERVER_WAIT:
+ assert(va_state->server_state == VA_SERVER_IDLE);
+ break;
+ case VA_SERVER_SEND:
+ assert(va_state->server_state == VA_SERVER_IDLE ||
+ va_state->server_state == VA_SERVER_WAIT);
+ break;
+ default:
+ LOG("invalid server state");
+ return false;
+ }
+ va_state->server_state = server_state;
+ return true;
+}
+
+/* xmit the next client/server job. for the client this entails sending
+ * a request to the remote server. for the server this entails sending a
+ * response to the remote client
+ *
+ * currently we only do one client job or one server job at a time. for
+ * situations where we start a client job but recieve a server job (remote
+ * rpc request) we go ahead and handle the server job before returning to
+ * handling the client job. TODO: there is potential for pipelining
+ * requests/responses for more efficient use of the channel.
+ *
+ * in all cases, we can only kick off client requests or server responses
+ * when the send side of the channel is not being used
+ */
+static int va_kick(void)
+{
+ VAServerJob *server_job;
+ VAClientJob *client_job;
+ int ret;
+
+ TRACE("called");
+
+ /* handle server jobs first */
+ if (QTAILQ_EMPTY(&va_state->server_jobs)) {
+ assert(va_set_server_state(VA_SERVER_IDLE));
+ } else {
+ TRACE("handling server job queue");
+ if (va_state->client_state == VA_CLIENT_SEND) {
+ TRACE("send channel busy, deferring till available");
+ assert(va_set_server_state(VA_SERVER_WAIT));
+ goto out;
+ }
+ if (va_state->server_state == VA_SERVER_SEND) {
+ TRACE("current server job already sending");
+ goto out;
+ }
+ TRACE("send server response");
+ server_job = QTAILQ_FIRST(&va_state->server_jobs);
+
+ /* set up the send handler for the response */
+ ret = va_send_server_response(server_job);
+ if (ret != 0) {
+ LOG("error setting up send handler for server response");
+ goto out_bad;
+ }
+ assert(va_set_server_state(VA_SERVER_SEND));
+ va_set_server_timeout(VA_SERVER_TIMEOUT_MS);
+ goto out;
+ }
+
+ /* handle client jobs if nothing to do for server */
+ if (QTAILQ_EMPTY(&va_state->client_jobs)) {
+ assert(va_set_client_state(VA_CLIENT_IDLE));
+ } else {
+ TRACE("handling client job queue");
+ /* TODO: this limits the ability to pipeline. modify this logic
+ * and update state machine accordingly
+ */
+ if (va_state->client_state != VA_CLIENT_IDLE) {
+ TRACE("client job in progress, returning");
+ goto out;
+ }
+
+ /* We know the other end cannot queue up more than VA_SERVER_JOBS_MAX
+ * before it will begin dropping jobs/data to avoid unbounded memory
+ * utilization, so don't try to send more than this many jobs at a time.
+ * In the future we should obtain the actual value of the other end's
+ * VA_SERVER_JOBS_MAX via an introspection call of some sort in case
+ * this value changes in the future.
+ *
+ * XXX: this won't be relevant until the state machine is modified to
+ * allow pipelining requests.
+ */
+ if (va_state->client_jobs_in_flight >= VA_SERVER_JOBS_MAX) {
+ TRACE("too many client jobs in flight, returning");
+ goto out;
+ }
+ TRACE("sending new client request");
+ client_job = QTAILQ_FIRST(&va_state->client_jobs);
+ /* set up the send handler for the request, then put it on the
+ * wait queue till response is read
+ */
+ ret = va_send_client_request(client_job);
+ if (ret != 0) {
+ LOG("error setting up sendhandler for client request");
+ goto out_bad;
+ }
+ assert(va_set_client_state(VA_CLIENT_SEND));
+ va_state->client_jobs_in_flight++;
+ va_set_client_timeout(VA_CLIENT_TIMEOUT_MS);
+ }
+
+out:
+ return 0;
+out_bad:
+ return ret;
+}
+
+/* push new client job onto queue, */
+static int va_push_client_job(VAClientJob *client_job)
+{
+ TRACE("called");
+ assert(client_job != NULL);
+ if (va_state->client_job_count >= VA_CLIENT_JOBS_MAX) {
+ LOG("client job queue limit exceeded");
+ return -ENOBUFS;
+ }
+ QTAILQ_INSERT_TAIL(&va_state->client_jobs, client_job, next);
+ va_state->client_job_count++;
+
+ return va_kick();
+}
+
+/* pop client job off queue. this should only be done when we're done with
+ * both sending the request and recieving the response
+ */
+static VAClientJob *va_pop_client_job(void)
+{
+ VAClientJob *client_job = va_current_client_job();
+ TRACE("called");
+ if (client_job != NULL) {
+ QTAILQ_REMOVE(&va_state->client_jobs, client_job, next);
+ va_state->client_job_count--;
+ assert(va_set_client_state(VA_CLIENT_IDLE));
+ }
+ return client_job;
+}
+
+/* push new server job onto the queue */
+static int va_push_server_job(VAServerJob *server_job)
+{
+ TRACE("called");
+ if (va_state->server_job_count >= VA_SERVER_JOBS_MAX) {
+ LOG("server job queue limit exceeded");
+ return -ENOBUFS;
+ }
+ QTAILQ_INSERT_TAIL(&va_state->server_jobs, server_job, next);
+ va_state->server_job_count++;
+ return va_kick();
+}
+
+/* pop server job off queue. this should only be done when we're ready to
+ * send the rpc response back to the remote client
+ */
+static VAServerJob *va_pop_server_job(void) {
+ VAServerJob *server_job = QTAILQ_FIRST(&va_state->server_jobs);
+ TRACE("called");
+ if (server_job != NULL) {
+ QTAILQ_REMOVE(&va_state->server_jobs, server_job, next);
+ va_state->server_job_count--;
+ assert(va_set_server_state(VA_SERVER_IDLE));
+ }
+
+ return server_job;
+}
+
+static VAClientJob *va_client_job_new(xmlrpc_mem_block *req_data,
+ VAClientCallback *cb,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ VAClientJob *cj = qemu_mallocz(sizeof(VAClientJob));
+ TRACE("called");
+ cj->req_data = req_data;
+ cj->cb = cb;
+ cj->mon_cb = mon_cb;
+ cj->mon_data = mon_data;
+ /* TODO: use uuid's, or something akin */
+ strcpy(cj->client_tag, "testtag");
+
+ return cj;
+}
+
+static VAServerJob *va_server_job_new(xmlrpc_mem_block *resp_data,
+ const char client_tag[64])
+{
+ VAServerJob *sj = qemu_mallocz(sizeof(VAServerJob));
+ TRACE("called");
+ sj->resp_data = resp_data;
+ pstrcpy(sj->client_tag, 64, client_tag);
+
+ return sj;
+}
+
+/* create new client job and then put it on the queue. this can be
+ * called externally from virtagent. Since there can only be one virtagent
+ * instance we access state via an object-scoped global rather than pass
+ * it around.
+ *
+ * if this is successful virtagent will handle cleanup of req_xml after
+ * making the appropriate callbacks, otherwise caller should handle it
+ */
+int va_client_job_add(xmlrpc_mem_block *req_xml, VAClientCallback *cb,
+ MonitorCompletion *mon_cb, void *mon_data)
+{
+ int ret;
+ VAClientJob *client_job;
+ TRACE("called");
+
+ client_job = va_client_job_new(req_xml, cb, mon_cb, mon_data);
+ if (client_job == NULL) {
+ return -EINVAL;
+ }
+
+ ret = va_push_client_job(client_job);
+ if (ret != 0) {
+ LOG("error adding client to queue: %s", strerror(ret));
+ qemu_free(client_job);
+ return ret;
+ }
+
+ return 0;
+}
+
+/* create new server job and then put it on the queue in wait state */
+int va_server_job_add(xmlrpc_mem_block *resp_xml, const char client_tag[64])
+{
+ VAServerJob *server_job;
+ TRACE("called");
+
+ server_job = va_server_job_new(resp_xml, client_tag);
+ assert(server_job != NULL);
+ va_push_server_job(server_job);
+ return 0;
+}
+
+static int va_connect(void)
+{
+ QemuOpts *opts;
+ int fd, ret = 0;
+
+ TRACE("called");
+ if (va_state->channel_method == NULL) {
+ LOG("no channel method specified");
+ return -EINVAL;
+ }
+ if (va_state->channel_path == NULL) {
+ LOG("no channel path specified");
+ return -EINVAL;
+ }
+
+ if (strcmp(va_state->channel_method, "unix-connect") == 0) {
+ TRACE("connecting to %s", va_state->channel_path);
+ opts = qemu_opts_create(qemu_find_opts("chardev"), NULL, 0);
+ qemu_opt_set(opts, "path", va_state->channel_path);
+ fd = unix_connect_opts(opts);
+ if (fd == -1) {
+ qemu_opts_del(opts);
+ LOG("error opening channel: %s", strerror(errno));
+ return -errno;
+ }
+ qemu_opts_del(opts);
+ socket_set_nonblock(fd);
+ } else if (strcmp(va_state->channel_method, "virtio-serial") == 0) {
+ if (va_state->is_host) {
+ LOG("specified channel method not available for host");
+ return -EINVAL;
+ }
+ if (va_state->channel_path == NULL) {
+ va_state->channel_path = VA_GUEST_PATH_VIRTIO_DEFAULT;
+ }
+ TRACE("opening %s", va_state->channel_path);
+ fd = qemu_open(va_state->channel_path, O_RDWR);
+ if (fd == -1) {
+ LOG("error opening channel: %s", strerror(errno));
+ return -errno;
+ }
+ ret = fcntl(fd, F_GETFL);
+ if (ret < 0) {
+ LOG("error getting channel flags: %s", strerror(errno));
+ return -errno;
+ }
+ ret = fcntl(fd, F_SETFL, ret | O_ASYNC);
+ if (ret < 0) {
+ LOG("error setting channel flags: %s", strerror(errno));
+ return -errno;
+ }
+ } else {
+ LOG("invalid channel method");
+ return -EINVAL;
+ }
+
+ va_state->fd = fd;
+ return 0;
+}
+
+int va_init(VAContext ctx)
+{
+ VAState *s;
+ int ret;
+
+ TRACE("called");
+ if (va_state) {
+ LOG("virtagent already initialized");
+ return -EPERM;
+ }
+
+ s = qemu_mallocz(sizeof(VAState));
+
+ ret = va_server_init(&s->server_data, ctx.is_host);
+ if (ret) {
+ LOG("error initializing virtagent server");
+ goto out_bad;
+ }
+ ret = va_client_init(&s->client_data);
+ if (ret) {
+ LOG("error initializing virtagent client");
+ goto out_bad;
+ }
+
+ s->client_timer = qemu_new_timer(rt_clock, va_global_timeout, NULL);
+ s->server_timer = qemu_new_timer(rt_clock, va_global_timeout, NULL);
+ s->client_state = VA_CLIENT_IDLE;
+ s->client_job_count = 0;
+ s->client_jobs_in_flight = 0;
+ s->server_state = VA_SERVER_IDLE;
+ s->server_job_count = 0;
+ QTAILQ_INIT(&s->client_jobs);
+ QTAILQ_INIT(&s->server_jobs);
+ s->read_state.state = VA_READ_START;
+ s->read_state.read_cb = va_http_read_cb;
+ s->channel_method = ctx.channel_method;
+ s->channel_path = ctx.channel_path;
+ s->is_host = ctx.is_host;
+ va_state = s;
+
+ /* connect to our end of the channel */
+ ret = va_connect();
+ if (ret) {
+ LOG("error connecting to channel");
+ goto out_bad;
+ }
+
+ /* start listening for requests/responses */
+ qemu_set_fd_handler(va_state->fd, va_http_read_handler, NULL, NULL);
+
+ if (!va_state->is_host) {
+ /* tell the host the agent is running */
+ va_send_hello();
+ }
+
+ return 0;
+out_bad:
+ qemu_free(s);
+ return ret;
+}
diff --git a/virtagent-common.h b/virtagent-common.h
new file mode 100644
index 0000000..568df5a
--- /dev/null
+++ b/virtagent-common.h
@@ -0,0 +1,71 @@
+/*
+ * virt-agent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Adam Litke <***@linux.vnet.ibm.com>
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#ifndef VIRTAGENT_COMMON_H
+#define VIRTAGENT_COMMON_H
+
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/client.h>
+#include <xmlrpc-c/server.h>
+#include "qemu-common.h"
+#include "qemu_socket.h"
+#include "qemu-timer.h"
+#include "monitor.h"
+#include "virtagent-server.h"
+#include "virtagent.h"
+
+#define DEBUG_VA
+
+#ifdef DEBUG_VA
+#define TRACE(msg, ...) do { \
+ fprintf(stderr, "%s:%s():L%d: " msg "\n", \
+ __FILE__, __FUNCTION__, __LINE__, ## __VA_ARGS__); \
+} while(0)
+#else
+#define TRACE(msg, ...) \
+ do { } while (0)
+#endif
+
+#define LOG(msg, ...) do { \
+ fprintf(stderr, "%s:%s(): " msg "\n", \
+ __FILE__, __FUNCTION__, ## __VA_ARGS__); \
+} while(0)
+
+#define VA_VERSION "1.0"
+#define EOL "\r\n"
+
+#define VA_HDR_LEN_MAX 4096 /* http header limit */
+#define VA_CONTENT_LEN_MAX 2*1024*1024 /* rpc/http send limit */
+#define VA_CLIENT_JOBS_MAX 5 /* max client rpcs we can queue */
+#define VA_SERVER_JOBS_MAX 5 /* max server rpcs we can queue */
+#define VA_SERVER_TIMEOUT_MS 5 * 1000
+#define VA_CLIENT_TIMEOUT_MS 5 * 1000
+
+typedef struct VAContext {
+ bool is_host;
+ const char *channel_method;
+ const char *channel_path;
+} VAContext;
+
+enum va_job_status {
+ VA_JOB_STATUS_PENDING = 0,
+ VA_JOB_STATUS_OK,
+ VA_JOB_STATUS_ERROR,
+ VA_JOB_STATUS_CANCELLED,
+};
+
+int va_init(VAContext ctx);
+int va_client_job_add(xmlrpc_mem_block *req_xml, VAClientCallback *cb,
+ MonitorCompletion *mon_cb, void *mon_data);
+int va_server_job_add(xmlrpc_mem_block *resp_xml, const char client_tag[64]);
+#endif /* VIRTAGENT_COMMON_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:12 UTC
Permalink
This RPC tells us the guest agent is up and ready, and invokes guest
agent capability negotiation

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 22 ++++++++++++++++++++++
1 files changed, 22 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index ab8994b..b7e51ed 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -232,6 +232,26 @@ static xmlrpc_value *va_ping(xmlrpc_env *env,
return result;
}

+/* va_hello(): handle client startup notification
+ * rpc return values: none
+ */
+
+static xmlrpc_value *va_hello(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ xmlrpc_value *result;
+ int ret;
+ TRACE("called");
+ SLOG("va_hello()");
+ result = xmlrpc_build_value(env, "s", "dummy");
+ ret = va_client_init_capabilities();
+ if (ret < 0) {
+ LOG("error setting initializing client capabilities");
+ }
+ return result;
+}
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
@@ -251,6 +271,8 @@ static RPCFunction guest_functions[] = {
static RPCFunction host_functions[] = {
{ .func = va_ping,
.func_name = "va.ping" },
+ { .func = va_hello,
+ .func_name = "va.hello" },
{ NULL, NULL }
};
--
1.7.0.4
Michael Roth
2011-01-17 13:15:17 UTC
Permalink
Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
Makefile.target | 2 +-
configure | 32 ++++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+), 1 deletions(-)

diff --git a/Makefile.target b/Makefile.target
index e15b1c4..8564a8f 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -186,7 +186,7 @@ endif #CONFIG_BSD_USER
# System emulator target
ifdef CONFIG_SOFTMMU

-obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o
+obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o virtagent.o virtagent-server.o virtagent-common.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-$(CONFIG_NO_PCI) += pci-stub.o
diff --git a/configure b/configure
index 438219b..4814351 100755
--- a/configure
+++ b/configure
@@ -1271,6 +1271,37 @@ EOF
fi

##########################################
+# xmlrpc-c probe
+
+discard_cmake_vars() {
+ echo "$@" | sed 's/@.*@//g'
+}
+
+# Look for the xmlrpc-c config program
+if test -n "$cross_prefix" && has ${cross_prefix}xmlrpc-c-config; then
+ xmlrpccconfig=${cross_prefix}xmlrpc-c-config
+elif has xmlrpc-c-config; then
+ xmlrpccconfig=xmlrpc-c-config
+else
+ feature_not_found "xmlrpc-c"
+fi
+
+cat > $TMPC << EOF
+#include <xmlrpc.h>
+int main(void) { xmlrpc_env env; xmlrpc_env_init(&env); return 0; }
+EOF
+xmlrpc_cflags=`$xmlrpccconfig --cflags 2> /dev/null`
+xmlrpc_cflags=`discard_cmake_vars $xmlrpc_cflags`
+xmlrpc_libs=`$xmlrpccconfig client server-util --libs 2> /dev/null`
+xmlrpc_libs=`discard_cmake_vars $xmlrpc_libs`
+if compile_prog "$xmlrpc_cflags" "$xmlrpc_libs"; then
+ libs_softmmu="$xmlrpc_libs $libs_softmmu"
+ libs_tools="$xmlrpc_libs $libs_tools"
+else
+ feature_not_found "xmlrpc-c"
+fi
+
+##########################################
# VNC TLS detection
if test "$vnc_tls" != "no" ; then
cat > $TMPC <<EOF
@@ -2368,6 +2399,7 @@ if test "$softmmu" = yes ; then
tools="qemu-img\$(EXESUF) qemu-io\$(EXESUF) $tools"
if [ "$linux" = "yes" -o "$bsd" = "yes" -o "$solaris" = "yes" ] ; then
tools="qemu-nbd\$(EXESUF) $tools"
+ tools="qemu-va\$(EXESUF) $tools"
if [ "$check_utests" = "yes" ]; then
tools="check-qint check-qstring check-qdict check-qlist $tools"
tools="check-qfloat check-qjson $tools"
--
1.7.0.4
Jes Sorensen
2011-01-24 10:24:04 UTC
Permalink
Post by Michael Roth
---
Makefile.target | 2 +-
configure | 32 ++++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+), 1 deletions(-)
Please make building qemu-va optional, so the build doesn't break if one
doesn't have xmlrpc-devel installed.

Cheers,
Jes
Michael Roth
2011-01-17 13:15:06 UTC
Permalink
RPC to initiate guest reboot/halt/powerdown

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index f97e4b1..d48c61e 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -163,6 +163,62 @@ EXIT_NOCLOSE:
return result;
}

+/* va_shutdown(): initiate guest shutdown
+ * rpc return values: none
+ */
+static xmlrpc_value *va_shutdown(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ int ret;
+ const char *shutdown_type, *shutdown_flag;
+ xmlrpc_value *result = xmlrpc_build_value(env, "s", "dummy");
+
+ TRACE("called");
+ xmlrpc_decompose_value(env, params, "(s)", &shutdown_type);
+ if (env->fault_occurred) {
+ goto out_bad;
+ }
+
+ if (strcmp(shutdown_type, "halt") == 0) {
+ shutdown_flag = "-H";
+ } else if (strcmp(shutdown_type, "powerdown") == 0) {
+ shutdown_flag = "-P";
+ } else if (strcmp(shutdown_type, "reboot") == 0) {
+ shutdown_flag = "-r";
+ } else {
+ xmlrpc_faultf(env, "invalid shutdown type: %s", shutdown_type);
+ goto out_bad;
+ }
+
+ SLOG("va_shutdown(), shutdown_type:%s", shutdown_type);
+
+ ret = fork();
+ if (ret == 0) {
+ /* child, start the shutdown */
+ setsid();
+ fclose(stdin);
+ fclose(stdout);
+ fclose(stderr);
+
+ sleep(5);
+ ret = execl("/sbin/shutdown", "shutdown", shutdown_flag, "+0",
+ "hypervisor initiated shutdown", (char*)NULL);
+ if (ret < 0) {
+ LOG("execl() failed: %s", strerror(errno));
+ exit(1);
+ }
+ TRACE("shouldn't be here");
+ exit(0);
+ } else if (ret < 0) {
+ xmlrpc_faultf(env, "fork() failed: %s", strerror(errno));
+ }
+
+ return result;
+out_bad:
+ return NULL;
+}
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
@@ -173,6 +229,8 @@ static RPCFunction guest_functions[] = {
.func_name = "va.getfile" },
{ .func = va_getdmesg,
.func_name = "va.getdmesg" },
+ { .func = va_shutdown,
+ .func_name = "va.shutdown" },
{ NULL, NULL }
};
static RPCFunction host_functions[] = {
--
1.7.0.4
Michael Roth
2011-01-17 13:15:10 UTC
Permalink
Call guest agent's built-in introspection functions to get a list of
supported RPCs, and re-negotiate guest agent capabilities to determine
what agent_* commands are supported.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
hmp-commands.hx | 16 +++++++++
qmp-commands.hx | 32 ++++++++++++++++++
virtagent.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 3 ++
4 files changed, 149 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 6679771..52d4821 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1392,6 +1392,22 @@ STEXI
Ping a guest
ETEXI

+ {
+ .name = "agent_capabilities",
+ .args_type = "",
+ .params = "",
+ .help = "Fetch and re-negotiate guest agent capabilities",
+ .user_print = do_agent_capabilities_print,
+ .mhandler.cmd_async = do_agent_capabilities,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_capabilities
+@findex agent_capabilities
+Fetch and re-negotiate guest agent capabilties
+ETEXI
+
STEXI
@end table
ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index be6f485..5a14191 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -927,6 +927,38 @@ Example:
EQMP

{
+ .name = "agent_capabilities",
+ .args_type = "",
+ .params = "",
+ .help = "Fetch and re-negotiate guest agent capabilities",
+ .user_print = do_agent_capabilities_print,
+ .mhandler.cmd_async = do_agent_capabilities,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_capabilities
+@findex agent_capabilities
+Fetch and re-negotiate guest agent capabilties
+ETEXI
+SQMP
+agent_capabilities
+--------
+
+Fetch and re-negotiate guest agent capabilities
+
+Arguments:
+
+(none)
+
+Example:
+
+-> { "execute": "agent_capabilities" }
+<- { "return":["va.shutdown", "va.getdmesg", "va.getfile", ... ] }
+
+EQMP
+
+ {
.name = "qmp_capabilities",
.args_type = "",
.params = "",
diff --git a/virtagent.c b/virtagent.c
index 4b6b7d1..99efe2b 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -477,3 +477,101 @@ int do_agent_ping(Monitor *mon, const QDict *mon_params,
xmlrpc_DECREF(params);
return ret;
}
+
+static void va_print_capability_iter(QObject *obj, void *opaque)
+{
+ Monitor *mon = opaque;
+ QString *method = qobject_to_qstring(obj);
+ const char *method_str;
+
+ if (method) {
+ method_str = qstring_get_str(method);
+ monitor_printf(mon, "%s\n", method_str);
+ }
+}
+
+void do_agent_capabilities_print(Monitor *mon, const QObject *data)
+{
+ QList *qlist;
+
+ TRACE("called");
+
+ monitor_printf(mon, "the following RPC methods are supported by the guest agent:\n");
+ qlist = qobject_to_qlist(data);
+ qlist_iter(qlist, va_print_capability_iter, mon);
+}
+
+static void do_agent_capabilities_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ xmlrpc_value *cur_val = NULL;
+ const char *cur_method = NULL;
+ xmlrpc_env env;
+ QList *qlist = qlist_new();
+ int i;
+
+ TRACE("called");
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ goto out_no_resp;
+ }
+
+ TRACE("resp = %s\n", resp_data);
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ goto out_no_resp;
+ }
+
+ /* extract the list of supported RPCs */
+ for (i = 0; i < xmlrpc_array_size(&env, resp); i++) {
+ xmlrpc_array_read_item(&env, resp, i, &cur_val);
+ xmlrpc_read_string(&env, cur_val, &cur_method);
+ if (cur_method) {
+ TRACE("cur_method: %s", cur_method);
+ qlist_append_obj(qlist, QOBJECT(qstring_from_str(cur_method)));
+ }
+ xmlrpc_DECREF(cur_val);
+ }
+
+ /* set our client capabilities accordingly */
+ va_set_capabilities(qlist);
+
+ xmlrpc_DECREF(resp);
+out_no_resp:
+ if (mon_cb) {
+ mon_cb(mon_data, QOBJECT(qlist));
+ }
+ qobject_decref(QOBJECT(qlist));
+}
+
+/*
+ * do_agent_capabilities(): Fetch/re-negotiate guest agent capabilities
+ */
+int do_agent_capabilities(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ int ret;
+
+ xmlrpc_env_init(&env);
+
+ params = xmlrpc_build_value(&env, "()");
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "system.listMethods", params,
+ do_agent_capabilities_cb, cb, opaque);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index 7d3a122..da70317 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -41,5 +41,8 @@ int do_agent_shutdown(Monitor *mon, const QDict *mon_params,
void do_agent_ping_print(Monitor *mon, const QObject *qobject);
int do_agent_ping(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
+void do_agent_capabilities_print(Monitor *mon, const QObject *qobject);
+int do_agent_capabilities(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:09 UTC
Permalink
Monitor command to ping the RPC server.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
hmp-commands.hx | 16 ++++++++++++
qmp-commands.hx | 32 +++++++++++++++++++++++
virtagent.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 3 ++
4 files changed, 125 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 0a8c500..6679771 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1376,6 +1376,22 @@ STEXI
Shutdown/reboot a guest locally
ETEXI

+ {
+ .name = "agent_ping",
+ .args_type = "",
+ .params = "",
+ .help = "Ping a guest",
+ .user_print = do_agent_ping_print,
+ .mhandler.cmd_async = do_agent_ping,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_ping
+@findex agent_ping
+Ping a guest
+ETEXI
+
STEXI
@end table
ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 98d7270..be6f485 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -895,6 +895,38 @@ Example:
EQMP

{
+ .name = "agent_ping",
+ .args_type = "",
+ .params = "",
+ .help = "Ping a guest",
+ .user_print = do_agent_ping_print,
+ .mhandler.cmd_async = do_agent_ping,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_ping
+@findex agent_ping
+Ping a guest
+ETEXI
+SQMP
+agent_ping
+--------
+
+Ping a guest
+
+Arguments:
+
+(none)
+
+Example:
+
+-> { "execute": "agent_ping" }
+<- { "return": { "response":"ok" } }
+
+EQMP
+
+ {
.name = "qmp_capabilities",
.args_type = "",
.params = "",
diff --git a/virtagent.c b/virtagent.c
index 27700fb..4b6b7d1 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -403,3 +403,77 @@ int do_agent_shutdown(Monitor *mon, const QDict *mon_params,
xmlrpc_DECREF(params);
return ret;
}
+
+void do_agent_ping_print(Monitor *mon, const QObject *data)
+{
+ QDict *qdict;
+ const char *response;
+
+ TRACE("called");
+
+ qdict = qobject_to_qdict(data);
+ response = qdict_get_str(qdict, "response");
+ if (qdict_haskey(qdict, "response")) {
+ monitor_printf(mon, "%s", response);
+ }
+
+ monitor_printf(mon, "\n");
+}
+
+static void do_agent_ping_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ xmlrpc_env env;
+ QDict *qdict = qdict_new();
+
+ TRACE("called");
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ qdict_put(qdict, "response", qstring_from_str("error"));
+ goto out_no_resp;
+ }
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ qdict_put(qdict, "response", qstring_from_str("error"));
+ goto out_no_resp;
+ }
+ qdict_put(qdict, "response", qstring_from_str("ok"));
+
+ xmlrpc_DECREF(resp);
+out_no_resp:
+ if (mon_cb) {
+ mon_cb(mon_data, QOBJECT(qdict));
+ }
+ qobject_decref(QOBJECT(qdict));
+}
+
+/*
+ * do_agent_ping(): Ping a guest
+ */
+int do_agent_ping(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ int ret;
+
+ xmlrpc_env_init(&env);
+
+ params = xmlrpc_build_value(&env, "(n)");
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "va.ping", params, do_agent_ping_cb, cb, opaque);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index 8281b90..7d3a122 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -38,5 +38,8 @@ int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
int do_agent_shutdown(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
+void do_agent_ping_print(Monitor *mon, const QObject *qobject);
+int do_agent_ping(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:11 UTC
Permalink
Non-monitor version of agent_capabilities monitor function. This is
called by the local RPC server when it gets a "hello" from the guest
agent to re-negotiate guest agent capabilities.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent.c | 18 ++++++++++++++++++
virtagent.h | 1 +
2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/virtagent.c b/virtagent.c
index 99efe2b..3ea6b85 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -575,3 +575,21 @@ int do_agent_capabilities(Monitor *mon, const QDict *mon_params,
xmlrpc_DECREF(params);
return ret;
}
+
+/* non-HMP/QMP RPC client functions */
+
+int va_client_init_capabilities(void)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+
+ xmlrpc_env_init(&env);
+
+ params = xmlrpc_build_value(&env, "()");
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ return va_do_rpc(&env, "system.listMethods", params,
+ do_agent_capabilities_cb, NULL, NULL);
+}
diff --git a/virtagent.h b/virtagent.h
index da70317..127585b 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -44,5 +44,6 @@ int do_agent_ping(Monitor *mon, const QDict *mon_params,
void do_agent_capabilities_print(Monitor *mon, const QObject *qobject);
int do_agent_capabilities(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
+int va_client_init_capabilities(void);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:13 UTC
Permalink
This tells the host RPC server (QEMU) that we're up and running

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 1 +
2 files changed, 48 insertions(+), 0 deletions(-)

diff --git a/virtagent.c b/virtagent.c
index 3ea6b85..b5e7944 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -593,3 +593,50 @@ int va_client_init_capabilities(void)
return va_do_rpc(&env, "system.listMethods", params,
do_agent_capabilities_cb, NULL, NULL);
}
+
+static void va_send_hello_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ xmlrpc_env env;
+
+ TRACE("called");
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ return;
+ }
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ LOG("error parsing RPC response");
+ return;
+ }
+
+ xmlrpc_DECREF(resp);
+}
+
+int va_send_hello(void)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ int ret;
+
+ TRACE("called");
+
+ xmlrpc_env_init(&env);
+ params = xmlrpc_build_value(&env, "()");
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "va.hello", params, va_send_hello_cb, NULL, NULL);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index 127585b..dba90d0 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -45,5 +45,6 @@ void do_agent_capabilities_print(Monitor *mon, const QObject *qobject);
int do_agent_capabilities(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
int va_client_init_capabilities(void);
+int va_send_hello(void);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:14 UTC
Permalink
Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 37 +++++++++++++++++++++++++++++++++++++
1 files changed, 37 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index b7e51ed..5961905 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -252,6 +252,10 @@ static xmlrpc_value *va_hello(xmlrpc_env *env,
return result;
}

+static xmlrpc_value *va_capabilities(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data);
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
@@ -266,6 +270,8 @@ static RPCFunction guest_functions[] = {
.func_name = "va.shutdown" },
{ .func = va_ping,
.func_name = "va.ping" },
+ { .func = va_capabilities,
+ .func_name = "va.capabilities" },
{ NULL, NULL }
};
static RPCFunction host_functions[] = {
@@ -273,6 +279,8 @@ static RPCFunction host_functions[] = {
.func_name = "va.ping" },
{ .func = va_hello,
.func_name = "va.hello" },
+ { .func = va_capabilities,
+ .func_name = "va.capabilities" },
{ NULL, NULL }
};

@@ -287,6 +295,35 @@ static void va_register_functions(xmlrpc_env *env, xmlrpc_registry *registry,
}
}

+/* va_capabilities(): return server capabilities
+ * rpc return values:
+ * - version: virtagent version
+ * - methods: list of supported RPCs
+ */
+static xmlrpc_value *va_capabilities(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ int i;
+ xmlrpc_value *result = NULL, *methods;
+ RPCFunction *func_list = va_server_data->is_host ?
+ host_functions : guest_functions;
+
+ /* provide a list of supported RPCs. we don't want to rely on
+ * system.methodList since introspection methods won't support
+ * client metadata, which we may eventually come to rely upon
+ */
+ methods = xmlrpc_array_new(env);
+ for (i = 0; func_list[i].func != NULL; ++i) {
+ xmlrpc_array_append_item(env, methods,
+ xmlrpc_string_new(env, func_list[i].func_name));
+ }
+
+ result = xmlrpc_build_value(env, "{s:s,s:A}", "version", VA_VERSION,
+ "methods", methods);
+ return result;
+}
+
int va_server_init(VAServerData *server_data, bool is_host)
{
RPCFunction *func_list = is_host ? host_functions : guest_functions;
--
1.7.0.4
Michael Roth
2011-01-17 13:15:15 UTC
Permalink
Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
Makefile | 4 +-
qemu-va.c | 238 ++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent-common.h | 1 +
3 files changed, 242 insertions(+), 1 deletions(-)
create mode 100644 qemu-va.c

diff --git a/Makefile b/Makefile
index 6d601ee..f1f4d18 100644
--- a/Makefile
+++ b/Makefile
@@ -151,7 +151,7 @@ version-obj-$(CONFIG_WIN32) += version.o
######################################################################

qemu-img.o: qemu-img-cmds.h
-qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o: $(GENERATED_HEADERS)
+qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-va.o: $(GENERATED_HEADERS)

qemu-img$(EXESUF): qemu-img.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o

@@ -159,6 +159,8 @@ qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-ob

qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o

+qemu-va$(EXESUF): qemu-va.o virtagent.o virtagent-server.o virtagent-common.o qemu-tool.o qemu-error.o qemu-sockets.c $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o qemu-timer.o
+
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/hxtool -h < $< > $@," GEN $@")

diff --git a/qemu-va.c b/qemu-va.c
new file mode 100644
index 0000000..5f1e2ab
--- /dev/null
+++ b/qemu-va.c
@@ -0,0 +1,238 @@
+/*
+ * virtagent - QEMU guest agent
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include <getopt.h>
+#include <err.h>
+#include "qemu-ioh.h"
+#include "qemu-tool.h"
+#include "virtagent-common.h"
+
+static bool verbose_enabled;
+#define DEBUG_ENABLED
+
+#ifdef DEBUG_ENABLED
+#define DEBUG(msg, ...) do { \
+ fprintf(stderr, "%s:%s():L%d: " msg "\n", \
+ __FILE__, __FUNCTION__, __LINE__, ## __VA_ARGS__); \
+} while(0)
+#else
+#define DEBUG(msg, ...) do {} while (0)
+#endif
+
+#define INFO(msg, ...) do { \
+ if (!verbose_enabled) { \
+ break; \
+ } \
+ warnx(msg, ## __VA_ARGS__); \
+} while(0)
+
+/* mirror qemu I/O loop for standalone daemon */
+static void main_loop_wait(int nonblocking)
+{
+ fd_set rfds, wfds, xfds;
+ int ret, nfds;
+ struct timeval tv;
+ int timeout = 100000;
+
+ if (nonblocking) {
+ timeout = 0;
+ }
+
+ /* poll any events */
+ nfds = -1;
+ FD_ZERO(&rfds);
+ FD_ZERO(&wfds);
+ FD_ZERO(&xfds);
+ qemu_get_fdset(&nfds, &rfds, &wfds, &xfds);
+
+ tv.tv_sec = timeout / 1000;
+ tv.tv_usec = (timeout % 1000) * 1000;
+
+ ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
+
+ if (ret > 0) {
+ qemu_process_fd_handlers(&rfds, &wfds, &xfds);
+ }
+
+ DEBUG("running timers...");
+ qemu_run_all_timers();
+}
+
+static void usage(const char *cmd)
+{
+ printf(
+"Usage: %s -c <channel_opts>\n"
+"QEMU virtagent guest agent %s\n"
+"\n"
+" -c, --channel channel method: one of unix-connect, virtio-serial, or\n"
+" isa-serial\n"
+" -p, --path channel path\n"
+" -v, --verbose display extra debugging information\n"
+" -d, --daemonize become a daemon\n"
+" -h, --help display this help and exit\n"
+"\n"
+"Report bugs to <***@linux.vnet.ibm.com>\n"
+ , cmd, VA_VERSION);
+}
+
+static int init_virtagent(const char *method, const char *path) {
+ VAContext ctx;
+ int ret;
+
+ INFO("initializing agent...");
+
+ if (method == NULL) {
+ /* try virtio-serial as our default */
+ method = "virtio-serial";
+ }
+
+ if (path == NULL) {
+ if (strcmp(method, "virtio-serial")) {
+ errx(EXIT_FAILURE, "must specify a path for this channel");
+ }
+ /* try the default name for the virtio-serial port */
+ path = VA_GUEST_PATH_VIRTIO_DEFAULT;
+ }
+
+ /* initialize virtagent */
+ ctx.is_host = false;
+ ctx.channel_method = method;
+ ctx.channel_path = path;
+ ret = va_init(ctx);
+ if (ret) {
+ errx(EXIT_FAILURE, "unable to initialize virtagent");
+ }
+
+ return 0;
+}
+
+static void become_daemon(void)
+{
+ pid_t pid, sid;
+
+ pid = fork();
+ if (pid < 0)
+ exit(EXIT_FAILURE);
+ if (pid > 0) {
+ FILE *pidfile = fopen(VA_PIDFILE, "wx");
+ if (!pidfile)
+ errx(EXIT_FAILURE, "Error creating pid file");
+ fprintf(pidfile, "%i", pid);
+ fclose(pidfile);
+ exit(EXIT_SUCCESS);
+ }
+
+ umask(0);
+ sid = setsid();
+ if (sid < 0)
+ goto fail;
+ if ((chdir("/")) < 0)
+ goto fail;
+
+ close(STDIN_FILENO);
+ close(STDOUT_FILENO);
+ close(STDERR_FILENO);
+ return;
+
+fail:
+ unlink(VA_PIDFILE);
+ exit(EXIT_FAILURE);
+}
+
+int main(int argc, char **argv)
+{
+ const char *sopt = "hVvdc:p:", *channel_method = NULL, *channel_path = NULL;
+ struct option lopt[] = {
+ { "help", 0, NULL, 'h' },
+ { "version", 0, NULL, 'V' },
+ { "verbose", 0, NULL, 'v' },
+ { "channel", 0, NULL, 'c' },
+ { "path", 0, NULL, 'p' },
+ { "daemonize", 0, NULL, 'd' },
+ { NULL, 0, NULL, 0 }
+ };
+ int opt_ind = 0, ch, ret, daemonize = 0;
+
+ while ((ch = getopt_long(argc, argv, sopt, lopt, &opt_ind)) != -1) {
+ switch (ch) {
+ case 'c':
+ channel_method = optarg;
+ break;
+ case 'p':
+ channel_path = optarg;
+ break;
+ case 'v':
+ verbose_enabled = 1;
+ break;
+ case 'V':
+ printf("QEMU Virtagent %s\n", VA_VERSION);
+ return 0;
+ case 'd':
+ daemonize = 1;
+ break;
+ case 'h':
+ usage(argv[0]);
+ return 0;
+ case '?':
+ errx(EXIT_FAILURE, "Try '%s --help' for more information.",
+ argv[0]);
+ }
+ }
+
+ init_clocks();
+ configure_alarms("dynticks");
+ if (init_timer_alarm() < 0) {
+ errx(EXIT_FAILURE, "could not initialize alarm timer");
+ }
+
+ /* initialize virtagent */
+ ret = init_virtagent(channel_method, channel_path);
+ if (ret) {
+ errx(EXIT_FAILURE, "error initializing communication channel");
+ }
+
+ if (daemonize) {
+ become_daemon();
+ }
+
+ /* main i/o loop */
+ for (;;) {
+ DEBUG("entering main_loop_wait()");
+ main_loop_wait(0);
+ DEBUG("left main_loop_wait()");
+ }
+
+ unlink(VA_PIDFILE);
+ return 0;
+}
diff --git a/virtagent-common.h b/virtagent-common.h
index 6ad8036..cb2363c 100644
--- a/virtagent-common.h
+++ b/virtagent-common.h
@@ -44,6 +44,7 @@
#define VA_VERSION "1.0"
#define EOL "\r\n"

+#define VA_PIDFILE "/var/run/qemu-va.pid"
#define VA_HDR_LEN_MAX 4096 /* http header limit */
#define VA_CONTENT_LEN_MAX 2*1024*1024 /* rpc/http send limit */
#define VA_CLIENT_JOBS_MAX 5 /* max client rpcs we can queue */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:16 UTC
Permalink
This adds a new chardev, virtagent, which actually just passes back a
socket chardev after connecting to it and initializing the agent.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
qemu-char.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/qemu-char.c b/qemu-char.c
index edc9ad6..eae49c1 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -2463,6 +2463,49 @@ fail:
return NULL;
}

+#include "virtagent-common.h"
+
+static CharDriverState *qemu_chr_open_virtagent(QemuOpts *opts)
+{
+ CharDriverState *chr;
+ const char *path;
+ VAContext ctx;
+ int ret;
+
+ /* revert to/enforce default socket chardev options for virtagent */
+ path = qemu_opt_get(opts, "path");
+ if (path == NULL) {
+ path = VA_HOST_PATH_DEFAULT;
+ }
+ qemu_opt_set(opts, "path", path);
+ qemu_opt_set(opts, "server", "on");
+ qemu_opt_set(opts, "wait", "off");
+ qemu_opt_set(opts, "telnet", "off");
+
+ chr = qemu_chr_open_socket(opts);
+ if (chr == NULL) {
+ goto err;
+ }
+
+ /* initialize virtagent using the socket we just set up */
+ ctx.channel_method = "unix-connect";
+ ctx.channel_path = path;
+ ctx.is_host = true;
+ ret = va_init(ctx);
+ ret = 0;
+ if (ret != 0) {
+ fprintf(stderr, "error initializing virtagent");
+ goto err;
+ }
+
+ return chr;
+err:
+ if (chr) {
+ qemu_free(chr);
+ }
+ return NULL;
+}
+
static const struct {
const char *name;
CharDriverState *(*open)(QemuOpts *opts);
@@ -2472,6 +2515,7 @@ static const struct {
{ .name = "udp", .open = qemu_chr_open_udp },
{ .name = "msmouse", .open = qemu_chr_open_msmouse },
{ .name = "vc", .open = text_console_init },
+ { .name = "virtagent", .open = qemu_chr_open_virtagent },
#ifdef _WIN32
{ .name = "file", .open = qemu_chr_open_win_file_out },
{ .name = "pipe", .open = qemu_chr_open_win_pipe },
--
1.7.0.4
Gerd Hoffmann
2011-01-17 13:53:09 UTC
Permalink
Hi,
Post by Michael Roth
There are a wide range of use cases motivating the need for a guest
agent of some sort to extend the functionality/usability/control
offered by QEMU. Some examples include graceful guest shutdown/reboot
and notifications thereof, copy/paste syncing between host/guest,
guest statistics gathering, file access, etc.
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?

cheers,
Gerd
Michael Roth
2011-01-17 14:53:22 UTC
Permalink
Post by Gerd Hoffmann
Hi,
Post by Michael Roth
There are a wide range of use cases motivating the need for a guest
agent of some sort to extend the functionality/usability/control
offered by QEMU. Some examples include graceful guest shutdown/reboot
and notifications thereof, copy/paste syncing between host/guest,
guest statistics gathering, file access, etc.
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?
This is an area that hasn't been well-defined yet and is definitely open
for suggestions.

For host->guest RPCs the current plan is to always have the RPC executed
at the system level, but for situations involving a specific user we
fork and drop privileges with the RPC, and report back the status of the
fork/exec. The fork'd process would then do what it needs to do, then if
needed, communicate status back to the system-level daemon via some IPC
mechanism (most likely a socket we listen to in addition to the serial
channel) that can be used to send an event. The system-level daemon then
communicates these events back to the host with a guest->host RPC.

Processes created independently of the system-level daemon could report
events in the same manner, via this socket. I think this might suit the
vdagent client model for Spice as well, things like resolutions changes
and clipboard contents could be communicated as asynchronous events to
virtagent via this socket, in place of a separate daemon, and virtagent
could have RPCs that can route/translate host->guest calls to the
user-level daemons.

Depending on the event, event-handling would either get consumed within
virtagent, or punted to qemu's QMP event layer, which may need to be
extended depending on the types of events we want to handle. The events
would terminate within QEMU, but handlers could hook into external services.

So that's the rough plan, but would be happy to hear any other thoughts
on how we might approach this.
Post by Gerd Hoffmann
cheers,
Gerd
Gerd Hoffmann
2011-01-18 14:02:50 UTC
Permalink
Post by Michael Roth
Post by Gerd Hoffmann
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?
This is an area that hasn't been well-defined yet and is definitely open
for suggestions.
One option would be to have two virtio-serial channels, one for the
system and one for the user stuff. gdm could grant the desktop user
access to the user channel like it does with sound devices and simliar
stuff, so the user agent has access to it.

Another option is to have some socket where the user agent can talk to
the system agent and have it relay the requests.

Maybe it is also possible to use dbus for communication between the
system agent and user agent (and maybe other components). Maybe it even
makes sense to run the dbus protocol over the virtio-serial line?
Disclaimer: I know next to nothing about dbus details.
Post by Michael Roth
For host->guest RPCs the current plan is to always have the RPC executed
at the system level, but for situations involving a specific user we
fork and drop privileges with the RPC, and report back the status of the
fork/exec. The fork'd process would then do what it needs to do, then if
needed, communicate status back to the system-level daemon via some IPC
mechanism (most likely a socket we listen to in addition to the serial
channel) that can be used to send an event. The system-level daemon then
communicates these events back to the host with a guest->host RPC.
Hmm. A bit heavy to fork+exec on every rpc. might be ok for rare
events though.
Post by Michael Roth
Processes created independently of the system-level daemon could report
events in the same manner, via this socket. I think this might suit the
vdagent client model for Spice as well,
Yes, vdagent works this way, except that *all* communication goes
through that socket, i.e. events/requests coming from the host for the
user-level agent are routed through that socket too.

It is the only sane way to handle clipboard support IMHO as there is
quite some message ping-pong involved to get a clipboard transaction done.

How does xmlrpm transmit binary blobs btw?

cheers,
Gerd
Anthony Liguori
2011-01-18 14:13:01 UTC
Permalink
Post by Gerd Hoffmann
Post by Michael Roth
Post by Gerd Hoffmann
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?
This is an area that hasn't been well-defined yet and is definitely open
for suggestions.
One option would be to have two virtio-serial channels, one for the
system and one for the user stuff. gdm could grant the desktop user
access to the user channel like it does with sound devices and simliar
stuff, so the user agent has access to it.
Another option is to have some socket where the user agent can talk to
the system agent and have it relay the requests.
I think this is the best approach. One requirement we've been working
with is that all actions from guest agents are logged. This is to give
an administrator confidence that the hypervisor isn't doing anything
stupid. If you route all of the user traffic through a privileged
daemon, you can log everything to syslog or an appropriate log file.
Post by Gerd Hoffmann
Maybe it is also possible to use dbus for communication between the
system agent and user agent (and maybe other components). Maybe it
even makes sense to run the dbus protocol over the virtio-serial line?
Disclaimer: I know next to nothing about dbus details.
The way I'd prefer to think about it is that the transport and protocol
used are separate layers that may have multiple implementations over time.

For instance, we currently support virtio-serial and isa-serial.
Supporting another protocol wouldn't be a big deal. The part that's
needs to remain consistent is the API supported by the
transport/protocol combinations.
Post by Gerd Hoffmann
Post by Michael Roth
For host->guest RPCs the current plan is to always have the RPC executed
at the system level, but for situations involving a specific user we
fork and drop privileges with the RPC, and report back the status of the
fork/exec. The fork'd process would then do what it needs to do, then if
needed, communicate status back to the system-level daemon via some IPC
mechanism (most likely a socket we listen to in addition to the serial
channel) that can be used to send an event. The system-level daemon then
communicates these events back to the host with a guest->host RPC.
Hmm. A bit heavy to fork+exec on every rpc. might be ok for rare
events though.
Post by Michael Roth
Processes created independently of the system-level daemon could report
events in the same manner, via this socket. I think this might suit the
vdagent client model for Spice as well,
Yes, vdagent works this way, except that *all* communication goes
through that socket, i.e. events/requests coming from the host for the
user-level agent are routed through that socket too.
It is the only sane way to handle clipboard support IMHO as there is
quite some message ping-pong involved to get a clipboard transaction done.
How does xmlrpm transmit binary blobs btw?
XML-RPC has a base64 encoding that's part of the standard for encoding
binary data. It also supports UTF-8 encoded strings.

Regards,

Anthony Liguori
Post by Gerd Hoffmann
cheers,
Gerd
Michael Roth
2011-01-31 14:41:49 UTC
Permalink
Post by Gerd Hoffmann
Post by Michael Roth
Post by Gerd Hoffmann
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?
This is an area that hasn't been well-defined yet and is definitely open
for suggestions.
One option would be to have two virtio-serial channels, one for the
system and one for the user stuff. gdm could grant the desktop user
access to the user channel like it does with sound devices and simliar
stuff, so the user agent has access to it.
Another option is to have some socket where the user agent can talk to
the system agent and have it relay the requests.
I think this is the best approach. One requirement we've been working
with is that all actions from guest agents are logged. This is to give
an administrator confidence that the hypervisor isn't doing anything
stupid. If you route all of the user traffic through a privileged
daemon, you can log everything to syslog or an appropriate log file.
A brain-dump on how we're planning to do this. I tried to keep it
concise, but that only went so far :)

We extend qemu-va, the virtagent guest agent, to be able to create a
unix socket in the guest that listens for connections.

We also extend qemu-va so that it can run as a user-level daemon that
can connect to the aforementioned socket.

root: qemu-va -c virtio-serial -p
/dev/virtio-ports/org.qemu.virtagent
--user-socket=/var/run/virtagent.user.sock

user: qemu-va -c unix-connect -p /var/run/virtagent.user.sock
[--username=user] --user-daemon

The user-level daemon will drop priviledges to specified username if
necessary, or run as the invoking user. User->daemon mappings will be
determined by the system-level agent using getsockopt on new connections.

The user-level daemon operates in the same manner as the system-level
daemon: it can execute RPCs in the host, and the host can execute RPCs
it implements. This should theoretically make it capable of vdagent-like
functionality by adding the appropriate host/guest-side RPCs. We would
have to incorporate the X11 fd into the main select() for events and
such, but that should be fairly straight-forward.

User-specific host->guest RPCs will add an http header,
"X-Virtagent-Username", specifying the intended recipient. If that
recipient has no associated running user-level daemon, an error will be
returned by the system-level daemon. An RPC may be provided so the host
can query the system-level agent for connected users.

User-generated guest->host RPCs will be routed directly to the host,
with the same "X-Virtagent-Username" http header inserted by the
system-level agent. The host will re-insert this header field in it's
responses so the system-level agent can route the responses back to the
user.

We will only allow one daemon per user at a time, and enforce this with
a pid/lock file. If this is circumvented somehow (or we allow a
user-specifiable pid file), a new connection specifying a user that's
already been mapped to a user-level daemon will overwrite the existing
mapping, and the old connection will be closed by the system-level
daemon if it is still active.

Some caveats to consider however:

1) There may be user-level RPCs that are generally desirable...such as
being able to execute a script as an unpriveledged user, or access a
file in the same way. So it makes sense to invoke user-level agent
startup with a login script. But for X-related things like Spice, an
.xsession hook makes more sense. If we do both, the .xsession-spawned
daemon will "take over", but any existing stateful interactions with
that user will be effectively reset. This may be undesirable. One option
is to XOpenDisplay() "on demand", rather than at startup. We won't know
what $env{DISPLAY} is then, however, and there could be multiple
displays for that user. I'm not sure if there's a reliable way to query
such a thing, but assuming you can...we could implement stateful RPCs so
establish a connection later: There could be multiple displays as well.
My first inclination would be to have something like:

on host:
query_displays(user)
connect_display(user, display_name)
query_screens(user, display_name)
etc..

This requires either the host to poll for displays, or for the guest
agent to do it and then notify the host, however. The only way I see
around this is to only start the user-level daemon via .xsession or similar.

2) Pulling X11 dependencies into virtagent...I'm not sure how well this
would go over. We may end up needing to move qemu-va into a seperate
repo that pulls in deps from qemu.git

-Mike
Michael Roth
2011-02-01 22:18:32 UTC
Permalink
Post by Michael Roth
Post by Gerd Hoffmann
Post by Gerd Hoffmann
What is your plan to handle system-level queries+actions (such as
reboot) vs. per-user stuff (such as cut+paste)?
One option would be to have two virtio-serial channels, one for the
system and one for the user stuff. gdm could grant the desktop user
access to the user channel like it does with sound devices and simliar
stuff, so the user agent has access to it.
Another option is to have some socket where the user agent can talk to
the system agent and have it relay the requests.
I think this is the best approach. One requirement we've been working
with is that all actions from guest agents are logged. This is to give
an administrator confidence that the hypervisor isn't doing anything
stupid. If you route all of the user traffic through a privileged
daemon, you can log everything to syslog or an appropriate log file.
A brain-dump on how we're planning to do this. I tried to keep it
concise, but that only went so far :)
We extend qemu-va, the virtagent guest agent, to be able to create a
unix socket in the guest that listens for connections.
We also extend qemu-va so that it can run as a user-level daemon that
can connect to the aforementioned socket.
root: qemu-va -c virtio-serial -p /dev/virtio-ports/org.qemu.virtagent
--user-socket=/var/run/virtagent.user.sock
user: qemu-va -c unix-connect -p /var/run/virtagent.user.sock
[--username=user] --user-daemon
The user-level daemon will drop priviledges to specified username if
necessary, or run as the invoking user. User->daemon mappings will be
determined by the system-level agent using getsockopt on new connections.
The user-level daemon operates in the same manner as the system-level
daemon: it can execute RPCs in the host, and the host can execute RPCs
it implements. This should theoretically make it capable of vdagent-like
functionality by adding the appropriate host/guest-side RPCs. We would
have to incorporate the X11 fd into the main select() for events and
such, but that should be fairly straight-forward.
User-specific host->guest RPCs will add an http header,
"X-Virtagent-Username", specifying the intended recipient. If that
recipient has no associated running user-level daemon, an error will be
returned by the system-level daemon. An RPC may be provided so the host
can query the system-level agent for connected users.
User-generated guest->host RPCs will be routed directly to the host,
with the same "X-Virtagent-Username" http header inserted by the
system-level agent. The host will re-insert this header field in it's
responses so the system-level agent can route the responses back to the
user.
We will only allow one daemon per user at a time, and enforce this with
a pid/lock file. If this is circumvented somehow (or we allow a
user-specifiable pid file), a new connection specifying a user that's
already been mapped to a user-level daemon will overwrite the existing
mapping, and the old connection will be closed by the system-level
daemon if it is still active.
1) There may be user-level RPCs that are generally desirable...such as
being able to execute a script as an unpriveledged user, or access a
file in the same way. So it makes sense to invoke user-level agent
startup with a login script. But for X-related things like Spice, an
.xsession hook makes more sense. If we do both, the .xsession-spawned
daemon will "take over", but any existing stateful interactions with
that user will be effectively reset. This may be undesirable. One option
is to XOpenDisplay() "on demand", rather than at startup. We won't know
what $env{DISPLAY} is then, however, and there could be multiple
displays for that user. I'm not sure if there's a reliable way to query
such a thing, but assuming you can...we could implement stateful RPCs so
establish a connection later: There could be multiple displays as well.
query_displays(user)
connect_display(user, display_name)
query_screens(user, display_name)
etc..
This requires either the host to poll for displays, or for the guest
agent to do it and then notify the host, however. The only way I see
around this is to only start the user-level daemon via .xsession or similar.
Any thoughts on this?

I'd like to move forward with this approach, but with the goal here
being that we have one agent to rule them all, I'd like to know whether
those of you involved with the spice vdagent work think this is a
reasonable approach.

One thing that would be really useful is if we could outline the basic
host->guest calls, and guest->host calls, and see how they fit into the
proposed architecture. I've been looking at vdagent to get an idea, but
due to the limitations noted above there are some changes that would to
made in how we identify, and be notified of, new X displays if we have a
single per-user daemon that is started *before* the X display, and
persists after the X display is shut down. I can take an initial stab at
this, but have some questions:

Can spice do anything useful with more than 1 display per user?

If so, should the agent report new displays to the host? Or should the
host query for available displays for a particular user, then instruct
the agent about what display it wants the agent to connect to? What
about when a display is closed?

Thanks,

Mike
Gerd Hoffmann
2011-02-14 09:49:59 UTC
Permalink
Hi,
Post by Michael Roth
Post by Michael Roth
A brain-dump on how we're planning to do this. I tried to keep it
concise, but that only went so far :)
We extend qemu-va, the virtagent guest agent, to be able to create a
unix socket in the guest that listens for connections.
[ ... ]
Post by Michael Roth
Post by Michael Roth
1) There may be user-level RPCs that are generally desirable...such as
being able to execute a script as an unpriveledged user, or access a
file in the same way. So it makes sense to invoke user-level agent
startup with a login script. But for X-related things like Spice, an
.xsession hook makes more sense.
I usually login via gdm, login and X session are not separate then.
Post by Michael Roth
Post by Michael Roth
If we do both, the .xsession-spawned
daemon will "take over", but any existing stateful interactions with
that user will be effectively reset. This may be undesirable. One option
is to XOpenDisplay() "on demand", rather than at startup. We won't know
what $env{DISPLAY} is then, however, and there could be multiple
displays for that user.
I would make the user-agent just run without using X11 in case $DISPLAY
is unset and be done with it. I would also try hard to avoid adding any
X11-specific stuff into the protocol. The protocol for cut+paste and
the other gui stuff should work equally well for non-linux guests
(windows, macos?) and (when it hits mainstream some day) wayland.
Post by Michael Roth
Post by Michael Roth
agent to do it and then notify the host, however. The only way I see
around this is to only start the user-level daemon via .xsession or similar.
Start via .xsession is the way to go IMHO.
Post by Michael Roth
I'd like to move forward with this approach, but with the goal here
being that we have one agent to rule them all, I'd like to know whether
those of you involved with the spice vdagent work think this is a
reasonable approach.
spice allows a single user-level agent today and also does only
gui-stuff, so that approach works fine for us.
Post by Michael Roth
Can spice do anything useful with more than 1 display per user?
multihead (one virtual desktop on multiple physical displays) yes.
Truely separate displays (aka separate X sessions) no.

I doubt running two X xessions on two displays on a single machine is a
use case we need to worry about. There where approaches to do that with
physical machines, so you can have two people share one computer by
hooking two displays, two keyboards and two mice. It wasn't very
successful. And with virtual machines this doesn't make sense at all IMHO.

cheers,
Gerd

PS: sorry for the delay, was sick for two weeks.
Michael Roth
2011-01-17 13:15:01 UTC
Permalink
Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 111 ++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent-server.h | 34 ++++++++++++++++
2 files changed, 145 insertions(+), 0 deletions(-)
create mode 100644 virtagent-server.c
create mode 100644 virtagent-server.h

diff --git a/virtagent-server.c b/virtagent-server.c
new file mode 100644
index 0000000..c38a9e0
--- /dev/null
+++ b/virtagent-server.c
@@ -0,0 +1,111 @@
+/*
+ * virtagent - host/guest RPC server functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Adam Litke <***@linux.vnet.ibm.com>
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include <syslog.h>
+#include "qemu_socket.h"
+#include "virtagent-common.h"
+
+static bool va_enable_syslog = false; /* enable syslog'ing of RPCs */
+
+#define SLOG(msg, ...) do { \
+ char msg_buf[1024]; \
+ if (!va_enable_syslog) { \
+ break; \
+ } \
+ snprintf(msg_buf, 1024, msg, ## __VA_ARGS__); \
+ syslog(LOG_INFO, "virtagent, %s", msg_buf); \
+} while(0)
+
+static VAServerData *va_server_data;
+
+static bool va_server_is_enabled(void)
+{
+ return va_server_data && va_server_data->enabled;
+}
+
+int va_do_server_rpc(const char *content, size_t content_len, const char *tag)
+{
+ xmlrpc_mem_block *resp_xml;
+ int ret;
+
+ TRACE("called");
+
+ if (!va_server_is_enabled()) {
+ ret = -EBUSY;
+ goto out;
+ }
+ resp_xml = xmlrpc_registry_process_call(&va_server_data->env,
+ va_server_data->registry,
+ NULL, content, content_len);
+ if (resp_xml == NULL) {
+ LOG("error processing RPC request");
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ret = va_server_job_add(resp_xml, tag);
+ if (ret != 0) {
+ LOG("error adding server job: %s", strerror(ret));
+ }
+
+out:
+ return ret;
+}
+
+typedef struct RPCFunction {
+ xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
+ const char *func_name;
+} RPCFunction;
+
+static RPCFunction guest_functions[] = {
+ { NULL, NULL }
+};
+static RPCFunction host_functions[] = {
+ { NULL, NULL }
+};
+
+static void va_register_functions(xmlrpc_env *env, xmlrpc_registry *registry,
+ RPCFunction *list)
+{
+ int i;
+ for (i = 0; list[i].func != NULL; ++i) {
+ TRACE("adding func: %s", list[i].func_name);
+ xmlrpc_registry_add_method(env, registry, NULL, list[i].func_name,
+ list[i].func, NULL);
+ }
+}
+
+int va_server_init(VAServerData *server_data, bool is_host)
+{
+ RPCFunction *func_list = is_host ? host_functions : guest_functions;
+
+ va_enable_syslog = !is_host; /* enable logging for guest agent */
+ xmlrpc_env_init(&server_data->env);
+ server_data->registry = xmlrpc_registry_new(&server_data->env);
+ va_register_functions(&server_data->env, server_data->registry, func_list);
+ server_data->enabled = true;
+ server_data->is_host = true;
+ va_server_data = server_data;
+
+ return 0;
+}
+
+int va_server_close(void)
+{
+ if (va_server_data != NULL) {
+ xmlrpc_registry_free(va_server_data->registry);
+ xmlrpc_env_clean(&va_server_data->env);
+ va_server_data = NULL;
+ }
+ return 0;
+}
diff --git a/virtagent-server.h b/virtagent-server.h
new file mode 100644
index 0000000..9f68921
--- /dev/null
+++ b/virtagent-server.h
@@ -0,0 +1,34 @@
+/*
+ * virt-agent - host/guest RPC daemon functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ * Michael Roth <***@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/server.h>
+
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1 << 30
+#define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
+
+typedef struct VAServerData {
+ xmlrpc_env env;
+ xmlrpc_registry *registry;
+ bool enabled;
+ bool is_host;
+} VAServerData;
+
+int va_server_init(VAServerData *server_data, bool is_host);
+int va_server_close(void);
+int va_do_server_rpc(const char *content, size_t content_len, const char tag[64]);
--
1.7.0.4
Jes Sorensen
2011-01-21 16:38:00 UTC
Permalink
Post by Michael Roth
diff --git a/virtagent-server.h b/virtagent-server.h
new file mode 100644
index 0000000..9f68921
--- /dev/null
+++ b/virtagent-server.h
@@ -0,0 +1,34 @@
+/*
+ * virt-agent - host/guest RPC daemon functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/server.h>
+
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1 << 30
+#define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
I really don't like these hard coded constants - you you have a command
line interface allowing for the change of the sockets and file names?
Otherwise you'll hit problems on the host side with concurrent runs of qemu.

I really would like to see the dmesg stuff removed too for now as we
discussed earlier.

Cheers,
Jes
Michael Roth
2011-01-21 17:55:50 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
diff --git a/virtagent-server.h b/virtagent-server.h
new file mode 100644
index 0000000..9f68921
--- /dev/null
+++ b/virtagent-server.h
@@ -0,0 +1,34 @@
+/*
+ * virt-agent - host/guest RPC daemon functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include<xmlrpc-c/base.h>
+#include<xmlrpc-c/server.h>
+
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1<< 30
+#define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
I really don't like these hard coded constants - you you have a command
line interface allowing for the change of the sockets and file names?
Otherwise you'll hit problems on the host side with concurrent runs of qemu.
Yup, that's one of the TODOs. In terms of configuration we can add
parameters to the chardev to override these, but the goal here is sane
defaults to avoid unnecessarily complicated invocations.
Post by Jes Sorensen
I really would like to see the dmesg stuff removed too for now as we
discussed earlier.
I think as a development/support tool it has a recently strong use case,
even given it's limitations (which are not so bad....we retrieve up to a
max of 16KB, possibly less depending on guest configuration, so it's not
entirely predictable, but it's not dangerous. It's platform-specific,
but that's handled by capabilities negotiation).

I just don't really see the downside to keeping it in.
Post by Jes Sorensen
Cheers,
Jes
Jes Sorensen
2011-01-24 10:16:37 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
Post by Michael Roth
+#include<xmlrpc-c/base.h>
+#include<xmlrpc-c/server.h>
+
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1<< 30
+#define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
I really don't like these hard coded constants - you you have a command
line interface allowing for the change of the sockets and file names?
Otherwise you'll hit problems on the host side with concurrent runs of qemu.
Yup, that's one of the TODOs. In terms of configuration we can add
parameters to the chardev to override these, but the goal here is sane
defaults to avoid unnecessarily complicated invocations.
As a sane default, using <name>.pid or something along those lines is
better. It is very common to run more than one qemu instance at a time.
Post by Michael Roth
Post by Jes Sorensen
I really would like to see the dmesg stuff removed too for now as we
discussed earlier.
I think as a development/support tool it has a recently strong use case,
even given it's limitations (which are not so bad....we retrieve up to a
max of 16KB, possibly less depending on guest configuration, so it's not
entirely predictable, but it's not dangerous. It's platform-specific,
but that's handled by capabilities negotiation).
There is plenty of good ways to do the same thing, copy file to host,
then view is just as easy and can be scripted, without the security
implications of having it inline.
Post by Michael Roth
I just don't really see the downside to keeping it in.
It's obviously contentious, and it is not core functionality. In order
to get the patches adapted upstream it would easy the process to remove
it and keep it as a separate patch.

Cheers,
Jes
Michael Roth
2011-01-24 16:51:28 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
Post by Michael Roth
+#include<xmlrpc-c/base.h>
+#include<xmlrpc-c/server.h>
+
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1<< 30
+#define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
I really don't like these hard coded constants - you you have a command
line interface allowing for the change of the sockets and file names?
Otherwise you'll hit problems on the host side with concurrent runs of qemu.
Yup, that's one of the TODOs. In terms of configuration we can add
parameters to the chardev to override these, but the goal here is sane
defaults to avoid unnecessarily complicated invocations.
As a sane default, using<name>.pid or something along those lines is
better. It is very common to run more than one qemu instance at a time.
Sorry, wasn't clear here. Using a pid by default to differentiate
per-qemu instances of virtagent is a specific TODO, but it is currently
configurable via the commandline as well:

qemu -chardev virtagent,path=/tmp/qemu-guest1-virtagent.sock,...
Post by Michael Roth
Post by Jes Sorensen
I really would like to see the dmesg stuff removed too for now as we
discussed earlier.
I think as a development/support tool it has a recently strong use case,
even given it's limitations (which are not so bad....we retrieve up to a
max of 16KB, possibly less depending on guest configuration, so it's not
entirely predictable, but it's not dangerous. It's platform-specific,
but that's handled by capabilities negotiation).
There is plenty of good ways to do the same thing, copy file to host,
then view is just as easy and can be scripted, without the security
implications of having it inline.
Post by Michael Roth
I just don't really see the downside to keeping it in.
It's obviously contentious, and it is not core functionality. In order
to get the patches adapted upstream it would easy the process to remove
it and keep it as a separate patch.
Fair enough, the proposed copyfile replacement would be suitable as well.

My main concern is stripping away too much functionality for the initial
merge, since guest-initiated shutdown is all we'd really have left
lacking viewdmesg/viewfile.

Would it be better to get copyfile in for the initial set of patches, or
as a subsequent set?
Cheers,
Jes
Jes Sorensen
2011-01-24 17:04:06 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
It's obviously contentious, and it is not core functionality. In order
to get the patches adapted upstream it would easy the process to remove
it and keep it as a separate patch.
Fair enough, the proposed copyfile replacement would be suitable as well.
My main concern is stripping away too much functionality for the initial
merge, since guest-initiated shutdown is all we'd really have left
lacking viewdmesg/viewfile.
Would it be better to get copyfile in for the initial set of patches, or
as a subsequent set?
Having copyfile would be good in an initial release too - however we
should probably review it in the light of Dan's suggestion of using
libguestfs.

I am working on freeze/thaw support which I hope to have ready within a
couple of days. It would be nice to get in, in an early release as well.

Cheers,
Jes
Michael Roth
2011-01-17 13:15:04 UTC
Permalink
Add RPC to view guest dmesg output.

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
virtagent-server.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/virtagent-server.c b/virtagent-server.c
index af4b940..f97e4b1 100644
--- a/virtagent-server.c
+++ b/virtagent-server.c
@@ -119,6 +119,50 @@ EXIT_CLOSE_BAD:
return result;
}

+/* va_getdmesg(): return dmesg output
+ * rpc return values:
+ * - dmesg output as a string
+ */
+static xmlrpc_value *va_getdmesg(xmlrpc_env *env,
+ xmlrpc_value *params,
+ void *user_data)
+{
+ char *dmesg_buf = NULL, cmd[256];
+ int ret;
+ xmlrpc_value *result = NULL;
+ FILE *pipe;
+
+ SLOG("va_getdmesg()");
+
+ dmesg_buf = qemu_mallocz(VA_DMESG_LEN + 2048);
+ sprintf(cmd, "dmesg -s %d", VA_DMESG_LEN);
+
+ pipe = popen(cmd, "r");
+ if (pipe == NULL) {
+ LOG("popen failed: %s", strerror(errno));
+ xmlrpc_faultf(env, "popen failed: %s", strerror(errno));
+ goto EXIT_NOCLOSE;
+ }
+
+ ret = fread(dmesg_buf, sizeof(char), VA_DMESG_LEN, pipe);
+ if (!ferror(pipe)) {
+ dmesg_buf[ret] = '\0';
+ TRACE("dmesg:\n%s", dmesg_buf);
+ result = xmlrpc_build_value(env, "s", dmesg_buf);
+ } else {
+ LOG("fread failed");
+ xmlrpc_faultf(env, "popen failed: %s", strerror(errno));
+ }
+
+ pclose(pipe);
+EXIT_NOCLOSE:
+ if (dmesg_buf) {
+ qemu_free(dmesg_buf);
+ }
+
+ return result;
+}
+
typedef struct RPCFunction {
xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
const char *func_name;
@@ -127,6 +171,8 @@ typedef struct RPCFunction {
static RPCFunction guest_functions[] = {
{ .func = va_getfile,
.func_name = "va.getfile" },
+ { .func = va_getdmesg,
+ .func_name = "va.getdmesg" },
{ NULL, NULL }
};
static RPCFunction host_functions[] = {
--
1.7.0.4
Michael Roth
2011-01-17 13:15:07 UTC
Permalink
Provide monitor command to initiate guest reboot/halt/powerdown

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
hmp-commands.hx | 16 ++++++++++++++
qmp-commands.hx | 32 +++++++++++++++++++++++++++++
virtagent.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 2 +
4 files changed, 109 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index f60c64f..0a8c500 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1360,6 +1360,22 @@ STEXI
View guest dmesg output
ETEXI

+ {
+ .name = "agent_shutdown",
+ .args_type = "shutdown_type:s",
+ .params = "[reboot|shutdown|poweroff]",
+ .help = "Shutdown/reboot a guest locally",
+ .user_print = monitor_user_noop,
+ .mhandler.cmd_async = do_agent_shutdown,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_shutdown
+@findex agent_shutdown
+Shutdown/reboot a guest locally
+ETEXI
+
STEXI
@end table
ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 0db38bd..98d7270 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -863,6 +863,38 @@ Example:
EQMP

{
+ .name = "agent_shutdown",
+ .args_type = "shutdown_type:s",
+ .params = "[reboot|shutdown|poweroff]",
+ .help = "Shutdown/reboot the guest locally",
+ .user_print = monitor_user_noop,
+ .mhandler.cmd_async = do_agent_shutdown,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_shutdown
+@findex agent_shutdown
+Shutdown/reboot the guest locally
+ETEXI
+SQMP
+agent_shutdown
+--------
+
+Shutdown the guest locally
+
+Arguments:
+
+(none)
+
+Example:
+
+-> { "execute": "agent_shutdown" }
+<- { "return": {} }
+
+EQMP
+
+ {
.name = "qmp_capabilities",
.args_type = "",
.params = "",
diff --git a/virtagent.c b/virtagent.c
index 0976afe..27700fb 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -344,3 +344,62 @@ int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
xmlrpc_DECREF(params);
return ret;
}
+
+static void do_agent_shutdown_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ xmlrpc_env env;
+
+ TRACE("called");
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ goto out_no_resp;
+ }
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ LOG("RPC Failed (%i): %s\n", env.fault_code,
+ env.fault_string);
+ goto out_no_resp;
+ }
+
+ xmlrpc_DECREF(resp);
+out_no_resp:
+ if (mon_cb) {
+ mon_cb(mon_data, NULL);
+ }
+}
+
+/*
+ * do_agent_shutdown(): Shutdown a guest
+ */
+int do_agent_shutdown(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ const char *shutdown_type;
+ int ret;
+
+ TRACE("called");
+
+ xmlrpc_env_init(&env);
+ shutdown_type = qdict_get_str(mon_params, "shutdown_type");
+ params = xmlrpc_build_value(&env, "(s)", shutdown_type);
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "va.shutdown", params, do_agent_shutdown_cb, cb,
+ opaque);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index b67abc3..8281b90 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -36,5 +36,7 @@ int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
void do_agent_viewdmesg_print(Monitor *mon, const QObject *qobject);
int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
MonitorCompletion cb, void *opaque);
+int do_agent_shutdown(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Michael Roth
2011-01-17 13:15:03 UTC
Permalink
Utilize the getfile RPC to provide a means to view text files in the
guest. Getfile can handle binary files as well but we don't advertise
that here due to the special handling requiring to store it and provide
it back to the user (base64 encoding it for instance). Hence the
otherwise confusing "viewfile" as opposed to "getfile".

Signed-off-by: Michael Roth <***@linux.vnet.ibm.com>
---
hmp-commands.hx | 16 +++++++++
monitor.c | 1 +
qmp-commands.hx | 33 +++++++++++++++++++
virtagent.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtagent.h | 3 ++
5 files changed, 149 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 1cea572..a3e5e27 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1328,6 +1328,22 @@ show available trace events and their state
ETEXI
#endif

+ {
+ .name = "agent_viewfile",
+ .args_type = "filepath:s",
+ .params = "filepath",
+ .help = "Echo a file from the guest filesystem",
+ .user_print = do_agent_viewfile_print,
+ .mhandler.cmd_async = do_agent_viewfile,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_viewfile @var{filepath}
+@findex agent_viewfile
+Echo the file identified by @var{filepath} on the guest filesystem
+ETEXI
+
STEXI
@end table
ETEXI
diff --git a/monitor.c b/monitor.c
index f258000..cd015aa 100644
--- a/monitor.c
+++ b/monitor.c
@@ -57,6 +57,7 @@
#include "json-parser.h"
#include "osdep.h"
#include "exec-all.h"
+#include "virtagent.h"
#ifdef CONFIG_SIMPLE_TRACE
#include "trace.h"
#endif
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 56c4d8b..9dca7b9 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -795,6 +795,39 @@ Example:
EQMP

{
+ .name = "agent_viewfile",
+ .args_type = "filepath:s",
+ .params = "filepath",
+ .help = "Echo a file from the guest filesystem",
+ .user_print = monitor_user_noop,
+ .mhandler.cmd_async = do_agent_viewfile,
+ .flags = MONITOR_CMD_ASYNC,
+ },
+
+STEXI
+@item agent_viewfile @var{filepath}
+@findex agent_viewfile
+Echo the file identified by @var{filepath} on the guest filesystem
+ETEXI
+SQMP
+agent_viewfile
+--------
+
+Echo the file identified by @var{filepath} from the guest filesystem.
+
+Arguments:
+
+- "filepath": Full guest path of the desired file
+
+Example:
+
+-> { "execute": "agent_viewfile",
+ "arguments": { "filepath": "/sys/kernel/kexec_loaded" } }
+<- { "return": { "contents": "0" } }
+
+EQMP
+
+ {
.name = "qmp_capabilities",
.args_type = "",
.params = "",
diff --git a/virtagent.c b/virtagent.c
index 00eccb5..cd5caf1 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -156,3 +156,99 @@ out_free:
out:
return ret;
}
+
+/* QMP/HMP RPC client functions */
+
+void do_agent_viewfile_print(Monitor *mon, const QObject *data)
+{
+ QDict *qdict;
+ const char *contents = NULL;
+ int i;
+
+ qdict = qobject_to_qdict(data);
+ if (!qdict_haskey(qdict, "contents")) {
+ return;
+ }
+
+ contents = qdict_get_str(qdict, "contents");
+ if (contents != NULL) {
+ /* monitor_printf truncates so do it in chunks. also, file_contents
+ * may not be null-termed at proper location so explicitly calc
+ * last chunk sizes */
+ for (i = 0; i < strlen(contents); i += 1024) {
+ monitor_printf(mon, "%.1024s", contents + i);
+ }
+ }
+ monitor_printf(mon, "\n");
+}
+
+static void do_agent_viewfile_cb(const char *resp_data,
+ size_t resp_data_len,
+ MonitorCompletion *mon_cb,
+ void *mon_data)
+{
+ xmlrpc_value *resp = NULL;
+ char *file_contents = NULL;
+ size_t file_size;
+ int ret;
+ xmlrpc_env env;
+ QDict *qdict = qdict_new();
+
+ if (resp_data == NULL) {
+ LOG("error handling RPC request");
+ goto out_no_resp;
+ }
+
+ xmlrpc_env_init(&env);
+ resp = xmlrpc_parse_response(&env, resp_data, resp_data_len);
+ if (va_rpc_has_error(&env)) {
+ ret = -1;
+ goto out_no_resp;
+ }
+
+ xmlrpc_parse_value(&env, resp, "6", &file_contents, &file_size);
+ if (va_rpc_has_error(&env)) {
+ ret = -1;
+ goto out;
+ }
+
+ if (file_contents != NULL) {
+ qdict_put(qdict, "contents",
+ qstring_from_substr(file_contents, 0, file_size-1));
+ }
+
+out:
+ xmlrpc_DECREF(resp);
+out_no_resp:
+ if (mon_cb) {
+ mon_cb(mon_data, QOBJECT(qdict));
+ }
+ qobject_decref(QOBJECT(qdict));
+}
+
+/*
+ * do_agent_viewfile(): View a text file in the guest
+ */
+int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque)
+{
+ xmlrpc_env env;
+ xmlrpc_value *params;
+ const char *filepath;
+ int ret;
+
+ filepath = qdict_get_str(mon_params, "filepath");
+ xmlrpc_env_init(&env);
+ params = xmlrpc_build_value(&env, "(s)", filepath);
+ if (va_rpc_has_error(&env)) {
+ return -1;
+ }
+
+ ret = va_do_rpc(&env, "va.getfile", params, do_agent_viewfile_cb, cb,
+ opaque);
+ if (ret) {
+ qerror_report(QERR_VA_FAILED, ret, strerror(ret));
+ }
+ xmlrpc_DECREF(params);
+ return ret;
+}
diff --git a/virtagent.h b/virtagent.h
index 3e4d4fb..1bd7595 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -30,5 +30,8 @@ typedef struct VAClientData {

int va_client_init(VAClientData *client_data);
int va_client_close(void);
+void do_agent_viewfile_print(Monitor *mon, const QObject *qobject);
+int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
+ MonitorCompletion cb, void *opaque);

#endif /* VIRTAGENT_H */
--
1.7.0.4
Jes Sorensen
2011-01-21 16:41:58 UTC
Permalink
Post by Michael Roth
Utilize the getfile RPC to provide a means to view text files in the
guest. Getfile can handle binary files as well but we don't advertise
that here due to the special handling requiring to store it and provide
it back to the user (base64 encoding it for instance). Hence the
otherwise confusing "viewfile" as opposed to "getfile".
I am really against this in any shape or form. Please do a copy and view
on the host.

Cheers,
Jes
Jes Sorensen
2011-02-16 16:04:17 UTC
Permalink
Post by Michael Roth
git://repo.or.cz/qemu/mdroth.git virtagent_v6
- Added a sentinel value to reliably detect the start of an "http" hdr. Used to skip past partially sent http content from previous "sessions"
- Added http hdr tag (currently hardcoded for testing, will switch to uuid) to filter out valid-but-unexpected content in channel from previous "sessions"
- Added timeout mechanism to avoid hanging monitor when agent isn't running
- Added timed back-off on read's from a virtio-serial that result in ret=0 to avoid spinning if host isn't connected.
- Added daemonize flags to qemu-va
- Added sane defaults for channel type and virtio-serial port path
- Various bug fixes for state machine/job handling logic
Hi Michael,

I was running some testing here of virtagent and demoing it to some of
my colleagues and ran into a problem that raised an interesting question.

My test system was an older Fedora 11 system, which meant I had to
rebuild qemu, while I kept my test image and the qemu-va binary that I
had built on a Fedora 14 system.

What happened was that either due to the differences in platform, or
maybe due to lag in updating the windows over vnc, agent commands would
end up crashing qemu on the host. I am not sure whether this was due to
timeouts or incompatibility of the libraries, however the question
raised is whether it is good security wise to pull XMLRPC processing
into QEMU this way? Instead maybe it would be better to move it out into
it's own process that uses virtio-serial through QEMU for it's
communication?

In addition I think we need to consider a mechanism to make sure that
the host and guest side are really compatible.

Just a few things to consider.

Cheers,
Jes
Michael Roth
2011-02-16 17:22:19 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
git://repo.or.cz/qemu/mdroth.git virtagent_v6
- Added a sentinel value to reliably detect the start of an "http" hdr. Used to skip past partially sent http content from previous "sessions"
- Added http hdr tag (currently hardcoded for testing, will switch to uuid) to filter out valid-but-unexpected content in channel from previous "sessions"
- Added timeout mechanism to avoid hanging monitor when agent isn't running
- Added timed back-off on read's from a virtio-serial that result in ret=0 to avoid spinning if host isn't connected.
- Added daemonize flags to qemu-va
- Added sane defaults for channel type and virtio-serial port path
- Various bug fixes for state machine/job handling logic
Hi Michael,
I was running some testing here of virtagent and demoing it to some of
my colleagues and ran into a problem that raised an interesting question.
My test system was an older Fedora 11 system, which meant I had to
rebuild qemu, while I kept my test image and the qemu-va binary that I
had built on a Fedora 14 system.
What happened was that either due to the differences in platform, or
maybe due to lag in updating the windows over vnc, agent commands would
end up crashing qemu on the host. I am not sure whether this was due to
timeouts or incompatibility of the libraries, however the question
raised is whether it is good security wise to pull XMLRPC processing
into QEMU this way? Instead maybe it would be better to move it out into
it's own process that uses virtio-serial through QEMU for it's
communication?
In addition I think we need to consider a mechanism to make sure that
the host and guest side are really compatible.
Just a few things to consider.
Cheers,
Jes
Resending due to mail delivery failure:

Hi Jes,

We've seen similar behavior. I think it comes down to qemu-va being
linked against shared objects in the host that don't necessarily
coincide with what's in the guest. It's somewhat misleading that we
currently build qemu-va along with the binary, since qemu-va is not
meant to be used on the host, and the version built on the host is not
meant to be used in the guest.

Really the guest binary, qemu-va, should be built in a proper build
environment for that particular guest. Long term it may make sense to
have a "guest-utils" target that isn't part of the normal host-build
process to reflect binaries with these kinds of requirements. For now I
think we'll may just end up removing qemu-va from the default make
target, and only build it explicitly with "make qemu-va".

Thanks,

Mike

P.S. Hoping to have the execute-RPCs-in-seperate-threads work done soon
so we can get back to integrating your patches.
Jes Sorensen
2011-02-17 08:26:56 UTC
Permalink
Post by Michael Roth
We've seen similar behavior. I think it comes down to qemu-va being
linked against shared objects in the host that don't necessarily
coincide with what's in the guest. It's somewhat misleading that we
currently build qemu-va along with the binary, since qemu-va is not
meant to be used on the host, and the version built on the host is not
meant to be used in the guest.
Really the guest binary, qemu-va, should be built in a proper build
environment for that particular guest. Long term it may make sense to
have a "guest-utils" target that isn't part of the normal host-build
process to reflect binaries with these kinds of requirements. For now I
think we'll may just end up removing qemu-va from the default make
target, and only build it explicitly with "make qemu-va".
Hi Michael,

I am not sure I was totally clear in my mail, but the crashes I saw were
QEMU on the host that went down. Not qemu-va running in the guest. My
worry is that we are adding a lot of complexity into QEMU on the host
side which is going to be difficult to audit, especially with things
like the HTML and XML processing. If we separated host side processing
into a separate command, we could better protect ourselves against a
situation where a rogue guest could kill QEMU and possibly exploit it on
the host side. I think we should seriously look at moving the agent
processing code out of main QEMU and into a standalone command, maybe
qemu-va-host or something like that.

There has been talk about doing the same thing with the monitor in the
past, and have it talk to the main QEMU process over QMP. This pretty
much goes along the same lines, except that I think we need the XML
handling moved out with it, so we couldn't just layer it directly on top
of QMP.
Post by Michael Roth
P.S. Hoping to have the execute-RPCs-in-seperate-threads work done soon
so we can get back to integrating your patches.
Sounds good!

Cheers,
Jes
Dor Laor
2011-02-17 09:08:14 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
We've seen similar behavior. I think it comes down to qemu-va being
linked against shared objects in the host that don't necessarily
coincide with what's in the guest. It's somewhat misleading that we
currently build qemu-va along with the binary, since qemu-va is not
meant to be used on the host, and the version built on the host is not
meant to be used in the guest.
Really the guest binary, qemu-va, should be built in a proper build
environment for that particular guest. Long term it may make sense to
have a "guest-utils" target that isn't part of the normal host-build
process to reflect binaries with these kinds of requirements. For now I
think we'll may just end up removing qemu-va from the default make
target, and only build it explicitly with "make qemu-va".
Hi Michael,
I am not sure I was totally clear in my mail, but the crashes I saw were
QEMU on the host that went down. Not qemu-va running in the guest. My
worry is that we are adding a lot of complexity into QEMU on the host
side which is going to be difficult to audit, especially with things
like the HTML and XML processing. If we separated host side processing
into a separate command, we could better protect ourselves against a
situation where a rogue guest could kill QEMU and possibly exploit it on
the host side. I think we should seriously look at moving the agent
processing code out of main QEMU and into a standalone command, maybe
qemu-va-host or something like that.
+1
Qemu will fork this qemu-va-host process so it won't be an extra burden
on the management, qemu and his son should talk over pipe, passing the
qmp commands to it.
Post by Jes Sorensen
There has been talk about doing the same thing with the monitor in the
past, and have it talk to the main QEMU process over QMP. This pretty
much goes along the same lines, except that I think we need the XML
handling moved out with it, so we couldn't just layer it directly on top
of QMP.
Post by Michael Roth
P.S. Hoping to have the execute-RPCs-in-seperate-threads work done soon
so we can get back to integrating your patches.
Sounds good!
Cheers,
Jes
Michael Roth
2011-02-17 14:39:45 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
We've seen similar behavior. I think it comes down to qemu-va being
linked against shared objects in the host that don't necessarily
coincide with what's in the guest. It's somewhat misleading that we
currently build qemu-va along with the binary, since qemu-va is not
meant to be used on the host, and the version built on the host is not
meant to be used in the guest.
Really the guest binary, qemu-va, should be built in a proper build
environment for that particular guest. Long term it may make sense to
have a "guest-utils" target that isn't part of the normal host-build
process to reflect binaries with these kinds of requirements. For now I
think we'll may just end up removing qemu-va from the default make
target, and only build it explicitly with "make qemu-va".
Hi Michael,
I am not sure I was totally clear in my mail, but the crashes I saw were
QEMU on the host that went down. Not qemu-va running in the guest. My
Sorry, selective reading on my part. Had recently been tracking down
issues on the guest side.
Post by Jes Sorensen
worry is that we are adding a lot of complexity into QEMU on the host
side which is going to be difficult to audit, especially with things
like the HTML and XML processing. If we separated host side processing
into a separate command, we could better protect ourselves against a
situation where a rogue guest could kill QEMU and possibly exploit it on
the host side. I think we should seriously look at moving the agent
processing code out of main QEMU and into a standalone command, maybe
qemu-va-host or something like that.
I don't think the problem is really so fundamental...if you saw a
host-side crash it's most likely a bug/sloppy error-handling in
virtagent. Malformed xml (from version mismatches, transports errors,
etc) shouldn't crash xmlrpc-c... it's using a libxml parser that just
returns an error on unexpected xml...we just need to make sure we handle
errors appropriately.

Can you provide some details on what you ran and what the error message was?
Post by Jes Sorensen
There has been talk about doing the same thing with the monitor in the
past, and have it talk to the main QEMU process over QMP. This pretty
much goes along the same lines, except that I think we need the XML
handling moved out with it, so we couldn't just layer it directly on top
of QMP.
I've reworked the code quite a bit so that we could potentially swap out
the xmlrpc-c layer transparently, while retaining the same HMP/QMP
commands/formats. A very realistic thing we can consider in the future
is using QMP for data encapsulation in place of xmlrpc, so I'd prefer
not to drastically change the design to work around dependencies for the
current data encapsulation scheme. Especially considering that
terminating the protocol within QEMU, at the HMP/QMP layer, is one of
the key benefits that differentiates virtagent from other guest agent
proposals.

But I think this is all a bit of a tangent if what we have here is just
a virtagent bug.
Post by Jes Sorensen
Post by Michael Roth
P.S. Hoping to have the execute-RPCs-in-seperate-threads work done soon
so we can get back to integrating your patches.
Sounds good!
Cheers,
Jes
Jes Sorensen
2011-02-18 12:45:31 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
worry is that we are adding a lot of complexity into QEMU on the host
side which is going to be difficult to audit, especially with things
like the HTML and XML processing. If we separated host side processing
into a separate command, we could better protect ourselves against a
situation where a rogue guest could kill QEMU and possibly exploit it on
the host side. I think we should seriously look at moving the agent
processing code out of main QEMU and into a standalone command, maybe
qemu-va-host or something like that.
I don't think the problem is really so fundamental...if you saw a
host-side crash it's most likely a bug/sloppy error-handling in
virtagent. Malformed xml (from version mismatches, transports errors,
etc) shouldn't crash xmlrpc-c... it's using a libxml parser that just
returns an error on unexpected xml...we just need to make sure we handle
errors appropriately.
Hi Michael,

It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU, if anything it actually
makes me wary. I would quite like to see the monitor moved out of QEMU
as well and into it's own process - the simpler we make QEMU in this
regard, the more secure it will be to run. Using either a fork()
approach or simply a separate process that connects to the QEMU process
seems a very reasonable approach IMHO.
Post by Michael Roth
Can you provide some details on what you ran and what the error message was?
It's a bit tricky, I was running a my tests over VNC on a remote system
(think trans-Atlantic latency) while having 10 people watch while I
typed the commands. It seemed that pretty much every agent command was
causing it, including ping, but unfortunately I didn't save the backtrace.

Cheers,
Jes
Anthony Liguori
2011-02-18 14:07:49 UTC
Permalink
Post by Jes Sorensen
Post by Michael Roth
Post by Jes Sorensen
worry is that we are adding a lot of complexity into QEMU on the host
side which is going to be difficult to audit, especially with things
like the HTML and XML processing. If we separated host side processing
into a separate command, we could better protect ourselves against a
situation where a rogue guest could kill QEMU and possibly exploit it on
the host side. I think we should seriously look at moving the agent
processing code out of main QEMU and into a standalone command, maybe
qemu-va-host or something like that.
I don't think the problem is really so fundamental...if you saw a
host-side crash it's most likely a bug/sloppy error-handling in
virtagent. Malformed xml (from version mismatches, transports errors,
etc) shouldn't crash xmlrpc-c... it's using a libxml parser that just
returns an error on unexpected xml...we just need to make sure we handle
errors appropriately.
Hi Michael,
It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU,
Live migration. If it's a separate daemon, then live migration gets fugly.

If xmlrpc-c is a PoS, then we ought to look at using something else.
But let's understand what's happening first before drawing any conclusions.

Regards,

Anthony Liguori
Post by Jes Sorensen
if anything it actually
makes me wary. I would quite like to see the monitor moved out of QEMU
as well and into it's own process - the simpler we make QEMU in this
regard, the more secure it will be to run. Using either a fork()
approach or simply a separate process that connects to the QEMU process
seems a very reasonable approach IMHO.
Post by Michael Roth
Can you provide some details on what you ran and what the error message was?
It's a bit tricky, I was running a my tests over VNC on a remote system
(think trans-Atlantic latency) while having 10 people watch while I
typed the commands. It seemed that pretty much every agent command was
causing it, including ping, but unfortunately I didn't save the backtrace.
Cheers,
Jes
Jes Sorensen
2011-02-18 14:30:41 UTC
Permalink
Post by Anthony Liguori
Post by Jes Sorensen
It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU,
Live migration. If it's a separate daemon, then live migration gets fugly.
If xmlrpc-c is a PoS, then we ought to look at using something else.
But let's understand what's happening first before drawing any conclusions.
Urgh, I always do my best to pretend that there is no such thing as live
migration :) Never seem to work though :(

However if there's an agent connection, it could be arranged in a way
allowing the host to reconnect to the guest agent. In that way it really
shouldn't be a big deal as long as our agent commands aren't too complex.

xmlrpc-c is probably fine, but it introduces a layer of complexity which
always makes me worried.

Cheers,
Jes
Anthony Liguori
2011-02-18 14:57:01 UTC
Permalink
Post by Jes Sorensen
Post by Anthony Liguori
Post by Jes Sorensen
It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU,
Live migration. If it's a separate daemon, then live migration gets fugly.
If xmlrpc-c is a PoS, then we ought to look at using something else.
But let's understand what's happening first before drawing any conclusions.
Urgh, I always do my best to pretend that there is no such thing as live
migration :) Never seem to work though :(
However if there's an agent connection, it could be arranged in a way
allowing the host to reconnect to the guest agent. In that way it really
shouldn't be a big deal as long as our agent commands aren't too complex.
Oh, but they'll be nice and complex :-) What happens if you do a live
migration in the middle of doing a live backup? You'll end up with the
guest waiting to be told that it's okay to unfreeze itself only to never
be told.

Terminating in QEMU means we can do intelligent things like delay live
migration convergence, save state, etc.

Regards,

Anthony Liguori
Post by Jes Sorensen
xmlrpc-c is probably fine, but it introduces a layer of complexity which
always makes me worried.
Cheers,
Jes
Jes Sorensen
2011-02-21 08:32:31 UTC
Permalink
Post by Anthony Liguori
Post by Jes Sorensen
However if there's an agent connection, it could be arranged in a way
allowing the host to reconnect to the guest agent. In that way it really
shouldn't be a big deal as long as our agent commands aren't too complex.
Oh, but they'll be nice and complex :-) What happens if you do a live
migration in the middle of doing a live backup? You'll end up with the
guest waiting to be told that it's okay to unfreeze itself only to never
be told.
Well that isn't really different from the current setup - if QEMU
migrates, the admin tool has to connect to the new QEMU process and
issue the fsthaw command there instead.
Post by Anthony Liguori
Terminating in QEMU means we can do intelligent things like delay live
migration convergence, save state, etc.
We could easily add a flag for this in QEMU itself, so I don't see it
being a big issue.

Jes
Michael Roth
2011-02-21 13:36:09 UTC
Permalink
Post by Jes Sorensen
Post by Anthony Liguori
Post by Jes Sorensen
However if there's an agent connection, it could be arranged in a way
allowing the host to reconnect to the guest agent. In that way it really
shouldn't be a big deal as long as our agent commands aren't too complex.
Oh, but they'll be nice and complex :-) What happens if you do a live
migration in the middle of doing a live backup? You'll end up with the
guest waiting to be told that it's okay to unfreeze itself only to never
be told.
Well that isn't really different from the current setup - if QEMU
migrates, the admin tool has to connect to the new QEMU process and
issue the fsthaw command there instead.
Another thing to consider is that virtagent is bi-directional, and may
be tracking the state of state-full RPCs on behalf of the guest client,
just as the guest daemon may be tracking the state of stateful RPCs on
behalf of the host client. We cannot maintain consistent state without
migrating the host-side state information along with the guest.

The admin tool in your example, i.e. libvirt, is different in this
regard, since it is purely a client and not a client/server like the
host and guest components of virtagent. It doesn't need to maintain any
state between migrations.
Post by Jes Sorensen
Jes
Jes Sorensen
2011-02-21 13:38:21 UTC
Permalink
Post by Michael Roth
Post by Jes Sorensen
Well that isn't really different from the current setup - if QEMU
migrates, the admin tool has to connect to the new QEMU process and
issue the fsthaw command there instead.
Another thing to consider is that virtagent is bi-directional, and may
be tracking the state of state-full RPCs on behalf of the guest client,
just as the guest daemon may be tracking the state of stateful RPCs on
behalf of the host client. We cannot maintain consistent state without
migrating the host-side state information along with the guest.
What kinda of usages do you expect to need to preserve state like this?
It seems a bad solution to me for the guest to be able to rely on state
in the host like this.

Cheers,
Jes

Gerd Hoffmann
2011-02-18 15:22:01 UTC
Permalink
Hi,
Post by Jes Sorensen
It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU,
Live migration. If it's a separate daemon, then live migration gets fugly.
If xmlrpc-c is a PoS, then we ought to look at using something else.
Anyone looked at using json instead? We already have a bunch of code
for that thanks to QMP ...

cheers,
Gerd
Anthony Liguori
2011-02-18 15:25:54 UTC
Permalink
Post by Gerd Hoffmann
Hi,
Post by Jes Sorensen
It may not be so fundamental, but it still makes me wary. XMLRPC
handling is quite high level and introduces the potential of errors that
are outside of our direct control. Personally I don't see the big
benefit of having virtagent terminate in QEMU,
Live migration. If it's a separate daemon, then live migration gets fugly.
If xmlrpc-c is a PoS, then we ought to look at using something else.
Anyone looked at using json instead? We already have a bunch of code
for that thanks to QMP ...
It was really, really hard to reuse but with QAPI, it might be worth
reconsidering. But this is a fast moving area ATM.

Regards,

Anthony Liguori
Post by Gerd Hoffmann
cheers,
Gerd
Loading...